Data quality

Part 4 - Bad records path

4 min

In part 4, the final part of this beginner’s mini-series on handling bad data, we’ll look at how to capture bad data while letting the pipeline proceed uninterrupted.

Specifically, we’ll use the “badRecordsPath” option in Azure Databricks, which has been available since Databricks Runtime 3.0.
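As a taste of what the post covers, here’s a minimal PySpark sketch of the option. The schema, file paths, and bad-records location are illustrative assumptions rather than examples from the series, and badRecordsPath itself is only supported on the Databricks runtime.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# In a Databricks notebook `spark` already exists; getOrCreate is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Hypothetical schema and paths, purely for illustration.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
])

df = (spark.read
      .schema(schema)
      # Rows that fail to parse are written out as JSON under this path
      # (Databricks runtime only), and the read carries on uninterrupted.
      .option("badRecordsPath", "/tmp/bad_records")
      .csv("/tmp/input/people.csv"))

df.show()  # contains only the rows that parsed cleanly
```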

Part 3 - Permissive

3 min

In the third instalment of this four-part mini-series, we’ll look at how to handle bad data using PERMISSIVE mode. It’s the default mode when reading data with the DataFrameReader, but there’s more to it than simply replacing bad data with NULLs.
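For a flavour of what PERMISSIVE mode looks like in practice, here’s a small PySpark sketch. The schema, file path, and the _corrupt_record column name are illustrative assumptions; note that the corrupt-record column must be declared in the schema for the raw bad rows to be retained.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()

# Hypothetical schema; the extra string column captures the raw text of any
# row that fails to parse.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("_corrupt_record", StringType(), True),
])

df = (spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")  # the default, shown explicitly here
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .csv("/tmp/input/people.csv"))

# Bad rows get NULLs in the typed columns, with the original line preserved
# in _corrupt_record for later inspection.
df.filter(df["_corrupt_record"].isNotNull()).show(truncate=False)
```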

Part 1 - Failfast

4 min

Receiving bad data is often a case of “when” rather than “if”, so the ability to handle it is critical to maintaining robust data pipelines.

In this beginner’s four-part mini-series, we’ll look at how to use the Spark DataFrameReader to handle bad data and minimise disruption in Spark pipelines. There are many other creative methods beyond what’s discussed here, and I invite you to share them if you’d like.
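As a preview, here’s what FAILFAST mode, the subject of part 1, might look like in PySpark. The schema and file path are illustrative assumptions; the point is that the read aborts on the first malformed record instead of silently continuing.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()

# Hypothetical schema and path for illustration.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
])

try:
    df = (spark.read
          .schema(schema)
          .option("mode", "FAILFAST")  # abort on the first malformed record
          .csv("/tmp/input/people.csv"))
    # Spark reads lazily, so the failure actually surfaces at an action.
    df.show()
except Exception as e:
    print(f"Stopping the pipeline, bad data encountered: {e}")
```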