
Part 2 - DROPMALFORMED
In this second part, we’ll continue to focus on the DataFrameReader class and look at the DROPMALFORMED setting of the `mode` option, which drops malformed records while reading.
Receiving bad data is often a case of “when” rather than “if”, so the ability to handle it is critical to maintaining robust data pipelines.
In this beginner’s 4-part mini-series, we’ll look at how we can use the Spark DataFrameReader to handle bad data and minimise disruption in Spark pipelines. There are many other creative methods beyond what will be discussed, and I invite you to share them if you’d like.