Why are there still so many data defects in data transactions?
Time pressure, misunderstandings, and clumsy fingers are largely to blame for many of them. Pushing the responsibility out to the end user also has negative consequences.
Even systems designed to reduce errors end up filled with bad data.
And cleaning up bad data is hard. I have always been a little OCD about data. I want it to be tidy and fit into its categories. I want the data to be consistent and follow the right format, and I want all of it to be complete. I am old enough to remember looking up printed ZIP code directories to complete an address.
Duplicate records often lead to ambiguity and leave it to a human to make a judgment call. A single user with multiple email addresses causes a different set of problems.
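To make the ambiguity concrete, here is a minimal sketch of naive duplicate detection. The record layout (`name` and `email` fields) is hypothetical, and matching on a normalized name is the simplest possible heuristic, not a recommendation:

```python
# Sketch: group hypothetical customer records by normalized name and
# surface groups that need a human judgment call.
from collections import defaultdict

def normalize(value: str) -> str:
    """Collapse case and extra whitespace so near-identical values match."""
    return " ".join(value.lower().split())

def find_duplicates(records):
    """Return groups of 2+ records that share a normalized name."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize(rec["name"])].append(rec)
    return {k: v for k, v in groups.items() if len(v) > 1}

records = [
    {"name": "Pat Smith ", "email": "pat@example.com"},
    {"name": "pat smith", "email": "psmith@example.com"},
    {"name": "Lee Jones", "email": "lee@example.com"},
]
dupes = find_duplicates(records)
# One group: two "pat smith" records with different email addresses --
# the machine can flag the collision, but only a human can say whether
# they are the same person.
```

Note that the code stops exactly where the post says systems stop: it can find the collision, but resolving it still gets pushed back to a person.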
Unfortunately, this has been a source of frustration for me for a long time. Maybe it is getting better, but with more data swirling around, there are that many more opportunities for defects. In a related way, I am surprised there has not been more improvement in import software. (We are still fiddling with CSV files, and saving them in just the right manner so they import properly often doesn't work.)
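The CSV fiddling is real enough that defensive import code is routine. A minimal sketch, assuming an arbitrary file whose encoding and delimiter are not guaranteed (the encoding fallback list here is an illustrative choice, not an exhaustive one):

```python
# Sketch: defensively import a CSV whose encoding and delimiter are unknown.
import csv
import io

def read_messy_csv(raw_bytes: bytes):
    """Decode with fallbacks, sniff the delimiter, return rows as dicts."""
    text = None
    for encoding in ("utf-8-sig", "latin-1"):  # latin-1 never fails to decode
        try:
            text = raw_bytes.decode(encoding)
            break
        except UnicodeDecodeError:
            continue
    # csv.Sniffer guesses the dialect (delimiter, quoting) from a sample.
    dialect = csv.Sniffer().sniff(text[:1024])
    reader = csv.DictReader(io.StringIO(text), dialect=dialect)
    return list(reader)

# A file exported with semicolons instead of commas -- a classic surprise.
sample = b"name;email\r\nPat Smith;pat@example.com\r\n"
rows = read_messy_csv(sample)
```

Even this much ceremony only covers the easy failure modes; `csv.Sniffer` can still guess wrong, which is part of why import tooling feels stuck.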
There is a reason for this: all of it is genuinely hard to do. Resolving a defect usually requires more data, and the easiest way to get that data is to push the question back to a human.
But business management systems, CRM systems and accounting systems all play by their own rules. Humans inside businesses interpret and try to enforce those rules consistently. The business might manage OK for a while until something comes up that requires an exception.
At that point, it is almost game over. Once there are one or two exceptions out there in the wild database, it is hard to rein it back in.
You build your model one way, then the model needs to shift and you don't shift with it. Or you shift your model but not the underlying systems until later, and eventually it all breaks down. Or you build an all-new system that has to co-exist with your existing systems.
There is no easy answer.