Everything is a lens problem. Sure, it is an information problem too, but the real issue is how everyone takes in and processes information.
Everyone presents information differently, too, and that makes a big difference in how information is received. The same information sticks differently to everyone who hears it.
Information is not neutral. It comes pre-configured with the bias of the presenter and the bias of the receiver. In theory an open mind can overcome a bias, but that takes time and attention. An abundance of information and the pressure to process it quickly mean we take all kinds of cognitive shortcuts.
In our own internally rational world, we like to think that everyone else is rational too and sees the same data the way we do. The problem is that this is not the world we live in, and I am not sure it is the world we want to live in. The non-rational, non-fact-based choice can often lead to unexpected and better outcomes.
With artificial intelligence, it may be possible to strip out some of these biases, at least the differences in presentation style, and look at the data itself. But other biases and world-views may be baked into the code, and we won't know what they are. We cannot see what is happening inside proprietary code, and even when we can see it, we may not understand it.
Unchecked Growth of AI
The scarier thing is that AI grows over time. Its data builds over time, and its bias builds over time, completely outside the control of the coders. While we may be able to build rules to limit some kinds of overt bias, we will have no idea what kind of bias is growing inside the AIs. Will we even be able to recognize it?
How many AIs will we have? Who will control these AIs? Are they privately held? Are they publicly held? Either way, this presents all kinds of issues.
Privacy is an issue, but the larger issue is the mass of data that can be manipulated. In our happier moments we can see all the positives that could come from more collective data: maybe better healthcare, maybe a better understanding of the world. But the dark side is really dark.
The reality is that there are multiple competing AIs out there, and they are going to be even more damaging than misinformation in social media posts. Google, Facebook, Amazon, Microsoft, Apple, Oracle and more are sitting on a treasure trove of data. This is data that they own. Most likely they are using it for their own business needs, but…. When you stop to think about it, AI proliferation may be even more dangerous than nuclear proliferation. And no one can control it. It is public, it is private, it is global and it is inevitable. It is cheap and it is clandestine…
Remarkable computing power is accessible to just about anyone. There are public AI engines that anyone can use cheaply. AI proliferation will not be solved easily by regulation and the implementation of rules.
Information processing is largely invisible, and while forensics may reveal something after the fact, it is what happens when we are not looking that is the most dangerous.
AI is not the customer service and productivity panacea that we ideally want to believe it is. It is a hidden danger that we are inviting into our systems, one that may be worse than the lens problem it claims to address.