Black Box: Right and Wrong
While much of the technology and business world exhibits an almost irrational exuberance for artificial intelligence (AI) and, in particular, machine learning, some are voicing concerns over its potential negative effects. Apart from the familiar doomsday scenario of our AI servants becoming our overlords, a more practical and near-term problem is the black-box nature of machine-learning-based AI. These concerns can be summed up in one question: “If we cannot understand why an AI made a decision, how will we know when that decision is right or wrong?”
These concerns already present themselves in real life, though mostly in benign ways. For instance, we often receive inquiries about why one document is processed successfully when another, largely identical document is not. With handwriting recognition, the amount on one check may be recognized perfectly while the amount on another is not. Even more confounding, the check amount that was not recognized might be more legible to the human eye than the one that was. Because our products are trained with machine learning, a simple answer is not always possible. Most of the time, it comes down to the sample set used to train the system: perhaps the legible check had some unique quality that was not represented in the sample set, or perhaps there were too few similar samples for the system to learn from.
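To make this concrete, here is a minimal, hypothetical sketch (not our actual recognition pipeline) using scikit-learn's bundled digits dataset. It shows how a class that is under-represented in training can fail at recognition time even when the inputs look perfectly legible to a human:

```python
# Toy illustration: gaps in the training sample set can make a model
# fail on inputs that a human finds perfectly legible. Not a production
# pipeline; uses scikit-learn's bundled digits dataset as a stand-in.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

# Simulate an under-represented "writing style" by dropping roughly
# 95% of the training examples of the digit 9.
keep = (y_train != 9) | (np.random.default_rng(0).random(len(y_train)) < 0.05)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train[keep], y_train[keep])

# Accuracy on 9s typically drops relative to the rest, even though the
# test images are no less "legible" -- the model simply saw too few.
mask9 = y_test == 9
print("accuracy on 9s:    ", clf.score(X_test[mask9], y_test[mask9]))
print("accuracy overall:  ", clf.score(X_test, y_test))
```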
AI Systems: Fairness and Bias
On the nefarious side, the most commonly cited concern involves decision-making systems that treat some cases unfairly because of unseen bias. For example, if an AI system were used to determine creditworthiness and rejected one customer while accepting another, how would we ensure that it was being fair? If an AI system were used to determine sentencing in criminal cases, how would we know whether it applied appropriate judgement? The inability to fully understand the internals of these AI systems is the root cause of such concerns.
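One simple way such bias can be probed from the outside is to compare outcomes across groups. The sketch below runs a demographic-parity check on a handful of credit decisions; the data, group labels, and column names are invented purely for illustration:

```python
# Hypothetical sketch: compare a decision system's approval rates
# across groups (a demographic-parity check). All data here is made up.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# A large gap does not prove bias by itself, but it flags decisions
# that warrant a closer look inside the black box.
print("disparity:", rates.max() - rates.min())
```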
Is this concern really new, and is it valid? Certainly there are examples of human-based decision-making systems that have been deemed problematic or even illegal. So why all this hand-wringing about AI?
Establishing Safeguards
The reality is that there are ways to build in “instrumentation,” or at the very least to surface the key factors used to make decisions. Providing these capabilities allows us to peer into the black box and better understand which inputs are important and which are irrelevant. In doing so, we can create learning machines that can be queried, much as we would question a human, when the need to investigate a particular decision arises.
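One widely used form of such instrumentation is permutation importance, which estimates how much each input actually drives a trained model's decisions. The sketch below applies scikit-learn's implementation to synthetic stand-in data; it is one illustrative technique, not any specific product's method:

```python
# Sketch of "instrumentation" for a black-box model: permutation
# importance ranks which inputs the model actually relies on.
# The features here are synthetic stand-ins, not real decision data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# 8 features: 4 carry signal, the rest are effectively noise.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn: a large score drop means the model
# depends on that input; a negligible drop means it is irrelevant.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Rankings like these give an investigator a concrete starting point: if an irrelevant-looking input turns out to dominate a decision, that is exactly the kind of question we would put to a human decision-maker.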