Section Four - Towards transparent AI
A story about the answer being 42, explainable AI and computers in the loop (30 minutes)
Okay, so in this crash course we have stated two important things so far.
- One, technology and technology companies should be transparent;
- Two, advanced AI (like machine learning) functions best if it is not transparent.
So what now? Why do we care about transparency if the AI can do such fantastic things? Why is the lack of it a problem? And what are the solutions?
Why care about transparency?
If we have this great, advanced AI, this mysterious software called machine learning, deep learning or neural networks, and it can do such amazing things, why care about transparency?
Well, let's look at a few reasons:
First, we humans want an explanation.
Quick question: if you are denied a job, a reservation in a restaurant or a loan because some computer said 'no', would you like to know why?
You probably would. After all, if you know why you are being rejected, you can adapt: you know the rules and you can learn to play by them. If you don't know the reason for the rejection, how can you improve?
Second, suppose an AI concludes that it is better for you not to start a degree programme, because people like you fail to complete it in 95% of cases. Are you grateful? Are you glad the AI saved you from making a mistake? Or do you think you belong to that 5%?
An AI that gives good advice about whether someone should get a loan in 99% of cases is still wrong in 1% of cases. And that can be a lot of people: at a bank that handles a million applications, that 1% means 10,000 wrong decisions. And what if AI makes life-and-death decisions, as with medical advice or autonomous cars?
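To get a feel for how a 'small' error rate adds up, here is a minimal back-of-the-envelope sketch in Python; the application volumes are invented for illustration.

```python
# Back-of-the-envelope: a "99% accurate" loan-advice AI still produces
# many wrong decisions once the volume grows (volumes invented for
# illustration).
accuracy = 0.99

for applications in (10_000, 1_000_000, 10_000_000):
    wrong = applications * (1 - accuracy)
    print(f"{applications:>10,} applications -> ~{wrong:,.0f} wrong decisions")
```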
Third, as we have said before in this and other crash courses, data is subjective. Chances are that the AI is trained on biased data. There is a famous example of Amazon's AI recruiting system that favoured male applicants over female ones. That is because Amazon's computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. This pattern was discovered, but what if the bias is hidden somewhere in the black box?
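To make 'hidden bias' a bit more concrete, here is a minimal sketch of the kind of audit that can expose such a pattern: compare how often a trained model says 'yes' for different groups. The candidates and decisions below are invented for illustration.

```python
# Minimal bias audit sketch: compare how often a (hypothetical, already
# trained) model recommends candidates from each group. A large gap in
# the rates is a red flag. All data invented for illustration.
candidates = [
    {"group": "men",   "recommended": True},
    {"group": "men",   "recommended": True},
    {"group": "men",   "recommended": False},
    {"group": "women", "recommended": False},
    {"group": "women", "recommended": False},
    {"group": "women", "recommended": True},
]

for group in ("men", "women"):
    subset = [c for c in candidates if c["group"] == group]
    rate = sum(c["recommended"] for c in subset) / len(subset)
    print(f"{group}: {rate:.0%} recommended")
```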
Finally, AI, even advanced AI with fancy names like deep learning, often gets it wrong. That means that AI can do really strange things, or that people who understand how the AI works can rig the system. Check out this great (and fun) video (11 minutes):
Explain yourself
Present-day AI is plenty weird; that's what we learned from the video above. If we want to trust this weird AI, it is important that the AI can explain itself. That way, we can learn how to work with it.
Let's start with a short introduction to what explainable AI (XAI) is (2 minutes).
First, the video mentions that 'simple' AI models (things like decision trees) can be easily explained. However, earlier we explained that there is a trade-off between accuracy and transparency: these simple models cannot be as accurate as, for example, neural networks. The AI that can do really cool things is hard to explain.
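To see what 'easily explained' means in practice, here is a minimal sketch (assuming scikit-learn is installed, and using invented toy loan data): a small decision tree can print the exact rules it uses, something a neural network cannot do.

```python
# A decision tree is a 'simple' model: after training, it can print
# the exact rules it follows. Toy loan data invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per applicant: [income in k-euro, debt in k-euro]
X = [[20, 15], [35, 5], [50, 20], [60, 2], [80, 10], [25, 30]]
y = [0, 1, 0, 1, 1, 0]  # 1 = loan approved, 0 = rejected

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "debt"]))
```

The output reads like a human-made checklist of if/else rules; a deep neural network offers no comparable readout.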
Second, the video hints at building AI that can explain AI. Right, but who is going to explain the AI that explains the AI? This sounds like a solution devised by techies. How do we solve a technology problem? With more technology, of course!
In 1978, Douglas Adams, the brilliant writer of The Hitchhiker's Guide to the Galaxy, already knew that this was not going to work. Watch this brilliant and prophetic video (3 minutes) about an AI that cannot explain itself, but proposes to solve that with an even better AI.
So, technology is not always a solution. Of course, there are technological solutions that can help (we show some in section six), but sometimes the better solution is not using (a certain) AI.
Finally, in the explainable AI video, it is stated that there should always be a human in the loop. This sounds good, but also scary. Perhaps it is more sensible to state that there should always be a computer in the loop.
How much we need explainable, transparent AI depends very much on the scope of the AI. If an app's AI can determine whether people in rural India, with no easy access to a doctor, have an eye disease, then 95% accuracy without an explanation is a blessing. If AI fights spam, that is a blessing. However, if you apply for a job and a black box AI determines whether you are accepted, then 95% is a blessing for the employer, but can be a disaster for the applicant who is part of the 5%. And if an opaque AI determines your sentence or medical treatment, then it becomes really sketchy.
Exercise
Open the PowerPoint template (CC5_Transparency_Exercise.pptx) and read the case for a black box AI with a lot of benefits. What do you think? Should the city of Eindhoven implement the AI as proposed?
Answer: CC5_Transparency_Exercise_Answered.pptx
GDPR
Finally, there is another reason to make sure decisions made by an AI can be explained, namely the law!
Under the European Union’s General Data Protection Regulation (GDPR), a business using personal data for automated processing must be able to explain how the system makes decisions. The individual data subject (for example the person who was rejected for the loan) has the right to ask the business why it made the decision it did. The business must then explain how the system came to its decision. If the company can’t explain the decision in response to an individual’s request, it would not be compliant with the GDPR.
At present, the U.S. doesn’t have a similar specific law or set of policies requiring explainability or human intervention, though significant discussion has begun at the federal level in the wake of the GDPR. It’s likely this GDPR requirement will cause other countries to adopt similar policies.
In sum, it seems wise, regardless of the law, to understand how AI systems do what they do.
Takeaways from section four:
- Transparency is important so people know how to improve their chances;
- It is important because AI is never 100% perfect;
- It is important because the input is often biased;
- Transparency is important because even advanced AI often gets it wrong;
- AI is weird;
- Advanced AI is hard to explain;
- Explainable AI is required by European law.