
Even experts often do not understand how an artificial intelligence system arrives at a decision. Since this is not only unsatisfactory but can also be dangerous, Fraunhofer researcher Marco Huber wants to change that. The interview was conducted by RALF BUTSCHER.
Mr. Huber, you want to make artificial intelligence robust, transparent and explainable. What is the problem?
Smart systems still have several shortcomings, especially in the area of machine learning. In this important branch of artificial intelligence (AI), large amounts of data are used to make automated decisions: you feed an algorithm with data, from which it independently builds a model, and that model can then be applied to other, new data. The disadvantage: most models created this way are a “black box”. Even the experts who developed the algorithm ultimately do not know how the results come about. This applies in particular to machine learning with artificial neural networks. Some of them consist of hundreds of layers with billions of parameters, which is simply no longer comprehensible, even for experts.
Would that be necessary?
In some applications this is not a problem, for example when artificial intelligence is used to recommend music or videos or to translate texts from foreign languages. If the result is right, everything is fine for the users. But there are also areas of application where it is not so simple, for example when it comes to discrimination. A large American online retailer developed a tool to support the selection of job applicants and automatically sort out unsuitable candidates. It turned out, however, that the algorithm discriminated against women and minorities. The reason: the company had trained the algorithm with its own data, and since it had so far mainly employed men, male applicants made up a particularly large share of the data records. This ultimately led the algorithm to favor male applicants. Then there are legal questions.
Does it matter?
Yes, for example wherever the European General Data Protection Regulation applies. As the operator of an AI system, you cannot and should not simply shirk your responsibility. Articles 12 and 13 of the regulation require that, if personal data are processed, the data subjects have the right to a clear and easily understandable explanation of how results generated automatically from their data come about.
What does that mean in a specific case?
We are currently working on a research project with a company from the financial sector that deals with “credit scoring”, that is, assessing the creditworthiness of loan applicants. There you can clearly feel the conflict between performance and transparency. The company currently uses fairly simple algorithms that are easy to understand, so that it can explain to customers why they are rated as good or bad. There is an algorithm that would achieve a higher hit rate in scoring, but it cannot be used because it is not transparent.
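
To make the dilemma concrete, here is a minimal sketch in Python, assuming scikit-learn and a synthetic data set rather than any real credit data; the two models are generic placeholders, not the company's actual scoring algorithms:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for credit data: characteristics and a good/bad label.
    X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Transparent model: one readable coefficient per characteristic.
    simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Usually more accurate, but far harder to explain to a customer.
    powerful = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

    print("logistic regression accuracy:", simple.score(X_te, y_te))
    print("gradient boosting accuracy:  ", powerful.score(X_te, y_te))
    print("readable weights:", simple.coef_.round(2))

On such data the boosted model typically scores somewhat higher, but only the logistic regression offers weights that can be read out and explained to a customer.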
How can this dilemma be resolved?
There is a dedicated research field for this: “explainable artificial intelligence”, currently one of the most active sub-areas of AI. It attempts to find a compromise that makes it possible to use powerful algorithms such as neural networks while at the same time creating a high degree of explainability with additional methods.
What does that look like in practice?
With my team, I research and develop algorithms that can do this. We presented an example in 2019. The method is based on the so-called proxy approach: a neural network learns and decides as usual, but at the same time a so-called white-box model is extracted from it to create explainability. This can, for example, be a decision tree that illustrates very clearly how a decision was made.
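
A minimal sketch of this kind of surrogate idea, assuming scikit-learn and synthetic data; the concrete models are illustrative placeholders, not the algorithm presented in 2019:

    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Black box: a small neural network trained on the original labels.
    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                              random_state=0).fit(X, y)

    # White box: a shallow decision tree trained to imitate the network's
    # predictions rather than the ground-truth labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # The tree's if-then rules approximate how the network decides
    # and can be read directly.
    print(export_text(surrogate,
                      feature_names=[f"feature_{i}" for i in range(6)]))

The printed if-then rules refer to thresholds on individual characteristics, which is exactly the kind of human-readable justification the network alone cannot give.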
Can such approaches already be used?
To some extent. There are relatively simple explanation algorithms that are already in productive use. But essentially they can only say which characteristics of the data are important for a decision and which are not. That is, in principle, the simplest type of explanation that can be given; interrelationships, for example between different characteristics, are not captured. Our algorithm can do that: it reveals, for instance, which combinations of characteristics play a role and how important they are. However, this approach is still at the research stage.
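
For comparison, here is a rough sketch of the simpler kind of explanation mentioned above: permutation importance of single characteristics, extended naively to jointly permuted pairs as a stand-in for combinations. It assumes scikit-learn and NumPy with synthetic data and is not the research algorithm discussed here:

    from itertools import combinations

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    rng = np.random.default_rng(0)
    base = accuracy_score(y, model.predict(X))

    def drop_when_shuffled(cols):
        """Accuracy drop when the given columns are shuffled jointly."""
        X_perm = X.copy()
        X_perm[:, cols] = rng.permutation(X_perm[:, cols])
        return base - accuracy_score(y, model.predict(X_perm))

    # Importance of individual characteristics ...
    singles = {(i,): drop_when_shuffled([i]) for i in range(X.shape[1])}
    # ... and of pairs, a crude proxy for interaction effects.
    pairs = {c: drop_when_shuffled(list(c))
             for c in combinations(range(X.shape[1]), 2)}
    print(sorted(singles.items(), key=lambda kv: -kv[1])[:3])
    print(sorted(pairs.items(), key=lambda kv: -kv[1])[:3])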
When will more complex explanation algorithms be ready for use?
I don’t think it will take long. There will presumably be tools with a certain market maturity within the next one to two years. A number of start-ups, especially in the United States, are currently trying to develop such products. We at Fraunhofer IPA are also considering spinning off a start-up from the institute.
What about AI algorithms used as support in medicine, for example for diagnoses or therapy recommendations? There it can be a matter of life and death.
That’s right. Applications in medicine are fundamentally a very sensitive topic. This is demonstrated, for example, by IBM’s Watson tool: several clinics tested it for automated decisions, but it was not reliable enough. Still, the technology keeps getting better, and medicine remains an important field of application. Among others, we are talking to radiologists at large clinics in Stuttgart and Tübingen.
What exactly is it about?
It is about evaluating computed tomography images of Covid-19 patients. From such CT images, an experienced radiologist can assess whether a pneumonia is the result of a coronavirus infection or has another cause, and can also estimate quite well how the disease will progress. However, not every clinic has such experts on its team or extensive experience with severe illnesses caused by the coronavirus. It is therefore worth considering whether automated algorithms that support doctors should be developed. In such delicate applications, it is crucial that the tool not only assesses the disease but also makes it comprehensible how it arrives at that assessment.
Can transparent AI systems fundamentally change the way people deal with intelligent machines?
Yes, they can create a basis of trust, and I see considerable potential in that, because in Germany in particular many people are rather skeptical about this technology. I think an algorithm that can explain why it decides the way it does helps a lot. We humans often do the same: we justify our decisions, and a good, understandable reason is usually accepted, even by someone who does not agree with the decision.
Can more transparency give the application of artificial intelligence a boost?
Yes, I see it that way. Especially in areas where the requirements are high, more transparency will drive artificial intelligence forward.
Where could that be the case?
For example, in the interaction between humans and robots: a robot that explains why it acts the way it does creates acceptance and trust. Or in the production environment, where a lot of money can be at stake. This area is dominated by engineers, who, because of their training, attach great importance to understanding why something happens. They are often skeptical at first when an artificial intelligence makes decisions on its own, so it is important to create trust through explainability.
What could go wrong in production?
If you use an algorithm for quality assurance, for example, it can happen that a large number of defective parts go unrecognized. This can result in recourse claims from customers. Being able to explain how the error came about can then help in the event of a dispute.
What about autonomous driving? There, intelligent systems have to make split-second decisions that can put the health or even the lives of road users at risk. Can explainability help make the technology safer?
In this scenario, it is important, for example, to be able to clear up an accident quickly. Who is to blame if an autonomous vehicle has overlooked another car, a pedestrian or a cyclist? Was it the algorithm? Or was it a situation in which a human would not have reacted any differently? In the event of a fault, a traceable system offers the prospect of improving the algorithms and thus helping to prevent similar accidents in the future.
One last question, which goes in a different direction: neural networks are, in a way, a representation of the human brain. If you have a technology whose inner workings you understand, is it conceivable to learn something new about the brain from it?
I would be cautious there. Artificial neural networks are a very rough abstraction of the human brain. The natural neuron served as a model for the artificial neuron, but the way the human brain actually works is not replicated. The brain is structured in a far more complicated way and is networked to a much higher degree, which is why it is not so easy to draw conclusions from one to the other. Nevertheless, findings from artificial intelligence have sometimes advanced brain research.
Can you give an example?
So-called reinforcement learning, a sub-area of machine learning that deals with learning optimal actions. The AlphaGo system, an artificial intelligence that plays Go masterfully, is a prime example of this. The principle of reinforcement learning comes from nature: it is a human learning paradigm, also known as trial-and-error learning. When we humans learn something new, it goes like this: at first we fail, but over time we get better and better. That is roughly how reinforcement learning works. When the technology was developed, it turned out that special reinforcement learning algorithms, so-called temporal difference learning, help to explain certain thought processes in the brain. So there can indeed be an interplay of knowledge between biology and technology.
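
As a rough illustration of temporal difference learning, the following sketch runs a TD(0) value update on a tiny random-walk chain; the environment, rewards and parameters are invented for the example and are not tied to AlphaGo or any other system mentioned in the interview:

    import random

    N_STATES = 5                 # non-terminal states 0..4; reaching state 5 ends the episode
    ALPHA, GAMMA = 0.1, 0.9      # learning rate and discount factor
    V = [0.0] * (N_STATES + 1)   # value estimates, terminal state included

    for episode in range(5000):
        s = 0
        while s < N_STATES:
            # Trial and error: step left or right at random.
            s_next = s + 1 if random.random() < 0.5 else max(s - 1, 0)
            r = 1.0 if s_next == N_STATES else 0.0
            # TD(0) update: nudge V(s) toward the bootstrapped target r + gamma * V(s').
            V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
            s = s_next

    print([round(v, 2) for v in V[:N_STATES]])  # values rise toward the rewarding end

The term in parentheses in the update is the so-called TD error; it is this error signal that has been compared to the activity of dopamine neurons in the brain during reward learning.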
About our conversation partner:
After completing his studies in computer science at the University of Karlsruhe, Prof. Dr. Marco Huber headed a research team at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB in Karlsruhe. This was followed by positions in industry and at the Karlsruhe Institute of Technology (KIT). Since 2018 he has been a professor of cognitive production systems at the University of Stuttgart. He heads the Image and Signal Processing department and the Center for Cyber Cognitive Intelligence at the Fraunhofer Institute for Manufacturing Engineering and Automation IPA in Stuttgart. His research focuses on machine learning, explainable artificial intelligence, image processing and robotics in production. He is co-organizer of the “Smart Machines in Use” congress of the Konradin Media Group and Fraunhofer IPA on December 1st. More on this at: www.industrie.de/kuenstliche-intelligenz-2020.