Explainable AI: Requirements, Use Cases and Solutions
Is explainability the key to building trust in artificial intelligence? Where does the application of explainable AI tools stand today from a scientific and industrial perspective? The study answers these and other questions, including through practical use cases.
The enormous potential of artificial intelligence is now evident in numerous products and services across all areas of life. One key success factor in the market is the explainability of decisions made by AI applications: explainability strengthens user acceptance of the technology. It is also often a key prerequisite for approval and certification procedures and for fulfilling transparency obligations.

This study was conducted as part of the accompanying research for the innovation competition "Artificial Intelligence as a Driver for Economically Relevant Ecosystems" (AI Innovation Competition) on behalf of the German Federal Ministry for Economic Affairs and Climate Action. It examines the current state of the art in explainable AI and illustrates its benefits through practical use cases, providing AI developers with practical guidance on available explanation strategies.