Explainability in Artificial Intelligence Systems

By Faculty of Engineering of University of Porto

Type of course: Digital learning, Path
Language: EN
Duration: 3 hours
Workload: 8 hours
Proficiency: Intermediate
Target: Managers, Professionals, Workers

This course is officially recognised and labelled by the European Institute of Innovation and Technology (EIT). The EIT Label is a quality mark awarded to programmes that demonstrate outstanding innovation, educational excellence, and societal impact.

Explainability in AI systems refers to the ability to understand and explain how a particular decision or output was generated by an artificial intelligence system. It is becoming increasingly important as AI systems are being used in critical applications such as healthcare, finance, and autonomous vehicles, where the decisions made by these systems can have a significant impact on human lives. There are several reasons why explainability is important in AI systems:

1. Trust: By providing explanations for the decisions made by AI systems, users can build trust in the system and be more confident in the decisions it makes.

2. Compliance: Many regulatory frameworks require that AI systems be explainable, particularly in industries such as healthcare and finance.

3. Debugging: By understanding how a system arrived at a particular decision, developers can more easily identify and correct any errors or biases in the system.

4. Insights: Explainability can also provide insights into how an AI system is working and help identify areas for improvement.

There are several techniques for achieving explainability in AI systems, including:

1. Rule-Based Systems: These systems use a set of predefined rules to make decisions, which can be easily understood and explained.

2. Interpretable Models: Models such as decision trees and linear regression are inherently interpretable, meaning that their decision-making process can be easily understood.

3. Model-Agnostic Methods: Techniques such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) can generate explanations for the decisions of any type of model, even those that are not inherently interpretable (see the sketch after this list).

4. Human-in-the-Loop: By involving human experts in the decision-making process, AI systems can be made more transparent and explainable.
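
To make the contrast concrete, here is a minimal sketch that trains an inherently interpretable model (a shallow decision tree, technique 2) and then explains the same fitted model with a model-agnostic method (technique 3). The course names LIME and SHAP as model-agnostic tools; to keep the sketch dependency-light it uses scikit-learn's permutation importance instead, which follows the same black-box principle. The dataset, tree depth, and repeat count are illustrative choices, not part of the course material.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.inspection import permutation_importance

    # Inherently interpretable model: a shallow decision tree whose learned
    # rules can be read directly, so every prediction can be traced by hand.
    data = load_iris()
    X, y = data.data, data.target
    model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(model, feature_names=list(data.feature_names)))

    # Model-agnostic explanation: permutation importance treats the fitted
    # model as a black box and measures how much shuffling each feature's
    # values degrades accuracy. LIME and SHAP apply the same black-box idea
    # to produce richer, per-prediction explanations.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(data.feature_names, result.importances_mean):
        print(f"{name}: mean importance {score:.3f}")

Reading the printed tree rules is itself the explanation for the interpretable model; the permutation scores, by contrast, explain the model without ever inspecting its internals, which is what makes such methods applicable to black-box models as well.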

In summary, explainability is an important consideration when designing and implementing AI systems, particularly in applications where the decisions made by these systems can have a significant impact on human lives. There are several techniques available for achieving explainability, and the choice of technique will depend on the specific requirements and constraints of the application.

Learning outcomes

At the end of the "Explainability in AI Systems" learning path, the learner will be able to:

  1. Analyze what explainable AI (XAI) introduces in an AI model and compare different techniques, according to the nature of the model, the goal of the explanation, and how the XAI technique interacts with the model.
  2. Map appropriate XAI tools to each AI system, based on the system's specific characteristics and business requirements.
  3. Design XAI tools to support AI explainability in a manufacturing scenario and contribute to more informed decision-making.

Topics

Digital Transformation, Artificial Intelligence (AI)

Provided by

Faculty of Engineering of University of Porto

Content created in 2023
+266 enrolled

Course includes

  • 1 Quiz
  • 1 Certificate of achievement
