AI has become an integral part of our daily lives: facial recognition on our phones, movie and song recommendations, conversational applications in customer care, machine-learning-powered analytics, autonomous machines such as self-driving cars, and so on. AI is finding its way into every industry, with AI-enabled systems used in a wide range of applications, from entertainment to life-critical cases. As a result, the need to trust these AI systems has become paramount.
In high-risk, high-value industries, the stakes of trusting decisions taken by machines are exorbitant. Conventional AI today does not explain the reasoning behind its conclusions or recommendations; it is largely a ‘Black Box AI’. Humans operating in high-stakes decision-making are expected to take actions that are justifiable and reasonable, and they support their conclusions with substantial evidence and reasoning. The same expectation underlies the desire for an ‘Explainable AI’: for AI systems to become trusted advisors to humans, they must be able to answer the question ‘Why?’.
“No, no! The adventures first. Explanations take such a dreadful time!”
– Lewis Carroll, “Alice in Wonderland”
To put things in perspective, let us take a closer look at a few industries and use cases where ‘Explainable AI’ will play a crucial role. Any decision that is increasingly made by machines and that impacts people’s lives could be tainted by bias and fundamentally requires auditability and trust, making it a potential use case.
- Healthcare – Physicians today take the help of AI analytics to read and interpret diagnostic reports. AI systems act as virtual assistants, saving the medical staff considerable time and allowing them to focus on interpretive tasks rather than repetitive ones. ‘Explainable AI’ would give doctors the decision lineage to understand ‘how’ a conclusion was reached, for example when detecting cancerous lesions in an MRI image, and might even help them reach a different decision where human expertise and intuition are required.
- Insurance – AI is being used today for underwriting, customer service, claims, compliance, policy adjustment, and so on. AI is expected to completely revamp the insurance industry in the next few years, especially in areas such as life insurance, homeowners’ insurance, compensation, and so on. If AI could answer questions such as ‘what’ models and considerations were used and ‘what’ the rationale behind an insight was, and could ensure that regulators understand how the AI works and the processes behind it, it would positively impact the adoption of analytics and AI recommendations across the industry.
- Autonomous Vehicles – In an era where self-driving cars are no longer a distant dream and are very close to becoming an everyday reality, the decisions that AI takes could mean life or death. Explainability is a must for every stakeholder – be it a pedestrian, driver, law enforcement authority, automaker, or public safety personnel. With sufficient governance and reasoning, automakers can assess ‘what’ led the car to act as it did on the way from point A to point B. Similarly, passengers can make an informed decision about whether they are comfortable traveling in a vehicle that is designed to take decisions on their behalf.
Explainable by Design
Having considered the ‘why’, let us now focus on ‘how’ we can develop AI systems to be more explainable by design. A question we can ask ourselves while designing AI products is: does the AI make it easier for humans to perceive, detect, and understand its decision-making process? ‘What’ principles can be considered during the design process?
- Explanation – The system should provide reasoning, support, or evidence for every output.
- Meaningful – The explanations provided by the AI system should be easily comprehensible to its users and tailored to different user groups.
- Accuracy – The system should correctly describe ‘how’ it came to a particular decision. Explanation accuracy is not to be confused with decision accuracy: regardless of how accurate the decision itself is, the system should be able to provide a faithful explanation of ‘how’ it was derived.
- Knowledge Limits – The system should be able to identify situations it was not designed to handle or in which it does not have enough confidence to conclude on a particular outcome. This prevents it from providing misleading or bogus answers, which can be dangerous or unjust and lead to mistrust. The system should be able to delegate, bring humans into the loop, and learn from them so that the knowledge can be applied in future situations. A minimal sketch of these principles in code follows the list.
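To make these principles concrete, here is a minimal, hypothetical sketch of how a predictive system could attach an explanation, a confidence score, and a knowledge-limits check to every output. The dataset, feature attributions, and the 0.8 confidence threshold are illustrative assumptions, not the design of any particular product.

```python
# Hypothetical sketch: an "explainable by design" wrapper around a simple model.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
import numpy as np

data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

CONFIDENCE_THRESHOLD = 0.8  # Knowledge limits: below this, defer to a human.

def predict_with_explanation(x):
    proba = model.predict_proba([x])[0]
    confidence = proba.max()
    label = data.target_names[proba.argmax()]

    # Explanation: per-feature contribution (coefficient * value) for a linear model.
    contributions = model.coef_[0] * x
    top = np.argsort(np.abs(contributions))[::-1][:3]
    evidence = [(data.feature_names[i], round(contributions[i], 2)) for i in top]

    if confidence < CONFIDENCE_THRESHOLD:
        # Knowledge limits: the system flags its own uncertainty instead of guessing.
        return {"decision": "defer to human expert", "confidence": confidence,
                "evidence": evidence}
    # Meaningful + accurate: the answer ships with the evidence that produced it.
    return {"decision": label, "confidence": confidence, "evidence": evidence}

print(predict_with_explanation(data.data[0]))
```

Every output carries its own evidence, and low-confidence cases are explicitly handed over to a human rather than answered with a guess.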
A good illustration of the above principles in action is the ignio™ triaging process, in which ignio identifies the root cause of an incident, triages it, and applies a solution in an iterative manner. The objective is to be ‘first time right’: to leverage knowledge that is already known and apply solutions that are modelled, self-learnt, or heuristics-based. All of this is achieved while remaining transparent and demonstrating the complexity through effective visualization.
If ignio is unable to resolve an incident, it initiates a collaboration room and notifies an expert resolver about the issue along with all the relevant context. This brings in agility, improves resolution time, and supports continual learning. A hypothetical sketch of such a triage-and-escalate loop is shown below.
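The following is a minimal, hypothetical sketch of an iterative triage-and-escalate loop of this kind. It is not ignio’s actual implementation; the function names, the known-fix catalogue, and the escalation step are assumptions made purely for illustration.

```python
# Hypothetical sketch of an iterative triage loop with human escalation.
# None of these names reflect ignio's actual API; they are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Incident:
    description: str
    attempts: list = field(default_factory=list)

# Assumed catalogue of known (modelled or self-learnt) fixes: symptom -> remediation.
KNOWN_FIXES = {
    "disk full": "clean temp files and rotate logs",
    "service down": "restart the service and verify health checks",
}

def diagnose(incident: Incident) -> Optional[str]:
    """Return the most likely root cause, if it matches a known symptom."""
    for symptom in KNOWN_FIXES:
        if symptom in incident.description.lower():
            return symptom
    return None

def apply_fix_and_verify(incident: Incident, symptom: str) -> bool:
    """Apply the known remediation and (pretend to) verify the outcome."""
    incident.attempts.append(KNOWN_FIXES[symptom])
    return True  # In reality, verification would re-check the monitored system.

def triage(incident: Incident, max_iterations: int = 3) -> str:
    for _ in range(max_iterations):
        symptom = diagnose(incident)
        if symptom is None:
            break  # Knowledge limit reached: no confident root cause.
        if apply_fix_and_verify(incident, symptom):
            # Decision lineage: every attempted action is recorded for audit.
            return f"resolved via: {incident.attempts}"
    # Escalate: open a collaboration room and hand full context to a human expert.
    return f"escalated with context: {incident.description!r}, tried: {incident.attempts}"

print(triage(Incident("Payment service down in region EU")))
print(triage(Incident("Unknown latency spike on API gateway")))
```

The key design choice is that the loop never exhausts itself silently: whatever it tried, and why it stopped, is passed along with the escalation so the human expert starts with the full decision lineage.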
It is certain that AI will increasingly be an integral part of human decision-making in the future. Human knowledge and experience can help technology learn, and vice versa. A continual feedback loop, explainability, and transparency will help AI models move from today’s black box to a glass box, which will prove to be a huge, dynamic asset for all kinds of industries and businesses.