The race on AI has suddenly heated up. Knowingly or unknowingly, every one of us interacts with artificial intelligence (AI); it is now an integral part of our lives. Soon, AI-human interaction will be normalised, as AI-assisted daily decisions become common practice. However, today we don’t understand how AI makes all this happen, or how its decisions and recommendations are arrived at. The curious human mind wants to see the outcome and understand what caused it. We humans find it easy to trust people and relationships where actions are explained, and the same is true for the new member in our lives, AI. At the same time, we need processes and regulations to control it.
We want accountability. Knowing this, the brains behind AI are talking of Explainable AI (XAI), which aims to make it easy to understand why AI acted the way it did and perhaps help us zero in on whom to blame for things we don’t like.
WHY WE NEED XAI
In its early days, AI was rule-based, following an “if this, then that” logic that was transparent and predictable. But soon AI evolved into a web of neural networks and deep learning, and it became the new black box. Its complexity has grown to such an extent that even its creators are sometimes left wondering how it works.
Logic-oriented humans, riding their emotional biases, have long tried to control thought, and in turn the masses, through constraints and rules, justice and penalty. With AI, however, they realised they were losing their grip.
Naturally, voices started debating and asking for someone to explain what was happening. Soon, ethical dilemmas and regulatory frameworks were added to the list of concerns about AI’s potential and future.
The naysayers asked what would happen if AI started correcting itself and defining new logic and processes without human interference. Fear of what is possible, and of the pace of evolution, ensured that trust wasn’t optional; it became non-negotiable as the stakes grew exponentially. The world needs AI to explain itself, and in simple language.
EXPLAINABLE AI
Explainable AI, or XAI, aims to bridge the gap between mind-boggling computations and human understanding. It works on three foundations: interpretability, which breaks down the cause and effect within AI models; transparency, a window into how the model works and what data it relies on; and the power to explain, sharing logical, easy-to-understand reasons for AI’s decisions.
XAI is expected to enhance human trust in AI recommendations across fields. Knowing the reasons and how AI worked to make that recommendation will add another layer of ease and comfort.
It may lead the human race to a more lethargic life and less use of our faculties for discussing, debating, and reasoning, and in turn less use of our brains. And that may lead to a dumber generation.
Maybe the spiral will go out of control at some point. We don’t know, and we will never know till the spiral crosses the point of no return.
Model-agnostic methods like LIME (Local Interpretable Model-agnostic Explanations) can interpret predictions from any machine-learning model. Model-specific techniques offer insights into which parts of the data a neural network focuses on, feeding the XAI process and presenting the result in a way humans can grasp. A sketch of the model-agnostic approach follows.
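To make this concrete, here is a minimal sketch of how a LIME explanation might look in practice, written in Python and assuming the open-source lime and scikit-learn packages are installed. The dataset, the random-forest model, and the parameter values are illustrative stand-ins for whatever black box one wishes to explain, not part of the original discussion.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an ordinary "black box" classifier on a sample dataset.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the data around a single prediction and fits a simple,
# interpretable model locally to approximate what the black box is doing.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed the model towards its answer?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

The printed feature weights are the explanation: a short, human-readable list of reasons behind one specific prediction, which is exactly the kind of output XAI promises.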
NEW CHALLENGES WITH XAI
XAI, like any initiative, is not perfect, and it is evolving. The truth is that we can never predict where a new technology will move. We often create things, purposely or accidentally, by discovering them and then trying to find ways to use them. So, though XAI promises fairness, it can inadvertently introduce biases if not implemented carefully. After all, we all suffer from conformity bias, and how could machines designed by us be any different?
Meanwhile, the existing regulatory landscape is complex and evolving, with more voices asking for better regulation and monitoring of AI’s development and deployment. But that is never going to be enough. Regulation always lags development, and the current technology is developing so fast that reining it in with a series of globally agreed regulations is going to be tough.
There is a lot of noise about where the technology is moving and how to channel it for human benefit without risk. It is a tightrope walk: one wrong step, and the whole system could collapse or accelerate towards undesirable consequences we cannot foresee.
NET NET
XAI is a necessity, and again a necessity born out of our uncontrolled technological advancement. Hopefully, it will ensure that AI decisions don’t just happen to us but happen with us. One expects XAI to improve things. It may help prevent AI from making autocratic, almost dictatorial decisions and recommendations. It may explain things to us. Maybe it will lead to more accountable machines, processes, and systems.
Things are not so simple. XAI needs to balance model performance with the ability, and the willingness, to explain. There is a lingering ethical doubt: once a machine can learn and self-correct, it could also shape the outcome, or the explanation itself, with its own biases, defeating the whole purpose of fairness.
There is a need for a League of Nations format: global norms, rules, and regulations. One thing is sure: we are in for interesting times, full of new technology and dilemmas over how to rein it in for human benefit.
If XAI explains how AI works, who will explain how XAI works? Are we already in a loop with a Bhasmasura that needs a Mohini to kill it?