An AI black box is a system whose inputs and operations are not visible to the user or any other interested party. (Image: Pixabay)
An artificial intelligence system recently started responding in a language it had never been taught. Such enigmatic behaviours, in which an AI unpredictably picks up a new ability, are known as emergent properties. A recent example is an AI model that learned Bengali from only a few prompts and can now translate it accurately with minimal cues.
What is this phenomenon?
Why would an AI system acquire a language it was never trained on, and what do we call this phenomenon? The explanation that makes the most sense points to what is known as the AI black box.
Simply put, an AI black box is a system whose inputs and inner operations are hidden from the user or any other interested party. The system is, in effect, opaque. The startling part is that black-box AI models arrive at decisions without offering any explanation of how they reached them.
To understand the AI black box, we first need to understand how intelligence, human or artificial, works. Most intelligence, whether it comes from humans or machines, is learned through experience. A child, for instance, learns to recognise letters or different animals simply by being shown examples of each, and is soon able to identify them on sight.
The human brain is essentially a trend-finding machine, according to Professor Samir Rawashdeh of the University of Michigan-Dearborn. When exposed to examples, the brain picks out characteristics and eventually classifies things automatically and unconsciously. Rawashdeh, an AI expert, notes that while this comes easily to us, it is nearly impossible to explain how it is actually done.
Deep learning systems operate in much the same way and are trained much as children are. They are fed labelled examples of the things they ought to be able to recognise, and the neural network applies its own trend-finding process to classify them; once trained, it can correctly identify the same object or image when it encounters it again. Just as with human intelligence, we do not truly understand how deep learning systems arrive at their conclusions.
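As a rough illustration of this learning-by-example process, here is a minimal sketch (assuming scikit-learn is available; the dataset and network size are arbitrary choices, not anything referenced in the article). The network learns to classify handwritten digits from labelled samples, but nothing in the fitted model readily explains why a given image was assigned a given label.

```python
# Minimal sketch of learning by example with a small neural network.
# Assumes scikit-learn is installed; dataset and layer sizes are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # labelled 8x8 images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The network finds its own internal patterns; we never tell it what a "3" looks like.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))  # it classifies well...
print("prediction:", model.predict(X_test[:1]))  # ...but its weights do not explain why
```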
What Sundar Pichai said about the black box
“There is an aspect of which all of us in the field call it a black box. You don’t fully understand and you can’t quite tell why it said this or why it got wrong. We have some ideas and our ability to understand this gets better over time, but that is where the state of the art is,” Google CEO Sundar Pichai told Scott Pelley from 60 Minutes in January this year.
When Pelley interjected, "You don't fully understand how it works and yet you've turned it loose on society?", Pichai responded, "Let me put it this way, I don't think we fully understand how a human mind works either."
Why is the black box problem a matter of concern?
While AI can perform many tasks that humans cannot, the black box problem can breed mistrust and scepticism about AI-powered products. AI black boxes are also challenging for data scientists and programmers because they are self-directed and offer little visibility into their inner workings.
The most obvious problem is AI bias. Developers' conscious or unconscious prejudice can inject bias into algorithms, and black boxes may allow it to go undetected. Deep learning systems already make decisions about human beings, such as medical treatments, loan eligibility, or who should receive a certain job, and these situations have already exposed prejudice in AI systems. The black box problem could aggravate this, making it difficult for many people to access certain services.
Numerous problems can also arise from a lack of accountability and transparency. The complexity of black-box neural networks prevents proper auditing of these systems, which is a concern in industries like healthcare, banking and financial services, and criminal justice. They also have security weaknesses that leave them open to attacks from various threat actors, for example a scenario in which a malicious actor tampers with the model's input data to sway its judgment and push it towards potentially harmful conclusions.
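As a hedged illustration of that kind of input tampering (the toy "loan approval" model, features, and perturbation below are hypothetical, not drawn from the article), a small change to one input field can flip a trained model's decision even though the input looks almost unchanged to a human reviewer:

```python
# Hypothetical sketch: nudging an input to sway a black-box model's decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy "loan applications": two features, label 1 = approve, 0 = reject
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[0.05, -0.02]])           # sits just on one side of the decision boundary
print("original decision:", model.predict(applicant)[0])

tampered = applicant + np.array([[-0.2, 0.0]])  # small, hard-to-notice tweak to one field
print("tampered decision:", model.predict(tampered)[0])
```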
What can be done to counter the threat of AI black boxes?
Experts say there are two ways to tackle the black box problem: one is to develop mechanisms to peer inside the box; the other is to create a regulatory framework. Since the outputs and the judgments behind them are currently impenetrable, a more thorough look at the inner workings could help reduce these difficulties. This is where Explainable AI, a young branch of the field, enters the picture: it aims to make deep learning transparent and accountable.
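One simple technique in this space tries to surface which inputs a trained model actually relies on. Below is a minimal sketch of permutation feature importance, assuming scikit-learn and a toy dataset (both are illustrative choices, not anything the article prescribes): each input feature is shuffled in turn, and a large drop in accuracy suggests the model leans heavily on that feature.

```python
# Sketch of one explainability probe: permutation feature importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the held-out score degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(data.feature_names, result.importances_mean),
              key=lambda pair: -pair[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```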
Despite the difficulties AI black boxes present, systems built this way have already proven their value in many applications. They can detect complex patterns in data accurately and reliably, demand relatively little computation, and reach their conclusions very quickly. The only issue is that it can be hard to understand exactly how they arrive at those judgments.
Source: Indian Express