What is the biggest drawback of using AI?

in #technology · 18 days ago

The most serious problem with the application of AI is the loss of human control over decision-making — that is, the delegation of decision-making authority to AI. And this process will not be forced, imposed from the outside by some Big Brother. People will voluntarily surrender control because it will be easier, faster, and more reliable. Exactly the same way that today hardly anyone does long multiplication or division by hand — everyone uses a calculator; just as hardly anyone memorizes a route from a paper map anymore, preferring to follow the navigator’s instructions.

People already trust many “smart” devices with part of their decisions — blood-pressure monitors, microscopes, washing machines, electricity generators, aircraft autopilots, and countless others.

AI is simply the next turn of the technological spiral. And technological progress throughout history has served one main purpose: to relieve humans of hard and tedious work. That is how the Industrial Revolution destroyed cottage industry. By the way, it also destroyed slavery by making slave labor economically meaningless.

But there is one fundamental difference. A significant portion of human decisions arises at the level of emotions, shaped by ancient structures of the brain, while logical justification is tacked on afterward. These emotional reactions are biochemical algorithms honed by millions of years of evolution. Until now, no technology has understood a person better than he understands himself.

With the arrival of AI, however, this may change radically. Once AI gains access to brain signals, it will be able to analyze the biochemistry of emotions, optimize it, and offer the person decisions that prove objectively better. Exactly how this will play out in practice, I naturally don’t know. Perhaps it will help choose a more suitable life partner or profession. And if people become convinced that such recommendations, on average, yield better outcomes than their own intuitive choices or the advice of parents and mentors, the shift toward AI will become widespread.

And that is where the danger appears. Just as calculators have made people forget how to do arithmetic in their heads, and spell-checkers have eroded our ability to write without mistakes, delegating most decisions to AI will make humans extremely vulnerable in the event of system failures, accidents, or — who knows — possible malicious intent on the part of the AI.

The only thing still holding us back from this future is energy constraints. Every new leap in AI development requires enormous amounts of electricity that current infrastructure cannot provide. Therefore, a complete surrender of control is unlikely in the next ten to twenty years.

Yet right now, leading scientific centers and major corporations are actively working on controlled hot fusion and cold fusion. According to optimistic forecasts, the first realistic results can be expected in the 2040s–2050s. When fusion energy provides practically unlimited resources, humanity will gain the ability to create a new level of AI — and it is precisely then that the problem of losing control over our own decisions will arise in full force.