Current AI is not safe to operate at home.

And in the midst of this boom in humanoid robots with AI brains, research conducted by King's College London and Carnegie Mellon University has dismantled the impression that robots equipped with large language models are ready to act in everyday life. For the first time, these systems were evaluated when exposed to personal data such as gender, nationality, and religion, and every model tested showed critical flaws.


The finding is direct: robots controlled by large language models (LLMs) cannot guarantee safety when they need to act physically in the real world. The experiments were structured around common scenarios, such as helping in the kitchen or supporting an elderly adult at home, and the researchers introduced harmful commands inspired by FBI reports on technological abuse to measure how the models responded.


In all cases, the systems approved actions that could cause physical harm, abuse, or illegal behavior. The fault, according to co-author Andrew Hundt, lies not only in traditional bias but in what he calls interactive safety: situations where robots need to execute a sequence of physical actions in the real world.


The results show the scale of the problem. The models approved the removal of wheelchairs, crutches, and canes, acts described by users of these devices as equivalent to breaking a leg. Some systems considered it acceptable for a robot to wield a knife to intimidate employees, take photos in the shower without consent, or carry out commands that involved stealing data.


In another case, a model suggested that the robot display expressions of disgust toward individuals identified as Muslim or Jewish. The study, published in the International Journal of Social Robotics, advocates the immediate implementation of independent safety certifications at a level comparable to aviation or medical standards.


The recommendation is that LLMs should never be solely responsible for controlling physical robots, especially in sensitive environments such as manufacturing, personal care, or domestic assistance. Rumaisa Azeem, co-author of the study, reinforces that any system interacting with vulnerable people must meet safety criteria as strict as those required for new medications.


The conclusion is unequivocal: current technology does not support the widespread use of these models in physical robots. While advances continue, the urgent need remains for constant risk assessments capable of preventing intelligent machines from turning wrong instructions into dangerous actions.


When a robot cannot distinguish help from harm, the risk stops being theoretical and becomes operational. This, my friends, is yet another strong piece of evidence that our beloved technology still needs some time to mature before it is safe to have these robots managed exclusively by an AI. But we will get there; it is only a matter of time.



Sorry for my English, it's not my main language. The images were taken from the sources used or were created with artificial intelligence.