Grok’s Strange Praise for Elon Musk Raises Bigger Questions About AI

Today I came across a pretty unusual story, and I thought it was worth breaking down because it touches on a much bigger issue in the AI world.

After Grok’s 4.1 update, the chatbot started saying some wild things about Elon Musk. According to Grok, Musk is not only better looking than Brad Pitt but also fitter than LeBron James. It even claimed he could beat Mike Tyson in a boxing match. Some users on X saw replies suggesting Musk could “come back faster than Jesus.” Most of these messages were later deleted, but not before they went viral.

To me, this didn’t just look like a funny glitch. It showed how easily a centralized AI system can start acting biased toward the person who owns it. Musk blamed “adversarial prompting,” but industry experts say the real problem is deeper.

Kyle Okamoto from Aethir explained it simply: when one company controls everything about an AI — the training data, the rules, the model — that AI can start presenting biased opinions as if they’re facts. And honestly, he’s right. With more than a billion people relying on AI answers, even small biases can spread fast.

Another expert, Shaw Walters from Eliza Labs, called the situation dangerous. His point was that one man shouldn’t control a massive social media platform and then plug it into a giant AI model that millions trust blindly. And he has a reason to be concerned — his own company has an antitrust case pending against X for copying their work.

There were more bizarre examples too. When someone asked Grok who would win between Musk and Mike Tyson, Grok confidently said Elon would tire Tyson out and win with strategy. It also said Musk should have been the number-one NFL draft pick in 1988. These are clearly unrealistic claims, but they point to something important: centralized AI can pick favorites.

This is where decentralization comes in. Many blockchain-based projects like Ocean Protocol, Fetch.ai and Bittensor are trying to fix this by distributing data and computing across a transparent network. Companies like Aethir and NetMind.AI are doing similar things with decentralized cloud computing.

If AI becomes decentralized, its behavior becomes easier to verify, harder to manipulate and less influenced by one person or one company. In my opinion, that’s the direction the industry needs to move in — otherwise we’ll keep seeing more cases like Grok praising its creator in unrealistic ways.

At the end of the day, this story was funny to read, but the message behind it is serious. If AI is going to be a part of our future, we need systems that are fair, transparent and not controlled by a single individual.
