Tesla CEO Elon Musk has once again highlighted concerns about the risks posed by artificial intelligence. Musk believes AI technology can become destructive if it is not guided properly. Speaking in a recent podcast interview with Zerodha co-founder Nikhil Kamath, Musk said there are three important ingredients needed to ensure that AI contributes positively to human civilisation: truth, beauty and curiosity. The SpaceX CEO cautioned that the future of AI is not guaranteed to be positive. “There’s some danger when you create a powerful technology, that a powerful technology can be potentially destructive,” he said. Musk has previously described AI as one of the biggest risks to humanity, arguing that the rapid pace of advancement in AI could pose greater dangers than cars, planes, or medicines.
Elon Musk lists three things AI needs
During the interview, Musk emphasised that AI must be built to pursue truth, warning that exposure to false information could destabilise its reasoning. “You can make an AI go insane if you force it to believe things that aren’t true because it will lead to conclusions that are also bad,” he explained. He also stressed the importance of beauty, describing it as something humans instinctively recognise and which AI should learn to appreciate. Finally, Musk said AI must embody curiosity, seeking to understand the nature of reality and valuing the continuation of humanity. “It’s more interesting to see the continuation if not the prosperity of humanity than to exterminate humanity,” he added.
Broader Debate on AI Safety
For the uninitiated, Elon Musk co-founded OpenAI in 2015 but left the company’s board in 2018. He later criticised the company for giving up its non-profit mission after it launched its popular chatbot ChatGPT in 2022. Then, in 2023, Musk launched his own chatbot, Grok, via his AI company xAI.

Other experts share Musk’s concerns. Geoffrey Hinton, often called the “Godfather of AI,” said earlier this year that there is a 10–20% chance AI could wipe out humanity, citing risks such as hallucinations and job automation. “The hope is that if enough smart people do enough research with enough resources, we’ll figure out a way to build them so they’ll never want to harm us,” Hinton said.
