Elon Musk published the code of the artificial intelligence chatbot Grok. Here’s why it’s important

March 19, 2024

Some of the world’s biggest companies and richest people are fighting over a question that will help shape the future of artificial intelligence: Should companies disclose exactly how their products work?

Elon Musk, CEO of Tesla and SpaceX, has further fueled the debate in recent days by choosing to publish the computer code behind the artificial intelligence chatbot Grok.

The move contrasts with the approach of OpenAI, the company behind the popular AI text bot ChatGPT. OpenAI, which is backed by tech giant Microsoft, has chosen to release relatively few details about the latest algorithm behind its products.

Elon Musk did not respond to ABC News’ request for comment. Neither did OpenAI.

In a statement earlier this month, OpenAI denied allegations that the company keeps its AI models secret.

“We advance our mission by creating useful tools that can be widely used. Including open source contributions, we make our technology broadly available to empower people and improve their daily lives,” the company said. “We provide broad access to today’s most powerful AI, including a free version used by hundreds of millions of people every day.”

Here’s what you need to know about Grok, why Musk released its code, and what the move means for the future of artificial intelligence:

What is Grok, Musk’s artificial intelligence chatbot?

Last year, Musk launched an artificial intelligence company called xAI, which promises to develop generative AI products that can compete with established offerings like ChatGPT.

MORE: Dispute over threat of extinction posed by AI looms over booming industry

Musk has warned on several occasions against the risks of political bias in AI chatbots that help shape public opinion and the risk of spreading misinformation.

However, content moderation has itself become a polarizing issue, and some experts have previously told ABC News that Musk’s own stated views place his approach squarely within that political debate.

In November, xAI launched the first version of its first product, Grok, which responds to user requests with humorous comments modeled on the classic science fiction novel “The Hitchhiker’s Guide to the Galaxy.”

Grok is powered by Grok-1, a large language model that generates content based on statistical probabilities learned from scanning huge amounts of text.

To access Grok, users must first purchase a premium subscription to Musk’s social media platform X.

“We believe it is important to design AI tools that are useful to people of all backgrounds and political views. We also want to empower our users with our AI tools, subject to the law,” xAI said in a blog post in November. “Our goal with Grok is to explore and demonstrate this approach to the public.”

Why did Musk publicly release the code?

The decision to release the code behind Grok touches on two issues important to Musk: the threat posed by artificial intelligence and the ongoing battle with rival company OpenAI.

Musk has been warning for years that artificial intelligence poses the risk of serious social harm. In 2017, he tweeted: “If you’re not worried about the safety of AI, you should be.” And more recently, in March 2023, he signed an open letter warning of the “profound risks to society and humanity” posed by artificial intelligence.

In his remarks on Sunday, Musk framed the open source decision as a way to ensure transparency, protect against bias and minimize the danger posed by Grok.

“There is still work to be done, but this platform is already by far the most transparent and truth-seeking platform,” Musk said in a post on X.

PHOTO: OpenAI CEO Sam Altman attends a session of the World Economic Forum (WEF) meeting in Davos, Switzerland, on January 18, 2024.  (Fabrice Coffrini/AFP via Getty Images)

The move is also directly related to a public fight between Musk and OpenAI.

Musk, who co-founded OpenAI but left the organization in 2018, filed a lawsuit against OpenAI and its CEO, Sam Altman, earlier this month, claiming the company abandoned its mission to benefit humanity with a profit-driven rush.

Days after filing the lawsuit, Musk said in a post on X that he would drop the case if OpenAI changed its name to “ClosedAI.”

OpenAI said in a statement earlier this month that it plans to take action to dismiss all of Musk’s legal claims.

“When we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control,” the company said. “Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself. He said he’d be supportive of us finding our own path.”

What are the consequences of the fight between open source and closed source AI?

The debate over whether to release the computer code behind AI products is split between two competing visions of how to limit harm, remove bias, and optimize performance.

On the one hand, open source advocates say that publicly available code allows a broad community of AI engineers to detect and fix flaws in a system or adapt the system for a purpose other than its originally intended function.

In theory, open source code offers programmers the opportunity to improve the security of a particular product while ensuring accountability by making everything publicly visible.

MORE: Is TikTok different in China? Here’s what you need to know

“When someone creates software, there may be bugs that can be exploited in ways that can cause vulnerabilities,” Sauvik Das, a professor at Carnegie Mellon University who focuses on artificial intelligence and cybersecurity, told ABC News. “It doesn’t matter if you’re the brightest programmer in the world.”

“If you open up the source, you have a community of enforcers who are poking holes and slowly building up patches and defenses over time,” Das added.

In contrast, supporters of closed source argue that the best way to protect AI is to keep the computer code secret, thus keeping it out of the hands of bad actors who could reuse it for malicious purposes.

PHOTO: SpaceX, Twitter and Tesla CEO Elon Musk arrives at the US Capitol in Washington for the US Senate’s bipartisan Artificial Intelligence (AI) Insight Forum on September 13, 2023. (Stefani Reynolds/AFP via Getty Images, FILE)

Keeping AI closed source also benefits companies commercially, allowing them to profit from advanced products that are not otherwise widely available.

“Closed source systems are harder to redeploy for bad reasons because they already exist and there are only certain things you can do with them,” Kristian Hammond, a computer science professor at Northwestern University who studies artificial intelligence, told ABC News.

Last month, the White House announced it was seeking public comment on the benefits and dangers of open-source artificial intelligence systems. The move comes as part of sweeping AI rules issued by the Biden administration via executive order in October.

Carnegie Mellon’s Das said Musk’s open-source release may be motivated by both public and private interests, but the move sparks a much-needed conversation about this aspect of AI security.

“While the rationales are not entirely pure, raising public awareness around the idea of open versus closed and the benefits and risks of both is exactly what we need in society right now,” Das said.

This article first appeared on abcnews.go.com.
