From ChatGPT to the AI Safety Summit: A Year in AI

December 24, 2023

Artificial intelligence became one of the biggest stories in technology in 2023, driven by the rise of generative AI applications such as ChatGPT.

Since OpenAI released ChatGPT to the public in late 2022, awareness of the technology and its potential has skyrocketed, from being discussed in parliaments around the world to being used to write TV news segments.

Public interest in generative AI models has also prompted many of the world’s largest tech companies to introduce their own chatbots or talk more openly about how they plan to use AI in the future, while regulators have stepped up debate about how countries can and should approach the opportunities and potential risks of artificial intelligence.

In 12 months, conversations around AI have shifted from concerns about schoolchildren exploiting it to do their homework, to Prime Minister Rishi Sunak hosting the first AI Safety Summit, where countries and tech companies discussed how to prevent the technology from posing a threat to humanity, even an existential one.

In short, 2023 has become the year of artificial intelligence.

Like the technology itself, AI-related product launches have come thick and fast over the past 12 months, with Google, Microsoft and Amazon all following OpenAI in announcing generative AI products off the back of ChatGPT’s success.

Google introduced Bard, a chatbot it says has an edge over its rivals in the new AI chatbot space because it is powered by data from Google’s industry-leading search engine and the Google Assistant virtual assistant available on smartphones and smart speakers.

Similarly, Amazon used its big product launch of the year to show how it is using AI to make its virtual assistant Alexa respond in a more human way, understand context and react more seamlessly to follow-up questions.

And Microsoft has begun rolling out the new Copilot, which aims to combine generative AI with a virtual assistant in Windows, allowing users to ask for help with any task they’re doing, from writing reports to organizing open windows on their screens.

Elsewhere, Elon Musk announced the creation of xAI, a new start-up focused on work in artificial intelligence.

The initiative’s first product has already emerged in the form of Grok, a conversational AI available to paying subscribers of Musk-owned X, formerly known as Twitter.

Governments and regulators could not ignore such large-scale developments in the industry, and discussions on regulating the AI sector also intensified during the year.

In March the Government published its White Paper on Artificial Intelligence, which proposed using existing regulators across different sectors to drive AI governance, rather than handing responsibility to a single new regulator.

However, no Artificial Intelligence Bill has yet been put forward; the delay has been criticized by some experts, who warn it risks leaving the technology unchecked at a time when the use of AI tools is booming.

The government has said it does not want to rush into legislating while the world is still trying to grasp the potential of artificial intelligence, arguing that its approach is more agile and leaves room for innovation.

In contrast, the EU agreed its own rules on AI oversight earlier this month, although they seem unlikely to become law before 2025. The rules will give regulators the power to review AI models and require details of how the models are trained.

But Mr Sunak’s desire for the UK to be a key player in AI regulation was underlined when he hosted world leaders and industry figures at Bletchley Park for the world’s first AI Safety Summit in November.

Mr Sunak and Technology Secretary Michelle Donelan used the two-day summit to discuss threats from so-called “frontier AI”, the most cutting-edge forms of the technology, which could be used for malicious purposes in the wrong hands.

The summit saw all international participants, including the US and China, sign the Bletchley Declaration, which acknowledges the risks of AI and pledges to develop safe and responsible models.

The Prime Minister announced the launch of the UK’s AI Safety Institute, as well as a voluntary agreement with leading companies such as OpenAI and Google DeepMind to allow the institute to test new AI models before they are released.

While not a binding agreement, it laid the groundwork for AI safety to become an increasingly prominent part of the discussion going forward.

Elsewhere, the AI industry witnessed major boardroom drama at the end of the year, with ChatGPT maker OpenAI sensationally sacking its chief executive Sam Altman in late November.

However, the move sparked a revolt among staff; nearly all signed a letter pledging to leave the company and join a proposed new AI research team at Microsoft if Mr Altman was not reinstated.

Within days, Mr Altman was back at the head of OpenAI and the board had been restructured; the reasoning behind the saga remains unclear.

Since then, the UK Competition and Markets Authority (CMA) has sought industry input on Microsoft’s partnership with OpenAI, which has seen the tech giant invest billions of dollars in the AI firm and take an observer seat on its board.

The CMA said it was considering looking at the partnership partly because of the Altman saga.

It’s another sign that scrutiny of the AI industry will continue to intensify in the coming year.
