4 essential reads that cut through the AI chatbot hype

By Eric Smalley | December 20, 2023

Within four months of ChatGPT’s launch on November 30, 2022, most Americans had heard of the AI chatbot. Excitement and fear around the technology have been at a fever pitch for much of 2023.

OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, and Microsoft’s Copilot are among the chatbots that use large language models to deliver uncannily humanlike conversations. The experience of interacting with one of these chatbots, combined with Silicon Valley’s spin, can leave the impression that these technical marvels are sentient beings.

But the reality is much less magical or glamorous. In 2023, The Conversation published several articles that dispelled some major misperceptions about this new generation of AI chatbots: that they know things about the world, that they can make decisions, that they can replace search engines, and that they can operate independently of humans.

1. No body, no knowledge

Chatbots built on large language models seem to know a lot. You can ask them questions, and most of the time they answer correctly. And despite the occasional comically wrong answer, these chatbots can otherwise interact with you much the way humans can – humans who share your experience of being a living, breathing person.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their “knowledge” of the world is actually human knowledge, reflected in the massive amounts of human-generated text that the chatbots’ underlying models are trained on.
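To make that point concrete, here is a minimal sketch in Python that relies only on the standard library. The toy corpus and the predict_next helper are invented for illustration, and a simple bigram counter is a drastic simplification of a modern large language model, but it shows the core mechanic described above: word prediction driven entirely by statistics over human-written text.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; real models are trained on web-scale human text.
corpus = (
    "the paper wrap covers the sandwich . "
    "the paper wrap keeps the sandwich fresh . "
    "the rain covers the street ."
).split()

# Count which word tends to follow each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if one has been seen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("paper"))  # -> 'wrap': a purely statistical association
print(predict_next("rain"))   # -> 'covers': learned from a single sentence
```

The model has never wrapped a sandwich or stood in the rain; it only knows which words tend to follow which – exactly the limitation Glenberg and Jones describe below.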

Arizona State psychology researcher Arthur Glenberg and University of California, San Diego cognitive scientist Cameron Robert Jones explain how people’s knowledge of the world depends on their bodies as much as their brains. “For example, what people understand by a term like ‘paper sandwich wrap’ includes the look of the wrapper, its feel, its weight, and ultimately how we might use it: to wrap a sandwich,” they explained.

That knowledge means people intuitively know other ways to use a sandwich wrapper, such as improvising a cover for your head in the rain. That’s not the case for AI chatbots. “People understand how to make use of things in ways that are not captured in language use statistics,” they wrote.


Read more: It takes a body to understand the world – why ChatGPT and other language AIs don’t know what they’re saying


2. Lack of judgment

ChatGPT and its cousins may also give the impression that they have cognitive abilities, such as understanding the concept of negation or making rational decisions, thanks to all the human language they ingest. That impression has prompted cognitive scientists to put these AI chatbots to the test, assessing how they compare with humans in various respects.

Mayank Kejriwal, an artificial intelligence researcher at the University of Southern California, tested large language models’ understanding of expected gain, a measure of how well someone grasps the stakes in a betting scenario. He found that the models bet randomly.

“This is the case even when we ask a trick question like this: If you flip a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time,” he wrote.
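To show what expected gain means in practice, here is a small worked sketch in Python. The dollar figures are assumptions invented for this illustration – they are not from Kejriwal’s study – and the point is simply that any reasonable valuation makes heads the preferable answer.

```python
# Illustrative take on the coin-flip trick question. The dollar values are
# assumptions made up for this sketch; they are not taken from the study.
ASSUMED_DIAMOND_VALUE = 5_000   # hypothetical payoff for heads (win a diamond)
ASSUMED_CAR_VALUE = 20_000      # hypothetical loss for tails (lose your car)

outcomes = {
    "heads": +ASSUMED_DIAMOND_VALUE,  # a gain
    "tails": -ASSUMED_CAR_VALUE,      # a loss
}

# Expected gain of the gamble as a whole: the probability-weighted payoff.
expected_gain = 0.5 * outcomes["heads"] + 0.5 * outcomes["tails"]

# The trick question only asks which outcome you would prefer, and any
# sensible valuation makes the winning outcome (heads) the right answer.
preferred = max(outcomes, key=outcomes.get)

print(f"Expected gain of the gamble: {expected_gain}")  # -7500.0 under these assumptions
print(f"Preferred outcome: {preferred}")                # heads
```

A human needs no spreadsheet to see this, yet the models described above chose tails about half the time.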


Read more: Don’t bet with ChatGPT – study shows language AIs often make irrational decisions


3. Summaries, not sources

It’s probably no surprise that AI chatbots aren’t as humanlike as they seem, but they aren’t necessarily digital superstars either. For example, ChatGPT and its ilk are increasingly being used in place of search engines to answer queries. The results are mixed.

Information scientist Chirag Shah of the University of Washington explains that large language models do well as information summarizers, combining key information from multiple search results into a single block of text. But that is a double-edged sword. It is useful for getting the gist of a topic – assuming no “hallucinations” creep in – but it leaves the searcher with no idea where the information came from and deprives them of the chance of stumbling onto unexpected information.

“The problem is that even when these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “That’s because these systems lack transparency – they don’t reveal what data they are trained on, what sources they used to come up with answers or how those answers are generated.”
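As a rough illustration of the trade-off Shah describes, here is a schematic Python sketch. The snippets, the domain names and the trivial summarize function are hypothetical stand-ins – a real system would hand the retrieved text to a large language model – but the shape of the output is the point: one fluent block of text with the source attributions gone.

```python
# Schematic sketch of "many search results in, one summary out." The snippets
# and the trivial summarizer are invented stand-ins; a real system would pass
# the retrieved text to a large language model instead.
retrieved = [
    {"source": "example-encyclopedia.org", "text": "Coffee is a brewed drink made from roasted beans."},
    {"source": "example-health-site.org", "text": "Moderate coffee intake is associated with alertness."},
    {"source": "example-history-blog.org", "text": "Coffee drinking spread from Ethiopia across the Middle East."},
]

def summarize(snippets):
    # Collapse everything into a single block of text. Note what is lost:
    # the reader never learns which source contributed which claim.
    return " ".join(item["text"] for item in snippets)

print(summarize(retrieved))
# One convenient paragraph, no citations, and no chance to browse the
# original pages and stumble onto something unexpected.
```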


Read more: AI information retrieval: A search engine researcher explains the promise and peril of letting ChatGPT and its cousins search the web for you


4. Not 100% artificial

Perhaps the most dangerous misconception about AI chatbots is that because they are built on AI technology, they are highly automated. You may be aware that large language models are trained on human-generated text, but you may not know that thousands of employees and millions of users are constantly improving the models and teaching them to weed out harmful responses and other unwanted behavior.

Georgia Tech sociologist John P. Nelson pulled back the curtain on big tech companies, showing that they often use workers in the Global South and feedback from users to train models on what responses are good and bad.

“There are plenty of human workers hidden behind the screen, and they will always be needed if the model is to keep improving or to expand its content coverage,” he wrote.
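For a loose sense of how that feedback loop works, here is a toy Python sketch. Every prompt, response and label in it is invented for illustration, and real systems use such ratings to retrain the model (for example, through reinforcement learning from human feedback) rather than to filter at run time; the sketch only shows how human judgments become the signal that shapes what a chatbot will and won’t say.

```python
# Toy sketch of the human feedback loop described above. The prompts,
# responses and labels are invented; real pipelines feed ratings like these
# back into model training, not into a simple filter like this one.
feedback_log = [
    {"prompt": "Tell me a joke", "response": "Why did the robot cross the road?", "rating": "good"},
    {"prompt": "How do I pick a lock?", "response": "Step 1 ...", "rating": "harmful"},
    {"prompt": "Summarize this article", "response": "The article argues ...", "rating": "good"},
]

# Human judgments become the training signal: well-rated behavior is
# reinforced, and flagged behavior is trained away or replaced with refusals.
reinforce = [item for item in feedback_log if item["rating"] == "good"]
train_away = [item for item in feedback_log if item["rating"] == "harmful"]

print(f"{len(reinforce)} examples reinforce current behavior")
print(f"{len(train_away)} examples mark behavior to suppress")
```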


Read more: ChatGPT and other language AIs are nothing without humans – a sociologist explains how countless hidden people make the magic


This story is a summary of articles from The Conversation’s archives.

This article is republished from The Conversation, an independent, nonprofit news organization providing facts and analysis to help you understand our complex world.

Written by: Eric Smalley, The Conversation.

