The biggest risks from technology by 2040

By | January 23, 2024


Technology, and who can access computer systems, is changing surprisingly fast. Exciting advances are being made in artificial intelligence, and in the mesh of small, interconnected devices and wireless connectivity that we call the “Internet of Things” (IoT).

Unfortunately, these developments bring potential dangers as well as benefits. To achieve a secure future, we need to predict what may happen in IT and intervene early. So what do experts think will happen and what can we do to prevent big problems?

To answer this question, our research team from universities in Lancaster and Manchester turned to the science of looking into the future, known as “forecasting”. No one can predict the future, but we can make forecasts: descriptions of what may happen based on current trends.

Indeed, long-term forecasts of technology trends can prove extremely accurate. And one good way to arrive at such a forecast is to combine the views of many different experts and look for where they agree.

We consulted 12 expert “futurists” for a new research paper. These are people whose roles involve long-term forecasting of the effects of changes in computer technology, in this case looking ahead to 2040.

Using a technique called a Delphi study, we combined the futurists’ predictions into a set of risks, along with their recommendations for addressing those risks.
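For readers unfamiliar with the method: in a Delphi study, experts answer structured questions over several rounds, see an anonymised summary of the group’s answers, and revise their views; items where opinions converge are treated as consensus. The short Python sketch below is only our illustration of that aggregation step; the ratings, the 1-to-5 scale and the consensus threshold are invented for the example, not taken from the study.

    from statistics import median, quantiles

    # Hypothetical ratings (1 = very unlikely, 5 = very likely) from 12 experts
    # for a single candidate risk in one Delphi round.
    ratings = [4, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5, 4]

    med = median(ratings)
    q1, _, q3 = quantiles(ratings, n=4)   # quartiles of the expert ratings
    iqr = q3 - q1                         # the spread shows how much experts disagree

    # One common (but not universal) convention: a narrow spread means consensus.
    consensus = iqr <= 1.0
    print(f"median rating {med}, spread (IQR) {iqr:.2f}, consensus: {consensus}")

The idea is simply that agreement is measured, not assumed: where the panel’s answers cluster tightly, the item is kept as a shared prediction; where they diverge, it goes back to the experts for another round.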

Software concerns

The experts predicted that rapid progress in artificial intelligence (AI) and connected systems will lead to a world far more computer-driven than today’s. Surprisingly, though, they expected two much-hyped innovations to have little impact: blockchain, a way of recording information that makes it difficult or impossible to manipulate the underlying system, they argued, is largely irrelevant to today’s problems; and quantum computing is still at an early stage and is likely to have little effect over the next 15 years.

Futurists have highlighted three major risks associated with advances in computer software:

Artificial intelligence competition creates problems

Our experts expected that many countries will treat AI as an area in which they want to gain a competitive edge and technological superiority, and that this will encourage software developers to take risks in how they use AI. Combined with AI’s complexity and its potential to surpass human abilities, that could lead to disasters.

Imagine, for example, that shortcuts in testing leave a bug in the control code of cars built after 2025, one that goes unnoticed amid all the complex programming of the AI. It could even be tied to a specific date, causing large numbers of cars to start behaving erratically at the same time and killing many people worldwide.

Generative AI

Generative AI could make it impossible to know what is true. For years, photos and videos have been very difficult to fake, so we have come to expect them to be genuine. Generative AI has already changed that radically. We expect its ability to produce convincing fake media to keep improving, so it will become extremely difficult to tell whether an image or video is real.

Suppose someone in a position of trust (a respected leader or celebrity) uses social media to show genuine content, but occasionally mixes in convincing fakes. For those following them, there is no way to tell the difference; it will be impossible to know the truth.

Invisible cyber attacks

Finally, the sheer complexity of the systems that will be built (networks of systems owned by different organizations, all interconnected) has an unexpected consequence: it will become difficult, if not impossible, to get to the root of what causes things to go wrong.

Imagine that a cybercriminal hacks an app used to control devices such as ovens or refrigerators, and switches all of the devices on at once. That creates a spike in electricity demand on the grid, resulting in major power outages.

Power company experts will find it hard even to identify which devices caused the spike, let alone to spot that they were all controlled by the same app. Cyber sabotage will become invisible, and impossible to distinguish from ordinary problems.


Software jujitsu

The point of making such forecasts is not to raise alarm, but to allow us to start tackling the problems. Perhaps the simplest suggestion the experts offered was a kind of software jujitsu: using software to defend and protect itself. We can make computer programs perform their own security checks by building in extra code that validates the programs’ output (effectively, self-checking code).
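To make the idea concrete, here is a minimal sketch in Python; it is our illustration, as the experts did not prescribe any particular implementation. A function standing in for a complex, hard-to-inspect piece of software produces an estimate, and a separate, much simpler checker validates that output before anything acts on it. The function names, the 1.5-second reaction time and the bounds are invented for the example.

    # A minimal sketch of self-checking code, not a real vehicle system.
    def estimate_stopping_distance(speed_kmh: float) -> float:
        """Stand-in for a complex, hard-to-inspect model (values are illustrative)."""
        reaction = speed_kmh / 3.6 * 1.5      # distance covered during a 1.5 s reaction
        braking = (speed_kmh / 10) ** 2 / 2   # rule-of-thumb braking distance
        return reaction + braking

    def self_check(speed_kmh: float, distance_m: float) -> bool:
        """Independent sanity check: reject outputs outside crude physical bounds."""
        return 0.0 < distance_m < speed_kmh * 2.0   # deliberately simple and conservative

    speed = 100.0
    distance = estimate_stopping_distance(speed)
    if not self_check(speed, distance):
        raise RuntimeError("Self-check failed: refusing to act on a suspect output")
    print(f"Estimated stopping distance at {speed:.0f} km/h: {distance:.1f} m")

The checking code does not need to understand how the answer was produced; it only has to confirm that the answer is plausible, which is what lets this approach scale to software too complex to inspect directly.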

Similarly, we can insist that the methods already used to ensure software operates safely and securely are also applied to these new technologies; their novelty should not become an excuse to overlook good security practice.

Strategic solutions

However, experts agree that technical answers alone will not be enough. Instead, solutions will be found in the interactions between people and technology.

We will need new forms of interdisciplinary education to develop people’s skills for dealing with these technological challenges. Governments also need to establish security guidelines for their own AI procurement, and to legislate for AI security across the sector, encouraging responsible development and deployment practices.

These forecasts give us a set of tools for tackling the possible problems of the future. Let us embrace them, so we can realize the exciting promise of our technological future.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


This research was funded by the UK North West Security and Trust Partnership, funded through GCHQ. Funding regulations required this article to be reviewed to ensure that its contents do not breach the UK Official Secrets Act or disclose sensitive, confidential or personal information.
