There is already a way to share technology freely and stop it being misused

By | February 9, 2024

[Image: Google DeepMind / Unsplash, Author provided]

There are many proposed ways to impose limits on artificial intelligence (AI) due to its potential to harm society as well as its benefits.

For example, the EU’s Artificial Intelligence Act imposes stricter requirements on systems depending on whether they fall into the category of general-purpose and generative AI, or pose limited risk, high risk or unacceptable risk.

This is a bold new approach to mitigating any ill effects. But what if we could also adapt some of the tools that already exist? Software licensing is a well-established model that could be adapted to meet the challenges posed by advanced AI systems.

Open Responsible AI Licenses (OpenRAILs) could be part of the answer. AI licensed under an OpenRAIL is similar to open-source software. A developer can publicly release their system under the license, meaning anyone is free to use, adapt and re-share it.

The difference with OpenRAIL is the addition of conditions for the responsible use of AI. These prohibit uses such as breaking the law, impersonating people without their consent, or discriminating against people.

Beyond these mandatory conditions, OpenRAILs can be adapted to include other conditions specific to the technology in question. For example, if an AI was created to categorize apples, the developer might specify that it should never be used to categorize oranges, as that would be irresponsible.

This model is useful because many AI technologies are general-purpose and can be put to many uses, which makes it very hard to predict the ways they could be misused.

Therefore, this model helps developers advance open innovation while reducing the risk of their ideas being used in irresponsible ways.

Open but responsible

In contrast, proprietary licenses are more restrictive on how software can be used and adapted. They are designed to protect the interests of content creators and investors, and have helped tech giants like Microsoft build vast empires by charging for access to their systems.

Because of its broad reach, one might think that AI requires a different, more nuanced approach, one that fosters the openness that drives progress. Many large companies currently operate proprietary, closed, AI systems. However, this may change, as there are already several examples of companies taking the open-source approach.

Meta’s generative AI system Llama 2 and the image generator Stable Diffusion are open source. French AI startup Mistral, founded in 2023 and currently valued at $2bn (£1.6bn), will soon openly release its latest model, said to have performance comparable to GPT-4 (the model behind ChatGPT).

However, given the potential risks associated with AI, openness needs to be balanced with a sense of responsibility to society. These risks include the potential for algorithms to discriminate against people, to displace jobs, and even to pose existential threats to humanity.

[Image: HuggingFace is the world’s largest AI developer hub. Jesse Joshua Benjamin, Author provided]

We must also consider the more mundane, everyday uses of artificial intelligence. The technology will increasingly become part of our social infrastructure: a central part of how we access information, form opinions and express ourselves culturally.

A technology of such universal importance carries its own risk, different from the robot apocalypse but still worth considering.

One way to do this is to compare what AI can do in the future with what free speech can do now. The free sharing of ideas is not only important for promoting democratic values, it is also the engine of culture. It facilitates innovation, encourages diversity, and helps us distinguish truth from lies.

The AI models being developed today will likely become the primary means of accessing information. They will shape what we say, what we see, what we hear and, by extension, how we think.

In other words, they will shape our culture much like freedom of expression. So there’s a good argument that the fruits of AI innovation should be free, shareable and open. And most things are anyway.

Boundaries are needed

The HuggingFace platform, the world’s largest AI developer hub, currently has more than 81,000 models published using “permissive open source” licenses. Just as the right to free speech provides enormous benefits to society, this open sharing of artificial intelligence is the engine of progress.

However, freedom of expression has necessary ethical and legal limits. Making false claims that harm others, or expressing hatred based on ethnicity, religion or disability, are widely accepted restrictions. OpenRAILs provide a tool for innovators to strike an equivalent balance in the AI innovation space.

For example, deep learning technology is applied in many important areas, but it also forms the basis of deepfake videos. The developers probably didn’t want their work to be used to spread misinformation or create non-consensual pornography.

OpenRail would allow them to share their work with restrictions that would, for example, prohibit anything that would violate the law, cause harm or cause discrimination.

Legally enforceable

Can OpenRAIL licenses help us avoid the ethical dilemmas AI will inevitably pose? Licensing can only go so far: licenses are only as good as the ability to enforce them.

Currently, enforcement will likely be similar to sanctions for music copying and software piracy and will include the issuance of cease and desist letters, with the possibility of court action. Although such measures do not stop piracy, they act as a deterrent.

Despite this limitation, licenses have many practical benefits: they are well understood by the tech community, easily scaled, and adopted with little effort. Developers have embraced them; to date, more than 35,000 models hosted on HuggingFace have adopted OpenRAILs.

Ironically, given the company name, OpenAI, the company behind ChatGPT, does not openly license its most powerful AI models. Instead, with its flagship language models, the company employs a closed approach that gives access to AI to anyone willing to pay, while preventing others from developing or adapting the underlying technology.

As with the free speech analogy, the freedom to openly share AI is a right we should uphold, but perhaps not absolutely. While not a panacea, licensing-based approaches like OpenRail look like a promising piece of the puzzle.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Joseph’s research is currently supported by Design Research Works (https://designresearch.works) under UK Research and Innovation (UKRI) grant reference MR/T019220/1. Both are members of the steering committee of the Responsible AI Licenses initiative (https://www.licenses.ai/).

Jesse’s research is currently supported by Design Research Works (https://designresearch.works) under UK Research and Innovation (UKRI) grant reference MR/T019220/1. Both are members of the steering committee of the Responsible AI Licenses initiative (https://www.licenses.ai/).
