Small federal agency develops standards to make AI safe, secure and reliable

By | January 22, 2024

BOSTON (AP) — No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it is paramount that AI systems are safe, secure, reliable and socially responsible.

But unlike the atomic bomb, this paradigm shift has been driven almost entirely by the private technology sector, which has been resistant to regulation, to say the least. Billions of dollars are at stake, making the Biden administration’s task of setting standards for AI safety a major challenge.

The administration has turned to the National Institute of Standards and Technology, a small federal agency, to help define those parameters. NIST’s tools and measures underpin products and services from atomic clocks to election security technology to nanomaterials.

Elham Tabassi, NIST’s chief AI adviser, is leading the agency’s artificial intelligence efforts. She spearheaded the AI Risk Management Framework, which was released 12 months ago and formed the basis of Biden’s Oct. 30 executive order on AI. It cataloged risks such as bias against people of color and threats to privacy.

Born in Iran, Tabassi came to the United States in 1994 to pursue a master’s degree in electrical engineering and joined NIST soon after. She is the chief architect of the standard the FBI uses to measure fingerprint image quality.

This interview with Tabassi has been edited for length and clarity.

Q: Emerging AI technologies have capabilities that even their creators don’t understand. There isn’t even an agreed-upon vocabulary; the technology is that new. You’ve emphasized the importance of creating a glossary for artificial intelligence. Why is that important?

A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared vocabulary to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is especially common in interdisciplinary fields such as artificial intelligence.

Q: You’ve said that for your work to be successful, you need to seek input not only from computer scientists and engineers, but also from lawyers, psychologists, and philosophers.

A: AI systems are socio-technical in nature and are affected by environments and conditions of use. These need to be tested in real-world conditions to understand the risks and impacts. So we need cognitive scientists, social scientists, and yes, philosophers.

Q: This mission is a tall order for a small agency within the Commerce Department that the Washington Post called “underfunded and understaffed.” How many people are working on this at NIST?

A: First, I would like to say that at NIST, we have a great track record of engaging with the broader community. As we put together the AI risk framework, we heard from more than 240 different organizations and received nearly 660 public comments. In terms of the quality of our output and our impact, we are not small. We have more than a dozen people on our team and we continue to expand.

Q: Will NIST’s budget increase from the current $1.6 billion given its AI mission?

A: Congress writes the checks for us and we are grateful for its support.

Q: The executive order gives you until July to create a toolset to ensure the security and reliability of AI. I understand you described this as “an almost impossible deadline” at a conference last month.

A: Yes, but I quickly added that this is not the first time we have faced such a challenge; we have a great team, and we are determined and excited. As for the deadline, it’s not as if we’re starting from scratch. In June, we created a public working group focused on four different sets of guidelines, including the authentication of synthetic content.

Q: Members of the House Science and Technology Committee said in a letter last month that they had learned NIST plans to award grants through a new AI Safety Institute, suggesting a lack of transparency.

A: We are indeed exploring options for a competitive process to support collaborative research opportunities. Our scientific independence is really important to us. While we run a highly participatory process, we are the ultimate authors of whatever we produce. We never delegate that to anyone else.

Q: A consortium formed to assist the AI Safety Institute is bound to spark controversy due to industry involvement. What do consortium members have to agree to?

A: We posted the template for that agreement on our website at the end of December. Openness and transparency are indispensable for us. The template is there for everyone to see.

Q: The AI risk framework was voluntary, but the executive order imposes certain obligations on developers. These include submitting large-scale models to government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be responsible for determining which models get red-teamed?

A: Our job is to develop the measurement science and standards needed for this work. That will include some evaluations. It is something we have done for facial recognition algorithms. As for the tasking of red-teaming, NIST is not going to do any of that. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, impartial and objective.

Q: How AIs are trained and the guardrails placed on them can vary greatly. Sometimes features like cybersecurity become an afterthought. How can we ensure that risk is accurately assessed and identified, especially when we don’t know what publicly released models have been trained on?

A: In the AI risk management framework, we created a taxonomy of sorts for reliability and stress the importance of addressing it during design, development and deployment, including regular monitoring and evaluation throughout the life cycle of AI systems. Everyone has learned that we cannot afford to try to fix AI systems after they are already in use. It has to be done as early as possible.

And yes, a lot depends on the use case. Take facial recognition. It’s one thing if I’m using it to unlock my phone. When law enforcement uses it to solve a crime, for example, completely different security, privacy and accuracy requirements come into play. Trade-offs between convenience and security, bias and privacy, all depend on the context of use.
