Artificial intelligence is permeating daily life with almost no oversight. States are struggling to catch up

March 5, 2024

DENVER (AP) — While artificial intelligence has made headlines with ChatGPT, behind the scenes the technology has quietly permeated daily life: it screens job resumes and apartment rental applications, and in some cases even helps determine medical care.

Yet a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, while government oversight remains minimal.

Lawmakers in at least seven states are taking big legislative swings to regulate bias in artificial intelligence, filling the gap left by Congress’ inaction. The proposals are among the first steps in what is likely to be a years-long debate over how to balance the benefits of this little-understood new technology with its widely documented risks.

“Artificial intelligence is actually affecting every aspect of your life, whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights.

“Now, you wouldn’t care if they all worked fine. But they don’t.”

Success or failure will depend on lawmakers working through complex issues while negotiating with an industry worth hundreds of billions of dollars and growing at breakneck speed.

Of the nearly 200 AI-related bills introduced in statehouses last year, only about a dozen became law, according to BSA The Software Alliance, which advocates on behalf of software companies.

Those bills, along with the more than 400 AI-related bills being debated this year, were largely aimed at regulating smaller slices of AI. Roughly 200 of this year’s bills target deepfakes, including proposals to bar pornographic deepfakes like those of Taylor Swift that flooded social media. Others seek to rein in chatbots like ChatGPT to ensure they don’t, for example, blurt out instructions for making a bomb.

These are separate from seven state bills being debated from California to Connecticut that would apply across industries to regulate AI discrimination, one of tech’s most perverse and complex problems.

Those who study AI’s tendency to discriminate say states are already behind in establishing guardrails. The use of artificial intelligence to make consequential decisions, referred to in the bills as “automated decision tools,” is pervasive but largely hidden.

An estimated 83% of employers use algorithms to help with hiring; for Fortune 500 companies, the figure is 99%, according to the Equal Employment Opportunity Commission.

Yet the majority of Americans don’t know these tools are being used at all, much less whether the systems are biased, polling from Pew Research shows.

An AI can learn bias through the data it is trained on, often historical data that can carry a Trojan horse of past discrimination.

Amazon scrapped its hiring algorithm project nearly a decade ago after finding it favored male applicants. The AI was trained to evaluate new resumes by learning from past resumes, which came largely from men. Even though the algorithm never knew the applicants’ gender, it still downgraded resumes that included the word “women’s” or listed women’s colleges, in part because they were underrepresented in the historical data it learned from.
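To make the mechanism concrete, here is a minimal toy sketch in Python (using NumPy and scikit-learn). The data, feature names and numbers are all invented for illustration and do not describe Amazon’s system or any real screener:

```python
# A toy illustration, not any real company's system: a model trained on
# biased historical hiring decisions reproduces the bias through a proxy
# feature, even though it never sees the protected attribute itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (e.g., gender), withheld from the model.
group = rng.integers(0, 2, size=n)

# A proxy feature correlated with the group, standing in for things like
# the word "women's" or a women's college appearing on a resume.
proxy = (rng.random(n) < np.where(group == 1, 0.8, 0.1)).astype(float)

# A genuinely job-relevant skill score, identical across groups.
skill = rng.normal(size=n)

# Historical labels: past hiring decisions favored group 0.
label = (skill + np.where(group == 0, 1.0, -1.0)
         + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train only on the features the model is "allowed" to see.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)

# The model still treats the two groups differently, via the proxy.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted-hire rate = {pred[group == g].mean():.2f}")
```

In this toy setup the predicted-hire rates diverge sharply between the two groups even though gender is never an input, because the proxy feature carries the historical bias into the model.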

“If you allow AI to learn from decisions that executives have historically made, and those decisions have historically favored some people and disfavored others, then that’s what the technology will learn,” said Christine Webber, a class-action attorney and co-lead counsel in a lawsuit alleging that an artificial intelligence system that scores rental applicants discriminates against those who are Black or Hispanic.

Court documents describe how one of the plaintiffs in the case, Mary Louis, a Black woman, applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service we use to screen all prospective tenants has denied your tenancy.”

Court records say that when Louis submitted two landlord references showing she had paid rent early or on time for 16 years, she received another reply: “Unfortunately, we do not accept appeals and cannot override the outcome of the Tenant Screening.”

This lack of transparency and accountability is, in part, what the bills are targeting, largely following California’s failed proposal last year, the first comprehensive attempt to regulate AI bias in the private sector.

Under the bills, companies using these automated decision tools would be required to conduct “impact assessments,” including descriptions of how AI figures into a decision, the data collected and an analysis of discrimination risks, along with an explanation of the company’s safeguards. Depending on the bill, those assessments would be submitted to the state or could be requested by regulators.
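For illustration only, such an impact assessment could be represented as a structured record like the sketch below. The fields are a hypothetical reading of the bills’ requirements, and the product name is invented; no state has adopted this format:

```python
# A hypothetical sketch of the kinds of fields an "impact assessment"
# under these bills might contain. The schema is illustrative only.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    tool_name: str                # the automated decision tool
    decision_role: str            # how AI figures into the decision
    data_collected: list[str]     # categories of data used
    discrimination_risks: str     # analysis of bias risks
    safeguards: list[str] = field(default_factory=list)  # company measures

# An entirely hypothetical example record.
assessment = ImpactAssessment(
    tool_name="TenantScore",      # invented product name
    decision_role="Scores rental applicants; landlord makes the final call",
    data_collected=["credit history", "eviction records", "income"],
    discrimination_risks="Credit data may correlate with race",
    safeguards=["annual selection-rate review", "human appeal process"],
)
print(assessment.tool_name, "-", assessment.decision_role)
```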

Some bills would also require companies to tell customers that AI will be used to make decisions and allow them to opt out with certain caveats.

Craig Albright, senior vice president of U.S. government relations for the industry lobbying group BSA, said his members were generally in favor of some of the proposed steps, such as impact assessments.

“Technology moves faster than the law, but there are actually benefits to the law catching up, because then (companies) understand what their responsibilities are and consumers can trust the technology more,” Albright said.

The legislation has gotten off to a rocky start, however. A bill in Washington state has already floundered in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.

California Assemblywoman Rebecca Bauer-Kahan has reintroduced her failed bill from last year, this time with the backing of some tech companies, such as Workday and Microsoft, after dropping the requirement that companies routinely submit their impact assessments. Bills have been introduced or are expected in Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.

While the bills are a step in the right direction, the impact assessments and their ability to catch bias remain vague, said Brown University’s Venkatasubramanian. Without greater access to the assessments, which most of the bills limit, it’s also hard to know whether a person has been discriminated against by an AI.

A more intensive but more accurate way to detect discrimination would be to require bias audits (tests to determine whether an AI is discriminating) and to make the results public. That is where the industry pushes back, arguing such audits would expose trade secrets.
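For a sense of what such an audit can involve, here is a minimal sketch of one long-standing check, the “four-fifths rule” comparison of selection rates drawn from U.S. federal employment guidelines. The audit log and numbers are hypothetical, and a low ratio is a red flag warranting scrutiny rather than proof of discrimination:

```python
# A minimal bias-audit sketch: compare a tool's selection rates across
# groups and compute the disparate-impact ratio. Under the "four-fifths
# rule," a ratio below 0.8 is treated as a red flag. Data is hypothetical.
from collections import Counter

def selection_rates(decisions):
    """decisions: (group label, selected?) pairs from an audit log."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions from an automated screening tool.
audit_log = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)

rates = selection_rates(audit_log)
print(rates)                                           # {'A': 0.6, 'B': 0.35}
print(f"ratio = {disparate_impact_ratio(rates):.2f}")  # 0.58, below 0.8
```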

Requirements to routinely test an AI system are missing from most legislative proposals, nearly all of which still have a long road ahead. Still, it is the start of lawmakers and voters grappling with what is becoming, and will remain, an ever-present technology.

“It covers everything in your life. Just for that reason you should care,” Venkatasubramanian said.

___

Associated Press reporter Trân Nguyễn in Sacramento, Calif., contributed.
