What AI Policymakers Need to Be Thinking About in California and Beyond



By Levi Sumagaysay

Left to right, CalMatters reporter Khari Johnson moderates a panel about AI accountability featuring Ashkan Soltani, executive director of the California Privacy Protection Agency; Gerard de Graaf, senior envoy for digital to the U.S. and head of the EU office in San Francisco; Samantha Gordon, chief program officer of TechEquity; Secretary of Government Operations Amy Tong; and State Sen. Steve Padilla at CalMatters’ Ideas Festival at the Sheraton Grand Hotel in Sacramento on June 5, 2024. Photo by Fred Greaves for CalMatters

A panel of lawmakers and experts said discussions about regulating AI should center people first and foremost.

As California considers policies and regulations around artificial intelligence, the conversation should center on its effects on people, experts and policymakers agree.

The state has gotten started: It released guidelines for state use of AI earlier this year. Amy Tong, secretary of the state’s Government Operations Agency, was tasked with assessing the effect of generative AI on vulnerable communities and on workforce development for existing and future workers, and with recommending state procurement guidelines.

“(Talking about AI) shouldn’t be tech-driven,” Tong, who’s the former director of the state’s Technology Department, said on a panel at the CalMatters Ideas Festival in Sacramento on Wednesday, which was moderated by CalMatters reporter Khari Johnson. “People should be at the center of it. What is the impact on individuals?”

Examples of harms from AI already exist, such as Californians being wrongfully denied unemployment insurance during the pandemic.

Another panel speaker, Samantha Gordon, is chief program officer at TechEquity Collaborative. The group urges the tech industry to address inequality. Gordon said it’s important not only to focus on AI’s effects on people, but also to make sure they are part of the conversation about it and the policies seeking to address it.

“The tech industry has been so convincing that they’re the only ones who can talk about it,” Gordon said. “That’s just not true. You don’t need to be an expert to understand that my data shouldn’t be handed to anyone who wants it.”

Sen. Steve Padilla — who has proposed a couple of bills that address AI, including one on state procurement, Senate Bill 892 — agreed that the conversation should be “socially directed,” but said it also needs to be “multidisciplinary.” The San Diego Democrat said the state also needs to figure out how to attract technologists who can be involved in making policy: “We need the expertise.”

Ashkan Soltani, executive director of the California Privacy Protection Agency, mentioned California’s renowned academic institutions and said “we have (tech expertise) in abundance in this state. The question is how to access it.”

Whatever AI policies California puts in place, the European Union — which has been proactive in regulating and trying to rein in the tech industry — is paying close attention.

Gerard de Graaf, senior envoy for digital to the U.S. and head of the European Union office in San Francisco, said he has been spending a lot of time in Sacramento lately because he knows state officials and lawmakers are considering policies and legislation around the effects of generative AI.

He said on the panel that considering the “California effect (of leading policy) in the U.S., and the Brussels effect globally, if (the two) could meet, we could set the standard for the world.”

This article was published on CalMatters on June 6, 2024, and is reprinted with permission. CalMatters and Broadband Breakfast collaborated on the California Broadband Summit at the CalMatters Ideas Festival.
