Rapid advances in artificial intelligence (AI) are opening exciting possibilities for innovation, but they are also raising fears about a number of potentially serious risks.
Left unregulated, AI technology could threaten people’s privacy and security. Deepfakes — AI-generated videos and other content that seem real but are meant to deceive — could be used to manipulate public opinion and harm people’s reputations. And AI algorithms trained on biased data can amplify those biases, creating risks of discrimination in housing, employment, and banking.
With the U.S. Congress demonstrating little appetite for passing federal legislation to regulate AI in the face of these risks, states are taking a proactive approach, targeting specific aspects of the technology. In 2024, nearly 700 AI-related bills were introduced in state legislatures nationwide, addressing issues such as algorithmic bias, privacy, and protection against AI-generated misinformation. (Currently, Colorado is the only state to have enacted a comprehensive law regulating AI. The Connecticut State Senate last year approved an AI bill sponsored by Sen. James Maroney, but it stalled in the General Assembly.)

Yale’s Digital Ethics Center (DEC) is assisting state legislators as they attempt to craft these AI regulations in ways that promote technological innovation while addressing potential risks. During a recent two-day summit, the DEC convened a group of scholars, state lawmakers from across the country, and representatives from the tech industry and nonprofit sector to share insights about how best to regulate AI through state legislation.
“We wanted the conversation to include as many voices as possible,” said Luciano Floridi, the DEC director and professor in the practice of cognitive science in Yale’s Faculty of Arts and Sciences. “That’s important because successful legislation that strikes a balance between protecting the public and encouraging innovation requires input from a variety of stakeholders.
“One of my favorite moments from the summit was when somebody told me that they’d never seen all these people together in the same room, which suggested we’d accomplished something important,” he said.
The DEC organized the summit with Maroney, the Connecticut state senator who is leading efforts to enact AI legislation locally.
“The summit was a great learning experience for the legislators, but it was also a great networking event,” Maroney said. “It was an opportunity to connect with people and share ideas, which was extremely valuable.”
It was the first in a series of conferences the DEC is planning that will convene stakeholders on a variety of policy issues concerning digital technology. The next summit, planned for the fall, will focus on digital security.
The agenda for the first summit, designed to foster structured conversation, featured seven discussion panels (and no formal lectures apart from brief introductory and closing remarks from Floridi and Maroney). The panels covered topics including deepfakes, AI’s influence on elections, the open-source developers who make AI software freely available, legislative influences from Europe and Washington, D.C., and how to coordinate legislation across states.
The absence of federal AI regulations creates the potential for a patchwork of state laws that could contradict one another and stifle innovation, said Floridi, who had an advisory role in crafting the European Union’s AI Act, the world’s first comprehensive AI law.
“Having 50 different state policies is not a sustainable way to do business,” he said. “States must craft legislation that is compatible with what other states are doing. Otherwise, companies are faced with trying to follow rule A in state one, rule B in state two, and rule C in state three.”
The lack of federal rules also complicates regulating certain aspects of the technology at the state level, said Emmie Hine, a research associate at the DEC and one of the summit’s organizers.
“Deepfakes are difficult to legislate state by state,” Hine said. “You can imagine a case where a malicious deepfake goes viral in a state that criminalizes them, but the perpetrator resides in another state where deepfakes are legal. The victim has no legal recourse. Even if they reside in the same state, the victim has to sue the perpetrator — or perpetrators — individually.”
The DEC will publish a scholarly article on lessons learned during the summit.
Tensions emerged at times during the conversations, Hine said, including over potential conflicts between regulation and innovation, and between open-source software and security concerns. But, she added, the discussions showed there are avenues for working around those conflicts.
“The legislators just want what’s best for their state,” she said. “They want their states to host thriving tech industries and they want to protect their constituents from potential risks. At the same time, the tech industry wants its products to be safe, which supports adoption and beneficial use. I think there was real value in just getting folks together in a room and speaking person to person instead of press release to press release.”
Regulating AI is not a zero-sum game, Floridi said.
“Nobody has to lose anything, and nobody has to win everything,” he said. “The idea is to strike a balance. Good legislation supports good innovation.”
That message resonated with Maroney, the Connecticut lawmaker.
“That approach to negotiating will be helpful as I continue working on my legislation,” he said.