The release of chatbot ChatGPT last year has fueled debate about AI and the government’s role in regulating the new technology.
WASHINGTON — Vice President Kamala Harris will meet on Thursday with the CEOs of four major companies developing artificial intelligence as the Biden administration rolls out a series of initiatives meant to ensure that rapidly developing technologies improve lives without jeopardizing people’s rights and safety.
The Democratic administration plans to announce a $140 million investment to create seven new artificial intelligence research institutes.
In addition, the White House Office of Management and Budget is expected to release guidance in the next few months on how federal agencies can use artificial intelligence tools. Leading AI developers have also independently committed to participating in a public evaluation of their systems in August at the DEF CON hacker convention in Las Vegas.
On Thursday, Harris and administration officials plan to discuss the risks they see in current AI development with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI. The message from government leaders to the companies is that they have a role to play in reducing those risks and that they can work with the government.
President Joe Biden last month noted that artificial intelligence could help fight disease and climate change, but could also harm national security and destabilize the economy.
The release of the chatbot ChatGPT last year has fueled debate about AI and the government’s role in regulating the technology. Because artificial intelligence can generate humanlike writing and convincing fake images, ethical and societal concerns have arisen.
OpenAI, which developed ChatGPT, has been secretive about the data on which its AI systems were trained. This makes it difficult for people outside the company to understand why ChatGPT provides biased or false responses to inquiries, or to address concerns about the theft of copyrighted works.
Companies worried about liability for something in their training data may also have little incentive to track it rigorously, said Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face.
“I think it might not be possible for OpenAI to actually drill down on all of its training data to a level of detail that would be really useful in terms of some of the consent, privacy and licensing issues,” Mitchell said in an interview Tuesday. “From what I know of tech culture, it just doesn’t get done.”
In theory, at least, some sort of disclosure law could force AI vendors to open their systems to closer third-party scrutiny. But because AI systems are built on previous models, it will be difficult for companies to provide greater transparency after the fact.
“I think it’s up to governments to decide whether that means you have to throw away all the work you’ve done or not,” Mitchell said. “Certainly, I imagine that, at least in the US, the decisions will lean toward corporations and support the fact that it’s already been done. It would have such a huge impact if all these companies had to essentially scrap all that work and start over.”