Biden, Harris meet with CEOs to discuss artificial intelligence risks

By Himani Faujdar

On Thursday, Vice President Kamala Harris held a meeting with the leaders of Google, Microsoft, and two other companies that are working on developing artificial intelligence. This meeting comes as the Biden administration unveils initiatives aimed at ensuring that this rapidly advancing technology enhances people’s lives while also protecting their rights and safety.

During the meeting held in the Roosevelt Room of the White House, President Joe Biden made a brief appearance and expressed his hope that the group could provide insights on what measures are necessary to safeguard and promote society through the use of AI.

According to a video posted on President Biden’s Twitter account, he told the CEOs that what they are doing with AI carries both great potential and great risk. The popularity of the AI chatbot ChatGPT, which even President Biden has tried, has fueled a surge in commercial investment in AI tools that can create text, images, music, and code closely resembling work made by humans.

AI’s ability to closely imitate human behaviour has raised concerns among governments worldwide about job displacement, the deception of individuals, and the spread of false information.

The Biden administration has revealed plans to invest $140 million in the creation of seven new research institutes focused on advancing AI technology. Furthermore, the White House’s Office of Management and Budget is set to release guidelines within the next few months on the use of AI tools by federal agencies. Additionally, prominent AI developers have committed to participating in a public evaluation of their systems at the Las Vegas hacker convention DEF CON in August.

According to Adam Conner of the liberal-leaning Center for American Progress, the Biden administration is taking steps to regulate AI, but stronger action is needed because AI systems built by these companies are already being integrated into thousands of consumer applications. He added that the next few months will be crucial in determining whether the US leads on AI regulation or cedes that leadership to other parts of the world, as has happened in areas such as privacy and the regulation of large online platforms.

The meeting gave Harris and other administration officials an opportunity to discuss the dangers of current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella, and the heads of two influential start-ups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT. In a statement after the private meeting, Harris said she told the executives that they have a legal and ethical responsibility to ensure the safety and security of their products.

The emergence of new “generative AI” tools such as ChatGPT has raised ethical and societal concerns about automated systems trained on vast amounts of data. Some companies, including OpenAI, have not been transparent about the data their AI systems were trained on, making it difficult to determine why a chatbot gives biased or inaccurate responses or to address concerns about potential copyright infringement.

Margaret Mitchell, the chief ethics scientist at AI startup Hugging Face, suggests that companies may not have a strong incentive to closely track their training data, which could be problematic in terms of concerns around consent, privacy, and licensing. She also notes that it is not common in tech culture to rigorously track this data. Some experts have proposed that disclosure laws be implemented to require AI providers to open their systems to more third-party scrutiny. However, it may be difficult to provide greater transparency for AI systems that are built on top of previous models.

AI technology has advanced so fast that lawmakers in Brussels are struggling to update their proposals to cover general-purpose AI systems like those developed by OpenAI. According to a recent partial draft of the legislation obtained by The Associated Press, provisions added to the bill would require disclosure of copyrighted material used to train foundation AI models. A European Parliament committee is due to vote on the bill next week, but it may be years before the AI Act takes effect. Meanwhile, Italy temporarily banned ChatGPT for violating strict European privacy rules, and the UK’s competition watchdog announced on Thursday that it is launching an investigation into the AI market.

Heather Frase, a senior fellow at Georgetown University’s Center for Security and Emerging Technology, suggests that while presenting AI systems to the public for inspection at the DEF CON hacker conference could be a new approach to testing risks in the US, it may not be as thorough as a more extended audit. Several companies, including Google, Microsoft, OpenAI, Anthropic, Hugging Face, Nvidia, and Stability AI, have reportedly agreed to participate in this evaluation.