According to CEO Sam Altman, OpenAI is collaborating with the U.S. AI Safety Institute, a government organization that evaluates and addresses risks in AI platforms, on an agreement to give the institute early access to OpenAI's next major generative AI model for safety testing.
The announcement, which Altman made in a post on X late Thursday, was light on details, but it signals a step toward prioritizing AI safety. It follows a similar agreement OpenAI struck with the U.K.'s AI safety body earlier this year. Both partnerships appear intended to counter the perception that OpenAI has deprioritized AI safety in its push to advance generative AI technologies.
In response to that criticism, OpenAI has taken steps to address concerns about its commitment to AI safety, including eliminating restrictive non-disparagement clauses, forming a safety commission, and pledging computing resources to safety research. These efforts have not appeased all critics, but the company says it remains committed to rigorous safety protocols in its AI development processes.
OpenAI’s agreement with the U.S. AI Safety Institute comes as the company endorses the Future of AI Innovation Act, a Senate bill that would formally authorize the Safety Institute to set standards and guidelines for AI models. Taken together, these moves have raised concerns about potential regulatory capture, or at least undue influence over AI policy at the federal level.
It is worth noting that Altman sits on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which advises on the safe development and deployment of AI in critical infrastructure. OpenAI has also significantly increased its lobbying expenditures in Washington, signaling a proactive approach to engaging with policymakers on AI-related issues.
The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology (NIST), works with a consortium of companies that includes Anthropic, Google, Microsoft, Meta, Apple, Amazon, and Nvidia. The group is tasked with carrying out actions outlined in President Biden’s AI executive order, such as developing guidelines for AI red-teaming, risk management, safety protocols, and watermarking synthetic content.