Apple has officially committed to developing safe, secure, and trustworthy AI by signing the White House’s voluntary agreement, as announced in a recent press release. The tech giant will introduce Apple Intelligence, its generative AI offering, to its core products, reaching an audience of 2 billion users.
Apple joins 15 other tech companies, including Amazon, Google, Microsoft, and Meta, that have committed to the White House's guidelines for developing generative AI since July 2023. The company made its AI ambitions for the iOS ecosystem clear at WWDC, where it announced a partnership that brings ChatGPT to the iPhone. Given Apple's history of regulatory scrutiny, its early compliance with the White House's AI standards signals a proactive approach to potential future regulation in the AI space.
Although Apple’s pledges are voluntary and carry no enforcement mechanism, they represent an important initial step towards ensuring the safety, security, and trustworthiness of AI. Following President Biden’s AI executive order in October, various federal and state legislative initiatives are underway to strengthen oversight of AI models.
As part of the commitment, AI companies agree to rigorously assess their AI models through red-teaming, share findings publicly, keep unreleased model weights confidential, and label AI-generated content so users can identify it. Furthermore, the Department of Commerce is set to release a report on the implications of open-source foundation models, a critical issue in the evolving AI regulatory landscape.
The White House’s emphasis on strengthening AI governance extends to federal agencies’ progress in fulfilling the October executive order. Noteworthy milestones include more than 200 AI-related hires, grants of computational resources to 80 research teams, and the release of several frameworks for AI development.