A hacker breached OpenAI’s internal messaging system last year, stealing details of the company’s AI technologies, the New York Times reported on Thursday. The attacker gained access to an online forum where OpenAI employees discussed upcoming technologies and features for its chatbot, but the systems housing the actual GPT code and user data were not affected.
Although the breach was disclosed to employees and board members in April 2023, OpenAI chose not to announce it publicly or notify the FBI, reasoning that no user or partner data had been compromised. The company did not consider the incident a national security threat, believing the attacker to be a private individual with no ties to a foreign government.
Before the breach became known, former OpenAI employee Leopold Aschenbrenner had raised concerns about the company’s security, warning that its systems could be accessible to foreign adversaries such as China. Aschenbrenner was later dismissed by OpenAI; company spokesperson Liz Bourgeois stated that his termination was unrelated to the memo.
This is not the first security incident OpenAI has faced. Since its launch in November 2022, ChatGPT has been targeted by cybercriminals multiple times, resulting in data leaks and exposed vulnerabilities. OpenAI has since tightened its security posture, adding safeguards against unauthorized access and misuse of its models and establishing a Safety and Security Committee to address future issues.