Last year, a hacker gained access to OpenAI's internal messaging systems and stole details about the company's A.I. technologies. The hacker lifted information from online forums where employees discussed OpenAI's latest technologies, but did not penetrate the systems housing the artificial intelligence itself.
OpenAI executives told employees about the incident but chose not to disclose it publicly because no customer or partner information was stolen. Still, the breach raised concerns about the security of OpenAI's technology and exposed internal divisions over the risks of artificial intelligence. After the breach, a program manager voiced concerns to OpenAI's board that the company's security was not strong enough.
OpenAI later dismissed the program manager, citing the leaking of other information outside the company, and disputed the concerns the former employee had raised. There were fears that the breach could be linked to foreign adversaries, but legal restrictions prevent companies like OpenAI from barring foreign talent on the basis of nationality.
OpenAI's head of security stressed the need for the best talent in AI technology and acknowledged the associated risks. OpenAI is not the only company creating powerful AI systems. Some companies, like Meta, are sharing their designs as open-source software, believing the risks are minimal.
Today's AI systems can help spread disinformation online and may eventually affect job opportunities. However, studies suggest that current AI technology does not pose a significant national security risk.