OpenAI Establishes Safety Committee Amidst New AI Model Training





OpenAI has announced the formation of a Safety and Security Committee as it embarks on training its next artificial intelligence model. This move aims to address growing safety concerns surrounding powerful AI technologies.

Key Takeaways

  • OpenAI forms a Safety and Security Committee led by CEO Sam Altman and board members.
  • The committee will make safety and security recommendations for OpenAI’s projects.
  • The first task is to evaluate and develop existing safety practices within 90 days.
  • OpenAI is training a new AI model to advance its capabilities towards AGI.

Formation of the Safety and Security Committee

OpenAI, backed by Microsoft, has established a Safety and Security Committee to oversee the safety measures of its AI projects. The committee will be led by CEO Sam Altman and board members Bret Taylor, Adam D’Angelo, and Nicole Seligman. This initiative comes as OpenAI begins training its next AI model, which aims to bring the company closer to achieving Artificial General Intelligence (AGI).

Responsibilities and Initial Tasks

The newly formed committee is responsible for making safety and security recommendations to the board. Its first task is to evaluate and further develop OpenAI’s existing safety practices over the next 90 days. After this period, the committee will share its recommendations with the board, and OpenAI will publicly share an update on the measures it adopts.

Leadership and Expert Consultation

In addition to the board members, the committee includes newly appointed Chief Scientist Jakub Pachocki and Matt Knight, head of security. OpenAI will also consult with external experts such as Rob Joyce, a former U.S. National Security Agency cybersecurity director, and John Carlin, a former Department of Justice official.

Transition to a Commercial Entity

According to D.A. Davidson managing director Gil Luria, the formation of the safety committee signals OpenAI’s shift from a non-profit-like organization toward a more clearly defined commercial entity. This change is expected to streamline product development while maintaining accountability.

Recent Changes and Future Plans

Earlier this month, former Chief Scientist Ilya Sutskever and Jan Leike, leaders of OpenAI’s Superalignment team, left the company. The Superalignment team, which was responsible for ensuring AI alignment with intended objectives, was disbanded in May, with some members reassigned to other groups.

OpenAI has not disclosed specific details about the new AI model it is training, but it has hinted that the model will significantly advance its systems’ capabilities. In May, OpenAI announced a new AI model capable of realistic voice conversations and of interacting across text and images.

As OpenAI continues to push the boundaries of AI technology, the establishment of the Safety and Security Committee marks a crucial step in ensuring the responsible development and deployment of its innovations.

