Ex-OpenAI chief scientist Ilya Sutskever launches new AI company: Key details revealed

Ilya Sutskever, a key figure in the creation of ChatGPT and co-founder of OpenAI, has embarked on a new endeavour. He has established Safe Superintelligence Inc., a company focused on the safe and responsible development of superintelligent AI systems.


The announcement came shortly after Sutskever stepped down as OpenAI's Chief Scientist, a role in which he was instrumental in creating advanced AI models such as ChatGPT.

Safe Superintelligence Inc. aims to lead in the safe development of AI systems that exceed human intelligence, commonly known as superintelligence. Sutskever, alongside co-founders Daniel Gross and Daniel Levy, has emphasised their commitment to AI safety and security. They have explicitly stated that their new enterprise will be insulated from typical commercial pressures and management distractions, allowing them to prioritise these critical issues.

Strategic Approach

Headquartered in Palo Alto, California, and Tel Aviv, Israel, Safe Superintelligence Inc. will utilise these tech hubs to attract top-tier technical talent. Sutskever, Gross, and Levy have chosen these locations for their deep connections there and the strategic advantage they offer in recruiting the best minds in AI research and development.

Shift Towards Safety

Sutskever's departure from OpenAI marks a significant shift in his career, following a turbulent period at the company. He was part of a controversial attempt to oust CEO Sam Altman, a move he later expressed regret over. The incident highlighted internal conflicts at OpenAI over whether to prioritise AI safety or business opportunities. His exit, along with the resignation of Jan Leike, who co-led the company's safety team, underscored growing concerns about the direction OpenAI was taking.

Independence from Commercial Pressures

Sutskever and his co-founders have made it clear that Safe Superintelligence Inc. will not be influenced by the need for immediate product cycles or profit motives. Their goal is to ensure that the development of superintelligent AI adheres strictly to safety and ethical guidelines, free from the constraints that often accompany traditional business models.