Ilya Sutskever’s New Venture, Safe Superintelligence, Secures $1 Billion for Safe AI Development

Safe Superintelligence (SSI), a new AI company co-founded by Ilya Sutskever, the former chief scientist at OpenAI, has raised $1 billion in funding. The investment will support the development of advanced artificial intelligence systems designed to safely exceed human capabilities.

Currently a compact team of 10, SSI plans to use the funding to expand its computing infrastructure and attract leading AI researchers and engineers. The company will operate from offices in Palo Alto, California, and Tel Aviv, Israel. Although SSI’s exact valuation remains undisclosed, estimates put it at around $5 billion. The investment reflects continued confidence in exceptional AI talent despite a general decline in funding for foundational AI research, a decline driven in part by the recruitment of AI startup founders by major tech companies.

Prominent Investors Back SSI’s Vision

The funding round attracted contributions from top venture capital firms, including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG, an investment group led by Nat Friedman and SSI’s CEO Daniel Gross, also participated.

Daniel Gross emphasized the importance of having investors who understand and support SSI’s mission: “It’s crucial for us to have investors who respect and back our goal of advancing to safe superintelligence. We plan to dedicate a few years to research and development before introducing our product to the market.”

Addressing AI Safety Concerns

AI safety, a central focus for SSI, aims to ensure that AI systems do not cause harm or act against human interests. The field has grown in importance amid concerns that rogue AI could pose existential threats.

Sutskever, an influential figure in AI, co-founded SSI in June with Gross, the former head of AI initiatives at Apple, and Daniel Levy, a former researcher at OpenAI. The team is committed to assembling a small, highly skilled group of researchers and engineers who align with their values and culture.

A New Direction for Sutskever

Sutskever, known for his role in developing OpenAI’s powerful AI models, shared his motivation for starting SSI: “I saw a new challenge that differed from my previous work.”

His departure from OpenAI came after a tumultuous period involving the attempted removal of CEO Sam Altman, a decision Sutskever initially supported but later reversed. After he left, OpenAI disbanded its “Superalignment” team, which had focused on aligning AI with human values.

Sutskever, an early advocate of the “scaling hypothesis” — the idea that vast computing power drives improvements in AI models — revealed that SSI will explore a different approach to scaling. He noted, “The scaling hypothesis is often discussed, but people rarely question what exactly we are scaling. Instead of merely working longer hours down the same path, we aim to explore new approaches that allow us to achieve something truly special.”