"YOLO": OpenAI CEO breaks his own AI safety rule
OpenAI CEO Sam Altman admitted he broke his own AI safety rule just two hours after implementing it.
Speaking at a developer Q&A session, he revealed that he had given an AI agent full access to his computer for convenience's sake, adding that he believes many users are doing the same.
According to Altman, the main danger lies in how the power and convenience of AI systems push people to hand over more control without having proper safety infrastructure in place. "The probability of serious failures is low, but the consequences could be catastrophic. We're just sliding into this state with the attitude of 'well, you only live once, and I hope everything will be fine,'" he said.
Altman said he initially hesitated about giving the agent full access, but he quickly changed his mind because the model "behaved reasonably." He warned society could "sleepwalk" through the moment when trust in complex AI models becomes widespread while protections and control systems are still lacking.
As AI capabilities grow, Altman noted, security vulnerabilities or goal alignment problems might remain undetected for weeks or months. Meanwhile, major safety infrastructure for such systems does not yet exist. He added that creating such solutions could become a promising niche for startups.
Additionally, Altman acknowledged that GPT-5 falls short of GPT-4.5 in editorial and literary writing. He explained that with the emergence of reasoning models, the focus has shifted toward logic and programming. He emphasized that OpenAI aims to develop future unified models that combine strong reasoning abilities with high-quality writing.