OpenAI confirms new frontier models o3 and o3-mini
OpenAI is gradually inviting selected users to test a new set of reasoning models named o3 and o3-mini, successors to the o1 and o1-mini models that entered full release earlier this month.
CEO Sam Altman announced the o3 series on the final day of OpenAI’s “12 Days of OpenAI” livestreams today, saying the models would initially be released to third-party researchers for safety testing.
Altman also said the o3 model was “incredible at coding,” and the benchmarks shared by OpenAI support that claim, showing it exceeding even o1’s performance on programming tasks.
• Exceptional Coding Performance: o3 surpasses o1 by 22.8 percentage points on SWE-Bench Verified and achieves a Codeforces rating of 2727, outperforming OpenAI’s Chief Scientist’s score of 2665.
• Math and Science Mastery: o3 scores 96.7% on the AIME 2024 exam, missing only one question, and achieves 87.7% on GPQA Diamond, far exceeding human expert performance.
• Frontier Benchmarks: The model sets new records on challenging tests like EpochAI’s Frontier Math, solving 25.2% of problems where no other model exceeds 2%. On the ARC-AGI test, o3 triples o1’s score and surpasses 85% (as verified live by the ARC Prize team), representing a milestone in conceptual reasoning.
Deliberative alignment
Alongside these advancements, OpenAI reinforced its commitment to safety and alignment.
The company introduced new research on deliberative alignment, a technique instrumental in making o1 its most robust and aligned model to date.
This technique embeds human-written safety specifications into the models, enabling them to explicitly reason about these policies before generating responses.
The strategy seeks to solve common safety challenges in LLMs, such as vulnerability to jailbreak attacks and over-refusal of benign prompts, by equipping the models with chain-of-thought (CoT) reasoning. This process allows the models to recall and apply safety specifications dynamically during inference.
Deliberative alignment improves upon previous methods like Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI, which rely on safety specifications only for label generation rather than embedding the policies directly into the models.
By fine-tuning LLMs on safety-related prompts and their associated specifications, this approach creates models capable of policy-driven reasoning without relying heavily on human-labeled data.
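The mechanism described above can be illustrated with a minimal sketch: a written safety specification is placed directly in the model’s context, and the model is instructed to reason about the policy step by step before producing its final answer. Everything here — the policy text, function name, and prompt format — is a hypothetical example for illustration, not OpenAI’s actual specification or training setup.

```python
# Hypothetical example of the deliberative-alignment idea: embed a
# human-written safety specification in the prompt and ask the model to
# reason explicitly about the policy before answering. The spec text and
# prompt layout below are invented for illustration.

SAFETY_SPEC = (
    "Policy: Refuse requests that facilitate harm. "
    "Comply with benign requests, even if they are phrased unusually."
)

def build_deliberative_prompt(user_request: str) -> str:
    """Compose a prompt that embeds the safety spec and instructs the
    model to check the request against it (chain of thought) before
    giving a final answer."""
    return (
        f"[SAFETY SPECIFICATION]\n{SAFETY_SPEC}\n\n"
        f"[USER REQUEST]\n{user_request}\n\n"
        "First, reason step by step about whether this request complies "
        "with the specification above. Then give your final answer."
    )

prompt = build_deliberative_prompt("How do I pick a strong password?")
print(prompt)
```

In OpenAI’s described approach the specification is baked in via fine-tuning rather than prepended at inference time, but the effect is similar: the policy is available to the model’s chain-of-thought reasoning when it decides how to respond.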
Results shared by OpenAI researchers in a new, non-peer-reviewed paper indicate that this method enhances performance on safety benchmarks, reduces harmful outputs, and ensures better adherence to content and style guidelines.
Key findings highlight the o1 model’s advancements over predecessors like GPT-4o and other state-of-the-art models. Deliberative alignment enables the o1 series to excel at resisting jailbreaks and providing safe completions while minimizing over-refusals on benign prompts. Additionally, the method facilitates out-of-distribution generalization, showing robustness in multilingual and encoded jailbreak scenarios. These improvements align with OpenAI’s goal of making AI systems safer and more interpretable as their capabilities grow.
This research will also play a key role in aligning o3 and o3-mini, ensuring their capabilities are both powerful and responsible.
How to apply for access to test o3 and o3-mini
Applications for early access are now open on the OpenAI website and will close on January 10, 2025.
Applicants must fill out an online form that asks for a variety of information, including links to prior published papers and their code repositories on GitHub, and must select which of the models — o3 or o3-mini — they wish to test, as well as what they plan to use them for.
Selected researchers will be granted access to o3 and o3-mini to explore their capabilities and contribute to safety evaluations, though OpenAI’s form cautions that o3 will not be available for several weeks.
Researchers are encouraged to develop robust evaluations, create controlled demonstrations of high-risk capabilities, and test models on scenarios not possible with widely adopted tools.
This initiative builds on the company’s established practices, including rigorous internal safety testing, collaborations with organizations like the U.S. and UK AI Safety Institutes, and its Preparedness Framework.
The application process requests details such as research focus, past experience, and links to previous work. OpenAI will review applications on a rolling basis, with selections starting immediately.
A new leap forward?
The introduction of o3 and o3-mini signals a leap forward in AI performance, particularly in areas requiring advanced reasoning and problem-solving capabilities.
With their exceptional results on coding, math, and conceptual benchmarks, these models highlight the rapid progress being made in AI research.
By inviting the broader research community to collaborate on safety testing, OpenAI aims to ensure that these capabilities are deployed responsibly.