OpenAI GPT-4.5 System Card
Specific areas of risk
- Disallowed content
- Jailbreaks
- Model mistakes
Preparedness scorecard
- CBRN: Medium
- Cybersecurity: Low
- Persuasion: Medium
- Model autonomy: Low
Scorecard ratings
- Low
- Medium
- High
- Critical
Only models with a post-mitigation score of "medium" or below can be deployed.
Only models with a post-mitigation score of "high" or below can be developed further.
We’re releasing a research preview of OpenAI GPT‑4.5, our largest and most knowledgeable model yet. Building on GPT‑4o, GPT‑4.5 scales pre-training further and is designed to be more general-purpose than our powerful STEM-focused reasoning models. We trained it using new supervision techniques combined with traditional methods like supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), similar to those used for GPT‑4o. We conducted extensive safety evaluations prior to deployment and did not find any significant increase in safety risk compared to existing models.
Early testing shows that interacting with GPT‑4.5 feels more natural. Its broader knowledge base, stronger alignment with user intent, and improved emotional intelligence make it well-suited for tasks like writing, programming, and solving practical problems—with fewer hallucinations.
We’re sharing GPT‑4.5 as a research preview to better understand its strengths and limitations. We’re still exploring its capabilities and are eager to see how people use it in ways we might not have expected.
This system card outlines how we built and trained GPT‑4.5, evaluated its capabilities, and strengthened safety, following OpenAI’s safety process and Preparedness Framework.