ChatGPT 5.4: model, Thinking, Pro, API, Codex, pricing, and what it actually is
- Mar 6
- 7 min read
Updated: Mar 8

OpenAI has launched GPT-5.4 across ChatGPT, the API, and Codex, but not as one flat product with the same meaning everywhere.
In ChatGPT, the visible layer is mainly GPT-5.4 Thinking and, for higher tiers, GPT-5.4 Pro.
In the API, the published model names are gpt-5.4 and gpt-5.4-pro, each with hard technical specs, pricing, and reasoning controls.
In Codex, GPT-5.4 has replaced gpt-5.3-codex and now functions as the default frontier model for coding-agent work.
That three-way structure has to be kept separate before discussing limits, access, pricing, or workflows.
The confusion starts when all of these surfaces are collapsed into the same label, because the family is unified at the model-generation level while the product contract still changes depending on where it runs.
Once the surface is identified correctly, the rest becomes easier to read: what users can access, what the model can actually do, how much context it supports, when pricing changes, and how the coding-agent layer fits into the system.
··········
What GPT-5.4 refers to in OpenAI’s official model hierarchy.
GPT-5.4 is the frontier base model, while Thinking and Pro are separate operating layers around it.
OpenAI describes GPT-5.4 as its frontier model for complex professional work, and the API documentation publishes a hard model contract rather than a generic capability summary.
The published base model supports a 1,050,000-token context window, 128,000 max output tokens, text-and-image input, text output, and reasoning.effort values from none to xhigh.
OpenAI’s model guide states that GPT-5.4 replaces gpt-5.2 in the API for broad general-purpose work and most coding tasks.
The same guide states that GPT-5.4 also replaces gpt-5.3-codex in Codex, which means OpenAI is consolidating frontier reasoning and frontier coding-agent behavior inside one main family rather than keeping them separated under different default model lines.
........
· GPT-5.4 is documented as a real API model with a published technical envelope.
· The base model contract includes 1,050,000 context, 128,000 max output, and configurable reasoning effort.
· OpenAI explicitly says GPT-5.4 replaces gpt-5.2 in the API and gpt-5.3-codex in Codex.
· The family is being used as a consolidation point for both general professional work and advanced coding workflows.
........
GPT-5.4 as a model-level release
| Layer | Officially documented posture |
| --- | --- |
| Model identity | GPT-5.4 is OpenAI’s frontier model for complex professional work |
| Base technical envelope | 1,050,000 context window and 128,000 max output |
| Reasoning control | reasoning.effort from none to xhigh |
| Product-role shift | Replaces gpt-5.2 in the API and gpt-5.3-codex in Codex |
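The reasoning.effort control above maps onto a request parameter rather than a separate product. A minimal sketch, assuming the standard Responses API payload shape; the helper only builds the JSON body, and any field name beyond those stated for GPT-5.4 should be treated as an assumption:

```python
# Sketch: build a Responses API request body for gpt-5.4 with an
# explicit reasoning effort. Payload shape follows the Responses API
# convention; treat exact field names as assumptions, not a spec.
VALID_EFFORTS = {"none", "low", "medium", "high", "xhigh"}

def build_request(prompt: str, effort: str = "medium") -> dict:
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unsupported reasoning effort: {effort!r}")
    return {
        "model": "gpt-5.4",
        "input": prompt,
        "reasoning": {"effort": effort},  # none through xhigh per the model docs
        "max_output_tokens": 128_000,     # published maximum for the base model
    }

body = build_request("Summarize this 800-page filing.", effort="xhigh")
# POST this body to /v1/responses with your API key.
```

The point of the sketch is that effort is a per-request dial on one model, not a switch between different model identities.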
··········
How GPT-5.4 appears inside ChatGPT is not the same as how it appears in the API.
The ChatGPT surface emphasizes Thinking and Pro rather than a single universal “base GPT-5.4” experience.
OpenAI’s Help Center frames the ChatGPT product story around GPT-5.4 Thinking and GPT-5.4 Pro rather than around one universal GPT-5.4 picker with identical access for all users.
The article states that paid tiers such as Plus, Pro, and Business can manually select GPT-5.4 Thinking, while GPT-5.4 Pro is available to Pro, Business, Enterprise, and Edu plans.
The ChatGPT release notes add that GPT-5.4 Thinking can provide an upfront plan of its thinking and is improved for deeper web research and longer-horizon reasoning tasks.
That means the visible 5.4 experience in ChatGPT is a reasoning-oriented product layer, not simply the raw base model as it is documented in the API.
The distinction is structural rather than cosmetic, because product surfaces decide what options are selectable, what limits apply, and which runtime behaviors are exposed directly to the user.
··········
What GPT-5.4 Thinking adds inside the product.
Thinking is a real reasoning layer in the GPT-5.4 family, not a cosmetic label in the interface.
OpenAI has published a dedicated GPT-5.4 Thinking System Card, which confirms that “Thinking” is treated as a genuine reasoning-model layer inside the GPT-5.4 family.
The system card states that GPT-5.4 Thinking is the latest reasoning model in the GPT-5 series, and it also notes that it is the first general-purpose model in the series to include mitigations for High capability in Cybersecurity.
That places Thinking in a different category from a simple UI preset.
It is a distinct reasoning posture inside the family, with its own safety treatment and its own operational role inside ChatGPT.
Inside ChatGPT, this is the version OpenAI is foregrounding for users who need deeper work rather than only fast replies, which is why the 5.4 story in the product is so tightly tied to the Thinking label.
··········
What GPT-5.4 Pro is and why it should not be confused with ordinary 5.4 access.
GPT-5.4 Pro is a separate high-compute route with a different runtime and a radically different price profile.
OpenAI publishes gpt-5.4-pro as a distinct API model rather than as a mild variant of base GPT-5.4.
The official page says it uses more compute to think harder, is available only in the Responses API, and may take several minutes on hard problems, with background mode recommended to avoid timeouts.
Its published technical envelope remains large, with a 1,050,000-token context window and 128,000 max output tokens, and the documentation states a knowledge cutoff of Aug 31, 2025.
Its cost profile is dramatically different from base GPT-5.4, at $30 / 1M input and $180 / 1M output.
That makes GPT-5.4 Pro a premium compute path intended for workloads where extra reasoning can justify both the latency and the price, not a default mode for everyday use.
........
· GPT-5.4 Pro is documented as a distinct model, not a simple stronger toggle.
· It is Responses-API only in the official API documentation.
· It keeps the same top-end context and output scale as base GPT-5.4 but with a much higher price.
· OpenAI explicitly recommends background mode because some runs can take several minutes.
........
Base GPT-5.4 vs GPT-5.4 Pro
| Dimension | GPT-5.4 | GPT-5.4 Pro |
| --- | --- | --- |
| Primary role | Frontier base model | Higher-compute premium model |
| API surface | Standard API model docs | Responses API only |
| Input pricing | $2.50 / 1M input | $30 / 1M input |
| Output pricing | $15 / 1M output | $180 / 1M output |
| Runtime posture | General professional work | Slower, heavier, harder-thinking runs |
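Because gpt-5.4-pro runs can take several minutes, the recommended pattern is background mode: start the run, then poll by response id instead of holding a connection open. A minimal polling sketch; the `fetch_status` callable stands in for a Responses API retrieve call, and the status strings here are illustrative assumptions, not documented values:

```python
import time

# Sketch: poll a background gpt-5.4-pro run until it settles.
# `fetch_status` wraps whatever retrieve call your client exposes;
# the lifecycle states below are assumptions for illustration.
TERMINAL = {"completed", "failed", "cancelled"}

def poll_until_done(fetch_status, response_id, interval_s=5.0, max_polls=240):
    """Return the final status dict, or raise if the run never settles."""
    for _ in range(max_polls):
        status = fetch_status(response_id)
        if status["status"] in TERMINAL:
            return status
        time.sleep(interval_s)  # pro runs may take several minutes
    raise TimeoutError(f"response {response_id} still running after polling")
```

In practice the initial create call would name model gpt-5.4-pro with background mode enabled, and `fetch_status` would wrap the SDK's retrieve method; the polling loop is what keeps a multi-minute run from hitting client timeouts.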
··········
How the pricing structure defines the real contract for giant-context usage.
The 1M-plus context headline is real, but OpenAI prices very long sessions as a premium operating regime.
Base GPT-5.4 is priced at $2.50 / 1M input and $15 / 1M output.
The more important pricing detail is the large-context surcharge regime for the 1.05M-context models.
OpenAI states that prompts above 272K input tokens are charged at 2x input and 1.5x output for the full session, and that regional endpoints carry an additional 10% uplift.
That commercial structure means huge context is available, but not as a cheap default operating zone.
Crossing into very large sessions pushes the run into a different cost regime, which encourages routing, compaction, and selective use of very long traces rather than unrestricted “always max context” behavior.
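The surcharge regime is easy to model from the figures above ($2.50 / 1M input, $15 / 1M output, 2x input and 1.5x output once the prompt exceeds 272K input tokens, plus a 10% uplift on regional endpoints). A minimal cost sketch; the helper function itself is illustrative, not an official calculator:

```python
# Sketch: estimate a base gpt-5.4 session cost under the published
# large-context surcharge. Rates come from the article's figures.
INPUT_RATE = 2.50 / 1_000_000    # USD per input token
OUTPUT_RATE = 15.00 / 1_000_000  # USD per output token
SURCHARGE_THRESHOLD = 272_000    # input tokens

def estimate_cost(input_tokens: int, output_tokens: int, regional: bool = False) -> float:
    # The surcharge applies to the full session once the threshold is crossed.
    in_mult, out_mult = (2.0, 1.5) if input_tokens > SURCHARGE_THRESHOLD else (1.0, 1.0)
    cost = input_tokens * INPUT_RATE * in_mult + output_tokens * OUTPUT_RATE * out_mult
    if regional:
        cost *= 1.10             # regional endpoints carry a 10% uplift
    return round(cost, 4)
```

A 100K-token prompt with 10K output costs about $0.40, while a 300K-token prompt with the same output costs about $1.73: the jump comes almost entirely from the surcharge, which is exactly why routing and compaction matter.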
··········
What GPT-5.4 now means for Codex and coding-agent workflows.
GPT-5.4 has absorbed the role of GPT-5.3-Codex and now sits at the center of OpenAI’s coding-agent stack.
OpenAI’s model guide says GPT-5.4 replaces gpt-5.3-codex in Codex.
The Codex changelog adds that GPT-5.4 is available in Codex as OpenAI’s most capable and efficient frontier model for professional work, and describes it as the first general-purpose model with native computer use in Codex.
The same Codex material says GPT-5.4 is available wherever Codex runs, including the Codex app, CLI, IDE extension, Codex Cloud on the web, and the API.
This is a major product shift.
GPT-5.4 is not only a frontier chat model and not only a frontier API model.
It is also the new default frontier model for repo-aware, tool-heavy, coding-agent workflows that previously had a clearer separation under the Codex-specific naming line.
........
· GPT-5.4 replaces gpt-5.3-codex in Codex according to OpenAI’s model guide.
· Codex now treats GPT-5.4 as a general-purpose frontier model with native computer use.
· The family now spans chat, API, premium reasoning, and coding-agent execution.
........
GPT-5.4’s role in Codex
| Codex layer | Officially described posture |
| --- | --- |
| Model replacement | GPT-5.4 replaces gpt-5.3-codex |
| Capability shift | First general-purpose model with native computer use in Codex |
| Surfaces | Codex app, CLI, IDE extension, Codex Cloud on the web, API |
| Ecosystem role | GPT-5.4 becomes the main frontier model for coding-agent work |
··········
How ChatGPT usage limits and tool availability shape the practical product contract.
The model family is powerful, but access and runtime behavior still depend on plan tier and surface.
OpenAI’s Help Center states that Plus or Business users can manually select GPT-5.4 Thinking with a limit of up to 3,000 messages per week.
For Go, OpenAI states users can enable Thinking from the tools menu and send up to 10 messages every 5 hours after enabling it.
For Business and Pro, OpenAI states there is unlimited access to GPT-5 models, subject to abuse guardrails and Terms of Use restrictions.
OpenAI also states that GPT-5.3 Instant and GPT-5.4 Thinking support all tools in ChatGPT, including web search, data analysis, image analysis, file analysis, canvas, image generation, memory, and custom instructions.
That gives GPT-5.4 Thinking a full modern product-tool contract inside ChatGPT, which is especially relevant because earlier GPT-5.2 Pro restrictions had created a narrower tool surface.
··········
What it is safest to say when describing ChatGPT 5.4 as a whole.
The correct definition is a multi-surface rollout of the GPT-5.4 family, not one identical picker option with one identical contract.
The cleanest phrasing is that ChatGPT 5.4 refers to a broader GPT-5.4 family rollout across ChatGPT, the API, and Codex, with different user-facing contracts on each surface.
In ChatGPT, the visible expression is mainly GPT-5.4 Thinking and GPT-5.4 Pro, with tier-dependent access and documented product limits.
In the API, the main published model identities are gpt-5.4 and gpt-5.4-pro, with hard specs, reasoning controls, and pricing.
In Codex, GPT-5.4 now functions as the default frontier coding-agent model and has absorbed the role previously associated with gpt-5.3-codex.
That is what “ChatGPT 5.4” actually is in practice: one frontier family, several distinct runtime contracts, and a rollout that is unified at the generation level but not flattened at the product-surface level.
·····
DATA STUDIOS
·····

