OpenAI Launches Compact GPT-5.4 Mini and Nano for Faster, Cheaper AI
What’s New
OpenAI has released two compact models, GPT-5.4 Mini and Nano, aimed at workloads where speed matters more than raw model size. They are built for quick jobs like fast data pulls, where waiting on a larger model's reply adds unnecessary latency, and the time savings compound across coding tools and pipelines.
Speed and Benchmarks
Mini runs roughly twice as fast as the previous flagship model while staying close on benchmarks. It scores 54.4% on SWE-Bench Pro against the large model's 57.7%, a gap small enough to ignore for most work. On OSWorld, Mini reaches 72.1% versus 75% for the large model.
Nano is the pick when cost dominates: on huge batch jobs that run many tasks at once, every millisecond and every token saved adds up.
Pricing Made Simple
Mini costs $0.75 per million input tokens and $4.50 per million output tokens. Nano drops prices further, to $0.20 for inputs and $1.25 for outputs. Both models keep the same feature set as the large model, including image input and a 400k-token context window, at a much lower price.
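With the per-million-token rates above, the savings are easy to estimate. Here is a minimal sketch; the prices come from this article, while the job sizes in the example are made up purely for illustration:

```python
# Per-million-token prices (USD) as listed in the article.
PRICES = {
    "mini": {"input": 0.75, "output": 4.50},
    "nano": {"input": 0.20, "output": 1.25},
}

def job_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one job for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical daily job: 2M input tokens, 500k output tokens.
mini_cost = job_cost("mini", 2_000_000, 500_000)  # $1.50 in + $2.25 out = $3.75
nano_cost = job_cost("nano", 2_000_000, 500_000)  # $0.40 in + $0.625 out ≈ $1.03
```

At that volume, Nano cuts the bill by roughly two thirds relative to Mini before any quality trade-off is considered.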
How to Use Them
A practical pattern: let the large model do the hard thinking, then hand the routine grunt work to Mini. Some testers report that Mini actually beats the large model in their workflows. Mini is also available on the free ChatGPT plan today, where it serves as a fallback model.
For high-volume company workloads, Nano is available through the API. It suits tasks that run frequently and would otherwise cost too much on a larger model.
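The routing advice above can be sketched as a simple dispatcher. This is a hypothetical illustration: the model identifiers are assumptions based on the article's naming, and the heuristic flags are mine, not OpenAI's guidance. The commented-out call at the end follows the standard OpenAI Python SDK pattern:

```python
# Route hard reasoning to the large model, frequent bulk tasks to Nano,
# and everything else to Mini. Model names are assumed from the article.

def pick_model(requires_reasoning: bool, high_volume: bool) -> str:
    """Return the model identifier for a task, per the routing pattern above."""
    if requires_reasoning:
        return "gpt-5.4"       # hypothetical large-model identifier
    if high_volume:
        return "gpt-5.4-nano"  # cheapest option for repeated bulk work
    return "gpt-5.4-mini"      # default for routine tasks

# The chosen model would then be passed to the OpenAI SDK, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model=pick_model(requires_reasoning=False, high_volume=True),
#       messages=[{"role": "user", "content": "Extract fields from this log."}],
#   )
```

The point of the sketch is that model choice becomes a one-line routing decision rather than a global setting, so a single pipeline can mix all three tiers.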
Why It Matters
The broader trend is toward smaller tools built for specific jobs, where balancing speed against cost is the winning move. OpenAI is making AI feel like an ordinary everyday tool, and these models make it practical to build things that were too expensive before.
