OpenAI has turned ChatGPT from a simple chat helper into a platform where developers can build small apps that run right inside the conversation. Users can book hotels, browse rentals, or update design files without leaving ChatGPT. These mini-apps appear as cards or forms during a conversation, making everything quick and easy to use – no more switching between different apps or learning new dashboards. The update connects millions of users and developers in one powerful, interactive place.
What is OpenAI’s new ChatGPT in-chat apps platform, and how does it work?
OpenAI has transformed ChatGPT into a platform where developers can launch interactive in-chat apps using the new Apps SDK and Model Context Protocol (MCP). These apps appear as cards, forms, or dashboards in conversations, letting users access services like Booking.com or Figma directly within ChatGPT, without additional installations.
OpenAI turns ChatGPT into a platform
OpenAI’s DevDay 2025 announcement moves ChatGPT from single-purpose assistant to fully fledged application layer. Developers can now ship in-chat apps that appear as interactive cards, forms, or mini dashboards inside any conversation. The capability hinges on a preview Apps SDK, built on the Model Context Protocol (MCP), that standardises how tools are described and rendered inside the thread.
Why this matters for teams building software
- Reach an audience of 800 million weekly users without asking them to install anything.
- Leverage a marketplace of 4 million registered builders to cross-integrate services and data.
- Reduce context switching – users plan trips, update Figma files, or pull Salesforce records in one window.
- Conversational UI removes the learning curve that plagues traditional SaaS dashboards.
Key numbers at a glance
| Metric | Value (2025) |
| --- | --- |
| Weekly active ChatGPT users | 800M |
| Registered developers | 4M |
| Tokens processed per minute | 6B |
| Initial launch partners | 12+ large consumer and B2B apps |

Source: OpenAI release notes
Inside the Apps SDK workflow
- Tool definition – A developer writes a JSON schema describing functions the model can call.
- Secure endpoint – The app logic runs on a serverless endpoint that processes the request.
- Model invocation – When user context matches, ChatGPT selects the appropriate tool contract.
- UI rendering – Results are returned as structured JSON that ChatGPT turns into an inline card.
- State management – MCP preserves conversation context so follow-up queries work naturally.
```json
{
  "name": "hotel.search",
  "parameters": {
    "type": "object",
    "properties": {
      "city": {"type": "string"},
      "check_in": {"type": "string", "format": "date"},
      "nights": {"type": "integer", "minimum": 1}
    },
    "required": ["city", "check_in", "nights"]
  }
}
```
The above snippet is all ChatGPT needs to trigger a partner like Booking.com when a user asks for hotels.
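On the app side, the serverless endpoint from step 2 receives the arguments the model filled in and returns structured JSON for the inline card. Below is a minimal sketch with the partner API stubbed out; the handler name and the card format are assumptions for illustration, not the Apps SDK's actual interface.

```python
import json

# Hypothetical handler for the "hotel.search" tool defined above.
# The argument names mirror the JSON schema; the inventory lookup is
# stubbed, since the real partner integration is not public.
def handle_hotel_search(arguments: dict) -> dict:
    city = arguments["city"]
    check_in = arguments["check_in"]
    nights = arguments["nights"]

    # In production this would query the partner's availability API.
    stub_results = [
        {"hotel": "Example Inn", "price_per_night": 120, "currency": "EUR"},
        {"hotel": "Sample Suites", "price_per_night": 95, "currency": "EUR"},
    ]

    # Structured JSON that ChatGPT turns into an inline card (step 4).
    return {
        "type": "card",
        "title": f"Hotels in {city} from {check_in} ({nights} nights)",
        "items": stub_results,
    }

call = {"city": "Lisbon", "check_in": "2025-11-03", "nights": 2}
print(json.dumps(handle_hotel_search(call), indent=2))
```

Because the response is plain structured data rather than HTML, ChatGPT stays in charge of rendering, which is what keeps the card styling consistent across apps.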
Early partner examples
- *Booking.com* – Return live room availability and pricing.
- *Zillow* – Surface rental listings with filters such as price ceiling or transit lines.
- *Figma* – Let designers search and embed component libraries without opening the Figma tab.
- *Coursera* – Recommend courses tailored to a user’s skill gap detected in the chat.
Security and compliance highlights
- OAuth login flow keeps user credentials with the partner, never in ChatGPT.
- Strict data-minimisation requirements: apps can request only the fields defined in their schema.
- Automatic rate limiting protects both the partner API and the user experience.
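The rate-limiting behaviour above can be pictured as a token bucket: each app gets a burst allowance that refills over time. The class below is purely illustrative – OpenAI has not published its actual algorithm or limits.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: allows `capacity` burst calls,
    refilled at `rate` tokens per second. Illustrative only."""
    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)  # 3 burst calls, 1/s refill
print([bucket.allow() for _ in range(5)])   # first 3 True, then False
```

The same throttle protects in both directions: the partner API is shielded from bursts of model-generated calls, and the user never sees a flood of half-finished cards.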
Getting started checklist for developers
- Sign in to the OpenAI developer portal and enable Developer Mode.
- Fork the sample apps repo that demonstrates MCP contracts and serverless boilerplate.
- Deploy your endpoint, then submit the JSON manifest for approval.
- Use the sandbox chat to validate rendering and error handling before going live.
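Before submitting the manifest for approval, it is worth running a quick pre-flight check over the JSON. The checks below mirror the structure of the hotel.search example; the rules are an assumption about what a well-formed contract looks like, not OpenAI's actual approval criteria.

```python
def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems found in a tool manifest.
    Field names follow the hotel.search example; OpenAI's real
    approval checks may differ."""
    errors = []
    if "name" not in manifest:
        errors.append("missing top-level 'name'")
    params = manifest.get("parameters", {})
    if params.get("type") != "object":
        errors.append("'parameters.type' must be 'object'")
    props = params.get("properties", {})
    # Every required field must actually be declared in properties.
    for field in params.get("required", []):
        if field not in props:
            errors.append(f"required field '{field}' not declared in properties")
    return errors

manifest = {
    "name": "hotel.search",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city", "check_in"],  # check_in deliberately missing
    },
}
print(validate_manifest(manifest))
```

Catching schema mismatches locally is cheaper than a rejected submission or, worse, a tool the model silently fails to call in the sandbox chat.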
What’s next on the roadmap
OpenAI plans model-agnostic GPT support, fine-grained analytics for app usage, and a storefront that will let companies monetise premium endpoints. Support for multimedia output is already in testing, powered by the upcoming Sora 2 video model, giving a glimpse of richly interactive chats ahead.
What exactly are in-chat apps and how do they change the way we use ChatGPT?
In October 2025 OpenAI released the Apps SDK – a toolkit that lets developers build micro-applications that run inside the conversation window. Instead of leaving ChatGPT to check Spotify, book on Expedia or open Figma, the model can invoke the partner service inline and show a card, a mini-widget or even a fully interactive form. Early launch partners include Booking.com, Coursera, Zillow and Spotify, covering music, travel, education and real-estate use cases in one turn. Users authenticate once, then ask naturally (“find me a jazz playlist” or “show three-bedroom flats near Dolores Park”) and the app answers immediately – no browser tabs, no copy-paste.
How does the Model Context Protocol (MCP) make these integrations possible?
MCP is the lingua franca between ChatGPT and external tools. It standardizes:
- how an app describes its functions
- what data it can access
- which UI pieces (button, slider, date-picker) the model may render
Because the contract is strict JSON, the AI understands exactly what each parameter does and can fill it contextually. The SDK keeps the conversation history, user identity and app state in one sandbox, so developers only worry about business logic while OpenAI handles security, rate limits and UI consistency. Early adopters report 60–70% fewer support tickets compared with classic bot integrations because the model, not the user, drives the interaction.
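As a toy illustration of such a contract, an app might declare which UI component each parameter maps to, and the renderer refuses anything outside the whitelist. The field names here are invented for illustration and are not MCP's actual wire format.

```python
# Components the doc lists as renderable; anything else is rejected.
ALLOWED_COMPONENTS = {"button", "slider", "date-picker"}

# Toy contract: invented field names, not the real MCP wire format.
contract = {
    "function": "hotel.search",
    "ui": [
        {"param": "check_in", "component": "date-picker"},
        {"param": "nights", "component": "slider"},
        {"param": "book_now", "component": "button"},
    ],
}

def render_plan(contract: dict) -> list[str]:
    """Build a render plan, rejecting any component the
    contract is not permitted to draw."""
    plan = []
    for widget in contract["ui"]:
        if widget["component"] not in ALLOWED_COMPONENTS:
            raise ValueError(f"component {widget['component']!r} not permitted")
        plan.append(f"{widget['param']} -> {widget['component']}")
    return plan

print(render_plan(contract))
```

The strictness is the point: because the model can only emit components from a fixed vocabulary, every app's UI stays predictable even though the model decides what to render turn by turn.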
Why is this a strategic shift for OpenAI and its developer ecosystem?
ChatGPT already reaches 800 million weekly active users in 2025. By turning the chat window into an operating system, OpenAI aims to become the universal interface for digital life. Developers gain:
- instant distribution to that audience
- built-in monetization through usage-based billing managed by OpenAI
- free trust & safety review, hosting and CDN
In return OpenAI gets:
- richer sessions that keep users inside ChatGPT
- a growing catalog of vertical experiences (travel, finance, design) without building them in-house
Early metrics show that sessions using an in-chat app last 2.4× longer, and users return 38% more often within a week.
What new models and voice features arrived with the same Dev Day drop?
Alongside the SDK, OpenAI launched:
- GPT-5 Pro, a high-accuracy model optimized for finance, legal and healthcare reasoning
- Sora 2 in API preview, letting apps generate and stream 1080p video clips on demand
- gpt-realtime mini, a lightweight voice model that costs 70% less than the previous advanced voice mode while keeping sub-200 ms latency
Any in-chat app can call these endpoints, so a real-estate bot can now answer with a narrated video tour or a legal assistant can read statute excerpts aloud – all orchestrated from the same conversation thread.
What are developers building – and what hurdles remain?
Three months after launch, the 1,400+ apps in the directory cluster around:
- productivity (calendar wranglers, meeting summarizers)
- commerce (one-message checkout, dynamic coupons)
- creativity (Canva quick graphics, Suno jingles)
Success stories cite:
- 4–6× faster MVP cycles because UI scaffolding is automatic
- a built-in user feedback loop – the model asks clarifying questions when data is missing
Challenges include:
- strict privacy rules (no covert data harvesting)
- deterministic JSON contracts that can feel rigid for highly dynamic sites
- occasional model update hiccups that change function-calling behavior
OpenAI’s public roadmap promises:
- granular versioning
- on-device encrypted storage for Personal GPTs
- an enterprise dashboard to let IT teams whitelist domains and audit every in-chat transaction