
I made a study app in MCP Apps where you can check execution within the chat!
Hello, I'm Shunta Toda from the Retail App Co-creation Department.
I created a VS Code-like playground using MCP Apps × Next.js × Monaco Editor that allows you to edit and run JavaScript code within AI chats.
With this, when learning JS syntax you can complete the whole loop of asking the AI questions, running code, modifying it, and asking follow-ups without ever leaving the chat.
What I Built
When you ask the AI something like "How do I use spread syntax?", in addition to text explanations, a code widget that can be edited and executed appears in the chat.

Traditionally, you needed to copy code blocks and paste them into DevTools or elsewhere to run them, but with this app, you can just edit the code right there and press the Run button to see the results.
Traditional Experience
1. Ask AI a question
2. Get a response with a code block
3. Copy the code
4. Paste into DevTools or another environment to run it
5. If you make edits, copy and paste again... (return to step 3)
Experience with This App
1. Ask AI a question
2. A VS Code-like editor + terminal widget appears in the chat
3. Edit the code right there and press ▶ Run to execute it
4. If you don't understand something, press 💬 Ask AI to ask about it (return to step 2)
What is MCP Apps?
MCP Apps is a mechanism that allows you to display custom HTML/React UIs inline with MCP tool call results. Claude Desktop, ChatGPT, and others support this.
While normal MCP tools only return text, MCP Apps can display React UIs in sandboxed iframes.
Claude Desktop chat interface
│
├── User question (normal chat)
├── AI text response (normal chat)
└── MCP tool call
└── returns structuredContent
└── MCP Apps widget (iframe) is rendered within the chat
If you want to learn more about MCP Apps, please check this article.
Architecture
Technology Stack
| Technology | Purpose |
|---|---|
| Next.js 16 (App Router) | Framework |
| @modelcontextprotocol/ext-apps | MCP Apps SDK |
| mcp-handler | MCP handler for Next.js (by Vercel) |
| @monaco-editor/react | VS Code's editor engine |
| Tailwind CSS 4 | Styling |
| Zod | Schema validation |
| Vercel | Hosting |
Directory Structure
study-programming-mcp-apps/
├── app/
│ ├── mcp/
│ │ └── route.ts # MCP server + playground tool definition
│ ├── components/
│ │ ├── PlaygroundWidget.tsx # Overall widget
│ │ ├── MonacoEditor.tsx # Monaco Editor (dynamic import)
│ │ ├── TabBar.tsx # script.js tab
│ │ ├── Terminal.tsx # Terminal output panel
│ │ └── Toolbar.tsx # Run / Clear / Copy / Ask AI buttons
│ ├── hooks/
│ │ ├── use-mcp-app.ts # MCP connection bridge
│ │ └── useCodeExecution.ts # Code execution management
│ ├── lib/
│ │ └── vscode-theme.ts # VS Code-like color palette
│ ├── page.tsx # Entry point
│ └── layout.tsx # iframe-compatible layout
├── baseUrl.ts
├── middleware.ts
└── next.config.ts
Implementation Points
MCP Tool Definition — Data Flow from AI → Widget
With MCP Apps, you register two things on the server side:
- Tool — A function called by the AI. In this case, it's named `playground` and takes `code` and `autoRun` as arguments
- Resource — The widget's HTML. This is the UI body displayed in the chat when the tool is called
By linking these two with resourceUri, we create the flow of "AI calls the tool → corresponding UI is displayed".
The tool's return value has two parts: `content` and `structuredContent`. `content` is text that remains in the AI's conversation context, while `structuredContent` is data that only reaches the widget and is invisible to the AI. In this case, we pass the code we want to display in the widget via `structuredContent`.
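As a rough sketch, the tool handler's return value might look like this. The result shape (`content` / `structuredContent`) follows the MCP tool-result convention; the helper name and the text message are my own, and the payload fields mirror the `code` / `autoRun` arguments mentioned above.

```typescript
// Sketch of the playground tool's return value, showing the split
// between content (visible to the AI) and structuredContent (widget-only).
interface PlaygroundArgs {
  code: string;
  autoRun: boolean;
}

function playgroundResult({ code, autoRun }: PlaygroundArgs) {
  return {
    // Text that stays in the AI's conversation context.
    content: [
      { type: "text" as const, text: "Opened the JS playground widget." },
    ],
    // Data that only the widget receives; the AI never sees it.
    structuredContent: { code, autoRun },
  };
}

const result = playgroundResult({ code: "console.log(1 + 1);", autoRun: true });
// result.structuredContent.code now carries the code for the editor.
```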
Widget-side Reception — useApp Hook and State Management
To receive data on the widget side, we register `ontoolinput` and `ontoolresult` handlers with the official SDK's `useApp` hook.
One thing to note: the handlers must be registered inside the `onAppCreated` callback. Because `useApp` sets up the handlers before calling `connect()` internally, no events fired immediately after connection are missed.
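The register-before-connect rule can be illustrated with a deliberately tiny stand-in (not the real SDK): events delivered at connection time only reach handlers that already exist.

```typescript
// Minimal illustration of why handlers must exist before connect():
// an event replayed at connection time is lost if no handler is
// registered yet. TinyBridge is a toy, not the MCP Apps SDK.
type Handler = (data: unknown) => void;

class TinyBridge {
  private handlers = new Map<string, Handler>();
  received: unknown[] = [];

  on(event: string, handler: Handler) {
    this.handlers.set(event, handler);
  }

  // Simulates connect(): the host replays the pending tool result
  // immediately, so a late-registered handler would miss it.
  connect(pendingToolResult: unknown) {
    this.handlers.get("toolresult")?.(pendingToolResult);
  }
}

const bridge = new TinyBridge();
// Register first (what onAppCreated guarantees)...
bridge.on("toolresult", (data) => bridge.received.push(data));
// ...then connect: the event is delivered, not lost.
bridge.connect({ code: "console.log('hi')" });
```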
One issue I ran into during development: the Monaco Editor was mounting with default code before the MCP data arrived, and once mounted, Monaco doesn't update its display even when props change.
To address this, I made the widget skip mounting while `connected && !data` (connected, but data hasn't arrived), and render `PlaygroundWidget` only after the data arrives.
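Stripped of the React plumbing, the gating condition reduces to one predicate (the function name is mine): mount only once both the connection is up and the tool data exists, so Monaco never mounts with placeholder code.

```typescript
// Mount gate for the widget: true only when connected AND data arrived.
// With this, Monaco's initial mount always sees the real code.
function shouldMountWidget(connected: boolean, data: unknown): boolean {
  // connected && !data means "connected but still waiting" → don't mount yet.
  return connected && data != null;
}
```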
Code Execution — Hijacking console
Let's look at what happens when the ▶ Run button is pressed.
The basic flow is to temporarily override `console.log` and other methods, channel the output into React state to display in the terminal panel, and then restore them after execution. For code execution, I use the `AsyncFunction` constructor, which also supports code containing `await`.
Since MCP Apps widgets already run in a sandboxed iframe on the host side, this eval-based approach has limited security risks.
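The core of the Run button can be sketched as a self-contained function along these lines (the real widget also captures `console.error` and `console.warn` and pushes the lines into React state; here they go into a plain array):

```typescript
// Run user code with a hijacked console.log: output lines are collected
// into an array, and console.log is restored in a finally block so the
// hijack never leaks past a run. AsyncFunction lets the snippet use
// top-level await.
const AsyncFunction = Object.getPrototypeOf(async function () {})
  .constructor as new (code: string) => () => Promise<unknown>;

async function runCode(code: string): Promise<string[]> {
  const lines: string[] = [];
  const originalLog = console.log;
  console.log = (...args: unknown[]) => {
    lines.push(args.map((a) => String(a)).join(" "));
  };
  try {
    await new AsyncFunction(code)();
  } catch (err) {
    lines.push(`Error: ${err instanceof Error ? err.message : String(err)}`);
  } finally {
    console.log = originalLog; // always restore, even after an exception
  }
  return lines;
}
```

Calling `runCode("console.log(1 + 1)")` resolves to the captured output lines, which the widget would render in the terminal panel.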
Bidirectional Communication from Widget → AI — sendMessage
So far, the flow has been one-way from AI → widget. However, for a complete learning experience, it would be better if you could ask the AI about the results after modifying and running the code.
Using `app.sendMessage()`, you can send messages from the widget to Claude's conversation.
When you press the "💬 Ask AI" button, it sends the current code in the editor along with the execution results to Claude. This is useful when you've modified and run the code but don't understand the results. The learning loop of AI presents code → user tries it → user asks AI if confused is completed entirely within the chat.
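A sketch of the message the button could assemble is below. The helper name and the message wording are my own; the article only says that the current code and execution results are sent.

```typescript
// Build the "Ask AI" message: current editor code plus terminal output,
// combined into one string for the model.
function buildAskAiMessage(code: string, output: string[]): string {
  return [
    "I ran this JavaScript and got the output below. Can you explain the result?",
    "",
    code,
    "",
    "Output:",
    ...output,
  ].join("\n");
}
```

The widget would then hand this string to `app.sendMessage()`; check the SDK documentation for the exact signature.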
Development Considerations
Monaco Editor CDN Loading and CSP
MCP Apps' iframe sandbox has strict CSP settings. Monaco Editor loads worker files from cdn.jsdelivr.net by default, so I needed to explicitly allow this in the CSP settings.
If you forget this, the editor will remain stuck on "Loading..." forever.
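Where exactly the CSP gets configured depends on the MCP Apps host and on how the widget HTML is served; if the widget is served by Next.js itself, one place to allow the CDN is a headers rule in `next.config.ts`. Treat every directive below as an assumption to adapt, not a verified minimal policy.

```typescript
// Hypothetical next.config.ts fragment: a CSP that permits Monaco's
// loader/worker files from cdn.jsdelivr.net.
const cspHeader = [
  "default-src 'self'",
  // 'unsafe-eval' because the playground runs user code via AsyncFunction
  "script-src 'self' 'unsafe-eval' https://cdn.jsdelivr.net",
  // Monaco spawns workers, typically via blob: URLs
  "worker-src 'self' blob: https://cdn.jsdelivr.net",
  "style-src 'self' 'unsafe-inline' https://cdn.jsdelivr.net",
].join("; ");

const nextConfig = {
  async headers() {
    return [
      {
        source: "/(.*)",
        headers: [{ key: "Content-Security-Policy", value: cspHeader }],
      },
    ];
  },
};

export default nextConfig;
```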
Try It Out
It's deployed on Vercel, so you can try it immediately from Claude Desktop. Add the following to your claude_desktop_config.json and restart Claude Desktop:
{
  "mcpServers": {
    "js-playground": {
      "command": "npx",
      "args": ["mcp-remote", "https://study-programming-mcp-apps.vercel.app/mcp"]
    }
  }
}
Then just ask Claude something about JavaScript like "How do I use spread syntax?", and the playground tool will be called, displaying the widget.
Conclusion
With MCP Apps, you can embed interactive UIs within AI conversations. While I created a JavaScript Playground this time, the same mechanism can be used to create various widgets such as graph drawing tools, form builders, data visualizers, and more.
The app introduced here eliminates the need for "ask AI for code, copy-paste, and execute" iterations, creating a learning experience that takes place entirely within the chat.
I hope this is helpful to you.