
Building the Openfort MCP Server was driven by a single goal: to make developer onboarding as fast and autonomous as possible. Traditionally, getting started with a new infrastructure provider meant spending hours reading documentation and manually configuring projects. By using the Model Context Protocol, we flipped that experience, allowing AI assistants to handle the heavy lifting for you.
Since the original launch, we have expanded beyond the MCP Server into a full suite of AI tools. This post covers what MCP is, how the Openfort MCP Server works, and the broader set of AI integrations now available to developers.
What is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard, originally developed by Anthropic, that defines how AI models connect to external tools and data sources. The simplest way to think about it: MCP is to AI tools what USB is to peripherals. Before USB, every device needed its own connector. Before MCP, every AI assistant needed custom code to talk to every external service.
MCP solves three problems at once:
- Tool discovery -- An AI assistant connects to an MCP server and immediately knows which tools are available, what parameters they accept, and what they return. No manual wiring required.
- Authentication -- MCP supports OAuth-based authentication, so users log in once and the AI can make authorized API calls on their behalf for the duration of the session.
- Stateful sessions -- Unlike stateless REST calls, MCP maintains a persistent connection. The server can remember context (like which project you are working on) across multiple tool invocations within a session.
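As a rough sketch of the discovery step, an MCP client calls the protocol's `tools/list` method and receives each tool's name, description, and a JSON Schema for its parameters. The tool shown here is illustrative, not a verbatim Openfort response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "create-project",
        "description": "Create a new Openfort project",
        "inputSchema": {
          "type": "object",
          "properties": { "name": { "type": "string" } },
          "required": ["name"]
        }
      }
    ]
  }
}
```

Because parameters are described as JSON Schema, the assistant can validate arguments before calling the tool, with no hand-written client code.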
In practice, this means a developer using Cursor or Claude Desktop can type "create a new Openfort project with a gas sponsorship policy" and the AI assistant will discover the right tools, authenticate, and execute the request, all without the developer writing a single API call or visiting the dashboard.
Openfort AI Tools Overview
Openfort now offers multiple ways for AI systems to interact with wallet infrastructure. Each path is designed for a different use case:
| Integration | Best for | How it works |
|---|---|---|
| MCP Server | IDE-based development with AI assistants | AI assistants in Cursor, Windsurf, VS Code, or Claude Desktop connect to the server and call tools directly |
| Vercel AI SDK | Building custom AI applications | Use the @ai-sdk/mcp package to connect your own AI app to Openfort tools programmatically |
| LLM-friendly docs | Any AI model or RAG pipeline | Consume structured documentation via llms.txt and llms-full.txt endpoints |
For full details on all integration options, see the Building with AI documentation.
The Openfort MCP Server
The Openfort MCP Server is a hosted implementation of MCP that gives AI coding assistants direct access to Openfort infrastructure. It provides a bridge for LLMs to securely create projects, configure policies, and fetch real-time documentation snippets without the developer needing to leave their IDE. By exposing these tools to an AI agent, developers can scaffold entire on-chain applications and manage wallet configurations through natural language prompts, reducing manual setup time by up to 90%.
We architected the MCP Server with two guiding principles:
1. Speed up initial development and scaffolding
The primary design goal was to allow developers (or AI agents acting on their behalf) to create, configure, and scaffold full projects without needing to manually navigate the dashboard. By exposing project and resource management tools (like creating keys, policies, contracts, etc.) directly to the LLM, we allow developers to go from zero to prototype in minutes.
To support this, we also incorporated tool-based code snippets that provide product-specific scaffolding.
2. Provide precise and focused context
Rather than relying on generic documentation search or a bloated RAG pipeline, our MCP Server acts as a query-aware context provider. It fetches the exact documentation snippets, code examples, and configuration references relevant to the user's current query, project state, or gaps in the LLM's knowledge. This feedback loop improves reasoning and reduces hallucination.
Tool Categories
The MCP Server exposes 42 tools across three categories:
- Context -- Fetch real-time documentation and code snippets to scaffold projects. Includes tools like `search-documentation`, `create-openfortkit-app`, and `create-embedded-wallet-app`.
- Management -- Create and initialize new projects, manage API keys, and handle Shield keys. Includes tools like `create-project`, `list-projects`, `select-project`, and key management tools.
- Project -- Control all the details within a specific project: policies, contracts, users, accounts, and transactions. Includes tools for full CRUD operations on policies and policy rules, contract management, user and account management, and transaction simulation.
Installation
Add the MCP Server to your editor's configuration file:
```json
{
  "mcpServers": {
    "openfort-docs": {
      "url": "https://www.openfort.io/api/mcp"
    }
  }
}
```
This works with Cursor, Windsurf, VS Code (with GitHub Copilot), and Claude Desktop. Authentication is triggered automatically using GitHub, Google, or email/password through Supabase OAuth.
Architecture
Deployment
We ruled out running the MCP Server locally over stdio because of setup complexity, the lack of centralized monitoring, and the difficulty of broadcasting tool updates. Instead, we deployed it as a remote server.
The Cloudflare MCP infrastructure provides out-of-the-box deployment with native support for Server-Sent Events (SSE). This enables persistent connections where the server pushes data to clients in real time, similar to WebSockets but server-to-client only.
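To make the transport concrete, here is a minimal sketch of how a client might split an SSE stream into events. It is a simplification of the spec (real parsers also handle `id`, `retry`, CRLF line endings, and incremental chunks), and the payload in the test is hypothetical:

```typescript
// Minimal sketch of parsing a Server-Sent Events stream.
// Field names ("event", "data") follow the SSE format.
interface SseEvent {
  event: string;
  data: string;
}

function parseSse(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  // Events are separated by a blank line.
  for (const block of chunk.split("\n\n")) {
    let event = "message"; // default event type per the SSE spec
    const data: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length > 0) events.push({ event, data: data.join("\n") });
  }
  return events;
}
```

The one-way, server-to-client nature of SSE is what the sketch reflects: the client only ever reads `event`/`data` pairs, while requests travel over ordinary HTTP POSTs.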
Authentication
The Cloudflare infrastructure provides the workers-oauth-provider library, an OAuth 2.1 provider that eliminates the need to pass API keys through chat or store them in .env files. Authentication uses the PKCE flow, keeping secrets server-side. Supabase OAuth adds a second layer of authentication, integrated through a custom implementation on top of the same library.
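The PKCE step can be illustrated with the S256 transform from RFC 7636, which is what keeps the raw secret out of the authorization redirect. This is a sketch of the standard transform, not Openfort-specific code:

```typescript
import { createHash } from "node:crypto";

// PKCE S256 transform (RFC 7636): the client sends
// BASE64URL(SHA256(code_verifier)) as the code_challenge, so the raw
// verifier never appears in the redirect; it is only revealed when the
// client exchanges the authorization code server-side.
function codeChallengeS256(verifier: string): string {
  return createHash("sha256").update(verifier).digest("base64url");
}
```

The RFC 7636 appendix vector maps verifier `dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk` to challenge `E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM`, which is a handy sanity check for any implementation.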
State Management
The McpAgent SDK handles transport and provides session state. During a session, the AI agent stores the API keys from a selected project and routes all subsequent API calls to that project, avoiding repeated manual input.
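A hypothetical sketch of this pattern (names like `SessionState` and `ProjectKeys` are illustrative, not the McpAgent API): once a project is selected, its keys are cached for the rest of the session so later tool calls reuse them.

```typescript
// Illustrative per-session state: select once, reuse on every call.
interface ProjectKeys {
  projectId: string;
  publishableKey: string;
  secretKey: string;
}

class SessionState {
  private selected?: ProjectKeys;

  // Called by a select-project style tool.
  selectProject(keys: ProjectKeys): void {
    this.selected = keys;
  }

  // Later tool invocations read the cached keys instead of asking again.
  keysFor(toolName: string): ProjectKeys {
    if (!this.selected) {
      throw new Error(`No project selected; run select-project before ${toolName}`);
    }
    return this.selected;
  }
}
```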
Structure
The source code follows a simple layout:
- `handlers` -- OAuth handlers with a custom Supabase OAuth implementation
- `utils` -- HTML forms for the authentication prompt
- `tools` -- Tool definitions divided by category (context, management, project)
- `index.ts` -- MCP Agent initialization and tool registration
Using Openfort with the Vercel AI SDK
For developers building their own AI-powered applications, Openfort tools can also be accessed through the Vercel AI SDK. Because the Openfort MCP Server implements the standard protocol, it can be connected via the @ai-sdk/mcp package:
```typescript
import { createMCPClient } from "@ai-sdk/mcp";

const openfortClient = await createMCPClient({
  transport: {
    type: "sse",
    url: "https://www.openfort.io/api/mcp",
  },
});

// All 42 Openfort tools are now available
const tools = await openfortClient.tools();
```
This enables use cases like:
- AI chatbots that can create wallets and manage policies on behalf of users
- Automation pipelines that scaffold and configure new projects programmatically
- Agent systems (LangChain, CrewAI, AutoGen) that need wallet infrastructure for autonomous transactions
LLM-Friendly Documentation
For AI models and RAG pipelines that need to understand Openfort without connecting to the full MCP Server, we provide structured documentation endpoints:
- `/docs/llms.txt` -- A concise overview of Openfort with key links
- `/docs/llms-full.txt` -- Full documentation in a format optimized for LLM consumption
These follow the llms.txt standard and can be consumed by any AI model or documentation pipeline.
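For example, a RAG pipeline might first extract the key links from `llms.txt` before fetching individual pages. This sketch assumes the markdown link format the llms.txt convention uses; the sample line in the test is hypothetical:

```typescript
// Sketch of extracting links from an llms.txt file for a RAG pipeline.
// llms.txt is markdown, so key links appear as "- [Title](URL): note".
interface DocLink {
  title: string;
  url: string;
}

function parseLlmsTxtLinks(text: string): DocLink[] {
  const links: DocLink[] = [];
  const pattern = /\[([^\]]+)\]\((https?:\/\/[^)\s]+)\)/g;
  for (const match of text.matchAll(pattern)) {
    links.push({ title: match[1], url: match[2] });
  }
  return links;
}
```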
Limitations
Openfort's AI tools still have some limitations:
- The MCP protocol itself is still evolving. For example, the return format from the MCP Server to the MCP Client must always be text, so JSON responses are stringified, which can occasionally confuse LLMs.
- Output quality depends on the underlying LLM. Results are non-deterministic, and the model can hallucinate or produce errors.
- Tool selection is sensitive to prompt phrasing. A slight change in wording can lead the LLM to choose a different tool or skip one entirely.
- The MCP Server currently only works with test projects. Live mode support is not yet implemented.
- Code scaffolding is primarily designed for the React + Vite stack.
- MCP support requires an IDE or client that supports custom MCP servers.
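The first limitation above (text-only results) is easy to soften on the client side. A minimal sketch, assuming tool results arrive as strings that may or may not be stringified JSON:

```typescript
// Defensive handling of text-only MCP tool results: re-parse structured
// payloads, and fall back gracefully for plain-text responses.
function parseToolResult(text: string): unknown {
  try {
    return JSON.parse(text);
  } catch {
    return { text }; // not JSON: keep the raw string
  }
}
```

Feeding the parsed object (rather than the raw string) back into the model's context tends to reduce the confusion the bullet above describes.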
Resources
- Building with AI -- Openfort Documentation
- MCP Server Documentation
- Model Context Protocol -- Official Introduction
- Vercel AI SDK -- MCP Tools
- Cloudflare -- What is Model Context Protocol?