Last active: May 27, 2025 17:38
Summary task
{
"taskName": "openAi:summary",
"projectName": "pkms",
"fixtureUUID": "1d0e9aef-35dc-4885-be34-61c667b828f0",
"input": {
"uuid": "b7cc8b2f-937c-43cc-81fb-8e939e45a21a"
},
"boundaries": {
"findByUuid": [
{
"input": [
"b7cc8b2f-937c-43cc-81fb-8e939e45a21a"
],
"output": {
"_id": "6835f47a70eb72c4385deaf6",
"url": "https://xata.io/blog/built-xata-mcp-server",
"status": "hasContent",
"title": "From OpenAPI spec to MCP: how we built Xata's MCP server | xata",
"description": "Learn how we built an OpenAPI-driven MCP server using Kubb, custom code generators, and Vercel’s Next.js MCP adapter.",
"content": "* Features\n* Open source\n* [Pricing](https://xata.io/blog/built-xata-mcp-server/pricing)\n* [Blog](https://xata.io/blog/built-xata-mcp-server/blog)\n\n[8.0k](https://github.com/xataio)\n\n[8.0k](https://github.com/xataio)\n\n* Log in\n\n[Get access](https://xata.io/blog/built-xata-mcp-server/get-access)\n\n# From OpenAPI spec to MCP: How we built Xata's MCP server\n\nLearn how we built an OpenAPI-driven MCP server using Kubb, custom code generators, and Vercel’s Next.js MCP adapter.\n\nAuthor\n\nAlexis Rico\n\nDate published\n\nMay 20, 2025\n\nModel Context Protocol (MCP) is an emerging standard that lets AI models securely interact with tools and APIs in real time. Building an MCP server means exposing a set of “tools” (operations) that a Large Language Model (LLM) can call to perform tasks, for example, fetching data or triggering actions via your backend. Rather than hand-coding each tool, we set out to **generate an MCP server from our existing OpenAPI specification**, leveraging our API’s schema as a single source of truth. This OpenAPI-driven approach promises quick development and consistency, but it comes with design considerations.\n\nOn one hand, auto-generating tools directly from a REST API spec is very appealing. You’ve already documented your API thoroughly, so why not transform those endpoints into AI-accessible functions? **It saves time and keeps the API and MCP definitions in sync**, avoiding duplicate work. On the other hand, a **naïve one-to-one mapping of every endpoint to an MCP tool can overwhelm an LLM**. LLMs struggle to choose the right action from so many low-level options, leading to frequent errors or unpredictable calls, especially if several endpoints have similar purposes.\n\nThe solution lies in a balanced approach. 
Writing an MCP server entirely by hand for a large API would be a massive time sink, manually crafting each tool’s schema and handler is tedious and error-prone, especially when a well-defined OpenAPI spec already exists. Instead, **we can autogenerate the groundwork from OpenAPI, then curate it**. In practice, this means using codegen to produce a set of tool definitions and client calls, _then_ trimming or augmenting the OpenAPI spec that generates the tools to align with real-world usage. \n\nThis post is a technical overview of how we built the Xata MCP Server, covering our switch to a new OpenAPI codegen approach, custom generation of MCP tools, and the Next.js server implementation.\n\nWe’ll walk through the journey in three parts:\n\n1. **Migrating to Kubb for OpenAPI code generation.** Why we replaced our previous codegen with Kubb and the benefits gained.\n2. **Customizing Kubb with custom generators.** How we generated a TypeScript API client and a suite of MCP tools from our OpenAPI spec.\n3. **Creating the MCP Server with Next.js.** Wiring it all together using Vercel’s MCP adapter, route handlers, authentication middleware, and initializing the generated tools.\n\nLet’s explore, step by step, how the Xata MCP Server was built.\n\n## Migrating from OpenAPI Codegen to Kubb\n\nOur first task was to revisit how we generate API client code from Xata’s API specification. Historically, we used a traditional OpenAPI code generator to produce a TypeScript client for the Xata REST API. This approach worked, but it was rigid and hard to customize. Adding new output formats or tweaking the generated code meant wrestling with scripts or post-processing the results. We wanted a more flexible, integrated solution.\n\nEnter [**Kubb**](https://kubb.dev), a toolkit designed for TypeScript projects to generate code from OpenAPI/Swagger specs. 
Kubb can generate TypeScript types, API clients, React Query hooks, Zod validators, MSW handlers, and even MCP integration code, all from an OpenAPI spec.\n\nAnother key reason we chose Kubb was its **plugin and generator architecture**. Kubb’s code generation process is highly customizable: you can plug in predefined generators or write your own to tailor the output. This was exactly what we needed. Instead of treating our OpenAPI spec as just input for a one-size-fits-all client generator, we could leverage it to produce **multiple outputs**, like a low-level API client and a set of MCP tools in one go.\n\n### Kubb Configuration for Xata’s API\n\nSetting up Kubb was straightforward. We added a `kubb.config.ts` in our project, pointing it to Xata’s OpenAPI spec (which we maintain for our REST API) and declaring which plugins/generators to use. For our case, we enabled the core OpenAPI parser, TypeScript type generation, a custom client generator, and a custom MCP tool generator.\n\nHere’s a simplified version of what our Kubb config looks like:\n\nCopy Code\n\nIn this config, the `@kubb/plugin-oas` plugin handles reading the OpenAPI spec and iterating over its contents. We then provide our own generators in the `generators` array, one to build the API client and another to build the MCP tool definitions. The `plugin-ts` is included to output TypeScript interfaces/types for our API schemas (useful for strong typing of request and response bodies). Kubb will orchestrate all these plugins in one run, parsing the spec once and feeding the data to each generator.\n\nWith the config in place, a simple command (e.g. `pnpm kubb generate`) triggers the code generation.\n\n## Custom Generators: API Client and MCP Tools\n\nUsing Kubb’s extensibility, we wrote **custom generators** to produce two key outputs from the OpenAPI spec: (a) **TypeScript API client** for Xata’s REST API, and (b) **MCP tool handlers** that map onto those API endpoints. 
By generating these from the spec, we ensure consistency and save a ton of manual coding.\n\n### 1\\. Generating a Typed API Client from OpenAPI\n\nThe first generator, `clientGenerator`, focuses on creating a lightweight API client library. We wanted to keep the ergonomics of the fully type-safe API client that we have been using for 3 years. While Kubb offers a default client generator (using Axios by default), we opted to customize it to better fit our needs (for example, to use fetch, handle our auth scheme seamlessly and provide the same surfacing API that we had in our codebase).\n\nIn essence, our client generator iterates over each operation in the OpenAPI spec and emits a function that calls that endpoint. For each operation, we use its **operation ID** (or a modified version of it) as the function name, and generate a TypeScript function signature based on the operation’s parameters and response schema.\n\nBecause this client code is generated, it stays up-to-date with our API. If we add a new endpoint or change a parameter in the OpenAPI spec, re-running Kubb will update the client functions accordingly. This beats hand-writing HTTP calls for each new feature. It’s also less error-prone as we don’t risk typos or forgetting a header, because the generation logic consistently applies the spec’s details. In short, **the OpenAPI spec remains the single source of truth**, and our API client is a direct reflection of it.\n\n### 2\\. Generating MCP Tool Definitions from OpenAPI\n\nThe second (and more interesting) generator is the `mcpGenerator`. This one produces code that bridges the gap between our API and the MCP **tool interface** that an AI agent can use. Although Kubb has support to build a default MCP server, we decided to customize the generator to produce a `initMcpTools` function that we can call from Vercel's MCP Adapter.\n\n**Tool curation and descriptions:** While we generated most tools, we did make some intentional choices. 
For instance, we **omitted certain internal or less useful endpoints** from the MCP interface to avoid cluttering the AI with too many options. We also edited some tool descriptions for clarity, for example, an OpenAPI description meant for developers might be adjusted to be more instructive for an AI agent.\n\n**Using Zod for input validation:** We decided to use **Zod** schemas for the tool input definitions. Kubb conveniently can generate Zod schemas from the OpenAPI spec (via `@kubb/plugin-zod`), which we leveraged for complex data structures. Zod serves two purposes: it defines the input format for the AI (so that the AI knows what arguments to provide), and it validates any incoming request at runtime, adding a safety net. If an AI somehow provides an incorrect type, the MCP server will reject it before hitting our API.\n\n## Building the MCP Server as a Next.js App\n\nWith our client and tool code generated, the final step was to stand up the MCP server itself. We chose **Next.js** to implement the server, using [Vercel’s @vercel/mcp-adapter](https://github.com/vercel/mcp-adapter) package to handle the protocol details. This choice was driven by a few factors:\n\n* **Seamless Vercel Deployment:** Xata’s MCP server would be deployed on Vercel, and Next.js is a first-class citizen there. Vercel’s MCP adapter is built to drop into a Next.js API route, making deployment and scaling straightforward.\n* **Serverless and Fluid Compute:** Next.js on Vercel can take advantage of their new “Fluid” Node.js runtime, which is well-suited for long-lived connections like SSE (Server-Sent Events) and can yield cost savings for AI workloads.\n* **Routing & Middleware:** Next’s API route and Middleware features allowed us to handle authentication and request routing in the same way we are building the rest of our frontend applications.\n\n### Route Handling with `@vercel/mcp-adapter`\n\nWe created a dedicated API route for MCP under the Next.js `app` directory. 
Following Vercel’s example, we used a dynamic route `[transport]` to support both SSE and HTTP transports. In our project, we have a file like `app/api/[transport]/route.ts`. This dynamic segment (`[transport]`) means the route will match `/api/mcp` and `/api/sse`. Inside this file, we use the adapter:\n\nCopy Code\n\nLet’s break down what’s happening here. We call `createMcpHandler` to create a Next.js request handler that speaks the MCP protocol. We pass in a callback that receives a `server` object where we **register our tools**. Rather than manually listing each tool, we call our generated `initMcpTools(server)` helper, which in turn invokes all the `server.tool(name, desc, schema, impl)` definitions that were generated from our OpenAPI spec. This populates the MCP server with the full toolkit of Xata actions.\n\nThe MCP server is wrapped with a `withAuth` wrapper function that verifies the token provided by the MCP Host. If no token is found, we return a 401 and prompt the MCP Host to start the OAuth Dynamic Client Registration against our authentication server.\n\nWe export the same handler for both GET and POST HTTP methods. According to the MCP spec (and Vercel’s adapter), the MCP client (the AI’s side) may use GET/POST for different phases of the handshake and tool calling. By exporting both, we ensure our Next.js route will handle all required requests.\n\nIn the configuration object passed to `createMcpHandler`, we included a `redisUrl`. This is because **Server-Sent Events (SSE)** transport (used by Claude and some clients) is stateful and it expects the server to maintain conversation state between calls. The Vercel adapter uses Redis if provided to store state (identified by a session ID) so that multiple function calls in a session share context.\n\nWith this route set up, our Next.js app is essentially a **fully functional MCP server**. 
When an AI agent connects to it (via an MCP client), the adapter will handle the initial handshake and advertise all the tools we registered. The AI can then invoke any of those tools, and the adapter will call into our implementations (which call Xata’s API) and return the results back to the AI.\n\n**MCP vs traditional API:** It’s worth noting how this differs from a normal REST API. Instead of the client calling specific endpoints directly, the AI does a handshake to discover available **tools** and then calls them by name. You can think of it as a capabilities-based RPC system. For example, rather than hitting a `/databases` REST endpoint, the AI asks “what tools do you have?” and the server replies with something like “I have a `list_databases` tool that lists all databases in a workspace, a `create_branch` tool that creates a branch on a project,” etc. The AI decides which tool to use and sends a request like “invoke `list_databases` with workspace=X”. The MCP server then executes our function for `list_databases`, which in turn calls the real Xata API, and the result is sent back to the AI.\n\n## Conclusion\n\nBy leaning into our OpenAPI schema, we gave our MCP server “superpowers”: the ability to evolve at the speed of our API and the confidence of strong typing and validation at every step. This hybrid approach (auto-generate then polish) let us stand up a powerful AI integration in a fraction of the time it would take to code from scratch. The **MCP Server** now offers a conversational interface to our platform, turning natural language prompts into real actions backed by our APIs. All of this was achieved by treating the API spec as executable knowledge, not just documentation.\n\nAs AI continues to weave into developer platforms, techniques like this will become increasingly common. They enable us to build smarter apps without reinventing the wheel for each new interface. 
If you’re excited by the possibilities at this intersection of AI and backend infrastructure, we invite you to give our new platform a try. **Xata’s latest offering “Postgres at scale” with data branching and PII anonymization is now live**. It combines a serverless Postgres experience with modern features like instant branching and data masking. Check out [our announcement](https://xata.io/blog/xata-postgres-with-data-branching-and-pii-anonymization) or [request beta access](https://xata.io/get-access) to see how it can supercharge your development workflow, and feel free to experiment with our MCP server example as you explore what’s next in AI-driven development. Happy coding!\n\n## Related Posts\n\n### [Xata: Postgres with data branching and PII anonymization](https://xata.io/blog/built-xata-mcp-server/blog/xata-postgres-with-data-branching-and-pii-anonymization)\n\nRelaunching Xata as \"Postgres at scale\". A Postgres platform with Copy-on-Write branching, data masking, and separation of storage from compute.\n\n### [Are AI agents the future of observability?](https://xata.io/blog/built-xata-mcp-server/blog/are-ai-agents-the-future-of-observability)\n\nAfter vibe coding, is vibe observability next?\n\nPostgres at scale\n\n[Twitter](http://twitter.com/xata)[Bluesky](https://bsky.app/profile/xata.io)[LinkedIn](https://www.linkedin.com/company/xataio)[Discord](https://xata.io/discord)[YouTube](https://www.youtube.com/@xataio)[Contact us](mailto:[email protected])\n\n[Home](https://xata.io/blog/built-xata-mcp-server/)[About](/about)\n\n[Compare with Neon](https://xata.io/blog/built-xata-mcp-server/versus-neon)[Pricing](https://xata.io/blog/built-xata-mcp-server/pricing)\n\n---\n\n© Copyright 2025 Xatabase Inc. All Rights Reserved.\n\n[Privacy Policy](https://xata.io/blog/built-xata-mcp-server/privacy)[Terms of Use](https://xata.io/blog/built-xata-mcp-server/terms)",
"summary": "",
"uuid": "b7cc8b2f-937c-43cc-81fb-8e939e45a21a",
"createdAt": "2025-05-27T17:20:58.298Z",
"updatedAt": "2025-05-27T17:21:10.075Z",
"__v": 0
}
}
],
"generateSummary": [
{
"input": [
"Title: From OpenAPI spec to MCP: how we built Xata's MCP server | xata\n\nDescription: Learn how we built an OpenAPI-driven MCP server using Kubb, custom code generators, and Vercel’s Next.js MCP adapter.\n\nContent: Features Open source Pricing Blog 8.0k 8.0k Log in Get access From OpenAPI spec to MCP: How we built Xata's MCP server Learn how we built an OpenAPI-driven MCP server using Kubb, custom code generators, and Vercel’s Next.js MCP adapter. Author Alexis Rico Date published May 20, 2025 Model Context Protocol (MCP) is an emerging standard that lets AI models securely interact with tools and APIs in real time. Building an MCP server means exposing a set of “tools” (operations) that a Large Language Model (LLM) can call to perform tasks, for example, fetching data or triggering actions via your backend. Rather than hand-coding each tool, we set out to generate an MCP server from our existing OpenAPI specification, leveraging our API’s schema as a single source of truth. This OpenAPI-driven approach promises quick development and consistency, but it comes with design considerations. On one hand, auto-generating tools directly from a REST API spec is very appealing. You’ve already documented your API thoroughly, so why not transform those endpoints into AI-accessible functions? It saves time and keeps the API and MCP definitions in sync, avoiding duplicate work. On the other hand, a naïve one-to-one mapping of every endpoint to an MCP tool can overwhelm an LLM. LLMs struggle to choose the right action from so many low-level options, leading to frequent errors or unpredictable calls, especially if several endpoints have similar purposes. The solution lies in a balanced approach. Writing an MCP server entirely by hand for a large API would be a massive time sink, manually crafting each tool’s schema and handler is tedious and error-prone, especially when a well-defined OpenAPI spec already exists. 
Instead, we can autogenerate the groundwork from OpenAPI, then curate it. In practice, this means using codegen to produce a set of tool definitions and client calls, then trimming or augmenting the OpenAPI spec that generates the tools to align with real-world usage. This post is a technical overview of how we built the Xata MCP Server, covering our switch to a new OpenAPI codegen approach, custom generation of MCP tools, and the Next.js server implementation. We’ll walk through the journey in three parts: Migrating to Kubb for OpenAPI code generation. Why we replaced our previous codegen with Kubb and the benefits gained. Customizing Kubb with custom generators. How we generated a TypeScript API client and a suite of MCP tools from our OpenAPI spec. Creating the MCP Server with Next.js. Wiring it all together using Vercel’s MCP adapter, route handlers, authentication middleware, and initializing the generated tools. Let’s explore, step by step, how the Xata MCP Server was built. Migrating from OpenAPI Codegen to Kubb Our first task was to revisit how we generate API client code from Xata’s API specification. Historically, we used a traditional OpenAPI code generator to produce a TypeScript client for the Xata REST API. This approach worked, but it was rigid and hard to customize. Adding new output formats or tweaking the generated code meant wrestling with scripts or post-processing the results. We wanted a more flexible, integrated solution. Enter Kubb, a toolkit designed for TypeScript projects to generate code from OpenAPI/Swagger specs. Kubb can generate TypeScript types, API clients, React Query hooks, Zod validators, MSW handlers, and even MCP integration code, all from an OpenAPI spec. Another key reason we chose Kubb was its plugin and generator architecture. Kubb’s code generation process is highly customizable: you can plug in predefined generators or write your own to tailor the output. This was exactly what we needed. 
Instead of treating our OpenAPI spec as just input for a one-size-fits-all client generator, we could leverage it to produce multiple outputs, like a low-level API client and a set of MCP tools in one go. Kubb Configuration for Xata’s API Setting up Kubb was straightforward. We added a kubb.config.ts in our project, pointing it to Xata’s OpenAPI spec (which we maintain for our REST API) and declaring which plugins/generators to use. For our case, we enabled the core OpenAPI parser, TypeScript type generation, a custom client generator, and a custom MCP tool generator. Here’s a simplified version of what our Kubb config looks like: Copy Code In this config, the @kubb/plugin-oas plugin handles reading the OpenAPI spec and iterating over its contents. We then provide our own generators in the generators array, one to build the API client and another to build the MCP tool definitions. The plugin-ts is included to output TypeScript interfaces/types for our API schemas (useful for strong typing of request and response bodies). Kubb will orchestrate all these plugins in one run, parsing the spec once and feeding the data to each generator. With the config in place, a simple command (e.g. pnpm kubb generate) triggers the code generation. Custom Generators: API Client and MCP Tools Using Kubb’s extensibility, we wrote custom generators to produce two key outputs from the OpenAPI spec: (a) TypeScript API client for Xata’s REST API, and (b) MCP tool handlers that map onto those API endpoints. By generating these from the spec, we ensure consistency and save a ton of manual coding. 1. Generating a Typed API Client from OpenAPI The first generator, clientGenerator, focuses on creating a lightweight API client library. We wanted to keep the ergonomics of the fully type-safe API client that we have been using for 3 years. 
While Kubb offers a default client generator (using Axios by default), we opted to customize it to better fit our needs (for example, to use fetch, handle our auth scheme seamlessly and provide the same surfacing API that we had in our codebase). In essence, our client generator iterates over each operation in the OpenAPI spec and emits a function that calls that endpoint. For each operation, we use its operation ID (or a modified version of it) as the function name, and generate a TypeScript function signature based on the operation’s parameters and response schema. Because this client code is generated, it stays up-to-date with our API. If we add a new endpoint or change a parameter in the OpenAPI spec, re-running Kubb will update the client functions accordingly. This beats hand-writing HTTP calls for each new feature. It’s also less error-prone as we don’t risk typos or forgetting a header, because the generation logic consistently applies the spec’s details. In short, the OpenAPI spec remains the single source of truth, and our API client is a direct reflection of it. 2. Generating MCP Tool Definitions from OpenAPI The second (and more interesting) generator is the mcpGenerator. This one produces code that bridges the gap between our API and the MCP tool interface that an AI agent can use. Although Kubb has support to build a default MCP server, we decided to customize the generator to produce a initMcpTools function that we can call from Vercel's MCP Adapter. Tool curation and descriptions: While we generated most tools, we did make some intentional choices. For instance, we omitted certain internal or less useful endpoints from the MCP interface to avoid cluttering the AI with too many options. We also edited some tool descriptions for clarity, for example, an OpenAPI description meant for developers might be adjusted to be more instructive for an AI agent. Using Zod for input validation: We decided to use Zod schemas for the tool input definitions. 
Kubb conveniently can generate Zod schemas from the OpenAPI spec (via @kubb/plugin-zod), which we leveraged for complex data structures. Zod serves two purposes: it defines the input format for the AI (so that the AI knows what arguments to provide), and it validates any incoming request at runtime, adding a safety net. If an AI somehow provides an incorrect type, the MCP server will reject it before hitting our API. Building the MCP Server as a Next.js App With our client and tool code generated, the final step was to stand up the MCP server itself. We chose Next.js to implement the server, using Vercel’s @vercel/mcp-adapter package to handle the protocol details. This choice was driven by a few factors: Seamless Vercel Deployment: Xata’s MCP server would be deployed on Vercel, and Next.js is a first-class citizen there. Vercel’s MCP adapter is built to drop into a Next.js API route, making deployment and scaling straightforward. Serverless and Fluid Compute: Next.js on Vercel can take advantage of their new “Fluid” Node.js runtime, which is well-suited for long-lived connections like SSE (Server-Sent Events) and can yield cost savings for AI workloads. Routing & Middleware: Next’s API route and Middleware features allowed us to handle authentication and request routing in the same way we are building the rest of our frontend applications. Route Handling with @vercel/mcp-adapter We created a dedicated API route for MCP under the Next.js app directory. Following Vercel’s example, we used a dynamic route [transport] to support both SSE and HTTP transports. In our project, we have a file like app/api/[transport]/route.ts. This dynamic segment ([transport]) means the route will match /api/mcp and /api/sse. Inside this file, we use the adapter: Copy Code Let’s break down what’s happening here. We call createMcpHandler to create a Next.js request handler that speaks the MCP protocol. We pass in a callback that receives a server object where we register our tools. 
Rather than manually listing each tool, we call our generated initMcpTools(server) helper, which in turn invokes all the server.tool(name, desc, schema, impl) definitions that were generated from our OpenAPI spec. This populates the MCP server with the full toolkit of Xata actions. The MCP server is wrapped with a withAuth wrapper function that verifies the token provided by the MCP Host. If no token is found, we return a 401 and prompt the MCP Host to start the OAuth Dynamic Client Registration against our authentication server. We export the same handler for both GET and POST HTTP methods. According to the MCP spec (and Vercel’s adapter), the MCP client (the AI’s side) may use GET/POST for different phases of the handshake and tool calling. By exporting both, we ensure our Next.js route will handle all required requests. In the configuration object passed to createMcpHandler, we included a redisUrl. This is because Server-Sent Events (SSE) transport (used by Claude and some clients) is stateful and it expects the server to maintain conversation state between calls. The Vercel adapter uses Redis if provided to store state (identified by a session ID) so that multiple function calls in a session share context. With this route set up, our Next.js app is essentially a fully functional MCP server. When an AI agent connects to it (via an MCP client), the adapter will handle the initial handshake and advertise all the tools we registered. The AI can then invoke any of those tools, and the adapter will call into our implementations (which call Xata’s API) and return the results back to the AI. MCP vs traditional API: It’s worth noting how this differs from a normal REST API. Instead of the client calling specific endpoints directly, the AI does a handshake to discover available tools and then calls them by name. You can think of it as a capabilities-based RPC system. 
For example, rather than hitting a /databases REST endpoint, the AI asks “what tools do you have?” and the server replies with something like “I have a list_databases tool that lists all databases in a workspace, a create_branch tool that creates a branch on a project,” etc. The AI decides which tool to use and sends a request like “invoke list_databases with workspace=X”. The MCP server then executes our function for list_databases, which in turn calls the real Xata API, and the result is sent back to the AI. Conclusion By leaning into our OpenAPI schema, we gave our MCP server “superpowers”: the ability to evolve at the speed of our API and the confidence of strong typing and validation at every step. This hybrid approach (auto-generate then polish) let us stand up a powerful AI integration in a fraction of the time it would take to code from scratch. The MCP Server now offers a conversational interface to our platform, turning natural language prompts into real actions backed by our APIs. All of this was achieved by treating the API spec as executable knowledge, not just documentation. As AI continues to weave into developer platforms, techniques like this will become increasingly common. They enable us to build smarter apps without reinventing the wheel for each new interface. If you’re excited by the possibilities at this intersection of AI and backend infrastructure, we invite you to give our new platform a try. Xata’s latest offering “Postgres at scale” with data branching and PII anonymization is now live. It combines a serverless Postgres experience with modern features like instant branching and data masking. Check out our announcement or request beta access to see how it can supercharge your development workflow, and feel free to experiment with our MCP server example as you explore what’s next in AI-driven development. Happy coding! Related Posts Xata: Postgres with data branching and PII anonymization Relaunching Xata as \"Postgres at scale\". 
A Postgres platform with Copy-on-Write branching, data masking, and separation of storage from compute. Are AI agents the future of observability? After vibe coding, is vibe observability next? Postgres at scale TwitterBlueskyLinkedInDiscordYouTubeContact us HomeAbout Compare with NeonPricing © Copyright 2025 Xatabase Inc. All Rights Reserved. Privacy PolicyTerms of Use"
],
"output": {
"content": "The article discusses the development of Xata's Model Context Protocol (MCP) server, which facilitates real-time interactions between AI models and APIs. The authors emphasize the importance of using an OpenAPI specification as the foundation for generating the MCP server, allowing for a more efficient and consistent development process. By leveraging existing API documentation, they aim to avoid the pitfalls of manually coding each tool, which can lead to errors and inconsistencies. The article highlights the challenges of directly mapping API endpoints to MCP tools, noting that a careful curation process is necessary to prevent overwhelming the AI with too many options.\n\nThe transition to Kubb, a code generation toolkit for TypeScript projects, is a key focus of the article. The authors explain how Kubb allows for greater flexibility and customization compared to traditional OpenAPI code generators. By configuring Kubb to generate both a TypeScript API client and MCP tool definitions from the OpenAPI spec, the team can maintain consistency and reduce manual coding efforts. The article outlines the setup process for Kubb, detailing how it integrates various plugins and generators to produce the desired outputs efficiently.\n\nCustom generators play a significant role in the development process, particularly in creating a type-safe API client and MCP tool handlers. The authors describe how the client generator ensures that the API client remains up-to-date with the OpenAPI spec, thus minimizing errors associated with manual HTTP calls. Additionally, the MCP tool generator is designed to bridge the API and the MCP interface, allowing for a streamlined interaction between AI agents and the backend. 
The use of Zod schemas for input validation further enhances the reliability of the system by ensuring that incoming requests conform to expected formats.\n\nFinally, the article details the implementation of the MCP server using Next.js and Vercel’s MCP adapter. This setup allows for seamless deployment and scaling while providing a robust routing and middleware framework. The authors explain how the MCP server differs from traditional REST APIs by using a capabilities-based RPC system, where AI agents can discover and invoke tools dynamically. By treating the OpenAPI spec as executable knowledge, the team has created a powerful integration that enhances the functionality of their platform, making it easier for developers to leverage AI in their applications.", | |
"usage": 3452 | |
} | |
} | |
], | |
"saveSummary": [ | |
{ | |
"input": [ | |
"b7cc8b2f-937c-43cc-81fb-8e939e45a21a", | |
"The article discusses the development of Xata's Model Context Protocol (MCP) server, which facilitates real-time interactions between AI models and APIs. The authors emphasize the importance of using an OpenAPI specification as the foundation for generating the MCP server, allowing for a more efficient and consistent development process. By leveraging existing API documentation, they aim to avoid the pitfalls of manually coding each tool, which can lead to errors and inconsistencies. The article highlights the challenges of directly mapping API endpoints to MCP tools, noting that a careful curation process is necessary to prevent overwhelming the AI with too many options.\n\nThe transition to Kubb, a code generation toolkit for TypeScript projects, is a key focus of the article. The authors explain how Kubb allows for greater flexibility and customization compared to traditional OpenAPI code generators. By configuring Kubb to generate both a TypeScript API client and MCP tool definitions from the OpenAPI spec, the team can maintain consistency and reduce manual coding efforts. The article outlines the setup process for Kubb, detailing how it integrates various plugins and generators to produce the desired outputs efficiently.\n\nCustom generators play a significant role in the development process, particularly in creating a type-safe API client and MCP tool handlers. The authors describe how the client generator ensures that the API client remains up-to-date with the OpenAPI spec, thus minimizing errors associated with manual HTTP calls. Additionally, the MCP tool generator is designed to bridge the API and the MCP interface, allowing for a streamlined interaction between AI agents and the backend. The use of Zod schemas for input validation further enhances the reliability of the system by ensuring that incoming requests conform to expected formats.\n\nFinally, the article details the implementation of the MCP server using Next.js and Vercel’s MCP adapter. 
This setup allows for seamless deployment and scaling while providing a robust routing and middleware framework. The authors explain how the MCP server differs from traditional REST APIs by using a capabilities-based RPC system, where AI agents can discover and invoke tools dynamically. By treating the OpenAPI spec as executable knowledge, the team has created a powerful integration that enhances the functionality of their platform, making it easier for developers to leverage AI in their applications." | |
], | |
"output": { | |
"success": true, | |
"summary": "The article discusses the development of Xata's Model Context Protocol (MCP) server, which facilitates real-time interactions between AI models and APIs. The authors emphasize the importance of using an OpenAPI specification as the foundation for generating the MCP server, allowing for a more efficient and consistent development process. By leveraging existing API documentation, they aim to avoid the pitfalls of manually coding each tool, which can lead to errors and inconsistencies. The article highlights the challenges of directly mapping API endpoints to MCP tools, noting that a careful curation process is necessary to prevent overwhelming the AI with too many options.\n\nThe transition to Kubb, a code generation toolkit for TypeScript projects, is a key focus of the article. The authors explain how Kubb allows for greater flexibility and customization compared to traditional OpenAPI code generators. By configuring Kubb to generate both a TypeScript API client and MCP tool definitions from the OpenAPI spec, the team can maintain consistency and reduce manual coding efforts. The article outlines the setup process for Kubb, detailing how it integrates various plugins and generators to produce the desired outputs efficiently.\n\nCustom generators play a significant role in the development process, particularly in creating a type-safe API client and MCP tool handlers. The authors describe how the client generator ensures that the API client remains up-to-date with the OpenAPI spec, thus minimizing errors associated with manual HTTP calls. Additionally, the MCP tool generator is designed to bridge the API and the MCP interface, allowing for a streamlined interaction between AI agents and the backend. 
The use of Zod schemas for input validation further enhances the reliability of the system by ensuring that incoming requests conform to expected formats.\n\nFinally, the article details the implementation of the MCP server using Next.js and Vercel’s MCP adapter. This setup allows for seamless deployment and scaling while providing a robust routing and middleware framework. The authors explain how the MCP server differs from traditional REST APIs by using a capabilities-based RPC system, where AI agents can discover and invoke tools dynamically. By treating the OpenAPI spec as executable knowledge, the team has created a powerful integration that enhances the functionality of their platform, making it easier for developers to leverage AI in their applications." | |
} | |
} | |
], | |
"markSummaryError": [] | |
}, | |
"output": { | |
"status": "Ok", | |
"summary": "The article discusses the development of Xata's Model Context Protocol (MCP) server, which facilitates real-time interactions between AI models and APIs. The authors emphasize the importance of using an OpenAPI specification as the foundation for generating the MCP server, allowing for a more efficient and consistent development process. By leveraging existing API documentation, they aim to avoid the pitfalls of manually coding each tool, which can lead to errors and inconsistencies. The article highlights the challenges of directly mapping API endpoints to MCP tools, noting that a careful curation process is necessary to prevent overwhelming the AI with too many options.\n\nThe transition to Kubb, a code generation toolkit for TypeScript projects, is a key focus of the article. The authors explain how Kubb allows for greater flexibility and customization compared to traditional OpenAPI code generators. By configuring Kubb to generate both a TypeScript API client and MCP tool definitions from the OpenAPI spec, the team can maintain consistency and reduce manual coding efforts. The article outlines the setup process for Kubb, detailing how it integrates various plugins and generators to produce the desired outputs efficiently.\n\nCustom generators play a significant role in the development process, particularly in creating a type-safe API client and MCP tool handlers. The authors describe how the client generator ensures that the API client remains up-to-date with the OpenAPI spec, thus minimizing errors associated with manual HTTP calls. Additionally, the MCP tool generator is designed to bridge the API and the MCP interface, allowing for a streamlined interaction between AI agents and the backend. 
The use of Zod schemas for input validation further enhances the reliability of the system by ensuring that incoming requests conform to expected formats.\n\nFinally, the article details the implementation of the MCP server using Next.js and Vercel’s MCP adapter. This setup allows for seamless deployment and scaling while providing a robust routing and middleware framework. The authors explain how the MCP server differs from traditional REST APIs by using a capabilities-based RPC system, where AI agents can discover and invoke tools dynamically. By treating the OpenAPI spec as executable knowledge, the team has created a powerful integration that enhances the functionality of their platform, making it easier for developers to leverage AI in their applications.", | |
"usage": 3452 | |
} | |
} |
The article discusses the development of Xata's Model Context Protocol (MCP) server, which facilitates real-time interactions between AI models and APIs. The authors emphasize the importance of using an OpenAPI specification as the foundation for generating the MCP server, allowing for a more efficient and consistent development process. By leveraging existing API documentation, they aim to avoid the pitfalls of manually coding each tool, which can lead to errors and inconsistencies. The article highlights the challenges of directly mapping API endpoints to MCP tools, noting that a careful curation process is necessary to prevent overwhelming the AI with too many options.
The transition to Kubb, a code generation toolkit for TypeScript projects, is a key focus of the article. The authors explain how Kubb allows for greater flexibility and customization compared to traditional OpenAPI code generators. By configuring Kubb to generate both a TypeScript API client and MCP tool definitions from the OpenAPI spec, the team can maintain consistency and reduce manual coding efforts. The article outlines the setup process for Kubb, detailing how it integrates various plugins and generators to produce the desired outputs efficiently.
Custom generators play a significant role in the development process, particularly in creating a type-safe API client and MCP tool handlers. The authors describe how the client generator ensures that the API client remains up-to-date with the OpenAPI spec, thus minimizing errors associated with manual HTTP calls. Additionally, the MCP tool generator is designed to bridge the API and the MCP interface, allowing for a streamlined interaction between AI agents and the backend. The use of Zod schemas for input validation further enhances the reliability of the system by ensuring that incoming requests conform to expected formats.
Finally, the article details the implementation of the MCP server using Next.js and Vercel’s MCP adapter. This setup allows for seamless deployment and scaling while providing a robust routing and middleware framework. The authors explain how the MCP server differs from traditional REST APIs by using a capabilities-based RPC system, where AI agents can discover and invoke tools dynamically. By treating the OpenAPI spec as executable knowledge, the team has created a powerful integration that enhances the functionality of their platform, making it easier for developers to leverage AI in their applications. |
// TASK: summary
// Run this task with:
// forge task:run openAi:summary --uuid 0c2a10bf-cf34-4c21-a3be-ba31ed96ddad
import { createTask } from '@forgehive/task'
import { Schema } from '@forgehive/schema'
import { OpenAI } from 'openai'
import markdownToTxt from 'markdown-to-txt'

import { Url } from '@/models'

const description = 'Create a summary from extracted URL content using OpenAI'

const schema = new Schema({
  uuid: Schema.string()
})

// No ContentSummary response schema needed since we're returning plain text

const boundaries = {
  findByUuid: async (uuid: string) => {
    return await Url.findOne({ uuid })
  },
  generateSummary: async (content: string): Promise<{ content: string; usage: number | undefined }> => {
    const apiKey = process.env.OPENAI_API_KEY
    if (!apiKey) {
      throw new Error('OpenAI API key is not configured. Please set OPENAI_API_KEY in your .env file.')
    }

    const openai = new OpenAI({ apiKey })
    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini-2024-07-18',
      messages: [
        {
          role: 'system',
          content: `
            You are an assistant that analyzes content and creates comprehensive summaries.
            Create a clear and concise summary of the provided content in 4 paragraphs.
            The summary should be written in markdown format with proper formatting.
            Do not include the title in your summary - focus only on summarizing the main content.
            Make the summary explain why I should read the content.
            Return only the markdown summary text.
          `
        },
        {
          role: 'user',
          content: content
        }
      ],
      max_tokens: 4000,
      temperature: 0.3
      // No response_format needed since we're returning plain text
    })

    if (!response.choices[0].message.content) {
      throw new Error('Could not generate summary from the content.')
    }

    console.log('Usage =>', response.usage?.prompt_tokens, response.usage?.completion_tokens, response.usage?.total_tokens)

    return {
      content: response.choices[0].message.content,
      usage: response.usage?.total_tokens
    }
  },
  saveSummary: async (uuid: string, summary: string) => {
    // Use findOneAndUpdate with upsert to save or create
    await Url.findOneAndUpdate(
      { uuid },
      {
        summary,
        status: 'hasSummary'
      },
      {
        upsert: true,
        new: true
      }
    )

    return { success: true, summary }
  },
  markSummaryError: async (uuid: string) => {
    const url = await Url.findOne({ uuid })
    if (!url) {
      throw new Error(`URL not found for uuid: ${uuid}`)
    }

    url.status = 'hasSummaryError'
    await url.save()

    return { success: true }
  }
}

export const summary = createTask(
  schema,
  boundaries,
  async function ({ uuid }, { findByUuid, generateSummary, saveSummary, markSummaryError }) {
    // Get the URL document
    const urlDoc = await findByUuid(uuid)
    if (!urlDoc) {
      throw new Error(`URL not found for uuid: ${uuid}`)
    }
    if (!urlDoc.content) {
      throw new Error('URL document does not contain content to summarize')
    }

    try {
      // Convert markdown to plain text and limit length
      const text = markdownToTxt(urlDoc.content)
        .replace(/\s+/g, ' ')
        .trim()
        .slice(0, 15000) // Limit to 15k chars

      if (!text) {
        throw new Error('No text content found to summarize')
      }

      console.log('->', urlDoc.title || urlDoc.url, text.length)

      // Prepare the content for analysis
      const content = [
        urlDoc.title ? `Title: ${urlDoc.title}` : '',
        urlDoc.description ? `Description: ${urlDoc.description}` : '',
        `Content: ${text}`
      ].filter(Boolean).join('\n\n')

      // Generate summary using OpenAI
      const { content: summaryText, usage } = await generateSummary(content)

      console.log('Summary length ->', summaryText.length)

      // Save the summary
      await saveSummary(uuid, summaryText)

      return {
        status: 'Ok',
        summary: summaryText,
        usage
      }
    } catch (error) {
      // Mark as error if something goes wrong
      await markSummaryError(uuid)
      throw error
    }
  }
)

summary.setDescription(description)
The article discusses the recurring hype surrounding the idea that new technologies, particularly AI, will replace software developers. Historically, each wave of technological advancement, such as NoCode and cloud computing, has led to the transformation of roles rather than outright replacement. Instead of eliminating developers, these technologies have created new specializations, often resulting in higher salaries and more complex roles. The current trend with AI-assisted development is no different; it is evolving the role of engineers from mere code writers to system architects who can effectively manage and orchestrate AI systems.
The author reflects on past technological revolutions, starting with the NoCode movement, which promised to empower non-technical users to build applications. However, this led to the emergence of NoCode specialists who understood both business needs and technical limitations, ultimately commanding higher salaries than traditional developers. Similarly, the cloud revolution did not eliminate system administrators but transformed them into DevOps engineers, expanding their roles and responsibilities while increasing their compensation.
The article also addresses the challenges of offshore development, which initially seemed like a cost-saving measure but revealed complexities in communication and quality. This led to the realization that effective software development requires deep contextual knowledge and collaboration, resulting in higher overall costs. The current AI coding assistant trend is following a similar path, where AI can generate code but often produces errors that require experienced developers to verify and correct, emphasizing the need for skilled architects to manage the resulting systems.
Ultimately, the author argues that the most valuable skill in software engineering is not writing code but architecting systems. As AI accelerates the speed of code generation, it also increases the potential for architectural mistakes, making the role of system architects even more critical. The article concludes that while AI may enhance certain aspects of development, it cannot replace the strategic thinking required to design and manage complex systems effectively. |