Idiomatic Rust Server Implementation of the Model Context Protocol (MCP)
> [!NOTE]
> Thinking of funding this for ~$200; let me know if you are interested.
This issue outlines the design and development roadmap for an idiomatic Rust implementation of the Model Context Protocol (MCP). The goal is to create a robust, type-safe, and high-performance library that aligns with Rust’s best practices and integrates seamlessly with the MCP ecosystem.
Hypothetical usage:

```rust
use eyre::WrapErr;
use schemars::JsonSchema;
use serde::Deserialize;

#[derive(Deserialize, JsonSchema)]
struct WeatherInput {
    city: String,
}

#[derive(Clone)]
struct AppState {
    api_key: String,
    client: reqwest::Client,
}

async fn weather(state: &AppState, WeatherInput { city }: WeatherInput) -> eyre::Result<String> {
    let api_key = &state.api_key;
    let url = format!(
        "https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
    );
    let response = state
        .client
        .get(&url)
        .send()
        .await
        .wrap_err("Failed to make weather API request")?;
    let data: serde_json::Value = response
        .json()
        .await
        .wrap_err("Failed to parse weather API response")?;
    let temp = data["main"]["temp"]
        .as_f64()
        .ok_or_else(|| eyre::eyre!("Temperature data not found"))?;
    let description = data["weather"][0]["description"]
        .as_str()
        .ok_or_else(|| eyre::eyre!("Weather description not found"))?;
    Ok(format!("Weather in {city}: {temp}°C, {description}"))
}

let state = AppState {
    api_key: std::env::var("OPENWEATHER_API_KEY")
        .expect("OPENWEATHER_API_KEY environment variable not set"),
    client: reqwest::Client::new(),
};
let mut registry = ToolRegistry::new(state);
registry.register("weather", weather);
// one of the following:
// mcp::serve_sse(registry, socket);
// mcp::serve_stdio(registry);
```
Background on MCP
The Model Context Protocol (MCP) is an open standard that enables seamless integration between LLM (Large Language Model) applications and external data sources or tools ([GitHub - linux-china/mcp-rs-template: Model Context Protocol (MCP) CLI server template for Rust](https://github.com/linux-china/mcp-rs-template#:~:text=Model%20Context%20Protocol%20,with%20the%20context%20they%20need)). In practice, MCP acts like a “USB-C for AI” – providing a standardized interface so AI-powered applications (chatbots, IDE assistants, etc.) can request context (files, database info, APIs) from specialized context providers ([GitHub - linux-china/mcp-rs-template: Model Context Protocol (MCP) CLI server template for Rust](https://github.com/linux-china/mcp-rs-template#:~:text=Model%20Context%20Protocol%20,with%20the%20context%20they%20need)). MCP follows a client-server architecture: an MCP host (e.g. an AI IDE or chat app) communicates with one or more MCP servers (lightweight programs exposing specific data/services) using a well-defined message protocol ([Introduction - Model Context Protocol](https://modelcontextprotocol.io/#:~:text=MCP%20is%20an%20open%20protocol,different%20data%20sources%20and%20tools)) ([Introduction - Model Context Protocol](https://modelcontextprotocol.io/#:~:text=At%20its%20core%2C%20MCP%20follows,can%20connect%20to%20multiple%20servers)). This separation of concerns lets developers build powerful AI integrations by plugging in different context servers as needed, without custom ad-hoc APIs for each tool.
In summary, MCP’s purpose is to standardize how an application provides or obtains context for an AI model. A Rust implementation of MCP would allow developers to build MCP servers or clients in Rust, benefiting from strong compile-time checks and performance while adhering to the MCP spec. Before diving into design, we clarify the guiding philosophy and goals for a Rust-based MCP library.
Design Goals & Philosophy
Why Rust? Rust offers a combination of memory safety and high performance that makes it an excellent choice for implementing a protocol library like MCP. By leveraging Rust’s strong type system, we can model MCP’s request/response structures in a way that invalid messages are caught at compile time. This prevents many runtime errors and ensures that any MCP messages our library produces or consumes are schema-compliant by construction. Rust’s focus on zero-cost abstractions means we can achieve this safety without sacrificing speed – the compiled code will be efficient and suitable for low-latency, high-throughput contexts (important if an AI assistant is making many context requests).
Additionally, Rust’s ecosystem provides great libraries that align with MCP’s needs. For example, Serde is the de facto standard for serializing/deserializing Rust data structures ([Overview · Serde](https://serde.rs/#:~:text=Serde%20is%20a%20framework%20for,data%20structures%20efficiently%20and%20generically)) (we will use it for JSON encoding/decoding of MCP messages), and Tokio provides an async runtime for handling I/O concurrency (useful for networking or multiple concurrent requests). The Rust community also has crates for JSON schema handling, CLI interfaces, etc., which we can integrate. Using Rust means we can deliver a single self-contained binary or library with minimal runtime dependencies – ideal for CLI-based MCP servers or embedding in other systems.
Design philosophy: We aim to make the Rust MCP implementation idiomatic – embracing Rust’s conventions and best practices. That means favoring type safety, explicit error handling, and clear module abstractions. Wherever possible, we’ll prefer compile-time guarantees over runtime checks. For example, rather than parsing untyped JSON and manually checking fields, we will define Rust structs/enums that represent each MCP message, leveraging Serde to handle JSON (this way, if the JSON structure deviates from the expected format, it results in a compile-time or deserialization error rather than silently misbehaving). We also plan to design the API to feel familiar to Rust developers – using traits, generics, and patterns that align with popular Rust frameworks – so that contributors and users can quickly get up to speed.
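As a sketch of what typed messages buy us (the variant and field names below are illustrative placeholders, not the final API):

```rust
use serde::{Deserialize, Serialize};

// Illustrative: a strongly typed request enum instead of raw JSON.
// The real types will mirror the method and field names of the MCP spec.
#[derive(Serialize, Deserialize)]
#[serde(tag = "method", content = "params", rename_all = "snake_case")]
enum McpRequest {
    ListResources { cursor: Option<String> },
    ReadResource { uri: String },
    CallTool { name: String, arguments: serde_json::Value },
}

// A malformed message fails up front with a descriptive deserialization
// error instead of surfacing later as a missing-field panic.
fn parse(line: &str) -> Result<McpRequest, serde_json::Error> {
    serde_json::from_str(line)
}
```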
Performance and safety are both top priorities. Rust’s ownership model will help ensure that our implementation is free of data races or memory leaks, which is important for long-running MCP servers. At the same time, Rust’s low-level control allows integration with system resources (files, network sockets, etc.) for context providers in a secure way (e.g., controlling memory usage when streaming large files as context). In short, Rust’s qualities of type safety, speed, and a rich ecosystem guide our design decisions, ensuring the MCP implementation will be reliable in production and ergonomic to use in development.
Architecture Overview
The architecture of the MCP Rust library will be organized into clear components, inspired by proven design patterns from the Rust ecosystem. At a high level, we will separate the concerns of message handling, transport, schema, and error management, allowing each to be developed and reasoned about independently. This section provides an overview of the planned architecture and the rationale behind key design choices.
Design Patterns and Inspirations
To build an idiomatic solution, we’ll draw inspiration from existing Rust frameworks:
- Handler Routing (à la axum): The library will route incoming MCP requests to the appropriate handler function or module, conceptually similar to how web frameworks like axum route HTTP requests to endpoints. Axum, for instance, focuses on ergonomics and modularity, allowing routing with a macro-free API ([axum - Rust - Docs.rs](https://docs.rs/axum/latest/axum/#:~:text=axum%20,free%20API)). We plan to do similarly for MCP: define clear interfaces for each type of request (e.g. a trait or function for handling `"resource"` queries vs. `"prompt"` requests) rather than requiring users to write boilerplate matching on message types. This design keeps things modular – new request types can be added as new handler implementations without touching unrelated code – and avoids heavy macro use (improving compile times and clarity).
- Serde for Serialization: We will utilize Serde’s powerful derive system to automatically serialize/deserialize MCP messages to JSON. Serde is built on Rust’s trait system and avoids runtime reflection overhead ([Overview · Serde](https://serde.rs/#:~:text=Design)), which means our JSON handling will be efficient. Each MCP message type (requests, responses, error formats, etc.) will implement (or derive) `Serialize` and `Deserialize`, making it straightforward to go between Rust structs and the JSON text format defined by MCP. Using Serde ensures compatibility with other languages’ MCP implementations (since JSON is the wire format) while keeping our code type-safe.
- Schemars for Schema: For JSON schema definition and validation, we’ll use schemars. Schemars can derive JSON Schema documents from Rust types by implementing the `JsonSchema` trait ([Overview | Schemars](https://graham.cool/schemars/#:~:text=Schemars%20is%20a%20library%20to,documents%20from%20Rust%20data%20structures)). This is extremely useful for MCP, because it means we can generate a schema for our messages directly from our Rust structs and ensure it matches the official MCP specification. In fact, Schemars is designed to be compatible with Serde – it will respect Serde attributes and make sure the schema aligns with how `serde_json` serializes our types ([Overview | Schemars](https://graham.cool/schemars/#:~:text=One%20of%20the%20main%20aims,adjust%20the%20generated%20schema%20accordingly)). This lets us confidently say our Rust types are correct by comparing against the MCP spec’s schema, and we can even provide the JSON Schema to users of the library for integration or validation in other contexts.
- Minimal Macros, Clear Code: In line with Rust’s ethos and libraries like axum, we aim to minimize custom macros in the API. Macros can hide complexity but also make debugging harder and can slow down compile times if overused (since procedural macros run at compile time for each invocation) ([Is there any performance difference between macros and functions in Rust? - Stack Overflow](https://stackoverflow.com/questions/73186696/is-there-any-performance-difference-between-macros-and-functions-in-rust#:~:text=I%20assume%20you%27re%20talking%20about,are%20compiled%20for%20each%20invocation)). We will prefer using Rust’s traits and generics to achieve extensibility. For example, instead of a macro to define a new MCP command handler, we might provide a trait that the user can implement or a registration function to add handlers (a sketch follows this list). This keeps the API usage straightforward (`fn handle_x(request: XRequest) -> XResponse`) and maintainable.
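A rough sketch of this macro-free handler style (all names here are hypothetical):

```rust
// Hypothetical request/response pair for some MCP method.
struct XRequest;
struct XResponse;

// A handler is anything that maps a typed request to a typed response.
trait Handler<Req, Resp> {
    fn call(&self, request: Req) -> Resp;
}

// Plain functions qualify automatically, so users write ordinary Rust
// functions and register them, with no macros involved.
impl<Req, Resp, F: Fn(Req) -> Resp> Handler<Req, Resp> for F {
    fn call(&self, request: Req) -> Resp {
        self(request)
    }
}

fn handle_x(_request: XRequest) -> XResponse {
    XResponse
}
```

A registry could then store such handlers keyed by method name, much like axum maps request paths to handler functions.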
The architecture will thus emphasize modularity (separating transport, protocol logic, and user-defined handlers), type safety (strongly typed messages), and extensibility (easy to add new transports or message types). Contributors familiar with Rust’s web servers or RPC libraries will find the structure recognizable.
Transport Layer Design
MCP communication can occur over different channels, so our design will abstract the transport layer. The initial focus will be on STDIO and Server-Sent Events (SSE):
- STDIO Transport: Many MCP use cases (like running a local context server for an IDE or Claude Desktop) involve launching a subprocess and communicating via its standard input/output streams. We will implement a `StdioTransport` that reads JSON lines from `stdin` and writes JSON responses to `stdout` (see the sketch after this list). This transport will likely be synchronous in order (request in, response out), following the JSON-RPC style of communication. Using STDIO allows easy integration with existing tools (just start the Rust binary as an MCP server). The design will consider message framing (e.g., newline-delimited JSON or length-prefixed) as per the MCP spec – typically JSON-RPC 2.0 over STDIO treats each line or message separately.
- SSE (Server-Sent Events) Transport: SSE is a mechanism often used for pushing events over HTTP. In contexts where an MCP server might run as a standalone service (perhaps to allow browsers or remote clients to get context), SSE can be used to continuously stream context updates or responses. We plan to design an HTTP-based transport that supports SSE. For example, an MCP server could open an HTTP endpoint that upgrades to an SSE stream for sending events, while receiving requests via HTTP POST. Our library might integrate with an async HTTP framework (like hyper or axum) to handle this. The initial implementation could focus on simple cases (one client connecting to the SSE stream), ensuring that events are formatted according to MCP’s expectations. SSE support will be designed such that adding it doesn’t affect the core logic – it will use the same message handling pipeline, just plugged into an HTTP server context. (In practice, SSE could be implemented by having a task push JSON messages to an HTTP response stream whenever a new MCP message is ready to send.)
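To make the framing concrete, here is a minimal sketch of the newline-delimited STDIO loop (assuming Tokio; `McpRequest`, `McpResponse`, and `handle_request` stand in for the core logic):

```rust
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};

// Sketch: read one JSON-RPC message per line from stdin, dispatch it,
// and write the JSON response followed by a newline to stdout.
async fn run_stdio_transport() -> eyre::Result<()> {
    let mut lines = BufReader::new(tokio::io::stdin()).lines();
    let mut stdout = tokio::io::stdout();
    while let Some(line) = lines.next_line().await? {
        let request: McpRequest = serde_json::from_str(&line)?;
        let response: McpResponse = handle_request(request).await;
        let mut bytes = serde_json::to_vec(&response)?;
        bytes.push(b'\n');
        stdout.write_all(&bytes).await?;
        stdout.flush().await?;
    }
    Ok(()) // stdin closed: the host has shut us down
}
```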
The transport layer will be asynchronous (using Rust `Future`s and async/await) to handle I/O without blocking. This means our library can listen for incoming messages and send responses concurrently, which is important if multiple requests can be in flight or if using transports like WebSockets. Each transport implementation will translate the raw data (e.g., lines from stdin, or HTTP requests) into higher-level MCP request objects and vice versa, handing them to the core message handling logic.
By designing a clean transport abstraction, we make the core MCP logic independent of how the data arrives or is sent. This not only follows the Single Responsibility Principle (each transport module deals with I/O, while core deals with logic) but also makes testing easier (we can substitute a mock transport feeding canned messages to simulate different scenarios).
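One possible shape for that abstraction (a sketch using async fns in traits, stable since Rust 1.75; `McpMessage` is a placeholder for the core message type):

```rust
use std::collections::VecDeque;

// The core loop only ever sees typed messages; where they come from
// (stdin, an HTTP request, a test fixture) is the transport's concern.
trait Transport {
    async fn recv(&mut self) -> eyre::Result<Option<McpMessage>>;
    async fn send(&mut self, message: McpMessage) -> eyre::Result<()>;
}

// A mock transport for tests replays canned input and records output.
struct MockTransport {
    incoming: VecDeque<McpMessage>,
    sent: Vec<McpMessage>,
}

impl Transport for MockTransport {
    async fn recv(&mut self) -> eyre::Result<Option<McpMessage>> {
        Ok(self.incoming.pop_front())
    }
    async fn send(&mut self, message: McpMessage) -> eyre::Result<()> {
        self.sent.push(message);
        Ok(())
    }
}
```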
Schema and JSON Handling
Schema management is a crucial aspect for ensuring our implementation aligns with the MCP specification. JSON Schema defines the structure of MCP messages, and we want our Rust types to match that schema exactly. To achieve this, we will:
- Make sure this also works with MCP’s error types
- Define Rust Types for All MCP Structures: ✅ have already done this
- Leverage Schemars for JSON Schema Generation: ✅ done this mostly
- Serialization/Deserialization Best Practices: We will follow JSON best practices to ensure interoperability:
  - Use explicit field names matching the MCP spec (possibly with Serde rename attributes if the Rust naming conventions differ).
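For example (the field names here illustrate the pattern rather than quote the spec):

```rust
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// The wire format uses camelCase keys while the Rust fields stay snake_case;
// Schemars honors the same attributes, so the generated schema matches.
#[derive(Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "camelCase")]
struct ResourceContents {
    uri: String,
    mime_type: Option<String>, // serialized as "mimeType"
    text: Option<String>,
}
```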
Comments (5)
I'm about to put a couple-hundred-dollar bounty on this. Let me know if you have any ideas on how MCP should be organized.
I'm interested in implementing this. I believe we should add a trait like:
```rust
trait Tool {
    fn handlers(&self) -> &'static [(&'static str, fn(&Self, &serde_json::Value))];
}
```
and adding a proc macro that automatically implements this and returns an array with all the method names and functions. This would allow us to easily combine handlers from multiple tools into one tool registry, and each tool like the weather tool could have multiple handlers each returning more specific data (like climate, weather watches/warnings, etc).
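For concreteness, here is what such a macro might generate, written out by hand (the tool and its methods are hypothetical; `Tool` is the trait proposed above):

```rust
struct WeatherTool;

impl WeatherTool {
    fn current(&self, _args: &serde_json::Value) { /* fetch current conditions */ }
    fn warnings(&self, _args: &serde_json::Value) { /* fetch watches/warnings */ }
}

// The proc macro would scan the impl block above and emit this table
// automatically, so registering a tool registers all of its handlers.
impl Tool for WeatherTool {
    fn handlers(&self) -> &'static [(&'static str, fn(&Self, &serde_json::Value))] {
        &[
            ("current", WeatherTool::current),
            ("warnings", WeatherTool::warnings),
        ]
    }
}
```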
How do you think the proc macro might work? For inspiration, the official Python SDK is
https://github.com/modelcontextprotocol/python-sdk
The official TypeScript SDK is
Any updates, @TestingPlant?
It seems like a core issue with Bounty Bot right now is that someone can start working on something but never push their changes. I want to address this by enabling multiple contributors to collaborate on an issue—where one person can start, another can build on it, and they can go back and forth—while ensuring both contributors receive a percentage of the bounty. This is something I want to explore further.
Apologies for the delay @andrewgazelka, I'm used to committing everything locally and then pushing at the end. I've created https://github.com/ghbountybot/mcp/pull/3 to track progress. I think the idea of multiple people collaborating on the same PR would be good. Someone (probably one of the PR authors) would need to review other collaborators' changes, but this could be abused by someone acting in a hostile manner, so I'm not sure exactly how it should be implemented.