Model Context Protocol (MCP) is an open standard developed by Anthropic that enables AI models to seamlessly access external tools and resources. It creates a standardized way for AI models to interact with tools, access the internet, run code, and more, without needing custom integrations for each tool or model.
In this tutorial, we'll build a complete MCP server that integrates with Brave Search, and then connect it to Google's Gemini 2.0 model to demonstrate how MCP creates a flexible architecture for AI-powered applications.
Resources: Find the complete code for this tutorial in the MCP Gemini Demo Repository and explore the official MCP documentation.
MCP provides a standardized interface between AI models and external tools: a model needs to implement the protocol only once to use any conforming tool, and a tool needs to be wrapped only once to be usable by any conforming model, eliminating the need for a custom integration per model-tool pair.
MCP works through a client-server architecture: an MCP server exposes tools and resources, while an MCP client (typically embedded in an AI application) connects to the server to discover and call them.
The key insight: Any function that can be coded can be exposed as an MCP tool. This opens up endless possibilities, from API integrations to database access, custom calculations, or even control of physical devices.
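To make that insight concrete, here is a minimal sketch (hypothetical names, not from the tutorial's repository) of an ordinary function paired with a tool-style definition and handler. The object shapes mirror the MCP `Tool` interface and response format used later in this tutorial, but are simplified:

```typescript
// A plain function we want to expose as a tool.
function addNumbers(a: number, b: number): number {
  return a + b;
}

// A tool definition pairs a name and an input schema with that function.
const ADD_TOOL = {
  name: "add_numbers",
  description: "Adds two numbers and returns the sum",
  inputSchema: {
    type: "object",
    properties: {
      a: { type: "number", description: "First addend" },
      b: { type: "number", description: "Second addend" },
    },
    required: ["a", "b"],
  },
};

// The handler receives arguments and returns an MCP-style result object.
function addHandler(args: { a: number; b: number }) {
  return {
    content: [{ type: "text", text: String(addNumbers(args.a, args.b)) }],
    isError: false,
  };
}
```

Nothing about `addNumbers` is special; the schema and handler wrapper are all that is needed to make it callable through the protocol.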
To start building with MCP, you'll need the MCP SDK and Bun (for fast TypeScript execution):
mkdir mcp-gemini
cd mcp-gemini
bun init -y
bun add @modelcontextprotocol/sdk@^1.7.0 @google/generative-ai
Note: Check out the MCP SDK Repository for the latest version and features.
Our MCP server will expose two tools: brave_web_search, for general web queries, and brave_local_search, for finding local businesses and places.
But remember, you could expose virtually any function: PDF processing, database queries, email sending, image generation, and so on.
The core of MCP is defining tools. Each tool represents a callable function with a defined input schema:
// Web Search Tool Definition (simplified)
export const WEB_SEARCH_TOOL: Tool = {
  name: "brave_web_search",
  description: "Performs a web search using the Brave Search API...",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Search query",
      },
      count: {
        type: "number",
        description: "Number of results (1-20, default 10)",
        default: 10,
      },
      // Other parameters...
    },
    required: ["query"],
  },
};
The key components of a tool definition are: a unique name the model uses to invoke the tool, a description that tells the model what the tool does and when to use it, and an inputSchema (JSON Schema) describing the expected arguments.
This declarative approach means models can discover what tools are available and how to use them correctly.
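Because the schema is plain data, a client can check arguments against it before ever calling the tool. Here is a hedged sketch of such a check (a deliberately minimal validator for illustration, not the SDK's own, and covering only flat objects of primitive types):

```typescript
// Simplified JSON-Schema-like shape matching the tool definitions above.
interface ToolSchema {
  type: string;
  properties: Record<string, { type: string }>;
  required?: string[];
}

// Structural check: all required keys present, and each supplied value's
// typeof matches the declared primitive type ("string", "number", "boolean").
function argsMatchSchema(
  schema: ToolSchema,
  args: Record<string, unknown>
): boolean {
  for (const key of schema.required ?? []) {
    if (!(key in args)) return false;
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop || typeof value !== prop.type) return false;
  }
  return true;
}
```

A real validator would also handle nested objects, arrays, enums, and defaults, but even this sketch shows why declarative schemas matter: correctness checks become generic code rather than per-tool logic.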
Once you've defined a tool, you need to implement the actual functionality. This is simply a function that receives the tool's arguments and returns a result:
// Web search handler (simplified)
export async function webSearchHandler(args: unknown) {
  // Validate arguments
  if (!isValidArgs(args)) {
    throw new Error("Invalid arguments");
  }
  const { query, count } = args;

  // Call your API, function, or any code you want
  const results = await performWebSearch(query, count);

  // Return formatted results in MCP response format
  return {
    content: [{ type: "text", text: results }],
    isError: false,
  };
}
The power of MCP lies in this flexibility. Your tool implementation can call external APIs, query databases, run local computations, read and write files, or orchestrate other services.
As long as you can code it in TypeScript/JavaScript, you can expose it as an MCP tool.
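The `isValidArgs` helper in the handler above is left undefined in the snippet. One plausible implementation, sketched here as an assumption about what the tutorial's helper does, is a TypeScript type guard that narrows `unknown` to the expected argument shape:

```typescript
// Expected argument shape for the web search tool.
interface WebSearchArgs {
  query: string;
  count?: number;
}

// Type guard: returns true only if `args` structurally matches WebSearchArgs,
// letting TypeScript narrow the type at the call site after the check.
function isValidArgs(args: unknown): args is WebSearchArgs {
  if (typeof args !== "object" || args === null) return false;
  const record = args as Record<string, unknown>;
  if (typeof record.query !== "string") return false;
  if (record.count !== undefined && typeof record.count !== "number") return false;
  return true;
}
```

Using a type guard rather than a bare boolean check is why the destructuring `const { query, count } = args;` type-checks after validation: the compiler knows `args` is a `WebSearchArgs` inside the guarded branch.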
The MCP server connects your tool definitions and implementations together:
// Create MCP server (simplified)
export function createServer() {
  const server = new Server(
    {
      name: SERVER_CONFIG.name,
      version: SERVER_CONFIG.version,
    },
    {
      capabilities: {
        tools: {},
      },
    }
  );

  // Register handlers
  server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [WEB_SEARCH_TOOL, LOCAL_SEARCH_TOOL],
  }));

  server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;

    // Route to the appropriate handler
    switch (name) {
      case "brave_web_search":
        return await webSearchHandler(args);
      case "brave_local_search":
        return await localSearchHandler(args);
      default:
        // Unknown tool: return an error result instead of crashing
        return {
          content: [{ type: "text", text: `Unknown tool: ${name}` }],
          isError: true,
        };
    }
  });

  return server;
}
The server provides two key functions: tool discovery, where the ListTools handler advertises the available tools and their schemas, and tool execution, where the CallTool handler routes each request to the matching implementation.
This separation of concerns makes your server maintainable and extensible. Adding new tools is as simple as defining them and adding a new case to the handler.
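As the tool count grows, the switch statement can be replaced by a handler registry, so that registering a tool is a single map entry. This is a sketch of that design choice, not the demo repository's code; the stub handlers stand in for real implementations:

```typescript
// MCP-style tool result and handler signature (simplified).
interface ToolResult {
  content: { type: string; text: string }[];
  isError: boolean;
}
type ToolHandler = (args: unknown) => Promise<ToolResult>;

// Registry: tool name -> handler. Adding a tool is one entry here.
const handlers: Record<string, ToolHandler> = {
  brave_web_search: async (_args) => ({
    content: [{ type: "text", text: "web results" }],
    isError: false,
  }),
  brave_local_search: async (_args) => ({
    content: [{ type: "text", text: "local results" }],
    isError: false,
  }),
};

// Single dispatch function usable inside the CallTool request handler.
async function routeToolCall(name: string, args: unknown): Promise<ToolResult> {
  const handler = handlers[name];
  if (!handler) {
    // Unknown tools return an error result rather than throwing.
    return {
      content: [{ type: "text", text: `Unknown tool: ${name}` }],
      isError: true,
    };
  }
  return handler(args);
}
```

Either style works; the registry simply keeps dispatch logic from growing with every new tool.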
To demonstrate our server, let's create a basic client. This helps understand how tools are called from outside systems:
// Create MCP client and connect (simplified)
const transport = new StdioClientTransport({
  command: "bun",
  args: ["index.ts"],
});

const client = new Client(
  { name: "brave-search-demo-client", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Connect to server
await client.connect(transport);

// List available tools
const { tools } = await client.listTools();

// Call a tool
const result = await client.callTool({
  name: "brave_web_search",
  arguments: {
    query: "latest AI research papers",
    count: 3,
  },
});
This basic flow shows the core interactions: connecting to the server over a transport (stdio here), listing the available tools, and calling a tool with structured arguments.
The real power of MCP comes when we connect it to AI models. Let's integrate our server with Google's Gemini 2.0:
// Configure Gemini with function declarations (simplified)
const model = googleGenAi.getGenerativeModel({
  model: "gemini-2.0-pro-exp-02-05",
  tools: [
    {
      functionDeclarations: [
        {
          name: "brave_web_search",
          description: "Search the web using Brave Search API",
          parameters: {
            // Schema matching our MCP tool
          },
        },
        // Other tools...
      ],
    },
  ],
});
// Process user queries with Gemini and MCP tools
async function processQuery(userQuery: string) {
  // Generate a response with Gemini
  const result = await model.generateContent({
    contents: [{ role: "user", parts: [{ text: userQuery }] }],
  });

  // Check if Gemini wants to call a function
  if (hasFunctionCall(result)) {
    const functionCall = extractFunctionCall(result);

    // Call our MCP tool
    const searchResults = await client.callTool({
      name: functionCall.name,
      arguments: functionCall.args,
    });

    // Send function results back to Gemini for final response
    return await generateFinalResponse(userQuery, functionCall, searchResults);
  }

  return result.response.text();
}
This integration demonstrates the true potential of MCP: the model decides when a tool is needed, the MCP server does the actual work, and the model then weaves the tool's results into its final answer.
The result is a seamless experience where the AI model acts as an intelligent router to the most appropriate functionality.
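Because both MCP tool definitions and Gemini function declarations describe parameters with JSON-Schema-like objects, the mapping between them can be mechanical. Here is a hedged sketch of that conversion; the field names on the Gemini side (`name`, `description`, `parameters`) follow the @google/generative-ai function-declaration shape, simplified:

```typescript
// Simplified MCP tool definition, matching the shape used earlier.
interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: string;
    properties: Record<string, unknown>;
    required?: string[];
  };
}

// Gemini's functionDeclarations use `parameters` where MCP uses `inputSchema`;
// the name and description carry over unchanged.
function toFunctionDeclaration(tool: McpTool) {
  return {
    name: tool.name,
    description: tool.description,
    parameters: tool.inputSchema,
  };
}
```

With a helper like this, the declarations passed to `getGenerativeModel` can be generated from the same tool list the server advertises, so the two sides never drift apart.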
Deep dive: Learn more about Google Gemini's function calling capabilities in the official documentation.
While our example focused on search tools, remember that MCP can expose any functionality: file and PDF processing, database queries, email sending, image generation, or control of external systems.
Each tool follows the same pattern: define a name, description, and input schema; implement a handler function; and register both with the server.
This extensibility makes MCP a powerful architecture for building AI applications that can grow with your needs.
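Following that three-step pattern, here is what adding a hypothetical timestamp tool might look like (illustrative names, not part of the demo repository):

```typescript
// 1. Define the tool and its schema (no arguments needed here).
const TIME_TOOL = {
  name: "get_current_time",
  description: "Returns the current time as an ISO 8601 string",
  inputSchema: { type: "object", properties: {}, required: [] },
};

// 2. Implement the handler, returning the MCP response shape.
async function timeHandler(_args: unknown) {
  return {
    content: [{ type: "text", text: new Date().toISOString() }],
    isError: false,
  };
}

// 3. Register it: add TIME_TOOL to the list returned by the ListTools
//    handler, and add a branch in the CallTool handler such as
//    `case "get_current_time": return await timeHandler(args);`
```

No changes to the client or the model integration are required; once registered, the new tool is discovered through the same listTools call as the search tools.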
To run the examples, set up the required environment variables:
export BRAVE_API_KEY="your_brave_api_key"
export GOOGLE_API_KEY="your_google_api_key"
Then run the examples with Bun:
# Basic client
bun examples/basic-client.ts
# Gemini integration
bun examples/gemini-tool-function.ts
Resource: Find ready-to-use example code in the MCP Examples Repository.
In this tutorial, we've explored how to build a complete MCP server and integrate it with Google's Gemini 2.0 model. The key takeaways: tools are defined declaratively with schemas, implemented as ordinary functions, and wired together by a small server that handles discovery and routing; any conforming client, including an AI model, can then call them.
This approach to AI architecture offers significant advantages: tools are reusable across models, models can be swapped without rewriting integrations, and new capabilities can be added by simply registering more tools.
As the AI ecosystem continues to evolve, standards like MCP will become increasingly important for building interoperable, extensible systems that combine the best of human-coded functions with the power of large language models.