
MCP Client

A TypeScript client library for interacting with Model Context Protocol (MCP) services, providing a production-ready interface for AI conversations with tool support.

Installation

npm install https://git.everydayseries.io/kroy665/client-mcp.git

Features

  • TypeScript support out of the box
  • Streaming chat completions
  • Tool and function calling support
  • Automatic conversation management
  • Configurable connection to MCP servers
  • Debug logging
  • Request cancellation support

Quick Start

import { ClientMCP } from 'mcp-client';

// Create a new client instance
const client = new ClientMCP({
  apiKey: 'your-api-key',
  model: 'gemini-2.0-flash',
  debug: true
});

// Connect to the MCP server
await client.connectToServer('http://localhost:3003/mcp');

// Stream chat responses
for await (const chunk of client.chat("Hello, how are you?")) {
  console.log(chunk.choices[0]?.delta?.content);
}

// Don't forget to clean up
await client.disconnect();

API Reference

new ClientMCP(config: ClientMCPConfig)

Creates a new MCP client instance.

Parameters

  • config (Object): Configuration object
    • apiKey (string): Your API key for authentication
    • model (string): The model to use (default: "gemini-2.0-flash")
    • baseUrl (string): Base URL for the API
    • timeout (number): Request timeout in milliseconds (default: 30000)
    • debug (boolean): Enable debug logging (default: false)
    • systemMessages (string): Custom system messages (optional)
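
For reference, a fully-configured instance might look like the following; the option values shown are illustrative, not defaults you must use:

import { ClientMCP } from 'mcp-client';

const client = new ClientMCP({
  apiKey: process.env.MCP_API_KEY!,          // your API key
  model: 'gemini-2.0-flash',                 // the documented default
  baseUrl: 'http://localhost:3003',          // illustrative; point at your API
  timeout: 30000,                            // 30 s request timeout (default)
  debug: false,
  systemMessages: 'You are a helpful assistant.'
});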

Methods

connectToServer(serverPath: string | URL, sessionId?: string): Promise<void>

Establishes a connection to the MCP server.

  • serverPath: URL or string path to the MCP server
  • sessionId: Optional session ID for reconnection
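
For example, passing a previously issued session ID lets the client resume an existing session instead of starting a new one (the ID below is hypothetical):

// First connection — a new session is created
await client.connectToServer('http://localhost:3003/mcp');

// Later, reconnect to the same session
await client.connectToServer('http://localhost:3003/mcp', 'session-abc123');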

disconnect(): Promise<void>

Disconnects from the MCP server and cleans up resources.
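
Because disconnect() releases the underlying connection, a try/finally block is a simple way to guarantee cleanup even when a request throws:

try {
  await client.connectToServer('http://localhost:3003/mcp');
  for await (const chunk of client.chat('Hello!')) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
} finally {
  await client.disconnect();
}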

chatCompletionStream(options: ChatCompletionStreamOptions): AsyncGenerator<ChatCompletionChunk>

Performs a streaming chat completion, yielding response chunks as they arrive.

const stream = client.chatCompletionStream({
  messages: [{ role: 'user', content: 'Hello!' }],
  reasoningEffort: 'high',
  tools: [...]
});

for await (const chunk of stream) {
  console.log(chunk.choices[0]?.delta?.content);
}
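
The tools array in the example above is elided. Assuming the library follows the common OpenAI-style function schema (an assumption, not confirmed by this README), a tool definition might look like:

const tools = [
  {
    type: 'function',
    function: {
      name: 'get_weather',                   // hypothetical tool name
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' }
        },
        required: ['city']
      }
    }
  }
];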

chat(content: string, options?: ChatOptions): AsyncGenerator<ChatChunk>

Main chat interface with automatic tool handling and conversation management.

for await (const chunk of client.chat("What's the weather like?", {
  maxDepth: 3,
  autoSummarize: true
})) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

cancelRequests(): void

Cancels all ongoing requests.
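
For instance, a caller could abandon a slow response by cancelling from a timer (the 5-second timeout is arbitrary):

// Cancel if the stream has not finished within 5 seconds
const timer = setTimeout(() => client.cancelRequests(), 5000);

try {
  for await (const chunk of client.chat('Summarize this document')) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
} finally {
  clearTimeout(timer);
}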

Development

  1. Clone the repository:
    git clone https://git.everydayseries.io/kroy665/client-mcp.git
    
  2. Install dependencies:
    npm install
    
  3. Build the project:
    npm run build
    
  4. Run tests:
    npm test
    

License

MIT

Notes

  • The client automatically manages conversation state and tool calls (see the sketch below)
  • Supports both HTTP and WebSocket transports
  • Includes comprehensive error handling and logging
  • Safe for concurrent usage
  • Memory-efficient, since responses are streamed rather than buffered
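
Because conversation state is handled for you, follow-up questions can refer back to earlier turns. A minimal sketch:

// First turn — consumed but not printed here
for await (const chunk of client.chat('My name is Ada.')) { /* consume */ }

// The client retains the earlier turn, so this follow-up resolves "my name"
for await (const chunk of client.chat('What is my name?')) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}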