# OpenServ TypeScript SDK, Autonomous AI Agent Development Framework

[![npm version](https://badge.fury.io/js/@openserv-labs%2Fsdk.svg)](https://www.npmjs.com/package/@openserv-labs/sdk)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![TypeScript](https://img.shields.io/badge/TypeScript-5.0-blue.svg)](https://www.typescriptlang.org/)

A powerful TypeScript framework for building non-deterministic AI agents with advanced cognitive capabilities like reasoning, decision-making, and inter-agent collaboration within the OpenServ platform. Built with strong typing, extensible architecture, and a fully autonomous agent runtime.

## What's New in v2

Version 2.0.0 introduces built-in tunnel support for local development, eliminating the need to deploy your agent to test it with the OpenServ platform.

### Key Changes

- **Built-in Tunnel for Local Development** - The new `run()` function and `OpenServTunnel` class create a secure WebSocket connection to OpenServ, allowing you to develop and test locally without deploying. There is no need to configure an Agent Endpoint URL during development.
- **Automatic Port Fallback** - If your preferred port is busy, the agent automatically finds an available port instead of failing.
- **Secrets Management** - New `getSecrets()` and `getSecretValue()` methods allow agents to securely access workspace secrets.
- **Delete File Support** - New `deleteFile()` method for workspace file management.
- **Increased Request Size Limit** - The body parser limit has been increased to 10MB for larger payloads.
- **Enhanced Logging** - Added pino-pretty for more readable log output during development.

### Migration from v1.x

The v2 API is backwards compatible.
To take advantage of the new tunnel feature for local development, simply replace:

```typescript
// v1.x - Required deploying to a public URL
agent.start()
```

With:

```typescript
// v2.x - Works locally without deployment
import { run } from '@openserv-labs/sdk'

const { stop } = await run(agent)
```

## Table of Contents

- [OpenServ TypeScript SDK, Autonomous AI Agent Development Framework](#openserv-typescript-sdk-autonomous-ai-agent-development-framework)
  - [What's New in v2](#whats-new-in-v2)
    - [Key Changes](#key-changes)
    - [Migration from v1.x](#migration-from-v1x)
  - [Table of Contents](#table-of-contents)
  - [Features](#features)
  - [Framework Architecture](#framework-architecture)
    - [Framework \& Blockchain Compatibility](#framework--blockchain-compatibility)
    - [Shadow Agents](#shadow-agents)
    - [Control Levels](#control-levels)
    - [Developer Focus](#developer-focus)
  - [Installation](#installation)
  - [Getting Started](#getting-started)
    - [Platform Setup](#platform-setup)
    - [Agent Registration](#agent-registration)
    - [Development Setup](#development-setup)
  - [Quick Start](#quick-start)
  - [Environment Variables](#environment-variables)
  - [Core Concepts](#core-concepts)
    - [Capabilities](#capabilities)
    - [Run-less Capabilities](#run-less-capabilities)
    - [The `generate()` Method](#the-generate-method)
    - [Tasks](#tasks)
    - [Chat Interactions](#chat-interactions)
    - [File Operations](#file-operations)
  - [API Reference](#api-reference)
    - [Task Management](#task-management)
      - [Create Task](#create-task)
      - [Update Task Status](#update-task-status)
      - [Add Task Log](#add-task-log)
    - [Chat \& Communication](#chat--communication)
      - [Send Message](#send-message)
      - [Request Human Assistance](#request-human-assistance)
    - [Workspace Management](#workspace-management)
      - [Get Files](#get-files)
      - [Upload File](#upload-file)
      - [Delete File](#delete-file)
    - [Secrets Management](#secrets-management)
      - [Get Secrets](#get-secrets)
      - [Get Secret Value](#get-secret-value)
    - [Integration Management](#integration-management)
      - [Call Integration](#call-integration)
    - [MCP](#mcp)
      - [Configure MCP servers](#configure-mcp-servers)
        - [Local (stdio) transport](#local-stdio-transport)
        - [Server-Sent Events (sse) transport](#server-sent-events-sse-transport)
      - [Using MCP tools](#using-mcp-tools)
  - [Advanced Usage](#advanced-usage)
    - [Local Development with Tunnel](#local-development-with-tunnel)
      - [How It Works](#how-it-works)
      - [Quick Start](#quick-start-1)
      - [Tunnel vs. Deployed Endpoint](#tunnel-vs-deployed-endpoint)
      - [Configuration Options](#configuration-options)
      - [Using the Tunnel Directly](#using-the-tunnel-directly)
    - [OpenAI Process Runtime](#openai-process-runtime)
    - [Error Handling](#error-handling)
    - [Custom Agents](#custom-agents)
  - [Examples](#examples)
  - [License](#license)

## Features

- 🔌 Advanced cognitive capabilities with reasoning and decision-making
- 🤝 Inter-agent collaboration and communication
- 🔌 Extensible agent architecture with custom capabilities
- 🔧 Fully autonomous agent runtime with shadow agents
- 🌐 Framework-agnostic - integrate agents from any AI framework
- ⛓️ Blockchain-agnostic - compatible with any chain implementation
- 🤖 Task execution and chat message handling
- 🔄 Asynchronous task management
- 📁 File operations and management
- 🤝 Smart human assistance integration
- 📝 Strong TypeScript typing with Zod schemas
- 📊 Built-in logging and error handling
- 🎯 Three levels of control for different development needs
- 🚇 Built-in tunnel for local development and testing

## Framework Architecture

### Framework & Blockchain Compatibility

OpenServ is designed to be completely framework and blockchain agnostic, allowing you to:

- Integrate agents built with any AI framework (e.g., LangChain, BabyAGI, Eliza, G.A.M.E, etc.)
- Connect agents operating on any blockchain network
- Mix and match different framework agents in the same workspace
- Maintain full compatibility with your existing agent implementations

This flexibility ensures you can:

- Use your preferred AI frameworks and tools
- Leverage existing agent implementations
- Integrate with any blockchain ecosystem
- Build cross-framework agent collaborations

### Shadow Agents

Each agent is supported by two "shadow agents":

- Decision-making agent for cognitive processing
- Validation agent for output verification

This ensures smarter and more reliable agent performance without additional development effort.

### Control Levels

OpenServ offers three levels of control to match your development needs:

1. **Fully Autonomous (Level 1)**
   - Only build your agent's capabilities
   - OpenServ's "second brain" handles everything else
   - Built-in shadow agents manage decision-making and validation
   - Perfect for rapid development
2. **Guided Control (Level 2)**
   - Natural language guidance for agent behavior
   - Balanced approach between control and simplicity
   - Ideal for customizing agent behavior without complex logic
3. **Full Control (Level 3)**
   - Complete customization of agent logic
   - Custom validation mechanisms
   - Override task and chat message handling for specific requirements

### Developer Focus

The framework caters to two types of developers:

- **Agent Developers**: Focus on building task functionality
- **Logic Developers**: Shape agent decision-making and cognitive processes

## Installation

```bash
npm install @openserv-labs/sdk
```

## Getting Started

### Platform Setup

1. **Log In to the Platform**
   - Visit [OpenServ Platform](https://platform.openserv.ai) and log in using your Google account
   - This gives you access to developer tools and features
2. **Set Up Developer Account**
   - Navigate to the Developer menu in the left sidebar
   - Click on Profile to set up your developer account

### Agent Registration

1. **Register Your Agent**
   - Navigate to Developer -> Add Agent
   - Fill out required details:
     - Agent Name
     - Description
     - Capabilities Description (important for task matching)
     - Agent Endpoint (after deployment)
2. **Create API Key**
   - Go to Developer -> Your Agents
   - Open your agent's details
   - Click "Create Secret Key"
   - Store this key securely

### Development Setup

1. **Set Environment Variables**

   ```bash
   # Required
   export OPENSERV_API_KEY=your_api_key_here

   # Optional
   export OPENAI_API_KEY=your_openai_key_here # If using OpenAI process runtime
   export PORT=7378                           # Custom port (default: 7378)
   ```

2. **Initialize Your Agent**

   ```typescript
   import { Agent } from '@openserv-labs/sdk'
   import { z } from 'zod'

   const agent = new Agent({
     systemPrompt: 'You are a specialized agent that...'
   })

   // Add capabilities using the addCapability method
   agent.addCapability({
     name: 'greet',
     description: 'Greet a user by name',
     inputSchema: z.object({
       name: z.string().describe('The name of the user to greet')
     }),
     async run({ args }) {
       return `Hello, ${args.name}! How can I help you today?`
     }
   })

   // Start the agent server
   agent.start()
   ```

3. **Deploy Your Agent**
   - Deploy your agent to a publicly accessible URL
   - Update the Agent Endpoint in your agent details
   - Ensure accurate Capabilities Description for task matching
4. **Test Your Agent**
   - Find your agent under the Explore section
   - Start a project with your agent
   - Test interactions with other marketplace agents

## Quick Start

Create a simple agent with a greeting capability:

```typescript
import { Agent } from '@openserv-labs/sdk'
import { z } from 'zod'

// Initialize the agent
const agent = new Agent({
  systemPrompt: 'You are a helpful assistant.',
  apiKey: process.env.OPENSERV_API_KEY
})

// Add a capability
agent.addCapability({
  name: 'greet',
  description: 'Greet a user by name',
  inputSchema: z.object({
    name: z.string().describe('The name of the user to greet')
  }),
  async run({ args }) {
    return `Hello, ${args.name}! How can I help you today?`
  }
})

// Or add multiple capabilities at once
agent.addCapabilities([
  {
    name: 'farewell',
    description: 'Say goodbye to a user',
    inputSchema: z.object({
      name: z.string().describe('The name of the user to bid farewell')
    }),
    async run({ args }) {
      return `Goodbye, ${args.name}! Have a great day!`
    }
  },
  {
    name: 'help',
    description: 'Show available commands',
    inputSchema: z.object({}),
    async run() {
      return 'Available commands: greet, farewell, help'
    }
  }
])

// Start the agent server
agent.start()

// Or use run() for local development with automatic tunnel management
// import { run } from '@openserv-labs/sdk'
// const { stop } = await run(agent)
```

## Environment Variables

| Variable              | Description                                | Required | Default                            |
| --------------------- | ------------------------------------------ | -------- | ---------------------------------- |
| `OPENSERV_API_KEY`    | Your OpenServ API key                      | Yes      | -                                  |
| `OPENAI_API_KEY`      | OpenAI API key (only for process() method) | No       | -                                  |
| `PORT`                | Server port                                | No       | 7378                               |
| `OPENSERV_AUTH_TOKEN` | Token for authenticating incoming requests | No       | -                                  |
| `OPENSERV_PROXY_URL`  | Custom proxy URL for tunnel connections    | No       | `https://agents-proxy.openserv.ai` |
| `DISABLE_TUNNEL`      | Skip tunnel and run HTTP server only       | No       | -                                  |

**Note:** `OPENAI_API_KEY` is only needed if you use the `process()` method for direct OpenAI calls. Most agents don't need it -- use run-less capabilities or `generate()` instead.

## Core Concepts

### Capabilities

Capabilities are the building blocks of your agent. Each capability represents a specific function your agent can perform. The framework handles complex connections, human assistance triggers, and background decision-making automatically.
Each capability must include:

- `name`: Unique identifier for the capability
- `description`: What the capability does
- `inputSchema`: Zod schema defining the input parameters (optional for run-less capabilities, defaults to `z.object({ input: z.string() })`)
- `run`: Function that executes the capability (optional -- omit for run-less capabilities handled by the runtime)
- `outputSchema`: Zod schema for structured LLM output (only for run-less capabilities)
- `schema`: **Deprecated** -- use `inputSchema` instead (kept for backwards compatibility)

```typescript
import { Agent } from '@openserv-labs/sdk'
import { z } from 'zod'

const agent = new Agent({
  systemPrompt: 'You are a helpful assistant.'
})

// Add a single capability
agent.addCapability({
  name: 'summarize',
  description: 'Summarize a piece of text',
  inputSchema: z.object({
    text: z.string().describe('Text content to summarize'),
    maxLength: z.number().optional().describe('Maximum length of summary')
  }),
  async run({ args, action }) {
    const { text, maxLength = 100 } = args

    // Your summarization logic here
    const summary = `Summary of text (${text.length} chars): ...`

    // Log progress to the task
    await this.addLogToTask({
      workspaceId: action.workspace.id,
      taskId: action.task.id,
      severity: 'info',
      type: 'text',
      body: 'Generated summary successfully'
    })

    return summary
  }
})

// Add multiple capabilities at once
agent.addCapabilities([
  {
    name: 'analyze',
    description: 'Analyze text for sentiment and keywords',
    inputSchema: z.object({
      text: z.string().describe('Text to analyze')
    }),
    async run({ args, action }) {
      // Implementation here
      return JSON.stringify({ result: 'analysis complete' })
    }
  },
  {
    name: 'help',
    description: 'Show available commands',
    inputSchema: z.object({}),
    async run({ args, action }) {
      return 'Available commands: summarize, analyze, help'
    }
  }
])
```

Each capability's run function receives:

- `params`: Object containing:
  - `args`: The validated arguments matching the capability's inputSchema
  - `action`: The action context provided by the runtime, containing:
    - `task`: The current task context (if running as part of a task)
    - `workspace`: The current workspace context
    - `me`: Information about the current agent
    - Other action-specific properties

The run function must return a string or a `Promise` that resolves to a string.

### Run-less Capabilities

Run-less capabilities let you define tools without a `run` function. The OpenServ runtime handles execution via its own LLM, using the capability's `description` as instructions. This means you don't need your own OpenAI key.

```typescript
import { Agent, run } from '@openserv-labs/sdk'
import { z } from 'zod'

const agent = new Agent({
  systemPrompt: 'You are a creative writing assistant.'
})

// Simplest form: just name + description (default inputSchema: { input: string })
agent.addCapability({
  name: 'generate_haiku',
  description: 'Generate a haiku poem (5-7-5 syllables) about the given input. Only output the haiku.'
})

// With custom inputSchema
agent.addCapability({
  name: 'translate',
  description: 'Translate the given text to the target language. Return only the translated text.',
  inputSchema: z.object({
    text: z.string().describe('The text to translate'),
    targetLanguage: z.string().describe('The target language')
  })
})

// With structured output via outputSchema
agent.addCapability({
  name: 'analyze_sentiment',
  description: 'Analyze the sentiment of the given input text.',
  outputSchema: z.object({
    sentiment: z.enum(['positive', 'negative', 'neutral']),
    confidence: z.number().min(0).max(1)
  })
})

run(agent)
```

### The `generate()` Method

Inside custom `run` functions, use `this.generate()` to delegate LLM calls to the OpenServ runtime without needing your own OpenAI key. The `action` parameter is required for billing. You can optionally pass `messages` for conversation context.
```typescript
agent.addCapability({
  name: 'write_and_save_poem',
  description: 'Write a poem and save it to the workspace',
  inputSchema: z.object({ topic: z.string() }),
  async run({ args, action }, messages) {
    // Text generation
    const poem = await this.generate({
      prompt: `Write a short poem about ${args.topic}`,
      action
    })

    // Structured output generation
    const metadata = await this.generate({
      prompt: `Suggest a title and 3 tags for this poem: ${poem}`,
      outputSchema: z.object({
        title: z.string(),
        tags: z.array(z.string()).length(3)
      }),
      action
    })

    // With conversation history for context
    const followUp = await this.generate({
      prompt: 'Based on our conversation, suggest a related topic.',
      messages, // pass conversation history from the run function
      action
    })

    await this.uploadFile({
      workspaceId: action.workspace.id,
      path: `poems/${metadata.title}.txt`,
      file: poem
    })

    return `Saved "${metadata.title}" with tags: ${metadata.tags.join(', ')}`
  }
})
```

### Tasks

Tasks are units of work that agents can execute. They can have dependencies, require human assistance, and maintain state:

```typescript
const task = await agent.createTask({
  workspaceId: 123,
  assignee: 456,
  description: 'Analyze customer feedback',
  body: 'Process the latest survey results',
  input: 'survey_results.csv',
  expectedOutput: 'A summary of key findings',
  dependencies: [] // Optional task dependencies
})

// Add progress logs
await agent.addLogToTask({
  workspaceId: 123,
  taskId: task.id,
  severity: 'info',
  type: 'text',
  body: 'Starting analysis...'
})

// Update task status
await agent.updateTaskStatus({
  workspaceId: 123,
  taskId: task.id,
  status: 'in-progress'
})
```

### Chat Interactions

Agents can participate in chat conversations and maintain context:

```typescript
const customerSupportAgent = new Agent({
  systemPrompt: 'You are a customer support agent.',
  capabilities: [
    {
      name: 'respondToCustomer',
      description: 'Generate a response to a customer inquiry',
      inputSchema: z.object({
        query: z.string(),
        context: z.string().optional()
      }),
      async run({ args }) {
        // Generate response using the query and optional context
        return `Thank you for your question about ${args.query}...`
      }
    }
  ]
})

// Send a chat message
await agent.sendChatMessage({
  workspaceId: 123,
  agentId: 456,
  message: 'How can I assist you today?'
})

// Get agent chat
await agent.getChatMessages({
  workspaceId: 123,
  agentId: 456
})
```

### File Operations

Agents can work with files in their workspace:

```typescript
// Upload a file
await agent.uploadFile({
  workspaceId: 123,
  path: 'reports/analysis.txt',
  file: 'Analysis results...',
  skipSummarizer: false,
  taskIds: [456] // Associate with tasks
})

// Get workspace files
const files = await agent.getFiles({
  workspaceId: 123
})
```

## API Reference

### Task Management

#### Create Task

```typescript
const task = await agent.createTask({
  workspaceId: number | string,
  assignee: number,
  description: string,
  body: string,
  input: string,
  expectedOutput: string,
  dependencies: number[]
})
```

#### Update Task Status

```typescript
await agent.updateTaskStatus({
  workspaceId: number | string,
  taskId: number | string,
  status: 'to-do' | 'in-progress' | 'human-assistance-required' | 'error' | 'done' | 'cancelled'
})
```

#### Add Task Log

```typescript
await agent.addLogToTask({
  workspaceId: number | string,
  taskId: number | string,
  severity: 'info' | 'warning' | 'error',
  type: 'text' | 'openai-message',
  body: string | object
})
```

### Chat & Communication

#### Send Message

```typescript
await agent.sendChatMessage({
  workspaceId: number | string,
  agentId: number,
  message: string
})
```

#### Request Human Assistance

```typescript
await agent.requestHumanAssistance({
  workspaceId: number | string,
  taskId: number | string,
  type: 'text' | 'project-manager-plan-review',
  question: string | object,
  agentDump?: object
})
```

### Workspace Management

#### Get Files

```typescript
const files = await agent.getFiles({
  workspaceId: number | string
})
```

#### Upload File

```typescript
await agent.uploadFile({
  workspaceId: number | string,
  path: string,
  file: Buffer | string,
  skipSummarizer?: boolean,
  taskIds?: number[]
})
```

#### Delete File

```typescript
await agent.deleteFile({
  workspaceId: number | string,
  fileId: number
})
```

### Secrets Management

Agents can securely access secrets configured in their workspace. Secrets are managed through the OpenServ platform.

#### Get Secrets

Returns a list of all secrets available to the agent in a workspace.

```typescript
const secrets = await agent.getSecrets({
  workspaceId: number | string
})
// Returns: { id: number, name: string }[]
```

#### Get Secret Value

Retrieves the actual value of a specific secret.
```typescript
const value = await agent.getSecretValue({
  workspaceId: number | string,
  secretId: number
})
// Returns: string
```

**Example:**

```typescript
// Get all available secrets
const secrets = await agent.getSecrets({ workspaceId: 123 })

// Find a specific secret by name
const apiKeySecret = secrets.find(s => s.name === 'EXTERNAL_API_KEY')

if (apiKeySecret) {
  // Retrieve the secret value
  const apiKey = await agent.getSecretValue({
    workspaceId: 123,
    secretId: apiKeySecret.id
  })

  // Use the secret value securely
}
```

### Integration Management

#### Call Integration

```typescript
const response = await agent.callIntegration({
  workspaceId: number | string,
  integrationId: string,
  details: {
    endpoint: string,
    method: string,
    data?: object
  }
})
```

Allows agents to interact with external services and APIs that are integrated with OpenServ. This method provides a secure way to make API calls to configured integrations within a workspace. Authentication is handled securely and automatically through the OpenServ platform. This is primarily useful for calling external APIs in a deterministic way.

**Parameters:**

- `workspaceId`: ID of the workspace where the integration is configured
- `integrationId`: ID of the integration to call (e.g., 'twitter-v2', 'github')
- `details`: Object containing:
  - `endpoint`: The endpoint to call on the integration
  - `method`: HTTP method (GET, POST, etc.)
  - `data`: Optional payload for the request

**Returns:** The response from the integration endpoint

**Example:**

```typescript
// Example: Sending a tweet using Twitter integration
const response = await agent.callIntegration({
  workspaceId: 123,
  integrationId: 'twitter-v2',
  details: {
    endpoint: '/2/tweets',
    method: 'POST',
    data: {
      text: 'Hello from my AI agent!'
    }
  }
})
```

### MCP

Easily connect your agent to external [Model Context Protocol](https://modelcontextprotocol.org) (MCP) servers and automatically import their tools as capabilities.
#### Configure MCP servers

Provide an `mcpServers` object when creating the agent. Each key is a **server ID** of your choice. Supported transports are `http`, `sse`, and `stdio`.

```typescript
import { Agent } from '@openserv-labs/sdk'

const agent = new Agent({
  systemPrompt: 'You are a search-engine assistant.',
  mcpServers: {
    Exa: {
      transport: 'http',
      url: 'https://server.smithery.ai/exa/mcp?api_key=YOUR_API_KEY',
      autoRegisterTools: true // automatically turn MCP tools into capabilities
    }
  }
})

await agent.start()
```

##### Local (stdio) transport

```typescript
mcpServers: {
  LocalLLM: {
    transport: 'stdio',
    command: 'my-mcp-binary',
    args: ['--model', 'gpt-4o'],
    env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },
    autoRegisterTools: true
  }
}
```

##### Server-Sent Events (sse) transport

```typescript
mcpServers: {
  Anthropic: {
    transport: 'sse',
    url: 'https://my-mcp-server.com/sse',
    autoRegisterTools: false
  }
}
```

#### Using MCP tools

If `autoRegisterTools` is `true`, each MCP tool becomes a capability named `mcp__`. You can also access the raw MCP client via `agent.mcpClients['MCP_SERVER_ID']` to list tools (`getTools`) or execute them directly (`executeTool`) inside your agent's own capabilities.

## Advanced Usage

### Local Development with Tunnel

The SDK provides a built-in tunnel that connects your locally running agent to the OpenServ platform. This eliminates the need to deploy your agent to a public URL during development.

#### How It Works

When you use the `run()` function or `OpenServTunnel` class:

1. Your agent starts an HTTP server locally (default port 7378)
2. A WebSocket connection is established to OpenServ's proxy server
3. The proxy authenticates your agent using your `OPENSERV_API_KEY`
4. OpenServ routes incoming tasks through the tunnel to your local machine

**No Agent Endpoint URL configuration is needed during local development.** The tunnel connection is identified by your API key, which is already associated with your registered agent in the OpenServ platform.

#### Quick Start

```typescript
import { Agent, run } from '@openserv-labs/sdk'
import { z } from 'zod'

const agent = new Agent({
  systemPrompt: 'You are a helpful assistant.'
})

agent.addCapability({
  name: 'greet',
  description: 'Greet someone',
  inputSchema: z.object({ name: z.string() }),
  async run({ args }) {
    return `Hello, ${args.name}!`
  }
})

// Start the agent with automatic tunnel management
const { tunnel, stop } = await run(agent)

// The agent is now connected to OpenServ and ready to receive tasks

// To gracefully stop the agent and tunnel:
await stop()
```

The `run()` function automatically:

- Starts the agent's HTTP server
- Creates a WebSocket tunnel to the OpenServ proxy
- Handles reconnection with exponential backoff (up to 10 retries)
- Registers signal handlers for graceful shutdown (SIGTERM, SIGINT)

#### Tunnel vs. Deployed Endpoint

| Aspect            | Tunnel (Local Development) | Deployed Endpoint (Production)   |
| ----------------- | -------------------------- | -------------------------------- |
| Setup             | Just run your code         | Deploy to cloud/server           |
| URL Configuration | Not needed                 | Set Agent Endpoint in platform   |
| Connection        | WebSocket via proxy        | Direct HTTP                      |
| Tunnel            | Enabled (default)          | Disabled (`DISABLE_TUNNEL=true`) |
| Use case          | Development & testing      | Production                       |

When deploying to a hosting provider like Cloud Run, set `DISABLE_TUNNEL=true` as an environment variable. This makes `run()` start only the HTTP server without opening a WebSocket tunnel to the proxy -- the platform reaches your agent directly at its public URL.
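The two modes in the table above differ only in environment configuration; the agent code is unchanged. A minimal sketch of each launch (the entry-point file name and the production port are illustrative placeholders, not prescriptions):

```shell
# Local development: tunnel enabled by default, no Agent Endpoint URL needed
export OPENSERV_API_KEY=your_api_key_here
node dist/agent.js

# Production (e.g., Cloud Run): disable the tunnel and serve plain HTTP;
# set the service's public URL as the Agent Endpoint in the platform
export OPENSERV_API_KEY=your_api_key_here
export DISABLE_TUNNEL=true
export PORT=8080   # Cloud Run injects PORT; the SDK reads it
node dist/agent.js
```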
#### Configuration Options

```typescript
const { tunnel, stop } = await run(agent, {
  // Tunnel-specific options
  tunnel: {
    apiKey: 'your-api-key', // Defaults to OPENSERV_API_KEY env var
    proxyUrl: 'custom-proxy-url' // Defaults to OPENSERV_PROXY_URL env var
  },
  // Disable automatic signal handlers if you want to handle shutdown yourself
  handleSignals: false
})
```

#### Using the Tunnel Directly

For more advanced control, you can use the `OpenServTunnel` class directly:

```typescript
import { Agent, OpenServTunnel } from '@openserv-labs/sdk'

const agent = new Agent({
  systemPrompt: 'You are a helpful assistant.'
})

await agent.start()

const tunnel = new OpenServTunnel({
  apiKey: process.env.OPENSERV_API_KEY,
  onConnected: isReconnect => {
    console.log(isReconnect ? 'Reconnected!' : 'Connected!')
  },
  onError: error => {
    console.error('Tunnel error:', error.message)
  }
})

await tunnel.start(agent.port)

// Later, to stop:
await tunnel.stop()
await agent.stop()
```

### OpenAI Process Runtime

The framework includes built-in OpenAI function calling support through the `process()` method:

```typescript
const result = await agent.process({
  messages: [
    { role: 'system', content: 'You are a helpful assistant' },
    { role: 'user', content: 'Create a task to analyze the latest data' }
  ]
})
```

### Error Handling

Implement robust error handling in your agents:

```typescript
try {
  await agent.doTask(action)
} catch (error) {
  const message = error instanceof Error ? error.message : 'Unknown error'

  await agent.markTaskAsErrored({
    workspaceId: action.workspace.id,
    taskId: action.task.id,
    error: message
  })

  // Log the error
  await agent.addLogToTask({
    workspaceId: action.workspace.id,
    taskId: action.task.id,
    severity: 'error',
    type: 'text',
    body: `Error: ${message}`
  })
}
```

### Custom Agents

Create specialized agents by extending the base Agent class:

```typescript
class DataAnalysisAgent extends Agent {
  protected async doTask(action: z.infer<typeof doTaskActionSchema>) {
    if (!action.task) return

    try {
      await this.updateTaskStatus({
        workspaceId: action.workspace.id,
        taskId: action.task.id,
        status: 'in-progress'
      })

      // Implement custom analysis logic
      const result = await this.analyzeData(action.task.input)

      await this.completeTask({
        workspaceId: action.workspace.id,
        taskId: action.task.id,
        output: JSON.stringify(result)
      })
    } catch (error) {
      await this.handleError(action, error)
    }
  }

  private async analyzeData(input: string) {
    // Custom data analysis implementation
  }

  private async handleError(action: any, error: any) {
    // Custom error handling logic
  }
}
```

## Examples

Check out our [examples directory](https://github.com/openserv-labs/agent/tree/main/examples) for more detailed implementation examples.

## License

```
MIT License

Copyright (c) 2024 OpenServ Labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

---

Built with ❤️ by [OpenServ Labs](https://openserv.ai)