This file is a merged representation of a subset of the codebase, containing specifically included files and files not matching ignore patterns, combined into a single document by Repomix. # File Summary ## Purpose This file contains a packed representation of a subset of the repository's contents that is considered the most important context. It is designed to be easily consumable by AI systems for analysis, code review, or other automated processes. ## File Format The content is organized as follows: 1. This summary section 2. Repository information 3. Directory structure 4. Repository files (if enabled) 5. Multiple file entries, each consisting of: a. A header with the file path (## File: path/to/file) b. The full contents of the file in a code block ## Usage Guidelines - This file should be treated as read-only. Any changes should be made to the original repository files, not this packed version. - When processing this file, use the file path to distinguish between different files in the repository. - Be aware that this file may contain sensitive information. Handle it with the same level of security as you would the original repository. ## Notes - Some files may have been excluded based on .gitignore rules and Repomix's configuration - Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files - Only files matching these patterns are included: packages, infra - Files matching these patterns are excluded: infra/abbreviations.json, packages/ui-angular/libs/ui/, .vscode/, */**/.vscode/ - Files matching patterns in .gitignore are excluded - Files matching default ignore patterns are excluded - Files are sorted by Git change count (files with more changes are at the bottom) # Directory Structure ``` infra/ hooks/ api/ setup.ps1 setup.sh mcp/ setup.ps1 setup.sh ui/ setup.ps1 setup.sh postprovision.ps1 postprovision.sh modules/ fetch-container-image.bicep .gitkeep main.bicep main.parameters.json resources.bicep packages/ api-langchain-js/ src/ agents/ index.ts graph/ index.ts mcp/ index.ts mcp-http-client.ts mcp-sse-client.ts mcp-tools.ts providers/ azure-openai.ts docker-models.ts foundry-local.ts github-models.ts index.ts ollama-models.ts tools/ index.ts mcp-bridge.ts utils/ instrumentation.ts types.ts index.ts server.ts .gitignore Dockerfile package.json api-llamaindex-ts/ src/ agents/ index.ts graph/ index.ts mcp/ index.ts mcp-http-client.ts mcp-sse-client.ts mcp-tools.ts providers/ azure-openai.ts docker-models.ts foundry-local.ts github-models.ts index.ts ollama-models.ts tools/ index.ts mcp-bridge.ts utils/ instrumentation.ts types.ts index.ts server.ts .gitignore Dockerfile package.json api-maf-python/ src/ orchestrator/ agents/ customer_query_agent/ __init__.py destination_recommendation_agent/ __init__.py echo_agent/ __init__.py itinerary_planning_agent/ __init__.py triage_agent/ __init__.py __init__.py providers/ __init__.py azure_openai.py base.py docker_models.py foundry_local.py github_models.py ollama_models.py tools/ __init__.py examples.py mcp_tool_wrapper.py tool_config.py tool_registry.py __init__.py magentic_workflow.py workflow.py tests/ __init__.py test_agents.py test_config.py test_mcp_client.py test_mcp_graceful_degradation.py test_providers.py test_workflow.py utils/ __init__.py __init__.py config.py main.py .dockerignore Dockerfile pyproject.toml test.http mcp-servers/ customer-query/ AITravelAgent.CustomerQueryServer/ Models/ CustomerQueryAnalysisResult.cs 
Properties/ launchSettings.json Tools/ CustomerQueryTool.cs EchoTool.cs AITravelAgent.CustomerQueryServer.csproj Program.cs AITravelAgent.ServiceDefaults/ AITravelAgent.ServiceDefaults.csproj Extensions.cs AITravelAgent.sln Dockerfile destination-recommendation/ .mvn/ wrapper/ maven-wrapper.properties MavenWrapperDownloader.java src/ main/ java/ com/ microsoft/ mcp/ sample/ server/ config/ StartupConfig.java controller/ HealthController.java exception/ GlobalExceptionHandler.java model/ ActivityType.java BudgetCategory.java Destination.java PreferenceRequest.java Season.java service/ DestinationService.java McpServerApplication.java resources/ application.yml banner.txt .gitignore Dockerfile pom.xml echo-ping/ src/ index.ts instrumentation.ts server.ts token-provider.ts tools.ts Dockerfile package.json itinerary-planning/ src/ __init__.py app.py mcp_server.py .dockerignore Dockerfile pyproject.toml ui-angular/ public/ favicon.ico src/ app/ chat-conversation/ chat-conversation.component.css chat-conversation.component.html chat-conversation.component.spec.ts chat-conversation.component.ts chat-conversation.service.ts components/ accordion/ accordion.component.ts alert/ alert.component.ts skeleton-preview/ skeleton-preview.component.ts theme-toggle/ theme-toggle.component.ts services/ api.service.ts theme.service.ts app.component.css app.component.html app.component.spec.ts app.component.ts app.config.server.ts app.config.ts app.routes.server.ts app.routes.ts environments/ environment.development.ts environment.ts env.d.ts index.html main.server.ts main.ts server.ts styles.css .editorconfig .gitignore components.json Dockerfile Dockerfile.production package.json tailwind.config.js tsconfig.app.json tsconfig.spec.json ``` # Files ## File: infra/modules/fetch-container-image.bicep ```` param exists bool param name string resource existingApp 'Microsoft.App/containerApps@2023-05-02-preview' existing = if (exists) { name: name } output containers array = exists ? existingApp.properties.template.containers : [] ```` ## File: infra/main.parameters.json ````json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "environmentName": { "value": "${AZURE_ENV_NAME}" }, "location": { "value": "${AZURE_LOCATION}" }, "apiLangchainJsExists": { "value": "${SERVICE_API_LANGCHAIN_JS_RESOURCE_EXISTS=false}" }, "apiLangchainJsDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "apiLlamaindexTsExists": { "value": "${SERVICE_API_LLAMAINDEX_TS_RESOURCE_EXISTS=false}" }, "apiLlamaindexTsDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." 
}, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "apiMafPythonExists": { "value": "${SERVICE_API_MAF_PYTHON_RESOURCE_EXISTS=false}" }, "apiMafPythonDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "uiAngularExists": { "value": "${SERVICE_UI_ANGULAR_RESOURCE_EXISTS=false}" }, "uiAngularDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "itineraryPlanningExists": { "value": "${SERVICE_ITINERARY_PLANNING_RESOURCE_EXISTS=false}" }, "itineraryPlanningDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "customerQueryExists": { "value": "${SERVICE_CUSTOMER_QUERY_RESOURCE_EXISTS=false}" }, "customerQueryDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." 
} ] } }, "destinationRecommendationExists": { "value": "${SERVICE_DESTINATION_RECOMMENDATION_RESOURCE_EXISTS=false}" }, "destinationRecommendationDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "echoPingExists": { "value": "${SERVICE_ECHO_PING_RESOURCE_EXISTS=false}" }, "echoPingDefinition": { "value": { "settings": [ { "name": "", "value": "${VAR}", "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR} to use the value of 'VAR' from the current environment." }, { "name": "", "value": "${VAR_S}", "secret": true, "_comment_name": "The name of the environment variable when running in Azure. If empty, ignored.", "_comment_value": "The value to provide. This can be a fixed literal, or an expression like ${VAR_S} to use the value of 'VAR_S' from the current environment." } ] } }, "principalId": { "value": "${AZURE_PRINCIPAL_ID}" }, "isContinuousIntegration": { "value": "${GITHUB_ACTIONS=false}" } } } ```` ## File: packages/api-langchain-js/src/mcp/index.ts ````typescript export { MCPClient as MCPSSEClient } from "./mcp-sse-client.js"; export { MCPClient as MCPHTTPClient } from "./mcp-http-client.js"; ```` ## File: packages/api-langchain-js/src/mcp/mcp-http-client.ts ````typescript import EventEmitter from 'node:events'; import { Client } from '@modelcontextprotocol/sdk/client/index.js'; import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js'; import { ToolListChangedNotificationSchema } from '@modelcontextprotocol/sdk/types.js'; export class MCPClient extends EventEmitter { private client: Client; private transport: StreamableHTTPClientTransport; constructor(serverName: string, serverUrl: string, accessToken?: string) { super(); this.client = new Client({ name: 'mcp-http-client-' + serverName, version: '1.0.0', }); let headers = {}; if (accessToken) { headers = { Authorization: 'Bearer ' + accessToken, }; } this.transport = new StreamableHTTPClientTransport(new URL(serverUrl), { requestInit: { headers: { ...headers, }, }, }); this.client.setNotificationHandler( ToolListChangedNotificationSchema, () => { console.log('Emitting toolListChanged event'); this.emit('toolListChanged'); } ); } async connect() { await this.client.connect(this.transport); console.log('Connected to server'); } async listTools() { const result = await this.client.listTools(); return result.tools; } async callTool(name: string, toolArgs: string) { console.log(`Calling tool ${name} with arguments:`, toolArgs); return await this.client.callTool({ name, arguments: JSON.parse(toolArgs), }); } async cleanup() { console.log('Closing transport...'); await this.transport.close(); } } ```` ## File: packages/api-langchain-js/src/mcp/mcp-sse-client.ts ````typescript import { Client } from "@modelcontextprotocol/sdk/client/index.js"; import { SSEClientTransport 
} from "@modelcontextprotocol/sdk/client/sse.js"; import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js"; import { tracer, log } from "../utils/instrumentation.js"; /** * MCPClient is a client for connecting to Model Context Protocol (MCP) servers using Server-Sent Events (SSE). * * NOTE: This is a legacy implementation and should be replaced with the Streamable HTTP client! */ export class MCPClient { private client: Client; private tools: Array = []; private transport: Transport; constructor(serverName: string, serverUrl: string, accessToken?: string) { this.client = new Client({ name: "mcp-client-" + serverName, version: "1.0.0", }); let headers = {}; if (accessToken) { headers = { Authorization: "Bearer " + accessToken, }; } this.transport = new SSEClientTransport(new URL(serverUrl), { requestInit: { headers: { ...headers, }, }, }); } connect() { return tracer.startActiveSpan("connect", async (span) => { log("Connecting to MCP SSE server"); try { await this.client.connect(this.transport); log("Connected to MCP SSE server"); span.end(); return this.client; } catch (error: any) { log("Error connecting to MCP SSE server:", error); span.setStatus({ code: 2, message: (error as Error).message }); span.end(); throw new Error( `Failed to connect to MCP SSE server: ${(error as Error).message}` ); } }); } async listTools() { return tracer.startActiveSpan("listTools", async (span) => { log("Tools", this.tools); const toolsResult = await this.client.listTools(); this.tools = toolsResult.tools; log("Tools: ", toolsResult); span.end(); return toolsResult; }); } async callTool(toolName: string, args: Record) { console.log(`Called ${toolName} with params:`, args); return tracer.startActiveSpan("processQuery", async (span) => { log("Tools", this.tools); const toolResult = await this.client.callTool({ name: toolName, arguments: args, }); log("Tool result", toolResult); span.end(); return toolResult; }); } async cleanup() { console.log(`Cleaning up MCP SSE client`); await this.transport.close(); } } ```` ## File: packages/api-langchain-js/src/mcp/mcp-tools.ts ````typescript import { MCPClientOptions, McpServerDefinition } from "../utils/types.js"; import { MCPClient as MCPHTTPClient } from "./mcp-http-client.js"; import { MCPClient as MCPSSEClient } from "./mcp-sse-client.js"; function client(config: MCPClientOptions): MCPSSEClient | MCPHTTPClient { console.log(`Initializing MCP client`); console.log(`Using configuration:`, {config}); if (config.type === "sse") { // legacy implementation using SSE return new MCPSSEClient("langchain-js-sse-client", config.url, config.accessToken); } else { return new MCPHTTPClient("langchain-js-http-client", config.url, config.accessToken); } } export async function mcpToolsList(config: McpServerDefinition[]) { return await Promise.all( config.map(async ({ id, name, config }) => { const { url, type } = config; const mcpClient = client(config); try { console.log(`Connecting to MCP server ${name} at ${url}`); await mcpClient.connect(); console.log(`MCP server ${name} is reachable`); const tools = await mcpClient.listTools(); console.log(`MCP server ${name} has ${tools.length} tools`); return { id, name, url, type, reachable: true, selected: id !== "echo-ping", tools, }; } catch (error: unknown) { console.error( `MCP server ${name} is not reachable`, (error as Error).message ); return { id, name, url, type, reachable: false, selected: false, tools: [], error: (error as Error).message, }; } finally { await mcpClient.cleanup(); } }) ); } ```` ## File: 
packages/api-langchain-js/src/providers/azure-openai.ts ````typescript import { ChatOpenAI } from "@langchain/openai"; import { DefaultAzureCredential, getBearerTokenProvider, ManagedIdentityCredential, } from "@azure/identity"; const AZURE_COGNITIVE_SERVICES_SCOPE = "https://cognitiveservices.azure.com/.default"; export const llm = async () => { console.log("Using Azure OpenAI"); const isRunningInLocalDocker = process.env.IS_LOCAL_DOCKER_ENV === "true"; if (isRunningInLocalDocker) { // running in local Docker environment console.log( "Running in local Docker environment, Azure Managed Identity is not supported. Authenticating with apiKey." ); // return new AzureChatOpenAI({ // azureOpenAIEndpoint: process.env.AZURE_OPENAI_ENDPOINT, // azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_DEPLOYMENT, // azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // temperature: 0, // }); const token = process.env.AZURE_OPENAI_API_KEY; return new ChatOpenAI({ configuration: { baseURL: process.env.AZURE_OPENAI_ENDPOINT, }, modelName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME ?? "gpt-5", streaming: true, useResponsesApi: true, apiKey: token, verbose: true, }); } // Set up Azure AD authentication for production let credential: any = new DefaultAzureCredential(); const clientId = process.env.AZURE_CLIENT_ID; if (clientId) { // running in production with a specific client ID console.log("Using Azure Client ID:", clientId); credential = new ManagedIdentityCredential({ clientId, }); } // Create the token provider function that will be called on every request const azureADTokenProvider = getBearerTokenProvider( credential, AZURE_COGNITIVE_SERVICES_SCOPE ); console.log( "Using Azure OpenAI Endpoint:", process.env.AZURE_OPENAI_ENDPOINT ); console.log( "Using Azure OpenAI Deployment Name:", process.env.AZURE_OPENAI_DEPLOYMENT_NAME ); return new ChatOpenAI({ configuration: { baseURL: process.env.AZURE_OPENAI_ENDPOINT, }, modelName: process.env.AZURE_OPENAI_DEPLOYMENT_NAME ?? "gpt-5", streaming: true, useResponsesApi: true, apiKey: await azureADTokenProvider(), verbose: true, }); }; ```` ## File: packages/api-langchain-js/src/providers/docker-models.ts ````typescript import { ChatOpenAI } from "@langchain/openai"; export const llm = async () => { console.log("Using Docker Models"); return new ChatOpenAI({ openAIApiKey: 'DOCKER_API_KEY', modelName: process.env.DOCKER_MODEL || "gpt-3.5-turbo", configuration: { baseURL: process.env.DOCKER_MODEL_ENDPOINT, }, temperature: 0, }); }; ```` ## File: packages/api-langchain-js/src/providers/foundry-local.ts ````typescript import { ChatOpenAI } from "@langchain/openai"; import { FoundryLocalManager } from "foundry-local-sdk"; // By using an alias, the most suitable model will be downloaded // to your end-user's device. // TIP: You can find a list of available models by running the // following command in your terminal: `foundry model list`. const alias = process.env.AZURE_FOUNDRY_LOCAL_MODEL_ALIAS || "phi-3.5-mini"; export const llm = async () => { // Create a FoundryLocalManager instance. This will start the Foundry // Local service if it is not already running. const foundryLocalManager = new FoundryLocalManager(); // Initialize the manager with a model. This will download the model // if it is not already present on the user's device. 
console.log("Initializing Foundry Local Manager..."); const modelInfo = await foundryLocalManager.init(alias); console.log("Azure Local Foundry Model Info:", modelInfo); console.log("Using Azure Local Foundry"); return new ChatOpenAI({ openAIApiKey: foundryLocalManager.apiKey, configuration: { baseURL: foundryLocalManager.endpoint, }, temperature: 0, }); }; ```` ## File: packages/api-langchain-js/src/providers/github-models.ts ````typescript import { ChatOpenAI } from "@langchain/openai"; export const llm = async () => { console.log("Using GitHub Models"); return new ChatOpenAI({ openAIApiKey: process.env.GITHUB_TOKEN, modelName: process.env.GITHUB_MODEL || "gpt-5", configuration: { baseURL: "https://models.inference.ai.azure.com", }, temperature: 0, }); }; ```` ## File: packages/api-langchain-js/src/providers/index.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import { llm as azureOpenAI } from "./azure-openai.js"; import { llm as foundryLocal } from "./foundry-local.js"; import { llm as githubModels } from "./github-models.js"; import { llm as dockerModels } from "./docker-models.js"; import { llm as ollamaModels } from "./ollama-models.js"; type LLMProvider = | "azure-openai" | "github-models" | "foundry-local" | "docker-models" | "ollama-models"; const provider = (process.env.LLM_PROVIDER || "") as LLMProvider; export const llm = async () => { switch (provider) { case "azure-openai": return azureOpenAI(); case "github-models": return githubModels(); case "docker-models": return dockerModels(); case "ollama-models": return ollamaModels(); case "foundry-local": return foundryLocal(); default: throw new Error( `Unknown LLM_PROVIDER "${provider}". Valid options are: azure-openai, github-models, foundry-local, docker-models, ollama-models.` ); } }; ```` ## File: packages/api-langchain-js/src/providers/ollama-models.ts ````typescript import { ChatOpenAI } from "@langchain/openai"; export const llm = async () => { console.log("Using Ollama Models"); return new ChatOpenAI({ openAIApiKey: 'OLLAMA_API_KEY', modelName: process.env.OLLAMA_MODEL || "llama2", configuration: { baseURL: process.env.OLLAMA_MODEL_ENDPOINT, }, temperature: 0, }); }; ```` ## File: packages/api-langchain-js/src/tools/index.ts ````typescript import { McpServerDefinition } from "../utils/types.js"; export type McpServerName = | "echo-ping" | "customer-query" | "itinerary-planning" | "destination-recommendation"; const MCP_API_HTTP_PATH = "/mcp"; export const McpToolsConfig = (): { [k in McpServerName]: McpServerDefinition; } => ({ "echo-ping": { config: { url: process.env["MCP_ECHO_PING_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, requestInit: { headers: { "Authorization": "Bearer " + process.env["MCP_ECHO_PING_ACCESS_TOKEN"], } }, }, id: "echo-ping", name: "Echo Test", }, "customer-query": { config: { url: process.env["MCP_CUSTOMER_QUERY_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, }, id: "customer-query", name: "Customer Query", }, "itinerary-planning": { config: { url: process.env["MCP_ITINERARY_PLANNING_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, }, id: "itinerary-planning", name: "Itinerary Planning", }, "destination-recommendation": { config: { url: process.env["MCP_DESTINATION_RECOMMENDATION_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, }, id: "destination-recommendation", name: "Destination Recommendation", }, }); ```` ## File: packages/api-langchain-js/src/tools/mcp-bridge.ts ````typescript import { DynamicStructuredTool } from "@langchain/core/tools"; 
## File: packages/api-langchain-js/src/tools/mcp-bridge.ts ````typescript import { DynamicStructuredTool } from "@langchain/core/tools"; import { loadMcpTools } from "@langchain/mcp-adapters"; import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js"; import { Client } from "@modelcontextprotocol/sdk/client/index.js"; import { McpServerDefinition } from "../utils/types.js"; // Create MCP tools using the official @langchain/mcp-adapters export const createMcpToolsFromDefinition = async ( mcpServerConfig: McpServerDefinition ): Promise<DynamicStructuredTool[]> => { try { console.log(`Creating MCP tools for ${mcpServerConfig.id} at ${mcpServerConfig.config.url}`); // Extract headers if available const headers: Record<string, string> = {}; if (mcpServerConfig.config.requestInit?.headers) { const configHeaders = mcpServerConfig.config.requestInit.headers as Record<string, string>; Object.assign(headers, configHeaders); } // Create MCP client using StreamableHTTPClientTransport const client = new Client({ name: `langchain-mcp-client-${mcpServerConfig.id}`, version: "1.0.0", }); const transport = new StreamableHTTPClientTransport( new URL(mcpServerConfig.config.url), { requestInit: { headers, }, } ); // Connect to MCP server await client.connect(transport); // Load tools using the official Langchain MCP adapters const tools = await loadMcpTools(mcpServerConfig.id, client); console.log(`Created ${tools.length} Langchain tools for ${mcpServerConfig.id}`); return tools; } catch (error) { console.error(`Error creating MCP tools for ${mcpServerConfig.id}:`, error); return []; } }; ````
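A minimal sketch (assuming a reachable server and the provider environment configured) of feeding the bridged tools into a LangChain chat model; actual tool execution is handled by the workflow in `graph/index.ts`, so here the model only advertises the tool schemas and emits tool calls:

````typescript
import { createMcpToolsFromDefinition } from "./tools/mcp-bridge.js";
import { McpToolsConfig } from "./tools/index.js";
import { llm } from "./providers/index.js";

// Load the bridged tools for one server and attach them to the chat model.
const definition = McpToolsConfig()["customer-query"];
const tools = await createMcpToolsFromDefinition(definition);

const model = await llm();
// bindTools only advertises the tool schemas; executing a returned
// tool call is up to the caller.
const modelWithTools = model.bindTools(tools);
const response = await modelWithTools.invoke("What are good family destinations?");
console.log(response.tool_calls);
````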
## File: packages/api-langchain-js/src/utils/instrumentation.ts ````typescript import { metrics, trace } from "@opentelemetry/api"; import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"; import { OTLPLogExporter } from "@opentelemetry/exporter-logs-otlp-grpc"; import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-grpc"; import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; import { OTLPExporterConfigBase } from "@opentelemetry/otlp-exporter-base"; import { resourceFromAttributes } from "@opentelemetry/resources"; import { LoggerProvider, SimpleLogRecordProcessor } from "@opentelemetry/sdk-logs"; import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION, } from "@opentelemetry/semantic-conventions"; const otlpEndpoint = process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4317"; const otlpHeaders = process.env.OTEL_EXPORTER_OTLP_HEADERS || "x-otlp-header=header-value"; const otlpServiceName = process.env.OTEL_SERVICE_NAME || "api"; const otlpServiceVersion = process.env.OTLP_SERVICE_VERSION || "1.0"; const otlpOptions = { url: otlpEndpoint, headers: otlpHeaders.split(",").reduce((acc, header) => { const [key, value] = header.split("="); acc[key] = value; return acc; }, {} as Record<string, string>), } as OTLPExporterConfigBase; const resource = resourceFromAttributes({ [ATTR_SERVICE_NAME]: otlpServiceName, [ATTR_SERVICE_VERSION]: otlpServiceVersion, }); const sdk = new NodeSDK({ resource, logRecordProcessor: new SimpleLogRecordProcessor( new OTLPLogExporter(otlpOptions) ), traceExporter: new OTLPTraceExporter(otlpOptions), metricReader: new PeriodicExportingMetricReader({ exporter: new OTLPMetricExporter(otlpOptions), }), instrumentations: [getNodeAutoInstrumentations()], }); sdk.start(); const loggerProvider = new LoggerProvider({ resource }); const logExporter = new OTLPLogExporter(otlpOptions); loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(logExporter)); const tracer = trace.getTracer(otlpServiceName, otlpServiceVersion); const meter = metrics.getMeter(otlpServiceName, otlpServiceVersion); const logger = loggerProvider.getLogger(otlpServiceName); function log( message: string, attributes: Record<string, any> = {}, level: string = "INFO" ) { logger.emit({ severityText: level, body: message, attributes: { service: otlpServiceName, version: otlpServiceVersion, ...attributes, }, }); } export { log, meter, tracer }; ````
## File: packages/api-langchain-js/src/utils/types.ts ````typescript import type { SSEClientTransportOptions } from "@modelcontextprotocol/sdk/client/sse.js"; export type MCPClientOptions = SSEClientTransportOptions & { url: string; type: "sse" | "http"; accessToken?: string; verbose?: boolean; }; export type McpServerDefinition = { name: string; id: string; config: MCPClientOptions; }; ````
## File: packages/api-langchain-js/src/index.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import { McpServerDefinition } from "./utils/types.js"; import { llm as llmProvider } from "./providers/index.js"; import { TravelAgentsWorkflow } from "./graph/index.js"; // Function to set up agents and return the workflow instance export async function setupAgents(filteredTools: McpServerDefinition[] = []) { console.log("Setting up Langchain.js agents..."); let llm; try { llm = await llmProvider(); } catch (error) { throw new Error(error instanceof Error ? error.message : String(error)); } // Create and initialize the workflow const workflow = new TravelAgentsWorkflow(llm); await workflow.initialize(filteredTools); console.log("Langchain.js workflow initialized successfully"); return workflow; } // Re-export the McpToolsConfig and types export { McpToolsConfig } from "./tools/index.js"; export type { McpServerDefinition, MCPClientOptions } from "./utils/types.js"; ````
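To show the exported surface in context, a sketch (assuming the same environment variables the server uses) of driving the workflow directly, mirroring what `server.ts` below does per request; the prompt is a placeholder:

````typescript
import { setupAgents, McpToolsConfig } from "./index.js";

// Pick a subset of MCP servers, as the UI does via the "tools" field of /api/chat.
const tools = [McpToolsConfig()["destination-recommendation"]];
const workflow = await setupAgents(tools);

// workflow.run() yields { eventName, data } events, which server.ts
// serializes into the streamed HTTP response.
for await (const { eventName, data } of workflow.run("Suggest a warm destination for March")) {
  console.log(eventName, data);
}
````

## File: packages/api-langchain-js/src/server.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import cors from "cors"; import express from "express"; import { Readable } from "node:stream"; import { pipeline } from "node:stream/promises"; import { setupAgents } from "./index.js"; import { McpToolsConfig } from "./tools/index.js"; import { mcpToolsList } from "./mcp/mcp-tools.js"; const app = express(); const PORT = process.env.PORT || 4000; const CHUNK_END = "\n\n"; // Middleware app.use(cors()); app.use(express.json()); const apiRouter = express.Router(); // Add request body logging middleware for debugging apiRouter.use((req, res, next) => { if (req.path === "/chat" && req.method === "POST") { const contentType = req.headers["content-type"]?.replace(/\n|\r/g, ""); const body = typeof req.body === "string" ?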
req.body.replace(/\n|\r/g, "") : JSON.stringify(req.body).replace(/\n|\r/g, ""); console.log("Request Content-Type:", contentType); console.log("Request body:", body); } next(); }); // Health check endpoint apiRouter.get("/health", (req, res) => { res.status(200).json({ status: "OK", orchestrator: "langchain-js" }); }); // MCP tools apiRouter.get("/tools", async (req, res) => { try { const tools = await mcpToolsList(Object.values(McpToolsConfig())); console.log("Available tools:", tools); res.status(200).json({ tools }); } catch (error) { console.error("Error fetching MCP tools:", error); res.status(500).json({ error: "Error fetching MCP tools" }); } }); // Chat endpoint with Server-Sent Events (SSE) for streaming responses // @ts-ignore - Ignoring TypeScript errors for Express route handlers apiRouter.post("/chat", async (req, res) => { req.on("close", () => { console.log("Client disconnected, aborting..."); }); if (!req.body) { console.error( "Request body is undefined. Check Content-Type header in the request." ); return res.status(400).json({ error: "Request body is undefined. Make sure to set Content-Type to application/json.", }); } const message = req.body.message; const tools = req.body.tools; console.log("Tools to use:", JSON.stringify(tools, null, 2)); if (!message) { return res.status(400).json({ error: "Message is required" }); } res.setHeader("Content-Type", "text/event-stream"); res.setHeader("Cache-Control", "no-cache"); res.setHeader("Connection", "keep-alive"); try { const workflow = await setupAgents(tools); const context = workflow.run(message); const readableStream = new Readable({ async read() { try { for await (const event of context) { const { eventName, data } = event; const serializedData = JSON.stringify({ type: "metadata", kind: "langchain-js", agent: (data as any)?.agent || null, event: eventName, data: data ? 
JSON.parse(JSON.stringify(data)) : null, }); this.push(serializedData + CHUNK_END); console.log("Pushed event:", serializedData); } this.push(null); // Close the stream } catch (error: any) { console.error("Error during streaming:", error?.message); this.push(null); // End the stream so the response does not hang } }, }); await pipeline(readableStream, res); } catch (error) { console.error("Error occurred:", error); if (!res.headersSent) { res.status(500).json({ error: (error as any).message }); } else { res.write( `${JSON.stringify({ type: "error", message: (error as any).message, })}` + CHUNK_END ); res.end(); } } }); // Mount the API router with the /api prefix app.use("/api", apiRouter); // Add a root route for API information app.get("/", (req, res) => { res.json({ message: "AI Travel Agents API - LangChain.js", version: "1.0.0", orchestrator: "langchain-js", endpoints: { health: "/api/health", tools: "/api/tools", chat: "/api/chat", }, }); }); // Start the server app.listen(PORT, () => { console.log(`LangChain.js API server running on port ${PORT}`); console.log(`API endpoints:`); console.log(` - Health check: http://localhost:${PORT}/api/health (GET)`); console.log(` - MCP Tools: http://localhost:${PORT}/api/tools (GET)`); console.log(` - Chat: http://localhost:${PORT}/api/chat (POST)`); }); ````
## File: packages/api-langchain-js/.gitignore ```` node_modules dist *.log .env .DS_Store ````
## File: packages/api-langchain-js/Dockerfile ```` FROM node:22-alpine AS builder # Install build dependencies RUN apk add --no-cache python3 make g++ \ && rm -rf /var/cache/apk/* WORKDIR /app # Copy package files first for better caching COPY package*.json ./ # Use cache mount for npm and install dependencies RUN --mount=type=cache,target=/root/.npm \ npm i --prefer-offline --no-audit # Now copy source code (changes more frequently) COPY tsconfig.json ./ COPY src ./src # Build the application RUN npm run build # Runtime stage with security improvements FROM node:22-alpine AS release # Add non-root user for security RUN addgroup -g 1001 -S nodejs && \ adduser -S nodejs -u 1001 WORKDIR /app # Copy built artifacts COPY --from=builder /app/dist ./dist COPY --from=builder /app/package*.json ./ # Set production environment ENV NODE_ENV=production # Install production dependencies with cache mount RUN --mount=type=cache,target=/root/.npm \ npm ci --prefer-offline --omit=dev --ignore-scripts --no-audit # Change ownership to non-root user RUN chown -R nodejs:nodejs /app # Switch to non-root user USER nodejs # Expose port (adjust if needed) EXPOSE 4000 # Run the application CMD ["node", "dist/server.js"] ````
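For reference, a small client-side sketch (not repository code; port and payload match the server above) of consuming the stream emitted by POST /api/chat, where each event is a JSON object terminated by a blank line (`"\n\n"`):

````typescript
// Node 18+: fetch is built in and response.body is async-iterable.
const response = await fetch("http://localhost:4000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Plan a 3-day trip to Lisbon", tools: [] }),
});

const decoder = new TextDecoder();
let buffer = "";
for await (const chunk of response.body as unknown as AsyncIterable<Uint8Array>) {
  buffer += decoder.decode(chunk, { stream: true });
  // Events are JSON objects separated by blank lines; buffer partial chunks.
  const parts = buffer.split("\n\n");
  buffer = parts.pop() ?? "";
  for (const part of parts.filter(Boolean)) {
    console.log(JSON.parse(part));
  }
}
````

## File: packages/api-llamaindex-ts/src/agents/index.ts ````typescript import { BaseToolWithCall, ToolCallLLM } from "@llamaindex/core/llms"; import { agent } from "@llamaindex/workflow"; import { McpToolsConfig } from "../tools/index.js"; import { createMcpToolsFromDefinition } from "../tools/mcp-bridge.js"; import { McpServerDefinition } from "../utils/types.js"; // Helper function to create MCP tools based on server configuration export const createMcpTools = async (mcpServerConfig: McpServerDefinition): Promise<BaseToolWithCall[]> => { return createMcpToolsFromDefinition(mcpServerConfig); }; // Function to setup agents with filtered tools export const setupAgents = async ( filteredTools: McpServerDefinition[] = [], llm: ToolCallLLM ) => { const tools = Object.fromEntries( filteredTools.map((tool) => [tool.id, true]) ); console.log("Filtered tools:", tools); const agents: any[] = []; let mcpTools: BaseToolWithCall[] = []; const mcpToolsConfig = McpToolsConfig(); const verbose =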
false; // Create agents based on available tools if (tools["echo-ping"]) { const mcpServerConfig = mcpToolsConfig["echo-ping"]; const echoTools = await createMcpTools(mcpServerConfig); const _agent = agent({ llm, name: "EchoAgent", tools: echoTools, systemPrompt: "Echo back the received input. Do not respond with anything else. Always call the tools.", }); agents.push(_agent); mcpTools.push(...echoTools); } if (tools["customer-query"]) { const mcpServerConfig = mcpToolsConfig["customer-query"]; const customerTools = await createMcpTools(mcpServerConfig); const _agent = agent({ llm, name: "CustomerQueryAgent", tools: customerTools, systemPrompt: "Assists employees in better understanding customer needs, facilitating more accurate and personalized service. This agent is particularly useful for handling nuanced queries, such as specific travel preferences or budget constraints, which are common in travel agency interactions.", }); agents.push(_agent); mcpTools.push(...customerTools); } if (tools["itinerary-planning"]) { const mcpServerConfig = mcpToolsConfig["itinerary-planning"]; const itineraryTools = await createMcpTools(mcpServerConfig); const _agent = agent({ llm, name: "ItineraryPlanningAgent", tools: itineraryTools, systemPrompt: "Creates a travel itinerary based on user preferences and requirements.", }); agents.push(_agent); mcpTools.push(...itineraryTools); } if (tools["destination-recommendation"]) { const mcpServerConfig = mcpToolsConfig["destination-recommendation"]; const destinationTools = await createMcpTools(mcpServerConfig); const _agent = agent({ llm, name: "DestinationRecommendationAgent", tools: destinationTools, systemPrompt: "Suggests destinations based on customer preferences and requirements.", }); agents.push(_agent); mcpTools.push(...destinationTools); } const supervisor = agent({ name: "TravelAgent", systemPrompt: "Acts as a triage agent to determine the best course of action for the user's query. If you cannot handle the query, please pass it to the next agent. 
If you can handle the query, please do so.", tools: [...mcpTools], canHandoffTo: agents .map((target) => target.getAgents().map(((agent: any) => agent.name))) .flat(), llm, verbose, }); console.log("Agents list:", agents); console.log("Tools list:", JSON.stringify(mcpTools, null, 2)); console.log("Agents created:", Object.keys(agents)); console.log("All tools count:", mcpTools.length); return { supervisor, agents, mcpTools }; }; ```` ## File: packages/api-llamaindex-ts/src/mcp/index.ts ````typescript export { MCPClient as MCPSSEClient } from "./mcp-sse-client.js"; export { MCPClient as MCPHTTPClient } from "./mcp-http-client.js"; ```` ## File: packages/api-llamaindex-ts/src/mcp/mcp-http-client.ts ````typescript import EventEmitter from 'node:events'; import { Client } from '@modelcontextprotocol/sdk/client/index.js'; import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js'; import { ToolListChangedNotificationSchema } from '@modelcontextprotocol/sdk/types.js'; export class MCPClient extends EventEmitter { private client: Client; private transport: StreamableHTTPClientTransport; constructor(serverName: string, serverUrl: string, accessToken?: string) { super(); this.client = new Client({ name: 'mcp-http-client-' + serverName, version: '1.0.0', }); let headers = {}; if (accessToken) { headers = { Authorization: 'Bearer ' + accessToken, }; } this.transport = new StreamableHTTPClientTransport(new URL(serverUrl), { requestInit: { headers: { ...headers, }, }, }); this.client.setNotificationHandler( ToolListChangedNotificationSchema, () => { console.log('Emitting toolListChanged event'); this.emit('toolListChanged'); } ); } async connect() { await this.client.connect(this.transport); console.log('Connected to server'); } async listTools() { const result = await this.client.listTools(); return result.tools; } async callTool(name: string, toolArgs: string) { console.log(`Calling tool ${name} with arguments:`, toolArgs); return await this.client.callTool({ name, arguments: JSON.parse(toolArgs), }); } async close() { console.log('Closing transport...'); await this.transport.close(); } } ```` ## File: packages/api-llamaindex-ts/src/mcp/mcp-sse-client.ts ````typescript import { Client } from "@modelcontextprotocol/sdk/client/index.js"; import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js"; import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js"; import { tracer, log } from "../utils/instrumentation.js"; /** * MCPClient is a client for connecting to Model Context Protocol (MCP) servers using Server-Sent Events (SSE). * * NOTE: This is a legacy implementation and should be replaced with the Streamable HTTP client! 
*/ export class MCPClient { private client: Client; private tools: Array<any> = []; private transport: Transport; constructor(serverName: string, serverUrl: string, accessToken?: string) { this.client = new Client({ name: "mcp-client-" + serverName, version: "1.0.0", }); let headers = {}; if (accessToken) { headers = { Authorization: "Bearer " + accessToken, }; } this.transport = new SSEClientTransport(new URL(serverUrl), { requestInit: { headers: { ...headers, }, }, }); } connect() { return tracer.startActiveSpan("connect", async (span) => { log("Connecting to MCP SSE server"); try { await this.client.connect(this.transport); log("Connected to MCP SSE server"); span.end(); return this.client; } catch (error: any) { log("Error connecting to MCP SSE server:", error); span.setStatus({ code: 2, message: (error as Error).message }); span.end(); throw new Error( `Failed to connect to MCP SSE server: ${(error as Error).message}` ); } }); } async listTools() { return tracer.startActiveSpan("listTools", async (span) => { log("Tools", this.tools); const toolsResult = await this.client.listTools(); this.tools = toolsResult.tools; log("Tools: ", toolsResult); span.end(); return toolsResult; }); } async callTool(toolName: string, args: Record<string, unknown>) { console.log(`Called ${toolName} with params:`, args); return tracer.startActiveSpan("processQuery", async (span) => { log("Tools", this.tools); const toolResult = await this.client.callTool({ name: toolName, arguments: args, }); log("Tool result", toolResult); span.end(); return toolResult; }); } async cleanup() { await this.transport.close(); } } ````
## File: packages/api-llamaindex-ts/src/mcp/mcp-tools.ts ````typescript import { MCPClientOptions, McpServerDefinition } from "../utils/types.js"; import { MCPClient as MCPHTTPClient } from "./mcp-http-client.js"; import { MCPClient as MCPSSEClient } from "./mcp-sse-client.js"; function client(config: MCPClientOptions): MCPSSEClient | MCPHTTPClient { console.log(`Initializing MCP client`); console.log(`Using configuration:`, {config}); if (config.type === "sse") { // legacy implementation using SSE return new MCPSSEClient("llamaindex-sse-client", config.url, config.accessToken); } else { return new MCPHTTPClient("llamaindex-http-client", config.url, config.accessToken); } } export async function mcpToolsList(config: McpServerDefinition[]) { return await Promise.all( config.map(async ({ id, name, config }) => { const { url, type } = config; const mcpClient = client(config); try { console.log(`Connecting to MCP server ${name} at ${url}`); await mcpClient.connect(); console.log(`MCP server ${name} is reachable`); const tools = await mcpClient.listTools(); console.log(`MCP server ${name} has ${tools.length} tools`); return { id, name, url, type, reachable: true, selected: id !== "echo-ping", tools, }; } catch (error: unknown) { console.error( `MCP server ${name} is not reachable`, (error as Error).message ); return { id, name, url, type, reachable: false, selected: false, tools: [], error: (error as Error).message, }; } }) ); } ````
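To document the payload shape, a hypothetical type (not present in the repository) describing one entry produced by `mcpToolsList` above and returned by GET /api/tools:

````typescript
// Hypothetical descriptor for one MCP server entry in the /api/tools response.
type McpServerStatus = {
  id: string;
  name: string;
  url: string;
  type: "sse" | "http";
  reachable: boolean; // false when connect() or listTools() threw
  selected: boolean;  // echo-ping is deselected by default
  tools: unknown[];   // tool descriptors reported by the server
  error?: string;     // only set for unreachable servers
};
````

## File: packages/api-llamaindex-ts/src/providers/azure-openai.ts ````typescript import { AzureOpenAI } from "@llamaindex/azure"; import { DefaultAzureCredential, getBearerTokenProvider, ManagedIdentityCredential, } from "@azure/identity"; const AZURE_COGNITIVE_SERVICES_SCOPE = "https://cognitiveservices.azure.com/.default"; export const llm = async () => { console.log("Using Azure OpenAI"); const isRunningInLocalDocker = process.env.IS_LOCAL_DOCKER_ENV === "true"; if (isRunningInLocalDocker)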
{ // running in local Docker environment console.log( "Running in local Docker environment, Azure Managed Identity is not supported. Authenticating with apiKey." ); return new AzureOpenAI({ endpoint: process.env.AZURE_OPENAI_ENDPOINT, deployment: process.env.AZURE_OPENAI_DEPLOYMENT_NAME, apiKey: process.env.AZURE_OPENAI_API_KEY, temperature: 1, // gpt-5 specific setting }); } let credential: any = new DefaultAzureCredential(); const clientId = process.env.AZURE_CLIENT_ID; if (clientId) { // running in production with a specific client ID console.log("Using Azure Client ID:", clientId); credential = new ManagedIdentityCredential({ clientId, }); } console.log("Using DefaultAzureCredential for authentication."); const azureADTokenProvider = getBearerTokenProvider( credential, AZURE_COGNITIVE_SERVICES_SCOPE ); return new AzureOpenAI({ azureADTokenProvider, endpoint: process.env.AZURE_OPENAI_ENDPOINT, deployment: process.env.AZURE_OPENAI_DEPLOYMENT_NAME, temperature: 1, // gpt-5 specific setting }); }; ```` ## File: packages/api-llamaindex-ts/src/providers/docker-models.ts ````typescript import { OpenAI, openai } from "@llamaindex/openai"; export const llm = async () => { console.log("Using Docker Models"); const provider = openai({ baseURL: process.env.DOCKER_MODEL_ENDPOINT, apiKey: 'DOCKER_API_KEY', model: process.env.DOCKER_MODEL, }); return { ...provider, // TODO: Remove this when LlamaIndex supports tool calls for non-OpenAI providers supportToolCall: true, } as OpenAI }; ```` ## File: packages/api-llamaindex-ts/src/providers/foundry-local.ts ````typescript import { openai } from "@llamaindex/openai"; import { FoundryLocalManager } from "foundry-local-sdk"; // By using an alias, the most suitable model will be downloaded // to your end-user's device. // TIP: You can find a list of available models by running the // following command in your terminal: `foundry model list`. const alias = process.env.AZURE_FOUNDRY_LOCAL_MODEL_ALIAS || "phi-3.5-mini"; export const llm = async () => { // Create a FoundryLocalManager instance. This will start the Foundry // Local service if it is not already running. const foundryLocalManager = new FoundryLocalManager(); // Initialize the manager with a model. This will download the model // if it is not already present on the user's device. 
console.log("Initializing Foundry Local Manager..."); const modelInfo = await foundryLocalManager.init(alias); console.log("Azure Local Foundry Model Info:", modelInfo); console.log("Using Azure Local Foundry"); return openai({ baseURL: foundryLocalManager.endpoint, apiKey: foundryLocalManager.apiKey, }); }; ```` ## File: packages/api-llamaindex-ts/src/providers/github-models.ts ````typescript import { openai } from "@llamaindex/openai"; export const llm = async () => { console.log("Using GitHub Models"); return openai({ baseURL: "https://models.inference.ai.azure.com", apiKey: process.env.GITHUB_TOKEN, model: process.env.GITHUB_MODEL, }); }; ```` ## File: packages/api-llamaindex-ts/src/providers/index.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import { llm as azureOpenAI } from "./azure-openai.js"; import { llm as foundryLocal } from "./foundry-local.js"; import { llm as githubModels } from "./github-models.js"; import { llm as dockerModels } from "./docker-models.js"; import { llm as ollamaModels } from "./ollama-models.js"; type LLMProvider = | "azure-openai" | "github-models" | "foundry-local" | "docker-models" | "ollama-models"; const provider = (process.env.LLM_PROVIDER || "") as LLMProvider; export const llm = async () => { switch (provider) { case "azure-openai": return azureOpenAI(); case "github-models": return githubModels(); case "docker-models": return dockerModels(); case "ollama-models": return ollamaModels(); case "foundry-local": return foundryLocal(); default: throw new Error( `Unknown LLM_PROVIDER "${provider}". Valid options are: azure-openai, github-models, foundry-local, docker-models, ollama-models.` ); } }; ```` ## File: packages/api-llamaindex-ts/src/providers/ollama-models.ts ````typescript import { OpenAI, openai } from "@llamaindex/openai"; export const llm = async () => { console.log("Using Ollama Models"); const provider = openai({ baseURL: process.env.OLLAMA_MODEL_ENDPOINT, apiKey: 'OLLAMA_API_KEY', model: process.env.OLLAMA_MODEL, }); return { ...provider, // TODO: Remove this when LlamaIndex supports tool calls for non-OpenAI providers supportToolCall: true, } as OpenAI }; ```` ## File: packages/api-llamaindex-ts/src/tools/mcp-bridge.ts ````typescript import { mcp } from "@llamaindex/tools"; import { McpServerDefinition } from "../utils/types.js"; import { BaseToolWithCall } from "@llamaindex/core/llms"; // Create MCP tools using the official @llamaindex/tools export const createMcpToolsFromDefinition = async ( mcpServerConfig: McpServerDefinition ): Promise => { try { console.log(`Creating MCP tools for ${mcpServerConfig.id} at ${mcpServerConfig.config.url}`); // Extract headers if available const headers: Record = {}; if (mcpServerConfig.config.requestInit?.headers) { const configHeaders = mcpServerConfig.config.requestInit.headers as Record; Object.assign(headers, configHeaders); } // Load tools using the official Llamaindex.TS MCP adapters const tools = await mcp(mcpServerConfig.config).tools(); console.log(`Created ${tools.length} Llamaindex.TS tools for ${mcpServerConfig.id}`); return tools; } catch (error) { console.error(`Error creating MCP tools for ${mcpServerConfig.id}:`, error); return []; } }; ```` ## File: packages/api-llamaindex-ts/src/utils/instrumentation.ts ````typescript import { metrics, trace } from "@opentelemetry/api"; import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"; import { OTLPLogExporter } from "@opentelemetry/exporter-logs-otlp-grpc"; import { OTLPMetricExporter } from 
"@opentelemetry/exporter-metrics-otlp-grpc"; import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; import { OTLPExporterConfigBase } from "@opentelemetry/otlp-exporter-base"; import { resourceFromAttributes } from "@opentelemetry/resources"; import { LoggerProvider, SimpleLogRecordProcessor } from "@opentelemetry/sdk-logs"; import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION, } from "@opentelemetry/semantic-conventions"; const otlpEndpoint = process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4317"; const otlpHeaders = process.env.OTEL_EXPORTER_OTLP_HEADERS || "x-otlp-header=header-value"; const otlpServiceName = process.env.OTEL_SERVICE_NAME || "api"; const otlpServiceVersion = process.env.OTLP_SERVICE_VERSION || "1.0"; const otlpOptions = { url: otlpEndpoint, headers: otlpHeaders.split(",").reduce((acc, header) => { const [key, value] = header.split("="); acc[key] = value; return acc; }, {} as Record), } as OTLPExporterConfigBase; const resource = resourceFromAttributes({ [ATTR_SERVICE_NAME]: otlpServiceName, [ATTR_SERVICE_VERSION]: otlpServiceVersion, }); const sdk = new NodeSDK({ resource, logRecordProcessor: new SimpleLogRecordProcessor( new OTLPLogExporter(otlpOptions) ), traceExporter: new OTLPTraceExporter(otlpOptions), metricReader: new PeriodicExportingMetricReader({ exporter: new OTLPMetricExporter(otlpOptions), }), instrumentations: [getNodeAutoInstrumentations()], }); sdk.start(); const loggerProvider = new LoggerProvider({ resource }); const logExporter = new OTLPLogExporter(otlpOptions); loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(logExporter)); const tracer = trace.getTracer(otlpServiceName, otlpServiceVersion); const meter = metrics.getMeter(otlpServiceName, otlpServiceVersion); const logger = loggerProvider.getLogger(otlpServiceName); function log( message: string, attributes: Record = {}, level: string = "INFO" ) { logger.emit({ severityText: level, body: message, attributes: { service: otlpServiceName, version: otlpServiceVersion, ...attributes, }, }); } export { log, meter, tracer }; ```` ## File: packages/api-llamaindex-ts/src/utils/types.ts ````typescript import type { SSEClientTransportOptions } from "@modelcontextprotocol/sdk/client/sse.js"; import { MCPClientOptions as LlamaIndexMCPClientOptions } from "@llamaindex/tools"; export type MCPClientOptions = SSEClientTransportOptions & LlamaIndexMCPClientOptions & { url: string; type: "sse" | "http"; accessToken?: string; }; export type McpServerDefinition = { name: string; id: string; config: MCPClientOptions; }; ```` ## File: packages/api-llamaindex-ts/src/index.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import { TravelAgentsWorkflow } from "./graph/index.js"; import { llm as llmProvider } from "./providers/index.js"; import { McpServerDefinition } from "./utils/types.js"; // Function to set up agents and return the multiAgent instance // export async function setupAgents(filteredTools: McpServerDefinition[] = []) { // const tools = Object.fromEntries( // filteredTools.map((tool) => [tool.id, true]) // ); // console.log("Filtered tools:", tools); // let agents = []; // let handoffTargets = []; // let mcpTools = []; // const verbose = false; // const mcpToolsConfig = McpToolsConfig(); // let llm: ToolCallLLM = {} as ToolCallLLM; // try { // llm = await llmProvider(); // } catch (error) { // throw new Error(error instanceof 
## File: packages/api-llamaindex-ts/src/index.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import { TravelAgentsWorkflow } from "./graph/index.js"; import { llm as llmProvider } from "./providers/index.js"; import { McpServerDefinition } from "./utils/types.js"; // Function to set up agents and return the multiAgent instance // export async function setupAgents(filteredTools: McpServerDefinition[] = []) { // const tools = Object.fromEntries( // filteredTools.map((tool) => [tool.id, true]) // ); // console.log("Filtered tools:", tools); // let agents = []; // let handoffTargets = []; // let mcpTools = []; // const verbose = false; // const mcpToolsConfig = McpToolsConfig(); // let llm: ToolCallLLM = {} as ToolCallLLM; // try { // llm = await llmProvider(); // } catch (error) { // throw new Error(error instanceof Error ? error.message : String(error)); // } // if (tools["echo-ping"]) { // const mcpServerConfig = mcpToolsConfig["echo-ping"].config; // const tools = await mcp(mcpServerConfig).tools(); // const echoAgent = agent({ // name: "EchoAgent", // systemPrompt: // "Echo back the received input. Do not respond with anything else. Always call the tools.", // tools, // llm, // verbose, // }); // agents.push(echoAgent); // handoffTargets.push(echoAgent); // mcpTools.push(...tools); // } // if (tools["customer-query"]) { // const mcpServerConfig = mcpToolsConfig["customer-query"]; // const tools = await mcp(mcpServerConfig.config).tools(); // const customerQuery = agent({ // name: "CustomerQueryAgent", // systemPrompt: // "Assists employees in better understanding customer needs, facilitating more accurate and personalized service. This agent is particularly useful for handling nuanced queries, such as specific travel preferences or budget constraints, which are common in travel agency interactions.", // tools, // llm, // verbose, // }); // agents.push(customerQuery); // handoffTargets.push(customerQuery); // mcpTools.push(...tools); // } // if (tools["itinerary-planning"]) { // const mcpServerConfig = mcpToolsConfig["itinerary-planning"]; // const tools = await mcp(mcpServerConfig.config).tools(); // const itineraryPlanningAgent = agent({ // name: "ItineraryPlanningAgent", // systemPrompt: // "Creates a travel itinerary based on user preferences and requirements.", // tools, // llm, // verbose, // }); // agents.push(itineraryPlanningAgent); // handoffTargets.push(itineraryPlanningAgent); // mcpTools.push(...tools); // } // if (tools["destination-recommendation"]) { // const mcpServerConfig = mcpToolsConfig["destination-recommendation"]; // const tools = await mcp(mcpServerConfig.config).tools(); // const destinationRecommendationAgent = agent({ // name: "DestinationRecommendationAgent", // systemPrompt: // "Suggests destinations based on customer preferences and requirements.", // tools, // llm, // verbose, // }); // agents.push(destinationRecommendationAgent); // handoffTargets.push(destinationRecommendationAgent); // mcpTools.push(...tools); // } // // Define the triage agent that will determine the best course of action // const travelAgent = agent({ // name: "TravelAgent", // systemPrompt: // "Acts as a triage agent to determine the best course of action for the user's query. If you cannot handle the query, please pass it to the next agent. If you can handle the query, please do so.", // tools: [...mcpTools], // canHandoffTo: handoffTargets // .map((target) => target.getAgents().map((agent) => agent.name)) // .flat(), // llm, // verbose, // }); // agents.push(travelAgent); // console.log("Agents list:", agents); // console.log("Handoff targets:", handoffTargets); // console.log("Tools list:", JSON.stringify(mcpTools, null, 2)); // // Create the multi-agent workflow // const supervisor = multiAgent({ // agents: agents, // rootAgent: travelAgent, // verbose, // }); // console.log("Agents created:", Object.keys(agents)); // console.log("All tools count:", mcpTools.length); // return { supervisor, agents, mcpTools }; // } export async function setupAgents(filteredTools: McpServerDefinition[] = []) { console.log("Setting up LlamaIndex.TS-based agents..."); let llm; try { llm = await llmProvider(); } catch (error) { throw new Error(error instanceof Error ?
export async function setupAgents(filteredTools: McpServerDefinition[] = []) { console.log("Setting up LlamaIndex.TS-based agents..."); let llm; try { llm = await llmProvider(); } catch (error) { throw new Error(error instanceof Error ? error.message : String(error)); } // Create and initialize the workflow const workflow = new TravelAgentsWorkflow(llm); await workflow.initialize(filteredTools); console.log("LlamaIndex.TS workflow initialized successfully"); return workflow; } // Re-export McpToolsConfig and types export { McpToolsConfig } from "./tools/index.js"; export type { MCPClientOptions, McpServerDefinition } from "./utils/types.js";
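// Example (hedged sketch): driving the workflow returned by setupAgents. The
// same pattern is used by server.ts below, where workflow.run(message) yields
// an async iterable of events.
//
//   const workflow = await setupAgents();
//   for await (const event of workflow.run("Plan a 3-day trip to Paris")) {
//     console.log(event);
//   }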
```` ## File: packages/api-llamaindex-ts/src/server.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import cors from "cors"; import express from "express"; import { Readable } from "node:stream"; import { pipeline } from "node:stream/promises"; import { setupAgents } from "./index.js"; import { McpToolsConfig } from "./tools/index.js"; import { mcpToolsList } from "./mcp/mcp-tools.js"; const app = express(); const PORT = process.env.PORT || 4001; const CHUNK_END = "\n\n"; // Middleware app.use(cors()); app.use(express.json()); const apiRouter = express.Router(); // Add request body logging middleware for debugging apiRouter.use((req, res, next) => { if (req.path === "/chat" && req.method === "POST") { const contentType = req.headers["content-type"]?.replace(/\n|\r/g, ""); const body = typeof req.body === "string" ? req.body.replace(/\n|\r/g, "") : JSON.stringify(req.body).replace(/\n|\r/g, ""); console.log("Request Content-Type:", contentType); console.log("Request body:", body); } next(); }); // Health check endpoint apiRouter.get("/health", (req, res) => { res.status(200).json({ status: "OK", orchestrator: "llamaindex-ts" }); }); // MCP tools apiRouter.get("/tools", async (req, res) => { try { const tools = await mcpToolsList(Object.values(McpToolsConfig())); console.log("Available tools:", tools); res.status(200).json({ tools }); } catch (error) { console.error("Error fetching MCP tools:", error); res.status(500).json({ error: "Error fetching MCP tools" }); } }); // Chat endpoint with Server-Sent Events (SSE) for streaming responses // @ts-ignore - Ignoring TypeScript errors for Express route handlers apiRouter.post("/chat", async (req, res) => { req.on("close", () => { console.log("Client disconnected, aborting..."); }); if (!req.body) { console.error( "Request body is undefined. Check Content-Type header in the request." ); return res.status(400).json({ error: "Request body is undefined. Make sure to set Content-Type to application/json.", }); } const message = req.body.message; const tools = req.body.tools; console.log("Tools to use:", JSON.stringify(tools, null, 2)); if (!message) { return res.status(400).json({ error: "Message is required" }); } res.setHeader("Content-Type", "text/event-stream"); res.setHeader("Cache-Control", "no-cache"); res.setHeader("Connection", "keep-alive"); try { const workflow = await setupAgents(tools); const context = workflow.run(message); const readableStream = new Readable({ async read() { try { for await (const event of context) { const serializedData = JSON.stringify({ type: "metadata", kind: "llamaindex-ts", event: event, data: event.data ? JSON.parse(JSON.stringify(event.data)) : null, }); console.log("Event streamed", { event, serializedData }); this.push(serializedData + CHUNK_END); console.log("Pushed event:", serializedData); } this.push(null); // Close the stream } catch (error: any) { console.error("Error during streaming:", error?.message); this.push(JSON.stringify({ type: "error", message: error instanceof Error ? error.message : String(error), }) + CHUNK_END); } }, }); await pipeline(readableStream, res); } catch (error) { console.error("Error occurred:", error); if (!res.headersSent) { res.status(500).json({ error: (error as any).message }); } else { res.write( `${JSON.stringify({ type: "error", message: (error as any).message, })}` + CHUNK_END ); res.end(); } } }); // Mount the API router with the /api prefix app.use("/api", apiRouter); // Add a root route for API information app.get("/", (req, res) => { res.json({ message: "AI Travel Agents API - LlamaIndex.TS", version: "1.0.0", orchestrator: "llamaindex-ts", endpoints: { health: "/api/health", tools: "/api/tools", chat: "/api/chat", }, }); }); // Start the server app.listen(PORT, () => { console.log(`LlamaIndex.TS API server running on port ${PORT}`); console.log(`API endpoints:`); console.log(` - Health check: http://localhost:${PORT}/api/health (GET)`); console.log(` - MCP Tools: http://localhost:${PORT}/api/tools (GET)`); console.log(` - Chat: http://localhost:${PORT}/api/chat (POST)`); });
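// Example (illustrative client sketch): /api/chat streams JSON chunks separated
// by "\n\n" (CHUNK_END), so a client can read them incrementally:
//
//   const res = await fetch("http://localhost:4001/api/chat", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify({ message: "Hello", tools: [] }),
//   });
//   const reader = res.body!.getReader();
//   const decoder = new TextDecoder();
//   for (;;) {
//     const { done, value } = await reader.read();
//     if (done) break;
//     console.log(decoder.decode(value));
//   }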
"^5.0.1", "foundry-local-sdk": "^0.3.1", "llamaindex": "0.12.0", "openai": "^4.96.0" }, "devDependencies": { "@types/node": "^20.11.24", "tsx": "^4.19.4", "typescript": "^5.3.3" }, "volta": { "node": "22.20.0" } } ```` ## File: packages/api-maf-python/src/orchestrator/agents/customer_query_agent/__init__.py ````python """CustomerQueryAgent - Analyzes customer travel preferences and requirements""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient from src.orchestrator.tools.tool_registry import create_mcp_tool # Agent instance following Agent Framework conventions agent = ChatAgent( name="CustomerQueryAgent", description="Analyzes customer travel preferences and requirements", instructions="""You are a customer service agent for a travel planning system. Your role is to understand and analyze customer travel preferences, requirements, and constraints. Key responsibilities: - Extract travel preferences (destinations, activities, accommodations) - Identify budget constraints - Understand time constraints and travel dates - Clarify any ambiguous requirements - Provide personalized recommendations Always be empathetic, patient, and thorough in understanding customer needs.""", chat_client=AzureOpenAIChatClient( api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""), ), tools=[create_mcp_tool(["CustomerQueryAgent"])], ) ```` ## File: packages/api-maf-python/src/orchestrator/agents/destination_recommendation_agent/__init__.py ````python """DestinationRecommendationAgent - Recommends travel destinations based on preferences""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient from src.orchestrator.tools.tool_registry import create_mcp_tool # Agent instance following Agent Framework conventions agent = ChatAgent( name="DestinationRecommendationAgent", description="Recommends travel destinations based on preferences", instructions="""You are a destination recommendation expert for a travel planning system. Your role is to suggest ideal travel destinations based on customer preferences. Key responsibilities: - Analyze customer preferences and constraints - Recommend suitable destinations - Provide insights about each destination - Consider factors like budget, season, activities, and travel style - Use available tools to get current destination information Be creative, knowledgeable, and considerate of all preferences.""", chat_client=AzureOpenAIChatClient( api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""), ), tools=[create_mcp_tool(["GetDestinationInfoTool"])], ) ```` ## File: packages/api-maf-python/src/orchestrator/agents/echo_agent/__init__.py ````python """EchoAgent - Simple echo agent for testing purposes""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient from src.orchestrator.tools.tool_registry import create_mcp_tool # Agent instance following Agent Framework conventions agent = ChatAgent( name="EchoAgent", description="Simple echo agent for testing purposes", instructions="""You are a simple echo agent for testing. Your role is to echo messages and test tool functionality. 
```` ## File: packages/api-maf-python/src/orchestrator/agents/destination_recommendation_agent/__init__.py ````python """DestinationRecommendationAgent - Recommends travel destinations based on preferences""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient from src.orchestrator.tools.tool_registry import tool_registry # Agent instance following Agent Framework conventions agent = ChatAgent( name="DestinationRecommendationAgent", description="Recommends travel destinations based on preferences", instructions="""You are a destination recommendation expert for a travel planning system. Your role is to suggest ideal travel destinations based on customer preferences. Key responsibilities: - Analyze customer preferences and constraints - Recommend suitable destinations - Provide insights about each destination - Consider factors like budget, season, activities, and travel style - Use available tools to get current destination information Be creative, knowledgeable, and considerate of all preferences.""", chat_client=AzureOpenAIChatClient( api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""), ), tools=[tool_registry.create_mcp_tool("destination-recommendation")], ) ```` ## File: packages/api-maf-python/src/orchestrator/agents/echo_agent/__init__.py ````python """EchoAgent - Simple echo agent for testing purposes""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient from src.orchestrator.tools.tool_registry import tool_registry # Agent instance following Agent Framework conventions agent = ChatAgent( name="EchoAgent", description="Simple echo agent for testing purposes", instructions="""You are a simple echo agent for testing. Your role is to echo messages and test tool functionality. Simply acknowledge and echo what you receive.""", chat_client=AzureOpenAIChatClient( api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""), ), tools=[tool_registry.create_mcp_tool("echo-ping")], ) ```` ## File: packages/api-maf-python/src/orchestrator/agents/itinerary_planning_agent/__init__.py ````python """ItineraryPlanningAgent - Creates detailed travel itineraries""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient from src.orchestrator.tools.tool_registry import tool_registry # Agent instance following Agent Framework conventions agent = ChatAgent( name="ItineraryPlanningAgent", description="Creates detailed travel itineraries", instructions="""You are an itinerary planning expert for a travel planning system. Your role is to create detailed, optimized travel itineraries. Key responsibilities: - Create day-by-day itineraries - Optimize travel routes and timing - Schedule activities and experiences - Estimate costs and budgets - Account for travel time and logistics - Use available tools for planning assistance Be detail-oriented, practical, and create realistic, enjoyable itineraries.""", chat_client=AzureOpenAIChatClient( api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""), ), tools=[tool_registry.create_mcp_tool("itinerary-planning")], ) ```` ## File: packages/api-maf-python/src/orchestrator/agents/triage_agent/__init__.py ````python """TriageAgent - Analyzes travel requests and routes to appropriate specialized agents""" import os from agent_framework import ChatAgent from agent_framework.azure import AzureOpenAIChatClient # Agent instance following Agent Framework conventions agent = ChatAgent( name="TriageAgent", description="Analyzes travel requests and routes to appropriate specialized agents", instructions="""You are a triage agent for a travel planning system. Your role is to analyze user requests and determine which specialized agents should handle them. Available specialized agents: - CustomerQueryAgent: Analyzes customer preferences and requirements - DestinationRecommendationAgent: Suggests travel destinations - ItineraryPlanningAgent: Creates detailed travel itineraries - EchoAgent: Simple echo tool for testing Your task: 1. Understand the user's request 2. Determine which agent(s) can best fulfill it 3. Coordinate the workflow between agents if multiple are needed 4. Provide clear, helpful responses Always be friendly, professional, and focused on helping users plan amazing trips.""", chat_client=AzureOpenAIChatClient( api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""), ), # The triage agent coordinates the specialized agents and has no dedicated MCP server tools=[], ) ```` ## File: packages/api-maf-python/src/orchestrator/agents/__init__.py ````python """Agent implementations for the travel planning system. This module provides agents discoverable by DevUI. Each agent is in its own directory with an __init__.py that exports 'agent'. For legacy compatibility, the old agent classes are still available from the _legacy modules. """ __all__ = [ "BaseAgent", "TriageAgent", "CustomerQueryAgent", "DestinationRecommendationAgent", "ItineraryPlanningAgent", "EchoAgent", ] ```` ## File: packages/api-maf-python/src/orchestrator/providers/__init__.py ````python """LLM provider factory using strategy pattern. This module provides a factory function to get the appropriate LLM client based on the LLM_PROVIDER environment variable, implementing the same strategy pattern as the TypeScript implementation.
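Example (minimal sketch; assumes LLM_PROVIDER and the matching credentials are already set in the environment):

    from src.orchestrator.providers import get_llm_client

    client = await get_llm_client()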
""" import logging from typing import Any from src.config import settings from .azure_openai import AzureOpenAIProvider from .docker_models import DockerModelsProvider from .foundry_local import FoundryLocalProvider from .github_models import GitHubModelsProvider from .ollama_models import OllamaModelsProvider logger = logging.getLogger(__name__) async def get_llm_client() -> Any: """Get LLM client based on configured provider. Returns: Configured LLM client instance Raises: ValueError: If provider is unknown or misconfigured """ provider = settings.llm_provider logger.info(f"Initializing LLM provider: {provider}") if provider == "azure-openai": provider_instance = AzureOpenAIProvider() return await provider_instance.get_client() elif provider == "github-models": provider_instance = GitHubModelsProvider() return await provider_instance.get_client() elif provider == "docker-models": provider_instance = DockerModelsProvider() return await provider_instance.get_client() elif provider == "ollama-models": provider_instance = OllamaModelsProvider() return await provider_instance.get_client() elif provider == "foundry-local": provider_instance = FoundryLocalProvider() return await provider_instance.get_client() else: raise ValueError( f'Unknown LLM_PROVIDER "{provider}". ' "Valid options are: azure-openai, github-models, docker-models, " "ollama-models, foundry-local." ) __all__ = ["get_llm_client"] ```` ## File: packages/api-maf-python/src/orchestrator/providers/azure_openai.py ````python """Azure OpenAI LLM provider.""" import logging from typing import Any from agent_framework.openai import OpenAIChatClient from azure.identity import DefaultAzureCredential, ManagedIdentityCredential from openai import AsyncAzureOpenAI from src.config import settings from .base import LLMProvider logger = logging.getLogger(__name__) AZURE_COGNITIVE_SERVICES_SCOPE = "https://cognitiveservices.azure.com/.default" class AzureOpenAIProvider(LLMProvider): """Azure OpenAI LLM provider with Managed Identity support.""" async def get_client(self) -> Any: """Get Azure OpenAI chat client for Microsoft Agent Framework. 
Returns: OpenAIChatClient configured with Azure OpenAI Raises: ValueError: If required configuration is missing """ logger.info("Using Azure OpenAI") if not settings.azure_openai_endpoint: raise ValueError("AZURE_OPENAI_ENDPOINT is required for azure-openai provider") if not settings.azure_openai_deployment_name: raise ValueError("AZURE_OPENAI_DEPLOYMENT_NAME is required for azure-openai provider") # Create the underlying Azure OpenAI async client async_client: AsyncAzureOpenAI # If API key is provided, use it (local development or explicit configuration) if settings.azure_openai_api_key: logger.info("Using API key authentication") async_client = AsyncAzureOpenAI( api_key=settings.azure_openai_api_key, api_version=settings.azure_openai_api_version, azure_endpoint=settings.azure_openai_endpoint, ) else: # Otherwise, use Managed Identity in Azure environments logger.info("Using Managed Identity authentication") credential: Any = DefaultAzureCredential() if settings.azure_client_id: logger.info(f"Using Azure Client ID: {settings.azure_client_id}") credential = ManagedIdentityCredential(client_id=settings.azure_client_id) # Get token for authentication token = credential.get_token(AZURE_COGNITIVE_SERVICES_SCOPE) async_client = AsyncAzureOpenAI( api_version=settings.azure_openai_api_version, azure_endpoint=settings.azure_openai_endpoint, azure_ad_token=token.token, ) # Wrap the Azure OpenAI client with MAF's OpenAIChatClient # This provides the ChatClientProtocol interface that ChatAgent expects maf_client = OpenAIChatClient( model_id=settings.azure_openai_deployment_name, async_client=async_client, ) logger.info(f"Created MAF OpenAIChatClient with model: {settings.azure_openai_deployment_name}") return maf_client ```` ## File: packages/api-maf-python/src/orchestrator/providers/base.py ````python """LLM provider implementations using strategy pattern. This module implements the same strategy pattern as the TypeScript implementation in packages/api/src/orchestrator/*/providers, allowing selection between different LLM providers based on the LLM_PROVIDER environment variable. """ from abc import ABC, abstractmethod from typing import Any class LLMProvider(ABC): """Base class for LLM providers.""" @abstractmethod async def get_client(self) -> Any: """Get the LLM client instance. Returns: LLM client instance configured for the specific provider """ pass ```` ## File: packages/api-maf-python/src/orchestrator/providers/docker_models.py ````python """Docker Models LLM provider.""" import logging from typing import Any from agent_framework.openai import OpenAIChatClient from openai import AsyncOpenAI from src.config import settings from .base import LLMProvider logger = logging.getLogger(__name__) class DockerModelsProvider(LLMProvider): """Docker Models LLM provider.""" async def get_client(self) -> Any: """Get Docker Models chat client for Microsoft Agent Framework. 
Returns: OpenAIChatClient configured for Docker Models Raises: ValueError: If required configuration is missing """ logger.info("Using Docker Models") if not settings.docker_model_endpoint: raise ValueError("DOCKER_MODEL_ENDPOINT is required for docker-models provider") if not settings.docker_model: raise ValueError("DOCKER_MODEL is required for docker-models provider") # Create the underlying OpenAI async client for Docker Models async_client = AsyncOpenAI( base_url=settings.docker_model_endpoint, api_key="DOCKER_API_KEY", # Placeholder API key for Docker models ) # Wrap with MAF's OpenAIChatClient maf_client = OpenAIChatClient( model_id=settings.docker_model, async_client=async_client, ) logger.info(f"Created MAF OpenAIChatClient with model: {settings.docker_model}") return maf_client ```` ## File: packages/api-maf-python/src/orchestrator/providers/foundry_local.py ````python """Foundry Local LLM provider.""" import logging from typing import Any from .base import LLMProvider logger = logging.getLogger(__name__) class FoundryLocalProvider(LLMProvider): """Foundry Local LLM provider. Note: This is a placeholder implementation. The actual Foundry Local SDK is not yet available in Python. This will need to be updated when the Python SDK becomes available. """ async def get_client(self) -> Any: """Get Foundry Local client. Returns: Configured OpenAI client for Foundry Local Raises: NotImplementedError: Foundry Local Python SDK not yet available """ logger.info("Using Foundry Local") logger.warning("Foundry Local Python SDK is not yet available. This is a placeholder implementation.") # Placeholder implementation # TODO: Update when Foundry Local Python SDK becomes available # Similar to TypeScript: const foundryLocalManager = new FoundryLocalManager() # const modelInfo = await foundryLocalManager.init(alias) raise NotImplementedError( "Foundry Local provider is not yet implemented in Python. " "Please use azure-openai, github-models, docker-models, or ollama-models instead." ) ```` ## File: packages/api-maf-python/src/orchestrator/providers/github_models.py ````python """GitHub Models LLM provider.""" import logging from typing import Any from agent_framework.openai import OpenAIChatClient from openai import AsyncOpenAI from src.config import settings from .base import LLMProvider logger = logging.getLogger(__name__) class GitHubModelsProvider(LLMProvider): """GitHub Models LLM provider.""" async def get_client(self) -> Any: """Get GitHub Models chat client for Microsoft Agent Framework. 
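Example (minimal sketch; assumes settings.github_token and settings.github_model are populated):

    client = await GitHubModelsProvider().get_client()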
Returns: OpenAIChatClient configured for GitHub Models Raises: ValueError: If required configuration is missing """ logger.info("Using GitHub Models") if not settings.github_token: raise ValueError("GITHUB_TOKEN is required for github-models provider") if not settings.github_model: raise ValueError("GITHUB_MODEL is required for github-models provider") # Create the underlying OpenAI async client for GitHub Models async_client = AsyncOpenAI( base_url="https://models.inference.ai.azure.com", api_key=settings.github_token, ) # Wrap with MAF's OpenAIChatClient maf_client = OpenAIChatClient( model_id=settings.github_model, async_client=async_client, ) logger.info(f"Created MAF OpenAIChatClient with model: {settings.github_model}") return maf_client ```` ## File: packages/api-maf-python/src/orchestrator/providers/ollama_models.py ````python """Ollama Models LLM provider.""" import logging from typing import Any from agent_framework.openai import OpenAIChatClient from openai import AsyncOpenAI from src.config import settings from .base import LLMProvider logger = logging.getLogger(__name__) class OllamaModelsProvider(LLMProvider): """Ollama Models LLM provider.""" async def get_client(self) -> Any: """Get Ollama Models chat client for Microsoft Agent Framework. Returns: OpenAIChatClient configured for Ollama Models Raises: ValueError: If required configuration is missing """ logger.info("Using Ollama Models") if not settings.ollama_model_endpoint: raise ValueError("OLLAMA_MODEL_ENDPOINT is required for ollama-models provider") if not settings.ollama_model: raise ValueError("OLLAMA_MODEL is required for ollama-models provider") # Create the underlying OpenAI async client for Ollama async_client = AsyncOpenAI( base_url=settings.ollama_model_endpoint, api_key="OLLAMA_API_KEY", # Placeholder API key for Ollama models ) # Wrap with MAF's OpenAIChatClient maf_client = OpenAIChatClient( model_id=settings.ollama_model, async_client=async_client, ) logger.info(f"Created MAF OpenAIChatClient with model: {settings.ollama_model}") return maf_client ```` ## File: packages/api-maf-python/src/orchestrator/tools/__init__.py ````python """Tools package for orchestrator.""" from .tool_config import MCP_TOOLS_CONFIG, McpServerName from .tool_registry import tool_registry __all__ = ["MCP_TOOLS_CONFIG", "McpServerName", "tool_registry"] ```` ## File: packages/api-maf-python/src/orchestrator/tools/examples.py ````python """Example usage of MCP tools with Microsoft Agent Framework. This module demonstrates how to use MCP tools following Microsoft Agent Framework SDK best practices. Reference: https://learn.microsoft.com/en-us/agent-framework/user-guide/model-context-protocol/using-mcp-tools """ import asyncio import logging from typing import List from agent_framework import ChatAgent, AIFunction from orchestrator.tools.tool_registry import tool_registry from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader from orchestrator.tools.tool_config import MCPServerConfig logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) async def example_1_basic_usage(): """Example 1: Basic MCP tools usage with ChatAgent. 
Demonstrates: - Loading all MCP tools - Creating a ChatAgent with MCP tools - Processing user queries """ logger.info("=== Example 1: Basic MCP Tools Usage ===") # Load all available MCP tools mcp_tools = await tool_registry.get_all_tools() logger.info(f"Loaded {len(mcp_tools)} MCP tools") # Create a ChatAgent with MCP tools # Note: Requires an actual LLM client (Azure OpenAI, GitHub Models, etc.) # For this example, we'll just show the tool loading logger.info("MCP tools ready for use with ChatAgent") for tool in mcp_tools[:5]: # Show first 5 tools logger.info(f" - {tool.name}: {tool.description}") async def example_2_specific_servers(): """Example 2: Load tools from specific MCP servers. Demonstrates: - Selective tool loading - Server filtering """ logger.info("=== Example 2: Load Specific MCP Servers ===") # Load tools from specific servers only travel_tools = await tool_registry.get_all_tools(servers=["itinerary-planning", "destination-recommendation"]) logger.info(f"Loaded {len(travel_tools)} travel-related tools") for tool in travel_tools: logger.info(f" - {tool.name}") async def example_3_tool_discovery(): """Example 3: Discover and inspect MCP tools. Demonstrates: - Tool discovery - Schema inspection - Server capabilities """ logger.info("=== Example 3: Tool Discovery ===") # List all tools from all servers all_tools = await tool_registry.list_tools() for server_name, tools in all_tools.items(): if isinstance(tools, dict) and "error" in tools: logger.error(f"Server '{server_name}' error: {tools['error']}") continue logger.info(f"\nServer: {server_name}") logger.info(f" Tools: {len(tools)}") for tool in tools[:2]: # Show first 2 tools per server logger.info(f" - {tool.get('name', 'unknown')}") logger.info(f" Description: {tool.get('description', 'N/A')}") # Show input schema input_schema = tool.get("inputSchema", {}) properties = input_schema.get("properties", {}) if properties: logger.info(f" Parameters: {list(properties.keys())}") async def example_4_direct_tool_call(): """Example 4: Direct MCP tool invocation. Demonstrates: - Bypassing MAF wrapper for direct calls - Low-level MCP protocol usage """ logger.info("=== Example 4: Direct Tool Call ===") try: # Call MCP tool directly (without going through agent) result = await tool_registry.call_tool( server="echo-ping", tool_name="ping", arguments={"message": "Hello from MCP!"} ) logger.info(f"Direct call result: {result}") except Exception as e: logger.error(f"Direct call failed: {e}") async def example_5_custom_wrapper(): """Example 5: Create custom MCP tool wrapper. Demonstrates: - Custom server configuration - Manual wrapper creation - Tool conversion """ logger.info("=== Example 5: Custom MCP Wrapper ===") # Create custom MCP server config custom_config: MCPServerConfig = {"url": "http://localhost:8001/mcp", "type": "http", "verbose": True} # Create loader for custom server loader = MCPToolLoader(server_config=custom_config, server_name="Custom MCP Server") try: # Get tools from custom server tools = await loader.get_tools() logger.info(f"Custom server has {len(tools)} tools") for tool in tools: logger.info(f" - {tool.name}: {tool.description}") except Exception as e: logger.error(f"Failed to load custom server tools: {e}") finally: # Cleanup await loader.close() async def example_6_agent_with_tools(): """Example 6: Complete agent setup with MCP tools. Demonstrates: - Full agent initialization - MCP tools integration - Agent execution flow Note: This requires a valid LLM client configuration. 
""" logger.info("=== Example 6: Agent with MCP Tools ===") # Load MCP tools mcp_tools = await tool_registry.get_all_tools(servers=["itinerary-planning", "customer-query", "echo-ping"]) logger.info(f"Loaded {len(mcp_tools)} tools for agent") # The agent would be initialized like this: # from orchestrator.agents.base_agent import BaseAgent # from orchestrator.providers.azure_openai import get_azure_openai_client # # llm_client = await get_azure_openai_client() # # agent = BaseAgent( # name="travel_assistant", # description="AI-powered travel planning assistant", # system_prompt="You are a helpful travel planning assistant. " # "Use available tools to help users plan their trips.", # tools=mcp_tools # ) # # await agent.initialize(llm_client) # # # Process user request # response = await agent.process( # "I need a 3-day itinerary for Paris with hotel recommendations" # ) # # logger.info(f"Agent response: {response}") logger.info("Agent setup complete (LLM client required for execution)") async def example_7_error_handling(): """Example 7: Error handling with MCP tools. Demonstrates: - Graceful error handling - Server unavailability - Tool call failures """ logger.info("=== Example 7: Error Handling ===") # Try to load tools - some servers might be unavailable mcp_tools = await tool_registry.get_all_tools() # Tools with errors will be skipped, successful ones loaded logger.info(f"Successfully loaded {len(mcp_tools)} tools") # Try direct call to potentially unavailable server try: result = await tool_registry.call_tool(server="nonexistent-server", tool_name="test", arguments={}) except ValueError as e: logger.info(f"Expected error for unknown server: {e}") # MCP tool wrappers return errors as strings instead of raising # This allows agents to handle errors gracefully async def main(): """Run all examples.""" examples = [ example_1_basic_usage, example_2_specific_servers, example_3_tool_discovery, example_4_direct_tool_call, example_5_custom_wrapper, example_6_agent_with_tools, example_7_error_handling, ] for example in examples: try: await example() print() # Blank line between examples except Exception as e: logger.error(f"Example failed: {e}", exc_info=True) # Cleanup logger.info("=== Cleanup ===") await tool_registry.close_all() logger.info("All MCP clients closed") if __name__ == "__main__": asyncio.run(main()) ```` ## File: packages/api-maf-python/src/orchestrator/tools/mcp_tool_wrapper.py ````python """MCP Tool Management using Microsoft Agent Framework. This module is now deprecated. Use tool_registry.create_mcp_tool() instead. The tool registry follows Microsoft Agent Framework best practices by creating fresh MCPStreamableHTTPTool instances for each agent operation, managing their lifecycle with async context managers. Reference: https://learn.microsoft.com/en-us/agent-framework/user-guide/model-context-protocol/using-mcp-tools """ import logging logger = logging.getLogger(__name__) logger.warning("mcp_tool_wrapper is deprecated. Use tool_registry.create_mcp_tool() instead.") ```` ## File: packages/api-maf-python/src/orchestrator/tools/tool_registry.py ````python """Tool registry for managing MCP server connections. This module provides a centralized registry for MCP tool servers, using Microsoft Agent Framework's built-in MCPStreamableHTTPTool. 
Reference: https://learn.microsoft.com/en-us/agent-framework/user-guide/model-context-protocol/using-mcp-tools """ import asyncio import logging from typing import Optional, Any, Dict try: from agent_framework import MCPStreamableHTTPTool except ImportError: raise ImportError( "Microsoft Agent Framework SDK is required. Install with: pip install agent-framework>=1.0.0b251001" ) from .tool_config import MCP_TOOLS_CONFIG, McpServerName logger = logging.getLogger(__name__) class ToolRegistry: """Registry for managing MCP tool server metadata. Simplified implementation that only stores metadata about MCP servers. Each agent creates its own MCPStreamableHTTPTool instances and manages their lifecycle using async context managers, following Microsoft Agent Framework best practices. This avoids issues with: - Shared async context managers across different tasks - Cancel scope violations - Persistent connection management Reference: https://github.com/microsoft/agent-framework/blob/main/python/samples/getting_started/agents/openai/openai_chat_client_with_local_mcp.py """ def __init__(self) -> None: """Initialize the tool registry with server metadata.""" self._server_metadata: Dict[str, Dict[str, Any]] = {} self._initialize_metadata() logger.info("MCP tool registry initialized (metadata only)") def _initialize_metadata(self) -> None: """Initialize server metadata from configuration.""" for server_id, server_def in MCP_TOOLS_CONFIG.items(): config = server_def["config"] name = server_def["name"] # Store only metadata - no actual connections self._server_metadata[server_id] = { "id": server_id, "name": name, "url": config["url"], "type": config.get("type", "http"), "selected": server_id != "echo-ping", "access_token": config.get("accessToken"), } logger.info(f"Registered MCP server '{name}' ({server_id}) at {config['url']}") logger.info(f"Tool registry ready with {len(self._server_metadata)} MCP servers") def create_mcp_tool(self, server_id: McpServerName) -> Optional[MCPStreamableHTTPTool]: """Create a new MCP tool instance for a server. Each call creates a fresh MCPStreamableHTTPTool instance that should be used within an async context manager. This follows the pattern from Microsoft Agent Framework samples. Args: server_id: The ID of the MCP server Returns: New MCPStreamableHTTPTool instance or None if server not found Example: ```python tool = registry.create_mcp_tool("customer-query") if tool: async with tool: # Use tool with agent result = await agent.run(query, tools=tool) ``` """ metadata = self._server_metadata.get(server_id) if not metadata: logger.warning(f"MCP server '{server_id}' not found in registry") return None # Build headers if access token provided headers = None access_token = metadata.get("access_token") if access_token: headers = {"Authorization": f"Bearer {access_token}"} # Create a new MCPStreamableHTTPTool instance # The caller is responsible for using it in an async context manager return MCPStreamableHTTPTool( name=metadata["name"], url=metadata["url"], headers=headers, load_tools=True, load_prompts=False, request_timeout=30, ) def get_server_metadata(self, server_id: McpServerName) -> Optional[Dict[str, Any]]: """Get metadata for a server without creating a connection. Args: server_id: The ID of the MCP server Returns: Server metadata dictionary or None if not found """ return self._server_metadata.get(server_id) async def list_tools(self) -> Dict[str, Any]: """List all available MCP tools with reachability checks. Ports the TypeScript mcpToolsList implementation to: 1. 
Connect to each MCP server 2. List the actual tools available on each server 3. Return detailed information including tool definitions Returns response in the format expected by the frontend: { "tools": [ { "id": "customer-query", "name": "Customer Query", "url": "http://localhost:5001/mcp", "type": "http", "reachable": true, "selected": true, "tools": [...] # Actual tool definitions from the server } ] } """ async def check_server_and_list_tools(server_id: str, metadata: Dict[str, Any]) -> Dict[str, Any]: """Connect to MCP server and list its tools, mirroring TS mcpToolsList behavior.""" server_info = { "id": server_id, "name": metadata["name"], "url": metadata["url"], "type": metadata["type"], "reachable": False, "selected": metadata["selected"], "tools": [], } # Create MCP tool instance to connect and list tools try: logger.info(f"Connecting to MCP server {metadata['name']} at {metadata['url']}") mcp_tool = self.create_mcp_tool(server_id) if not mcp_tool: logger.warning(f"Could not create MCP tool for server '{server_id}'") return server_info # Use the tool in async context manager to connect async with mcp_tool: logger.info(f"MCP server {metadata['name']} is reachable") server_info["reachable"] = True # List tools from the server # The MCPStreamableHTTPTool loads tools on connection # Access them via the tool's internal state if hasattr(mcp_tool, "_tools") and mcp_tool._tools: tools_list = [] for tool in mcp_tool._tools: # Convert tool to dict format tool_info = { "name": tool.metadata.name if hasattr(tool, "metadata") else str(tool), "description": tool.metadata.description if hasattr(tool, "metadata") else "", } tools_list.append(tool_info) server_info["tools"] = tools_list logger.info(f"MCP server {metadata['name']} has {len(tools_list)} tools") else: logger.info(f"MCP server {metadata['name']} has 0 tools") except Exception as error: logger.error(f"MCP server {metadata['name']} is not reachable: {str(error)}") server_info["error"] = str(error) return server_info # Check all servers concurrently, matching TS Promise.all pattern tasks = [] for server_id, metadata in self._server_metadata.items(): task = asyncio.create_task(check_server_and_list_tools(server_id, metadata)) tasks.append(task) # Wait for all checks with overall timeout try: results = await asyncio.wait_for( asyncio.gather(*tasks, return_exceptions=True), timeout=10.0, # Increased timeout for actual MCP connections ) except asyncio.TimeoutError: logger.warning("Tool list overall timeout - returning partial results") results = [] for task in tasks: if task.done(): try: results.append(task.result()) except Exception as e: logger.debug(f"Error getting task result: {e}") else: task.cancel() # Build tools list from results tools_list = [] for result in results: if isinstance(result, dict): tools_list.append(result) elif isinstance(result, Exception): logger.debug(f"Error checking server: {result}") return {"tools": tools_list} async def close_all(self) -> None: """Cleanup resources. Since we don't maintain persistent connections, this just clears metadata. 
""" logger.info("Cleaning up tool registry...") self._server_metadata.clear() logger.info("Tool registry cleaned up") # Global tool registry instance # Import and use this singleton throughout the application tool_registry = ToolRegistry() ```` ## File: packages/api-maf-python/src/orchestrator/__init__.py ````python """Orchestration layer for Microsoft Agent Framework workflows.""" from .magentic_workflow import magentic_orchestrator __all__ = [ "magentic_orchestrator", ] ```` ## File: packages/api-maf-python/src/orchestrator/magentic_workflow.py ````python """Magentic Orchestration for Travel Planning using Microsoft Agent Framework. This module implements the Magentic orchestration pattern from Microsoft Agent Framework for coordinating multiple specialized travel planning agents. Agents work with MCP tools following the exact patterns from Microsoft Agent Framework samples. Reference: https://learn.microsoft.com/en-us/agent-framework/user-guide/workflows/orchestrations/magentic Sample: https://github.com/microsoft/agent-framework/blob/main/python/samples/getting_started/workflows/orchestration/magentic.py """ import asyncio import logging from typing import Any, AsyncGenerator, Dict, List, Optional from agent_framework import ( ChatAgent, MCPStreamableHTTPTool, MagenticAgentDeltaEvent, MagenticAgentMessageEvent, MagenticBuilder, MagenticCallbackEvent, MagenticCallbackMode, MagenticFinalResultEvent, MagenticOrchestratorMessageEvent, WorkflowOutputEvent, ) from agent_framework.exceptions import ServiceResponseException from src.orchestrator.providers import get_llm_client from src.orchestrator.tools.tool_registry import tool_registry from src.config import settings logger = logging.getLogger(__name__) class MagenticTravelOrchestrator: """Magentic-based travel planning orchestrator using Microsoft Agent Framework. Completely simplified implementation strictly following MAF best practices. Each workflow run creates fresh agents with their own MCP tool instances, properly managed using async context managers exactly as shown in MAF samples. Architecture: - CustomerQueryAgent: Handles customer inquiries with customer-query MCP tools - ItineraryAgent: Plans itineraries with itinerary-planning MCP tools - DestinationAgent: Recommends destinations (no MCP tools, uses LLM knowledge) Reference: - Magentic: https://github.com/microsoft/agent-framework/blob/main/python/samples/getting_started/workflows/orchestration/magentic.py - MCP tools: https://github.com/microsoft/agent-framework/blob/main/python/samples/getting_started/agents/openai/openai_responses_client_with_hosted_mcp.py """ def __init__(self): """Initialize the Magentic travel orchestrator.""" self.chat_client: Optional[Any] = None logger.info("Magentic Travel Orchestrator initialized") async def initialize(self) -> None: """Initialize the chat client for the workflow.""" logger.info("Initializing Magentic travel planning workflow...") # Get the chat client from Microsoft Agent Framework self.chat_client = await get_llm_client() logger.info(f"✓ Chat client initialized for provider: {settings.llm_provider}") logger.info("✓ Magentic workflow ready") async def process_request_stream( self, user_message: str, conversation_history: Optional[List[Dict[str, str]]] = None, ) -> AsyncGenerator[Dict[str, Any], None]: """Process a user request using the Magentic workflow with true streaming. Creates a fresh workflow for each request following MAF best practices. 
MCP tools are created once and passed to agents at creation time - MAF handles the connection lifecycle automatically through the workflow's async context manager. Args: user_message: The user's message/request conversation_history: Optional conversation history (not used in current implementation) Yields: Event dictionaries with type, agent, event, and data for UI consumption """ if not self.chat_client: raise RuntimeError("Chat client not initialized. Call initialize() first.") logger.info(f"Processing request with Magentic workflow: {user_message[:100]}...") # Get MCP server metadata customer_query_metadata = tool_registry.get_server_metadata("customer-query") itinerary_metadata = tool_registry.get_server_metadata("itinerary-planning") # Helper function to safely create MCP tool def create_mcp_tool(metadata: Optional[Dict[str, Any]]) -> Optional[MCPStreamableHTTPTool]: """Create MCP tool from metadata with error handling.""" if not metadata: return None try: headers = {} if metadata.get("access_token"): headers["Authorization"] = f"Bearer {metadata['access_token']}" # Create tool exactly as shown in MAF samples return MCPStreamableHTTPTool( name=metadata["name"], url=metadata["url"], headers=headers if headers else None, load_tools=True, load_prompts=False, request_timeout=30, approval_mode="never_require", # Auto-approve for seamless experience ) except Exception as e: logger.warning(f"⚠ Could not create MCP tool for {metadata.get('name')}: {e}") return None # Create MCP tool instances - will be passed to agents at creation customer_query_tool = create_mcp_tool(customer_query_metadata) itinerary_tool = create_mcp_tool(itinerary_metadata) # Log MCP tool availability if customer_query_tool: logger.info("✓ Customer Query MCP tool configured") else: logger.warning("⚠ Customer Query agent will run without MCP tools") if itinerary_tool: logger.info("✓ Itinerary MCP tool configured") else: logger.warning("⚠ Itinerary agent will run without MCP tools") try: # Create event queue for streaming event_queue: asyncio.Queue[Dict[str, Any]] = asyncio.Queue() workflow_done = False workflow_error: Optional[Exception] = None # Define streaming callback async def on_event(event: MagenticCallbackEvent) -> None: """Stream workflow events to UI in real-time.""" event_data = self._convert_workflow_event(event) if event_data: await event_queue.put(event_data) logger.debug(f"→ Event: {event_data.get('event')} from {event_data.get('agent')}") # Build workflow with agents and tools # Following exact pattern from MAF sample - tools passed at agent creation workflow = ( MagenticBuilder() .participants( CustomerQueryAgent=ChatAgent( name="CustomerQueryAgent", description="Handles customer questions and travel information", instructions=( "You are a Customer Query Agent for a travel planning system. " "Answer customer questions about destinations, hotels, and travel logistics. " + ( "Use the MCP tools to retrieve accurate information. " if customer_query_tool else "Use your knowledge. " ) + "Be helpful and customer-focused." ), chat_client=self.chat_client, tools=customer_query_tool, # Tool passed - MAF manages lifecycle ), ItineraryAgent=ChatAgent( name="ItineraryAgent", description="Creates detailed travel itineraries and schedules", instructions=( "You are an Itinerary Planning Agent for a travel planning system. " "Create detailed day-by-day travel itineraries. " + ("Use the MCP tools to plan itineraries. " if itinerary_tool else "Use your knowledge. ") + "Be thorough and organized." 
), chat_client=self.chat_client, tools=itinerary_tool, # Tool passed - MAF manages lifecycle ), DestinationAgent=ChatAgent( name="DestinationAgent", description="Recommends travel destinations based on preferences", instructions=( "You are a Destination Recommendation Agent. " "Recommend destinations based on customer preferences. " "Be creative and match recommendations to customer needs." ), chat_client=self.chat_client, # No MCP tools - uses LLM knowledge ), ) .on_event(on_event, mode=MagenticCallbackMode.STREAMING) .with_standard_manager( chat_client=self.chat_client, max_round_count=8, max_stall_count=2, max_reset_count=1, ) .build() ) # Run workflow in background task async def run_workflow(): """Execute workflow and mark completion.""" nonlocal workflow_done, workflow_error try: # workflow.run_stream() properly manages all async contexts including MCP tools async for event in workflow.run_stream(user_message): event_data = self._convert_workflow_event(event) if event_data: await event_queue.put(event_data) except ServiceResponseException as e: # Handle timeout and service errors specially logger.error(f"Service error in workflow: {e}", exc_info=True) workflow_error = e error_event = { "type": "error", # This is the ChatEvent.type "event": "ServiceError", "error": { "message": "Request timed out or service unavailable. Please try again.", "statusCode": 504, "reason": { "message": str(e), }, }, } await event_queue.put(error_event) except Exception as e: logger.error(f"Error in workflow execution: {e}", exc_info=True) workflow_error = e error_event = { "type": "error", # This is the ChatEvent.type "agent": None, "event": "Error", "message": f"Workflow error: {str(e)}", "statusCode": 500, "data": { "agent": None, "error": str(e), }, } await event_queue.put(error_event) finally: workflow_done = True await event_queue.put(None) # Signal completion # Start workflow workflow_task = asyncio.create_task(run_workflow()) # Stream events as they arrive try: while True: try: event_data = await asyncio.wait_for(event_queue.get(), timeout=0.1) if event_data is None: # Completion signal break yield event_data except asyncio.TimeoutError: if workflow_done: # Drain remaining events while not event_queue.empty(): event_data = await event_queue.get() if event_data: yield event_data break continue # Wait for workflow task await workflow_task # If there was an error, it's already been sent if workflow_error: logger.error(f"✗ Workflow completed with error: {workflow_error}") else: logger.info("✓ Workflow completed successfully") except Exception as e: logger.error(f"Error streaming workflow events: {e}", exc_info=True) workflow_task.cancel() try: await workflow_task except asyncio.CancelledError: pass raise except ServiceResponseException as e: # Handle timeout and service errors specially logger.error(f"Service error in Magentic workflow: {e}", exc_info=True) yield { "type": "error", # ChatEvent.type "agent": None, "event": "ServiceError", "message": "Request timed out or service unavailable. Please try again.", "statusCode": 504, "data": { "agent": None, "error": str(e), }, } except Exception as e: logger.error(f"Error in Magentic workflow: {e}", exc_info=True) yield { "type": "error", # ChatEvent.type "agent": None, "event": "Error", "message": f"Workflow error: {str(e)}", "statusCode": 500, "data": { "agent": None, "error": str(e), }, } def _convert_workflow_event(self, event: Any) -> Optional[Dict[str, Any]]: """Convert a Magentic workflow event to our API event format. 
Expected UI format: { type: "metadata", agent: agent || null, event: event display name, data: { agent, ...event data } } Args: event: Workflow event from Magentic Returns: Event dictionary in our API format, or None if not relevant for UI """ # Handle different event types from Microsoft Agent Framework if isinstance(event, MagenticOrchestratorMessageEvent): # Orchestrator planning messages message_text = getattr(event.message, "text", "") if event.message else "" return { "type": "metadata", "agent": "Orchestrator", "event": f"Orchestrator{event.kind.title().replace('_', '')}", "data": { "agent": "Orchestrator", "message": message_text, "kind": event.kind, }, } elif isinstance(event, MagenticAgentDeltaEvent): # Token-by-token streaming from agents agent_id = event.agent_id or "UnknownAgent" return { "type": "metadata", "agent": agent_id, "event": "AgentDelta", "data": { "agent": agent_id, "delta": event.text, }, } elif isinstance(event, MagenticAgentMessageEvent): # Complete agent messages agent_id = event.agent_id or "UnknownAgent" message_text = getattr(event.message, "text", "") if event.message else "" return { "type": "metadata", "agent": agent_id, "event": "AgentMessage", "data": { "agent": agent_id, "message": message_text, "role": getattr(event.message, "role", None) if event.message else None, }, } elif isinstance(event, MagenticFinalResultEvent): # Final result from workflow result_text = getattr(event.message, "text", "") if event.message else "Task completed" return { "type": "metadata", "agent": None, "event": "FinalResult", "data": { "agent": None, "message": result_text, "completed": True, }, } elif isinstance(event, WorkflowOutputEvent): # Final workflow output output_data = str(event.data) if event.data else "Workflow completed" return { "type": "metadata", "agent": None, "event": "WorkflowComplete", "data": { "agent": None, "output": output_data, "completed": True, }, } # Return None for events we don't need to surface to UI return None # Global orchestrator instance magentic_orchestrator = MagenticTravelOrchestrator() ```` ## File: packages/api-maf-python/src/orchestrator/workflow.py ````python """MAF Workflow Orchestrator for travel planning agents with simplified MCP integration.""" import logging from typing import Any, AsyncGenerator, Dict, List, Optional from ..config import settings from .providers import get_llm_client from .agents.triage_agent import TriageAgent from .agents.customer_query_agent import CustomerQueryAgent from .agents.destination_recommendation_agent import DestinationRecommendationAgent from .agents.itinerary_planning_agent import ItineraryPlanningAgent from .agents.echo_agent import EchoAgent from .tools import MCP_TOOLS_CONFIG, tool_registry logger = logging.getLogger(__name__) class TravelWorkflowOrchestrator: """Orchestrates multi-agent workflow for travel planning using MAF. This class manages the initialization and coordination of all agents in the travel planning system using Microsoft Agent Framework with simplified MCP integration using MAF's built-in MCP support. 
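Minimal end-to-end sketch:

    orchestrator = TravelWorkflowOrchestrator()
    await orchestrator.initialize()
    reply = await orchestrator.process("Plan a weekend in Rome")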
""" def __init__(self): """Initialize the workflow orchestrator.""" self.chat_client: Optional[Any] = None self.all_tools: List[Any] = [] # Initialize agents (will be configured with tools during initialize()) self.triage_agent: Optional[TriageAgent] = None self.customer_query_agent: Optional[CustomerQueryAgent] = None self.destination_agent: Optional[DestinationRecommendationAgent] = None self.itinerary_agent: Optional[ItineraryPlanningAgent] = None self.echo_agent: Optional[EchoAgent] = None logger.info("Workflow orchestrator initialized") async def initialize(self, enabled_tools: Optional[List[str]] = None) -> None: """Initialize the workflow with LLM client, MCP tools, and all agents. Uses Microsoft Agent Framework's built-in MCP support via MCPStreamableHTTPTool. Args: enabled_tools: List of enabled tool IDs. If None, all tools are enabled. """ logger.info("Initializing MAF workflow with simplified MCP integration...") # Get the chat client from Microsoft Agent Framework self.chat_client = await get_llm_client() logger.info(f"Chat client initialized for provider: {settings.llm_provider}") # Determine which tools to enable (default: all except echo-ping for production) if enabled_tools is None: enabled_tools = [ "customer-query", "itinerary-planning", "destination-recommendation", "echo-ping", ] # Load MCP tools using the tool registry (which uses MAF's built-in MCP support) # This will continue even if some servers are unavailable self.all_tools = await tool_registry.get_all_tools(servers=enabled_tools) if self.all_tools: logger.info(f"✓ Loaded {len(self.all_tools)} tools - agents will have MCP capabilities") else: logger.warning(f"⚠ No MCP tools loaded - agents will run without MCP capabilities") logger.warning(f"⚠ Check if MCP servers are running and accessible") # Initialize specialized agents with their specific tools # Each agent gets the full tool list - the agent's system prompt determines usage # Triage Agent (orchestrator) self.triage_agent = TriageAgent(tools=self.all_tools) await self.triage_agent.initialize(self.chat_client) logger.info("TriageAgent initialized") # Customer Query Agent self.customer_query_agent = CustomerQueryAgent(tools=self.all_tools) await self.customer_query_agent.initialize(self.chat_client) logger.info("CustomerQueryAgent initialized") # Destination Recommendation Agent self.destination_agent = DestinationRecommendationAgent(tools=self.all_tools) await self.destination_agent.initialize(self.chat_client) logger.info("DestinationRecommendationAgent initialized") # Itinerary Planning Agent self.itinerary_agent = ItineraryPlanningAgent(tools=self.all_tools) await self.itinerary_agent.initialize(self.chat_client) logger.info("ItineraryPlanningAgent initialized") # Echo Agent (for testing) self.echo_agent = EchoAgent(tools=self.all_tools) await self.echo_agent.initialize(self.chat_client) logger.info("EchoAgent initialized") logger.info(f"MAF workflow fully initialized with {len(self.all_tools)} total tools") @property def agents(self) -> List[Any]: """Get list of all initialized agents.""" return [ agent for agent in [ self.triage_agent, self.customer_query_agent, self.destination_agent, self.itinerary_agent, self.echo_agent, ] if agent is not None ] async def process(self, message: str, context: Optional[Dict[str, Any]] = None) -> str: """Process a message through the multi-agent workflow. 
Args: message: User message to process context: Optional context information Returns: Response from the triage agent """ if not self.triage_agent: raise RuntimeError("Workflow not initialized. Call initialize() first.") logger.info(f"Processing message through MAF workflow: {message[:100]}...") # Use the triage agent to process the message # It will coordinate with other agents as needed through its tools response = await self.triage_agent.process(message, context) logger.info("Message processing complete") return response async def process_stream( self, message: str, context: Optional[Dict[str, Any]] = None ) -> AsyncGenerator[Dict[str, Any], None]: """Process a message with streaming response. Args: message: User message to process context: Optional context information Yields: Streaming response chunks """ if not self.triage_agent: raise RuntimeError("Workflow not initialized. Call initialize() first.") logger.info(f"Processing message with streaming: {message[:100]}...") # For now, use non-streaming and yield as single chunk # TODO: Implement true streaming with MAF's streaming capabilities response = await self.triage_agent.process(message, context) yield {"type": "response", "agent": self.triage_agent.name, "data": {"message": response}} logger.info("Streaming message processing complete") async def cleanup(self) -> None: """Clean up workflow resources.""" logger.info("Cleaning up workflow resources...") try: # Close all MCP tool connections await tool_registry.close_all() logger.info("MCP tool connections closed") except Exception as e: logger.error(f"Error closing MCP connections: {e}") logger.info("Workflow cleanup complete") self.triage_agent = None self.customer_query_agent = None self.destination_agent = None self.itinerary_agent = None self.echo_agent = None
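    # Example (sketch): consuming the streaming variant; each chunk is a dict
    # with "type", "agent", and "data" keys, as yielded by process_stream above.
    #
    #   async for chunk in workflow_orchestrator.process_stream("Plan a trip"):
    #       print(chunk["data"]["message"])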
Call initialize() first.") logger.info(f"Processing streaming request: {message[:100]}...") try: # Send agent setup event yield { "agent": "TriageAgent", "event": "AgentSetup", "data": { "message": "Initializing travel planning workflow with MCP tools", "tool_count": len(self.all_tools), "timestamp": None, }, } # Send agent tool call event yield { "agent": "TriageAgent", "event": "AgentToolCall", "data": { "message": "Processing travel request with specialized agents", "toolName": "triage_agent_process", "timestamp": None, }, } # Process through triage agent result = await self.triage_agent.process(message, context) # Send streaming chunks of the response # Split the response into chunks for streaming effect chunk_size = 50 for i in range(0, len(result), chunk_size): chunk = result[i : i + chunk_size] yield {"agent": "TriageAgent", "event": "AgentStream", "data": {"delta": chunk, "timestamp": None}} # Send completion event yield { "agent": "TriageAgent", "event": "AgentComplete", "data": {"message": "Request processed successfully", "result": result, "timestamp": None}, } logger.info("Streaming request processed successfully") except Exception as e: logger.error(f"Error processing streaming request: {e}", exc_info=True) yield {"agent": None, "event": "Error", "data": {"error": str(e), "timestamp": None}} raise async def get_agent_by_name(self, name: str) -> Optional[Any]: """Get a specific agent by name. Args: name: Agent name Returns: Agent instance or None if not found """ for agent in self.agents: if agent.name == name: return agent return None async def handoff_to_agent(self, agent_name: str, message: str, context: Optional[Dict[str, Any]] = None) -> str: """Handoff request to a specific agent. Args: agent_name: Name of the agent to handoff to message: Message to send to the agent context: Optional context Returns: Agent response Raises: ValueError: If agent not found """ agent = await self.get_agent_by_name(agent_name) if not agent: raise ValueError(f"Agent {agent_name} not found") logger.info(f"Handing off to {agent_name}") return await agent.process(message, context) async def close(self) -> None: """Clean up MCP client resources.""" # The shared tool registry owns the MCP loaders, so delegate cleanup to it await tool_registry.close_all() logger.info("Closed all MCP client connections") # Global workflow orchestrator instance workflow_orchestrator = TravelWorkflowOrchestrator() ```` ## File: packages/api-maf-python/src/tests/test_agents.py ````python """Tests for MAF agent implementations.""" import pytest from unittest.mock import AsyncMock, MagicMock, patch from src.orchestrator.agents import ( BaseAgent, TriageAgent, CustomerQueryAgent, DestinationRecommendationAgent, ) @pytest.mark.asyncio async def test_base_agent_initialization(): """Test base agent initialization.""" agent = BaseAgent(name="TestAgent", description="Test agent", system_prompt="Test prompt") assert agent.name == "TestAgent" assert agent.description == "Test agent" assert agent.system_prompt == "Test prompt" assert agent.agent is None @pytest.mark.asyncio async def test_base_agent_initialize_with_llm(): """Test base agent initialization with LLM client.""" agent = BaseAgent(name="TestAgent", description="Test agent", system_prompt="Test prompt") mock_llm_client = MagicMock() with patch("src.orchestrator.agents.base_agent.Agent") as mock_agent_class: await agent.initialize(mock_llm_client) assert agent.agent is not None mock_agent_class.assert_called_once() @pytest.mark.asyncio async def test_base_agent_process_without_initialization(): """Test that
processing without initialization raises error.""" agent = BaseAgent(name="TestAgent", description="Test agent", system_prompt="Test prompt") with pytest.raises(RuntimeError, match="not initialized"): await agent.process("test message") @pytest.mark.asyncio async def test_triage_agent_initialization(): """Test triage agent initialization.""" agent = TriageAgent() assert agent.name == "TriageAgent" assert "triage" in agent.description.lower() assert agent.system_prompt is not None @pytest.mark.asyncio async def test_customer_query_agent_initialization(): """Test customer query agent initialization.""" agent = CustomerQueryAgent() assert agent.name == "CustomerQueryAgent" assert "customer" in agent.description.lower() assert agent.system_prompt is not None @pytest.mark.asyncio async def test_destination_recommendation_agent_initialization(): """Test destination recommendation agent initialization.""" agent = DestinationRecommendationAgent() assert agent.name == "DestinationRecommendationAgent" assert "destination" in agent.description.lower() assert agent.system_prompt is not None @pytest.mark.asyncio async def test_destination_agent_with_tools(): """Test destination agent with tools.""" mock_tools = [MagicMock(), MagicMock()] agent = DestinationRecommendationAgent(tools=mock_tools) assert agent.tools == mock_tools assert len(agent.tools) == 2 ```` ## File: packages/api-maf-python/src/tests/test_config.py ````python """Test configuration module.""" import pytest from src.config import Settings def test_settings_defaults(): """Test default settings values.""" # Settings declares a default for every field, so a minimal object is enough here; # the kwargs below use the exact field names declared in src/config.py settings = Settings( azure_openai_endpoint="https://test.openai.azure.com/", azure_openai_api_key="test-key", azure_openai_deployment_name="gpt-5", mcp_customer_query_url="http://localhost:5001", mcp_destination_recommendation_url="http://localhost:5002", mcp_itinerary_planning_url="http://localhost:5003", mcp_echo_ping_url="http://localhost:5004", ) assert settings.port == 4010 assert settings.log_level == "INFO" assert settings.otel_service_name == "api-maf-python" assert settings.azure_openai_api_version == "2024-02-15-preview" def test_settings_custom_port(): """Test custom port setting.""" settings = Settings( azure_openai_endpoint="https://test.openai.azure.com/", azure_openai_api_key="test-key", azure_openai_deployment_name="gpt-5", mcp_customer_query_url="http://localhost:5001", mcp_destination_recommendation_url="http://localhost:5002", mcp_itinerary_planning_url="http://localhost:5003", mcp_echo_ping_url="http://localhost:5004", port=5000, ) assert settings.port == 5000 ```` ## File: packages/api-maf-python/src/tests/test_mcp_client.py ````python """Test MCP tool integration using Microsoft Agent Framework SDK.""" import pytest from unittest.mock import AsyncMock, MagicMock, patch, PropertyMock @pytest.mark.asyncio async def test_mcp_tool_loader_initialization(): """Test MCPToolLoader initialization with Microsoft Agent Framework SDK.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader config = {"url": "http://localhost:8001/mcp", "type": "http", "verbose": True} loader = MCPToolLoader(config, "Test Server") assert loader.server_name == "Test Server" assert loader.base_url == "http://localhost:8001/mcp" assert loader._tools == [] await loader.close() @pytest.mark.asyncio async def test_tool_registry_initialization(): """Test ToolRegistry initialization.""" from orchestrator.tools.tool_registry import
ToolRegistry # Create a new registry instance registry = ToolRegistry() # Should have loaders for configured servers assert len(registry.loaders) > 0 await registry.close_all() @pytest.mark.asyncio async def test_get_tools_with_maf_sdk(): """Test loading tools using Microsoft Agent Framework's MCPStreamableHTTPTool.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader from agent_framework import MCPStreamableHTTPTool config = {"url": "http://localhost:8001/mcp", "type": "http", "verbose": True} loader = MCPToolLoader(config, "Test Server") # Mock the MCPStreamableHTTPTool context manager mock_tool = MagicMock() mock_tool.functions = [MagicMock(name="test_tool")] with patch("orchestrator.tools.mcp_tool_wrapper.MCPStreamableHTTPTool") as mock_tool_class: # Make the mock return an async context manager mock_context = AsyncMock() mock_context.__aenter__.return_value = mock_tool mock_context.__aexit__.return_value = None mock_tool_class.return_value = mock_context tools = await loader.get_tools() # Should have called MCPStreamableHTTPTool with correct parameters mock_tool_class.assert_called_once() call_kwargs = mock_tool_class.call_args[1] assert call_kwargs["name"] == "Test Server" assert call_kwargs["url"] == "http://localhost:8001/mcp" assert call_kwargs["load_tools"] == True # Should return the tools assert len(tools) == 1 assert tools[0].name == "test_tool" await loader.close() @pytest.mark.asyncio async def test_tool_registry_get_all_tools(): """Test getting all tools from registry.""" from orchestrator.tools.tool_registry import tool_registry # Mock the tool loading with patch.object(tool_registry, "loaders") as mock_loaders: # Create a mock loader mock_loader = AsyncMock() mock_tool = MagicMock() mock_tool.name = "test_tool" mock_loader.get_tools.return_value = [mock_tool] mock_loaders.items.return_value = [("test-server", mock_loader)] mock_loaders.keys.return_value = ["test-server"] mock_loaders.__contains__ = lambda self, key: key == "test-server" # Get tools tools = await tool_registry.get_all_tools() # Should have called the loader mock_loader.get_tools.assert_called_once() # Should return the tools assert len(tools) == 1 assert tools[0] == mock_tool @pytest.mark.asyncio async def test_maf_sdk_import(): """Test that Microsoft Agent Framework SDK imports work correctly.""" try: from agent_framework import MCPStreamableHTTPTool assert MCPStreamableHTTPTool is not None except ImportError as e: # Expected if SDK not installed assert "agent-framework" in str(e).lower() @pytest.mark.asyncio async def test_mcp_tool_with_auth_header(): """Test MCPToolLoader with authentication header.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader config = {"url": "http://localhost:8001/mcp", "type": "http", "accessToken": "test-token-123"} loader = MCPToolLoader(config, "Authenticated Server") # Verify the authentication header is configured assert loader.access_token == "test-token-123" assert "Authorization" in loader.headers assert loader.headers["Authorization"] == "Bearer test-token-123" await loader.close() @pytest.mark.asyncio async def test_error_handling_on_connection_failure(): """Test error handling when MCP server connection fails.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader from agent_framework import MCPStreamableHTTPTool config = {"url": "http://localhost:8001/mcp", "type": "http"} loader = MCPToolLoader(config, "Test Server") # Mock connection failure with patch("orchestrator.tools.mcp_tool_wrapper.MCPStreamableHTTPTool") as 
mock_tool_class: # Make the context manager raise an exception mock_context = AsyncMock() mock_context.__aenter__.side_effect = Exception("Connection failed") mock_tool_class.return_value = mock_context tools = await loader.get_tools() # Should return empty list on error assert tools == [] await loader.close() @pytest.mark.asyncio async def test_context_manager_cleanup(): """Test that async context manager properly cleans up resources.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader from agent_framework import MCPStreamableHTTPTool config = {"url": "http://localhost:8001/mcp", "type": "http"} loader = MCPToolLoader(config, "Test Server") # Mock the MCPStreamableHTTPTool mock_tool = MagicMock() mock_tool.functions = [MagicMock(name="tool1")] mock_exit = AsyncMock() with patch("orchestrator.tools.mcp_tool_wrapper.MCPStreamableHTTPTool") as mock_tool_class: mock_context = AsyncMock() mock_context.__aenter__.return_value = mock_tool mock_context.__aexit__ = mock_exit mock_tool_class.return_value = mock_context await loader.get_tools() # Should have called __aexit__ for cleanup mock_exit.assert_called_once() await loader.close() ```` ## File: packages/api-maf-python/src/tests/test_mcp_graceful_degradation.py ````python """Test graceful degradation when MCP servers are unavailable.""" import pytest from unittest.mock import AsyncMock, MagicMock, patch import logging @pytest.mark.asyncio async def test_mcp_server_unavailable_graceful_degradation(): """Test that unavailable MCP servers are handled gracefully with warnings.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader config = {"url": "http://unavailable-server:9999/mcp", "type": "http"} loader = MCPToolLoader(config, "Unavailable Server") # Mock connection failure with patch("orchestrator.tools.mcp_tool_wrapper.MCPStreamableHTTPTool") as mock_tool_class: mock_context = AsyncMock() mock_context.__aenter__.side_effect = ConnectionError("Server not responding") mock_tool_class.return_value = mock_context # Should return empty list, not raise exception tools = await loader.get_tools() assert tools == [] # No exception should be raised @pytest.mark.asyncio async def test_tool_registry_continues_with_failed_servers(caplog): """Test that tool registry continues loading from available servers when some fail.""" from orchestrator.tools.tool_registry import ToolRegistry registry = ToolRegistry() # Mock loaders: one succeeds, one fails with patch.object(registry, "loaders") as mock_loaders: # Successful loader success_loader = AsyncMock() success_tool = MagicMock(name="working_tool") success_loader.get_tools.return_value = [success_tool] # Failed loader failed_loader = AsyncMock() failed_loader.get_tools.return_value = [] # Returns empty on failure mock_loaders.items.return_value = [("working-server", success_loader), ("failed-server", failed_loader)] mock_loaders.keys.return_value = ["working-server", "failed-server"] mock_loaders.__contains__ = lambda self, key: key in ["working-server", "failed-server"] # Get tools from all servers with caplog.at_level(logging.WARNING): tools = await registry.get_all_tools() # Should have tools from successful server assert len(tools) == 1 assert tools[0] == success_tool # Should log warning about failed server warning_logs = [record for record in caplog.records if record.levelname == "WARNING"] assert any("failed-server" in record.message for record in warning_logs) @pytest.mark.asyncio async def test_all_servers_unavailable_no_exception(caplog): """Test that when all MCP servers are 
unavailable, system continues without tools.""" from orchestrator.tools.tool_registry import ToolRegistry registry = ToolRegistry() # Mock all loaders to fail with patch.object(registry, "loaders") as mock_loaders: failed_loader1 = AsyncMock() failed_loader1.get_tools.return_value = [] failed_loader2 = AsyncMock() failed_loader2.get_tools.return_value = [] mock_loaders.items.return_value = [("server1", failed_loader1), ("server2", failed_loader2)] mock_loaders.keys.return_value = ["server1", "server2"] mock_loaders.__contains__ = lambda self, key: key in ["server1", "server2"] # Should return empty list, not raise exception with caplog.at_level(logging.WARNING): tools = await registry.get_all_tools() assert tools == [] # Should log warning about no tools warning_logs = [record for record in caplog.records if record.levelname == "WARNING"] assert any("No tools loaded" in record.message for record in warning_logs) @pytest.mark.asyncio async def test_partial_server_failure_continues(caplog): """Test that partial server failures don't stop the workflow.""" from orchestrator.tools.tool_registry import ToolRegistry registry = ToolRegistry() with patch.object(registry, "loaders") as mock_loaders: # Create 3 loaders: 2 succeed, 1 fails loader1 = AsyncMock() tool1 = MagicMock(name="tool1") loader1.get_tools.return_value = [tool1] loader2 = AsyncMock() loader2.get_tools.return_value = [] # Failed loader3 = AsyncMock() tool3 = MagicMock(name="tool3") loader3.get_tools.return_value = [tool3] mock_loaders.items.return_value = [("server1", loader1), ("server2", loader2), ("server3", loader3)] mock_loaders.keys.return_value = ["server1", "server2", "server3"] mock_loaders.__contains__ = lambda self, key: key in ["server1", "server2", "server3"] with caplog.at_level(logging.INFO): tools = await registry.get_all_tools() # Should have tools from successful servers assert len(tools) == 2 # Should log success for working servers info_logs = [record for record in caplog.records if record.levelname == "INFO"] assert any("server1" in record.message for record in info_logs) assert any("server3" in record.message for record in info_logs) @pytest.mark.asyncio async def test_workflow_initialization_with_no_tools(): """Test that workflow can initialize even when no MCP tools are available.""" from orchestrator.workflow import TravelWorkflowOrchestrator from orchestrator.tools.tool_registry import tool_registry orchestrator = TravelWorkflowOrchestrator() # Mock tool registry to return empty list with patch.object(tool_registry, "get_all_tools", new_callable=AsyncMock) as mock_get_tools: mock_get_tools.return_value = [] # Mock LLM client with patch("orchestrator.workflow.get_llm_client", new_callable=AsyncMock) as mock_llm: mock_llm.return_value = MagicMock() # Should initialize without error await orchestrator.initialize() # Should have empty tools list assert orchestrator.all_tools == [] # Agents should still be initialized (with no tools) assert orchestrator.triage_agent is not None assert orchestrator.customer_query_agent is not None @pytest.mark.asyncio async def test_connection_timeout_handled_gracefully(caplog): """Test that connection timeouts are handled gracefully.""" from orchestrator.tools.mcp_tool_wrapper import MCPToolLoader config = {"url": "http://slow-server:8000/mcp", "type": "http"} loader = MCPToolLoader(config, "Slow Server") # Mock timeout error with patch("orchestrator.tools.mcp_tool_wrapper.MCPStreamableHTTPTool") as mock_tool_class: mock_context = AsyncMock() mock_context.__aenter__.side_effect 
= TimeoutError("Connection timeout") mock_tool_class.return_value = mock_context with caplog.at_level(logging.WARNING): tools = await loader.get_tools() assert tools == [] # Should log warning, not error warning_logs = [record for record in caplog.records if record.levelname == "WARNING"] assert any("unavailable or not responding" in record.message for record in warning_logs) # Should not have any ERROR logs error_logs = [record for record in caplog.records if record.levelname == "ERROR"] assert len(error_logs) == 0 ```` ## File: packages/api-maf-python/src/tests/test_providers.py ````python """Test LLM provider implementations.""" import pytest from unittest.mock import AsyncMock, MagicMock, patch from src.orchestrator.providers import get_llm_client from src.orchestrator.providers.azure_openai import AzureOpenAIProvider from src.orchestrator.providers.github_models import GitHubModelsProvider from src.orchestrator.providers.docker_models import DockerModelsProvider from src.orchestrator.providers.ollama_models import OllamaModelsProvider from src.config import settings @pytest.mark.asyncio async def test_azure_openai_provider_local_docker(): """Test Azure OpenAI provider in local Docker mode.""" with patch("src.config.settings") as mock_settings: mock_settings.azure_openai_endpoint = "https://test.openai.azure.com/" mock_settings.azure_openai_deployment = "gpt-5" mock_settings.azure_openai_api_key = "test-key" mock_settings.azure_openai_api_version = "2024-02-15-preview" mock_settings.is_local_docker_env = True with patch("src.orchestrator.providers.azure_openai.AsyncAzureOpenAI") as mock_client: provider = AzureOpenAIProvider() await provider.get_client() mock_client.assert_called_once_with( api_key="test-key", api_version="2024-02-15-preview", azure_endpoint="https://test.openai.azure.com/", ) @pytest.mark.asyncio async def test_github_models_provider(): """Test GitHub Models provider.""" with patch("src.config.settings") as mock_settings: mock_settings.github_token = "test-token" mock_settings.github_model = "openai/gpt-5" with patch("src.orchestrator.providers.github_models.AsyncOpenAI") as mock_client: provider = GitHubModelsProvider() await provider.get_client() mock_client.assert_called_once_with( base_url="https://models.inference.ai.azure.com", api_key="test-token", ) @pytest.mark.asyncio async def test_docker_models_provider(): """Test Docker Models provider.""" with patch("src.config.settings") as mock_settings: mock_settings.docker_model_endpoint = "http://localhost:12434/v1" mock_settings.docker_model = "ai/phi4:14B-Q4_0" with patch("src.orchestrator.providers.docker_models.AsyncOpenAI") as mock_client: provider = DockerModelsProvider() await provider.get_client() mock_client.assert_called_once_with( base_url="http://localhost:12434/v1", api_key="DOCKER_API_KEY", ) @pytest.mark.asyncio async def test_ollama_models_provider(): """Test Ollama Models provider.""" with patch("src.config.settings") as mock_settings: mock_settings.ollama_model_endpoint = "http://localhost:11434/v1" mock_settings.ollama_model = "llama3.1" with patch("src.orchestrator.providers.ollama_models.AsyncOpenAI") as mock_client: provider = OllamaModelsProvider() await provider.get_client() mock_client.assert_called_once_with( base_url="http://localhost:11434/v1", api_key="OLLAMA_API_KEY", ) @pytest.mark.asyncio async def test_get_llm_client_azure_openai(): """Test get_llm_client with Azure OpenAI provider.""" with patch("src.config.settings") as mock_settings: mock_settings.llm_provider = "azure-openai" 
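# These are the minimal settings get_llm_client needs to select the Azure OpenAI path; is_local_docker_env=True # keeps the provider on the API-key construction asserted in test_azure_openai_provider_local_docker above.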
mock_settings.azure_openai_endpoint = "https://test.openai.azure.com/" mock_settings.azure_openai_deployment = "gpt-5" mock_settings.azure_openai_api_key = "test-key" mock_settings.is_local_docker_env = True with patch("src.orchestrator.providers.azure_openai.AsyncAzureOpenAI"): client = await get_llm_client() assert client is not None @pytest.mark.asyncio async def test_get_llm_client_invalid_provider(): """Test get_llm_client with invalid provider.""" with patch("src.config.settings") as mock_settings: mock_settings.llm_provider = "invalid-provider" with pytest.raises(ValueError, match="Unknown LLM_PROVIDER"): await get_llm_client() ```` ## File: packages/api-maf-python/src/tests/test_workflow.py ````python """Tests for MAF workflow orchestrator.""" import pytest from unittest.mock import AsyncMock, MagicMock, patch from src.orchestrator.agents import TriageAgent from src.orchestrator.workflow import TravelWorkflowOrchestrator @pytest.mark.asyncio async def test_workflow_orchestrator_initialization(): """Test workflow orchestrator construction.""" orchestrator = TravelWorkflowOrchestrator() # Agents are only created inside initialize(), so a fresh orchestrator has none yet assert orchestrator.triage_agent is None assert orchestrator.customer_query_agent is None assert orchestrator.destination_agent is None assert orchestrator.itinerary_agent is None assert len(orchestrator.agents) == 0 @pytest.mark.asyncio async def test_workflow_orchestrator_initialize(): """Test workflow orchestrator full initialization.""" orchestrator = TravelWorkflowOrchestrator() with patch("src.orchestrator.workflow.get_llm_client", new_callable=AsyncMock) as mock_llm: mock_llm.return_value = MagicMock() with patch("src.orchestrator.workflow.tool_registry.get_all_tools", new_callable=AsyncMock) as mock_tools: mock_tools.return_value = [] await orchestrator.initialize() assert orchestrator.chat_client is not None assert orchestrator.triage_agent is not None mock_tools.assert_called_once() @pytest.mark.asyncio async def test_workflow_process_request_without_initialization(): """Test that processing without initialization raises error.""" orchestrator = TravelWorkflowOrchestrator() with pytest.raises(RuntimeError, match="not initialized"): await orchestrator.process_request("test message") @pytest.mark.asyncio async def test_workflow_get_agent_by_name(): """Test getting agent by name.""" orchestrator = TravelWorkflowOrchestrator() # get_agent_by_name() only sees initialized agents, so attach one directly orchestrator.triage_agent = TriageAgent() agent = await orchestrator.get_agent_by_name("TriageAgent") assert agent is not None assert agent.name == "TriageAgent" agent = await orchestrator.get_agent_by_name("NonExistentAgent") assert agent is None @pytest.mark.asyncio async def test_workflow_handoff_to_agent(): """Test handoff to specific agent.""" orchestrator = TravelWorkflowOrchestrator() orchestrator.triage_agent = TriageAgent() with patch.object(orchestrator.triage_agent, "process", new_callable=AsyncMock) as mock_process: mock_process.return_value = "Test response" response = await orchestrator.handoff_to_agent("TriageAgent", "test message") assert response == "Test response" mock_process.assert_called_once() @pytest.mark.asyncio async def test_workflow_handoff_to_nonexistent_agent(): """Test handoff to nonexistent agent raises error.""" orchestrator = TravelWorkflowOrchestrator() with
pytest.raises(ValueError, match="not found"): await orchestrator.handoff_to_agent("NonExistentAgent", "test message") ```` ## File: packages/api-maf-python/src/config.py ````python """Configuration management for Azure AI Travel Agents API.""" from typing import Literal, Optional from pydantic_settings import BaseSettings, SettingsConfigDict # LLM Provider type LLMProvider = Literal[ "azure-openai", "github-models", "docker-models", "ollama-models", "foundry-local", ] class Settings(BaseSettings): """Application settings loaded from environment variables.""" model_config = SettingsConfigDict( env_file=".env", env_file_encoding="utf-8", case_sensitive=False, extra="ignore", ) # LLM Provider Selection llm_provider: LLMProvider = "azure-openai" # Azure OpenAI Configuration azure_openai_endpoint: Optional[str] = None azure_openai_api_key: Optional[str] = None azure_openai_deployment_name: Optional[str] = None azure_openai_api_version: str = "2024-02-15-preview" azure_client_id: Optional[str] = None is_local_docker_env: bool = False # GitHub Models Configuration github_token: Optional[str] = None github_model: Optional[str] = None # Docker Models Configuration docker_model_endpoint: Optional[str] = None docker_model: Optional[str] = None # Ollama Models Configuration ollama_model_endpoint: Optional[str] = None ollama_model: Optional[str] = None # Foundry Local Configuration azure_foundry_local_model_alias: str = "phi-3.5-mini" # MCP Server URLs (optional with defaults from docker-compose) mcp_customer_query_url: str = "http://mcp-customer-query:5001" mcp_destination_recommendation_url: str = "http://mcp-destination-recommendation:5002" mcp_itinerary_planning_url: str = "http://mcp-itinerary-planning:5003" mcp_echo_ping_url: str = "http://mcp-echo-ping:5004" mcp_echo_ping_access_token: Optional[str] = "123-this-is-a-fake-token-please-use-a-token-provider" # Server Configuration port: int = 4010 log_level: str = "INFO" # OpenTelemetry Configuration otel_service_name: str = "api-maf-python" otel_exporter_otlp_endpoint: Optional[str] = None otel_exporter_otlp_headers: Optional[str] = None # Global settings instance settings = Settings() ```` ## File: packages/api-maf-python/src/main.py ````python """Main FastAPI application for Azure AI Travel Agents (Python).""" import asyncio import json import logging from contextlib import asynccontextmanager from typing import AsyncGenerator from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware from fastapi.responses import StreamingResponse from pydantic import BaseModel from .config import settings from .orchestrator.magentic_workflow import magentic_orchestrator from .orchestrator.tools.tool_registry import tool_registry from agent_framework.observability import setup_observability setup_observability(enable_sensitive_data=True) # Configure logging logging.basicConfig( level=settings.log_level, format='{"timestamp": "%(asctime)s", "level": "%(levelname)s", "message": "%(message)s"}', ) logger = logging.getLogger(__name__) @asynccontextmanager async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]: """Manage application lifespan events.""" # Startup logger.info("Starting Azure AI Travel Agents API (Python)") logger.info(f"Service: {settings.otel_service_name}") logger.info(f"Port: {settings.port}") logger.info(f"LLM Provider: {settings.llm_provider}") # Initialize Magentic workflow orchestrator logger.info("Initializing Magentic workflow orchestrator...") try: await magentic_orchestrator.initialize() logger.info("✓ Magentic 
workflow orchestrator ready") except Exception as e: logger.error(f"❌ Error initializing workflow: {e}", exc_info=True) logger.warning("⚠ Application will start with degraded functionality") yield # Shutdown logger.info("Shutting down Azure AI Travel Agents API (Python)") try: await tool_registry.close_all() logger.info("✓ Cleanup complete") except Exception as e: logger.error(f"Error during shutdown: {e}") # Create FastAPI application app = FastAPI( title="Azure AI Travel Agents API (Python)", description="Multi-agent travel planning system using Microsoft Agent Framework", version="1.0.0", lifespan=lifespan, ) # Configure CORS app.add_middleware( CORSMiddleware, allow_origins=["*"], # Configure appropriately for production allow_credentials=True, allow_methods=["*"], allow_headers=["*"], ) class ChatRequest(BaseModel): """Request model for chat endpoint.""" message: str context: dict = {} class ChatResponse(BaseModel): """Response model for chat endpoint.""" response: str agent: str = "TravelPlanningWorkflow" @app.get("/api/health") async def health() -> dict: """Health check endpoint. Returns: Health status including MCP server availability """ # Get MCP server status mcp_status = { "total_servers": len(tool_registry._server_metadata), "configured_servers": list(tool_registry._server_metadata.keys()), } return { "status": "OK", "service": settings.otel_service_name, "version": "1.0.0", "llm_provider": settings.llm_provider, "mcp": mcp_status, } @app.get("/api/tools") async def list_tools() -> dict: """List all available MCP tools. Mirrors the TypeScript mcpToolsList implementation by: - Connecting to each configured MCP server - Listing the actual tools available on each server - Checking reachability status Returns: Dictionary with tools array matching frontend format: { "tools": [ { "id": "customer-query", "name": "Customer Query", "url": "http://localhost:5001/mcp", "type": "http", "reachable": true, "selected": true, "tools": [...] # Actual tool definitions from the server }, {...} ] } """ try: tools_info = await tool_registry.list_tools() return tools_info except Exception as e: logger.error(f"Error listing tools: {e}", exc_info=True) return {"tools": [], "error": str(e)} @app.post("/api/chat") async def chat(request: ChatRequest) -> StreamingResponse: """Process a chat request through the Magentic workflow with SSE streaming. Args: request: Chat request with message and optional context Returns: StreamingResponse with Server-Sent Events Raises: HTTPException: If processing fails """ async def event_generator() -> AsyncGenerator[str, None]: """Generate Server-Sent Events for the chat response. 
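Each event below is flushed as one SSE frame, i.e. "data: <json>" followed by a blank line; the start event, for example, goes out as: data: {"type": "metadata", "event": "WorkflowStarted", "kind": "maf-python", "data": {"agent": "Orchestrator", "message": "Starting workflow"}} Note that the frames built in this generator (start, end, error) all carry type "metadata", while workflow events from the orchestrator are passed through unchanged.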
Format matches UI ChatStreamState: { type: 'START' | 'END' | 'ERROR' | 'MESSAGE', event: ChatEvent, kind: 'maf-python', error?: { type, message, statusCode } } """ try: logger.info(f"Processing chat request with Magentic: {request.message[:100]}...") # Send START event start_event = { "type": "metadata", "event": "WorkflowStarted", "kind": "maf-python", "data": {"agent": "Orchestrator", "message": "Starting workflow"}, } yield f"data: {json.dumps(start_event)}\n\n" # Process through Magentic workflow with streaming async for internal_event in magentic_orchestrator.process_request_stream( user_message=request.message, conversation_history=request.context ): # Wrap internal event in ChatStreamState format if internal_event.get("type") == "error": # Error event - extract message and statusCode from the event error_message = internal_event.get("message", "An error occurred") error_status_code = internal_event.get("statusCode", 500) stream_state = { "type": "metadata", "kind": "maf-python", "event": internal_event, "error": {"message": error_message, "statusCode": error_status_code}, } else: # Regular message/metadata event stream_state = internal_event yield f"data: {json.dumps(stream_state)}\n\n" # Send END event end_event = { "type": "metadata", "kind": "maf-python", "agent": "TravelPlanningWorkflow", "event": "Complete", "data": {"message": "Request processed successfully"}, } yield f"data: {json.dumps(end_event)}\n\n" logger.info("Request processed successfully") except Exception as e: logger.error(f"Error processing chat request: {e}", exc_info=True) error_stream_state = { "type": "metadata", "kind": "maf-python", "event": { "type": "error", "agent": None, "event": "Error", "data": { "agent": None, "error": str(e), "message": f"An error occurred: {str(e)}", "statusCode": 500, }, }, "error": {"type": "general", "message": f"An error occurred: {str(e)}", "statusCode": 500}, } yield f"data: {json.dumps(error_stream_state)}\n\n" return StreamingResponse( event_generator(), media_type="text/event-stream", headers={ "Cache-Control": "no-cache", "Connection": "keep-alive", "X-Accel-Buffering": "no", # Disable nginx buffering }, ) if __name__ == "__main__": import uvicorn uvicorn.run( "main:app", host="0.0.0.0", port=settings.port, reload=True, log_level=settings.log_level.lower(), ) ```` ## File: packages/api-maf-python/.dockerignore ```` .env .env.* !.env.sample __pycache__ *.pyc *.pyo *.pyd .Python *.so *.egg *.egg-info dist build .pytest_cache .coverage htmlcov .mypy_cache .ruff_cache *.log .git .gitignore README.md tests/ .vscode .idea ```` ## File: packages/api-maf-python/Dockerfile ```` FROM python:3.12-slim WORKDIR /app # Install system dependencies RUN apt-get update && \ apt-get install -y --no-install-recommends \ gcc \ && rm -rf /var/lib/apt/lists/* # Copy project files COPY pyproject.toml ./ COPY src/ ./src/ # Install Python dependencies RUN pip install --no-cache-dir -e . 
# Expose port EXPOSE 4010 # Health check HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \ CMD python -c "import httpx; httpx.get('http://localhost:4010/api/health').raise_for_status()" # Run the application CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "4010"] ```` ## File: packages/api-maf-python/pyproject.toml ````toml [project] name = "azure-ai-travel-agents-api-python" version = "1.0.0" description = "Azure AI Travel Agents API with Microsoft Agent Framework" requires-python = ">=3.11" dependencies = [ # Core web framework and server "fastapi>=0.115.0", "uvicorn[standard]>=0.34.0", # Server-Sent Events for streaming "sse-starlette>=2.2.1", # Configuration and validation "pydantic>=2.10.6", "pydantic-settings>=2.7.0", "python-dotenv>=1.0.1", # Observability with OpenTelemetry "opentelemetry-api>=1.30.0", "opentelemetry-sdk>=1.30.0", "opentelemetry-exporter-otlp-proto-grpc>=1.30.0", "opentelemetry-instrumentation-fastapi>=0.51b0", # Azure authentication and AI "azure-identity>=1.20.0", "openai>=1.59.5", # Microsoft Agent Framework - includes built-in MCP support "agent-framework>=1.0.0b251001", ] [project.optional-dependencies] dev = [ "pytest>=8.3.0", "pytest-asyncio>=0.24.0", "pytest-cov>=6.0.0", "ruff>=0.8.0", "mypy>=1.13.0", ] [tool.ruff] line-length = 120 target-version = "py311" [tool.ruff.lint] select = ["E", "F", "I", "N", "W", "UP"] ignore = ["E501"] [tool.pytest.ini_options] asyncio_mode = "auto" testpaths = ["src/tests"] python_files = ["test_*.py"] python_classes = ["Test*"] python_functions = ["test_*"] [tool.mypy] python_version = "3.11" warn_return_any = true warn_unused_configs = true disallow_untyped_defs = false [build-system] requires = ["setuptools>=61.0", "wheel"] build-backend = "setuptools.build_meta" [tool.setuptools] package-dir = {"" = "src"} [tool.setuptools.packages.find] where = ["src"] include = ["*"] namespaces = false ```` ## File: packages/api-maf-python/test.http ```` POST http://127.0.0.1:4010/api/chat HTTP/1.1 content-type: application/json { "message": "Plan a 7-day trip to Japan", "context": {} } ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.CustomerQueryServer/Models/CustomerQueryAnalysisResult.cs ````csharp namespace AITravelAgent.CustomerQueryServer.Models; public class CustomerQueryAnalysisResult { public string? CustomerQuery { get; set; } public string? Emotion { get; set; } public string? Intent { get; set; } public string? Requirements { get; set; } public string?
Preferences { get; set; } } ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.CustomerQueryServer/Properties/launchSettings.json ````json { "$schema": "https://json.schemastore.org/launchsettings.json", "profiles": { "http": { "commandName": "Project", "dotnetRunMessages": true, "launchBrowser": false, "applicationUrl": "http://localhost:5001", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } }, "https": { "commandName": "Project", "dotnetRunMessages": true, "launchBrowser": false, "applicationUrl": "https://localhost:45001;http://localhost:5001", "environmentVariables": { "ASPNETCORE_ENVIRONMENT": "Development" } } } } ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.CustomerQueryServer/Tools/CustomerQueryTool.cs ````csharp using System.ComponentModel; using AITravelAgent.CustomerQueryServer.Models; using ModelContextProtocol.Server; namespace AITravelAgent.CustomerQueryServer.Tools; [McpServerToolType] public class CustomerQueryTool(ILogger<CustomerQueryTool> logger) { private static readonly string[] emotions = [ "happy", "sad", "angry", "neutral" ]; private static readonly string[] intents = [ "book_flight", "cancel_flight", "change_flight", "inquire", "complaint" ]; private static readonly string[] requirements = [ "business", "economy", "first_class" ]; private static readonly string[] preferences = [ "window", "aisle", "extra_legroom" ]; private static readonly Random random = Random.Shared; [McpServerTool(Name = "analyze_customer_query", Title = "Analyze Customer Query")] [Description("Analyzes the customer query and provides a response.")] public async Task<CustomerQueryAnalysisResult> AnalyzeCustomerQueryAsync( [Description("The customer query to analyze")] string customerQuery) { // Simulate some processing time await Task.Delay(1000); // Log the received customer query logger.LogInformation("Received customer query: {customerQuery}", customerQuery); // Return a simple response for demonstration purposes var result = new CustomerQueryAnalysisResult { CustomerQuery = customerQuery, Emotion = emotions[random.Next(emotions.Length)], Intent = intents[random.Next(intents.Length)], Requirements = requirements[random.Next(requirements.Length)], Preferences = preferences[random.Next(preferences.Length)] }; return result; } } ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.CustomerQueryServer/Tools/EchoTool.cs ````csharp using System.ComponentModel; using ModelContextProtocol.Server; namespace AITravelAgent.CustomerQueryServer.Tools; [McpServerToolType] public static class EchoTool { [McpServerTool(Name = "echo", Title = "Echo Tool")] [Description("Echoes the message back to the client.")] public static string Echo(string message) => $"hello from .NET: {message}"; } ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.CustomerQueryServer/AITravelAgent.CustomerQueryServer.csproj ```` net9.0 enable enable d59d0f8e-1e4f-41dd-8e1e-e21b537771ff ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.CustomerQueryServer/Program.cs ````csharp var builder = WebApplication.CreateBuilder(args); builder.AddServiceDefaults(); builder.Services.AddMcpServer() .WithHttpTransport(o => o.Stateless = true) .WithToolsFromAssembly(); builder.Services.AddProblemDetails(); var app = builder.Build(); app.MapDefaultEndpoints(); app.MapMcp("/mcp"); await app.RunAsync(); ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.ServiceDefaults/AITravelAgent.ServiceDefaults.csproj ```` net9.0 enable enable true ```` ## File:
packages/mcp-servers/customer-query/AITravelAgent.ServiceDefaults/Extensions.cs ````csharp using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Diagnostics.HealthChecks; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Diagnostics.HealthChecks; using Microsoft.Extensions.Logging; using Microsoft.Extensions.ServiceDiscovery; using OpenTelemetry; using OpenTelemetry.Metrics; using OpenTelemetry.Trace; namespace Microsoft.Extensions.Hosting; // Adds common .NET Aspire services: service discovery, resilience, health checks, and OpenTelemetry. // This project should be referenced by each service project in your solution. // To learn more about using this project, see https://aka.ms/dotnet/aspire/service-defaults public static class Extensions { public static TBuilder AddServiceDefaults<TBuilder>(this TBuilder builder) where TBuilder : IHostApplicationBuilder { builder.ConfigureOpenTelemetry(); builder.AddDefaultHealthChecks(); builder.Services.AddServiceDiscovery(); builder.Services.ConfigureHttpClientDefaults(http => { // Turn on resilience by default http.AddStandardResilienceHandler(); // Turn on service discovery by default http.AddServiceDiscovery(); }); // Uncomment the following to restrict the allowed schemes for service discovery. // builder.Services.Configure<ServiceDiscoveryOptions>(options => // { // options.AllowedSchemes = ["https"]; // }); return builder; } public static TBuilder ConfigureOpenTelemetry<TBuilder>(this TBuilder builder) where TBuilder : IHostApplicationBuilder { builder.Logging.AddOpenTelemetry(logging => { logging.IncludeFormattedMessage = true; logging.IncludeScopes = true; }); builder.Services.AddOpenTelemetry() .WithMetrics(metrics => { metrics.AddAspNetCoreInstrumentation() .AddHttpClientInstrumentation() .AddRuntimeInstrumentation(); }) .WithTracing(tracing => { tracing.AddSource(builder.Environment.ApplicationName) .AddAspNetCoreInstrumentation() // Uncomment the following line to enable gRPC instrumentation (requires the OpenTelemetry.Instrumentation.GrpcNetClient package) //.AddGrpcClientInstrumentation() .AddHttpClientInstrumentation(); }); builder.AddOpenTelemetryExporters(); return builder; } private static TBuilder AddOpenTelemetryExporters<TBuilder>(this TBuilder builder) where TBuilder : IHostApplicationBuilder { var useOtlpExporter = !string.IsNullOrWhiteSpace(builder.Configuration["OTEL_EXPORTER_OTLP_ENDPOINT"]); if (useOtlpExporter) { builder.Services.AddOpenTelemetry().UseOtlpExporter(); } // Uncomment the following lines to enable the Azure Monitor exporter (requires the Azure.Monitor.OpenTelemetry.AspNetCore package) //if (!string.IsNullOrEmpty(builder.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"])) //{ // builder.Services.AddOpenTelemetry() // .UseAzureMonitor(); //} return builder; } public static TBuilder AddDefaultHealthChecks<TBuilder>(this TBuilder builder) where TBuilder : IHostApplicationBuilder { builder.Services.AddHealthChecks() // Add a default liveness check to ensure app is responsive .AddCheck("self", () => HealthCheckResult.Healthy(), ["live"]); return builder; } public static WebApplication MapDefaultEndpoints(this WebApplication app) { // Adding health checks endpoints to applications in non-development environments has security implications. // See https://aka.ms/dotnet/aspire/healthchecks for details before enabling these endpoints in non-development environments.
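// In development this wires up two plain-HTTP probes, e.g. for the CustomerQueryServer "http" launch profile: // GET http://localhost:5001/health -> 200 once every registered check passes (readiness) // GET http://localhost:5001/alive -> 200 while the checks tagged "live" pass (liveness) // (host and port come from launchSettings.json; adjust for your own profile)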
if (app.Environment.IsDevelopment()) { // All health checks must pass for app to be considered ready to accept traffic after starting app.MapHealthChecks("/health"); // Only health checks tagged with the "live" tag must pass for app to be considered alive app.MapHealthChecks("/alive", new HealthCheckOptions { Predicate = r => r.Tags.Contains("live") }); } return app; } } ```` ## File: packages/mcp-servers/customer-query/AITravelAgent.sln ```` Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio Version 17 VisualStudioVersion = 17.0.31903.59 MinimumVisualStudioVersion = 10.0.40219.1 Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "AITravelAgent.CustomerQueryServer", "AITravelAgent.CustomerQueryServer\AITravelAgent.CustomerQueryServer.csproj", "{B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}" EndProject Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "AITravelAgent.ServiceDefaults", "AITravelAgent.ServiceDefaults\AITravelAgent.ServiceDefaults.csproj", "{A30C54AF-25EE-49F8-A671-506E7B681BA4}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU Debug|x64 = Debug|x64 Debug|x86 = Debug|x86 Release|Any CPU = Release|Any CPU Release|x64 = Release|x64 Release|x86 = Release|x86 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Debug|Any CPU.Build.0 = Debug|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Debug|x64.ActiveCfg = Debug|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Debug|x64.Build.0 = Debug|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Debug|x86.ActiveCfg = Debug|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Debug|x86.Build.0 = Debug|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Release|Any CPU.ActiveCfg = Release|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Release|Any CPU.Build.0 = Release|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Release|x64.ActiveCfg = Release|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Release|x64.Build.0 = Release|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Release|x86.ActiveCfg = Release|Any CPU {B8376ECB-BE3E-4B98-B8E1-2870ACE19D95}.Release|x86.Build.0 = Release|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Debug|Any CPU.Build.0 = Debug|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Debug|x64.ActiveCfg = Debug|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Debug|x64.Build.0 = Debug|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Debug|x86.ActiveCfg = Debug|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Debug|x86.Build.0 = Debug|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Release|Any CPU.ActiveCfg = Release|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Release|Any CPU.Build.0 = Release|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Release|x64.ActiveCfg = Release|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Release|x64.Build.0 = Release|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Release|x86.ActiveCfg = Release|Any CPU {A30C54AF-25EE-49F8-A671-506E7B681BA4}.Release|x86.Build.0 = Release|Any CPU EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSection EndGlobal ```` ## File: packages/mcp-servers/customer-query/Dockerfile ```` # Build stage FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build WORKDIR /app # Copy csproj and restore dependencies COPY 
["AITravelAgent.CustomerQueryServer/AITravelAgent.CustomerQueryServer.csproj", "AITravelAgent.CustomerQueryServer/"] RUN dotnet restore "AITravelAgent.CustomerQueryServer/AITravelAgent.CustomerQueryServer.csproj" # Copy the rest of the source code COPY . . WORKDIR /app/AITravelAgent.CustomerQueryServer # Build and publish the app RUN dotnet publish "AITravelAgent.CustomerQueryServer.csproj" -c Release -o /app/publish # Runtime stage FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS runtime WORKDIR /app COPY --from=build /app/publish . EXPOSE 8080 ENTRYPOINT ["dotnet", "AITravelAgent.CustomerQueryServer.dll"] ```` ## File: packages/mcp-servers/destination-recommendation/.mvn/wrapper/maven-wrapper.properties ```` distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.9.6/apache-maven-3.9.6-bin.zip wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar ```` ## File: packages/mcp-servers/destination-recommendation/.mvn/wrapper/MavenWrapperDownloader.java ````java /* * Copyright 2007-present the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import java.net.*; import java.io.*; import java.nio.channels.*; import java.util.Properties; public class MavenWrapperDownloader { private static final String WRAPPER_VERSION = "0.5.6"; /** * Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided. */ private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/" + WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar"; /** * Path to the maven-wrapper.properties file, which might contain a downloadUrl property to * use instead of the default one. */ private static final String MAVEN_WRAPPER_PROPERTIES_PATH = ".mvn/wrapper/maven-wrapper.properties"; /** * Path where the maven-wrapper.jar will be saved to. */ private static final String MAVEN_WRAPPER_JAR_PATH = ".mvn/wrapper/maven-wrapper.jar"; /** * Name of the property which should be used to override the default download url for the wrapper. */ private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl"; public static void main(String args[]) { System.out.println("- Downloader started"); File baseDirectory = new File(args[0]); System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath()); // If the maven-wrapper.properties exists, read it and check if it contains a custom // wrapperUrl parameter. 
File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH); String url = DEFAULT_DOWNLOAD_URL; if(mavenWrapperPropertyFile.exists()) { FileInputStream mavenWrapperPropertyFileInputStream = null; try { mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile); Properties mavenWrapperProperties = new Properties(); mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream); url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url); } catch (IOException e) { System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'"); } finally { try { if(mavenWrapperPropertyFileInputStream != null) { mavenWrapperPropertyFileInputStream.close(); } } catch (IOException e) { // Ignore ... } } } System.out.println("- Downloading from: " + url); File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH); if(!outputFile.getParentFile().exists()) { if(!outputFile.getParentFile().mkdirs()) { System.out.println( "- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'"); } } System.out.println("- Downloading to: " + outputFile.getAbsolutePath()); try { downloadFileFromURL(url, outputFile); System.out.println("Done"); System.exit(0); } catch (Throwable e) { System.out.println("- Error downloading"); e.printStackTrace(); System.exit(1); } } private static void downloadFileFromURL(String urlString, File destination) throws Exception { if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) { String username = System.getenv("MVNW_USERNAME"); char[] password = System.getenv("MVNW_PASSWORD").toCharArray(); Authenticator.setDefault(new Authenticator() { @Override protected PasswordAuthentication getPasswordAuthentication() { return new PasswordAuthentication(username, password); } }); } URL website = new URL(urlString); ReadableByteChannel rbc; rbc = Channels.newChannel(website.openStream()); FileOutputStream fos = new FileOutputStream(destination); fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE); fos.close(); rbc.close(); } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/config/StartupConfig.java ````java package com.microsoft.mcp.sample.server.config; import org.springframework.beans.factory.annotation.Value; import org.springframework.boot.CommandLineRunner; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; /** * Configuration class that displays welcome and usage information at application startup. */ @Configuration public class StartupConfig { @Value("${destination.service.welcome:Welcome to the Destination Recommendation Service!}") private String welcomeMessage; @Value("${destination.service.usage:}") private String usageMessage; /** * Display startup information when the application launches. 
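* The CommandLineRunner bean below runs once the Spring context is ready, printing the banner, * the optional usage text, and the MCP endpoint (http://localhost:8080/mcp) to stdout.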
*/ @Bean public CommandLineRunner startupInfo() { return args -> { System.out.println("\n" + "=".repeat(80)); System.out.println(welcomeMessage); System.out.println("=".repeat(80)); if (usageMessage != null && !usageMessage.isEmpty()) { System.out.println("\nUsage Information:"); System.out.println(usageMessage); System.out.println("\nEndpoint: http://localhost:8080/mcp"); System.out.println("\nSee the README.md for more information on how to use the service."); } System.out.println("\nThe destination recommendation service is now ready to accept requests!"); System.out.println("=".repeat(80) + "\n"); }; } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/controller/HealthController.java ````java package com.microsoft.mcp.sample.server.controller; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; import com.microsoft.mcp.sample.server.service.DestinationService; import java.time.LocalDateTime; import java.util.HashMap; import java.util.Map; /** * Controller for health check and information endpoints. */ @RestController public class HealthController { private final DestinationService destinationService; @Autowired public HealthController(DestinationService destinationService) { this.destinationService = destinationService; } /** * Simple health check endpoint. * * @return Health status information */ @GetMapping("/health") public ResponseEntity<Map<String, Object>> healthCheck() { Map<String, Object> response = new HashMap<>(); response.put("status", "UP"); response.put("timestamp", LocalDateTime.now().toString()); response.put("service", "Destination Recommendation Service"); return ResponseEntity.ok(response); } /** * Information endpoint about the service. * * @return Service information */ @GetMapping("/info") public ResponseEntity<Map<String, Object>> serviceInfo() { Map<String, Object> response = new HashMap<>(); response.put("service", "Destination Recommendation Service"); response.put("version", "1.0.0"); response.put("endpoint", "/v1/tools"); Map<String, String> tools = new HashMap<>(); tools.put("getDestinationsByActivity", "Get destinations by activity type (BEACH, ADVENTURE, etc.)"); tools.put("getDestinationsByBudget", "Get destinations by budget (BUDGET, MODERATE, LUXURY)"); tools.put("getDestinationsBySeason", "Get destinations by season (SPRING, SUMMER, etc.)"); tools.put("getDestinationsByPreferences", "Get destinations matching multiple criteria"); tools.put("getAllDestinations", "Get all available destinations"); response.put("availableTools", tools); return ResponseEntity.ok(response); } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/exception/GlobalExceptionHandler.java ````java package com.microsoft.mcp.sample.server.exception; import org.springframework.http.HttpStatus; import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.ExceptionHandler; import org.springframework.web.bind.annotation.RestControllerAdvice; /** * Global exception handler for the destination recommendation service. */ @RestControllerAdvice public class GlobalExceptionHandler { /** * Handle IllegalArgumentException which occurs when invalid input is provided.
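* An example 400 response body, using the ErrorResponse shape defined at the bottom of this class: * {"code": "Invalid_Input", "message": "Invalid input parameter: ...", "resolution": "Please check your input values and try again."}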
* * @param ex The exception that was thrown * @return A response with error details */ @ExceptionHandler(IllegalArgumentException.class) public ResponseEntity<ErrorResponse> handleIllegalArgumentException(IllegalArgumentException ex) { ErrorResponse error = new ErrorResponse( "Invalid_Input", "Invalid input parameter: " + ex.getMessage(), "Please check your input values and try again."); return new ResponseEntity<>(error, HttpStatus.BAD_REQUEST); } /** * Handle generic exceptions that are not specifically handled elsewhere. * * @param ex The exception that was thrown * @return A response with error details */ @ExceptionHandler(Exception.class) public ResponseEntity<ErrorResponse> handleGenericException(Exception ex) { ErrorResponse error = new ErrorResponse( "Internal_Error", "An unexpected error occurred: " + ex.getMessage(), "Please try again later or contact support if the issue persists."); return new ResponseEntity<>(error, HttpStatus.INTERNAL_SERVER_ERROR); } /** * Simple error response class for consistent error reporting. */ public static class ErrorResponse { private String code; private String message; private String resolution; public ErrorResponse(String code, String message, String resolution) { this.code = code; this.message = message; this.resolution = resolution; } public String getCode() { return code; } public String getMessage() { return message; } public String getResolution() { return resolution; } } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/model/ActivityType.java ````java package com.microsoft.mcp.sample.server.model; /** * Enum representing different types of activities available at destinations. */ public enum ActivityType { BEACH, ADVENTURE, CULTURAL, RELAXATION, URBAN_EXPLORATION, NATURE, WINTER_SPORTS } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/model/BudgetCategory.java ````java package com.microsoft.mcp.sample.server.model; /** * Enum representing budget categories for destinations. */ public enum BudgetCategory { BUDGET, MODERATE, LUXURY } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/model/PreferenceRequest.java ````java package com.microsoft.mcp.sample.server.model; /** * Class representing user preferences for destination recommendations. * Using String types instead of enum references to avoid compilation issues. */ public class PreferenceRequest { private String preferredActivity; // Changed from ActivityType to String private String budgetCategory; // Changed from BudgetCategory to String private String preferredSeason; // Changed from Season to String private Boolean familyFriendly; private Integer numberOfDestinations; // Default constructor public PreferenceRequest() { this.numberOfDestinations = 3; // Default to returning 3 destinations } // Constructor public PreferenceRequest(String preferredActivity, String budgetCategory, String preferredSeason, Boolean familyFriendly, Integer numberOfDestinations) { this.preferredActivity = preferredActivity; this.budgetCategory = budgetCategory; this.preferredSeason = preferredSeason; this.familyFriendly = familyFriendly; this.numberOfDestinations = numberOfDestinations != null ?
numberOfDestinations : 3; } // Getters and setters public String getPreferredActivity() { return preferredActivity; } public void setPreferredActivity(String preferredActivity) { this.preferredActivity = preferredActivity; } public String getBudgetCategory() { return budgetCategory; } public void setBudgetCategory(String budgetCategory) { this.budgetCategory = budgetCategory; } public String getPreferredSeason() { return preferredSeason; } public void setPreferredSeason(String preferredSeason) { this.preferredSeason = preferredSeason; } public Boolean getFamilyFriendly() { return familyFriendly; } public void setFamilyFriendly(Boolean familyFriendly) { this.familyFriendly = familyFriendly; } public Integer getNumberOfDestinations() { return numberOfDestinations; } public void setNumberOfDestinations(Integer numberOfDestinations) { this.numberOfDestinations = numberOfDestinations != null ? numberOfDestinations : 3; } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/model/Season.java ````java package com.microsoft.mcp.sample.server.model; /** * Enum representing seasons when destinations are best to visit. */ public enum Season { SPRING, SUMMER, AUTUMN, WINTER, ALL_YEAR } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/service/DestinationService.java ````java package com.microsoft.mcp.sample.server.service; import org.springframework.ai.tool.annotation.Tool; import org.springframework.stereotype.Service; /** * Service for providing travel destination recommendations. */ @Service public class DestinationService { // Constants for activity types public static final String BEACH = "BEACH"; public static final String ADVENTURE = "ADVENTURE"; public static final String CULTURAL = "CULTURAL"; public static final String RELAXATION = "RELAXATION"; public static final String URBAN_EXPLORATION = "URBAN_EXPLORATION"; public static final String NATURE = "NATURE"; public static final String WINTER_SPORTS = "WINTER_SPORTS"; // Constants for budget categories public static final String BUDGET = "BUDGET"; public static final String MODERATE = "MODERATE"; public static final String LUXURY = "LUXURY"; // Constants for seasons public static final String SPRING = "SPRING"; public static final String SUMMER = "SUMMER"; public static final String AUTUMN = "AUTUMN"; public static final String WINTER = "WINTER"; public static final String ALL_YEAR = "ALL_YEAR"; /** * Echo back the input message * @param message The message to echo * @return The original message */ @Tool(description = "Echo back the input message exactly as received") public String echoMessage(String message) { return message; } /** * Recommend destinations based on activity type * @param activityType The preferred activity type (BEACH, ADVENTURE, CULTURAL, RELAXATION, URBAN_EXPLORATION, NATURE, WINTER_SPORTS) * @return A list of recommended destinations */ @Tool(description = "Get travel destination recommendations based on preferred activity type") public String getDestinationsByActivity(String activityType) { try { String activity = activityType.toUpperCase(); // Validate activity type if (!isValidActivityType(activity)) { return "Invalid activity type. Please use one of: BEACH, ADVENTURE, CULTURAL, RELAXATION, URBAN_EXPLORATION, NATURE, WINTER_SPORTS"; } return getDestinationsByPreference(activity, null, null, null); } catch (Exception e) { return "Invalid activity type. 
Please use one of: BEACH, ADVENTURE, CULTURAL, RELAXATION, URBAN_EXPLORATION, NATURE, WINTER_SPORTS"; } } // Helper method to validate activity types private boolean isValidActivityType(String activityType) { return activityType.equals(BEACH) || activityType.equals(ADVENTURE) || activityType.equals(CULTURAL) || activityType.equals(RELAXATION) || activityType.equals(URBAN_EXPLORATION) || activityType.equals(NATURE) || activityType.equals(WINTER_SPORTS); } /** * Recommend destinations based on budget category * @param budget The budget category (BUDGET, MODERATE, LUXURY) * @return A list of recommended destinations */ @Tool(description = "Get travel destination recommendations based on budget category") public String getDestinationsByBudget(String budget) { try { String budgetCategory = budget.toUpperCase(); // Validate budget category if (!isValidBudgetCategory(budgetCategory)) { return "Invalid budget category. Please use one of: BUDGET, MODERATE, LUXURY"; } return getDestinationsByPreference(null, budgetCategory, null, null); } catch (Exception e) { return "Invalid budget category. Please use one of: BUDGET, MODERATE, LUXURY"; } } // Helper method to validate budget categories private boolean isValidBudgetCategory(String budgetCategory) { return budgetCategory.equals(BUDGET) || budgetCategory.equals(MODERATE) || budgetCategory.equals(LUXURY); } /** * Recommend destinations based on season * @param season The preferred season (SPRING, SUMMER, AUTUMN, WINTER, ALL_YEAR) * @return A list of recommended destinations */ @Tool(description = "Get travel destination recommendations based on preferred season") public String getDestinationsBySeason(String season) { try { String preferredSeason = season.toUpperCase(); // Validate season if (!isValidSeason(preferredSeason)) { return "Invalid season. Please use one of: SPRING, SUMMER, AUTUMN, WINTER, ALL_YEAR"; } return getDestinationsByPreference(null, null, preferredSeason, null); } catch (Exception e) { return "Invalid season. Please use one of: SPRING, SUMMER, AUTUMN, WINTER, ALL_YEAR"; } } // Helper method to validate seasons private boolean isValidSeason(String season) { return season.equals(SPRING) || season.equals(SUMMER) || season.equals(AUTUMN) || season.equals(WINTER) || season.equals(ALL_YEAR); } /** * Recommend destinations based on multiple preferences * @param activity The preferred activity type * @param budget The budget category * @param season The preferred season * @param familyFriendly Whether the destination needs to be family-friendly * @return A list of recommended destinations */ @Tool(description = "Get travel destination recommendations based on multiple criteria") public String getDestinationsByPreferences(String activity, String budget, String season, Boolean familyFriendly) { try { // Set preferences if provided if (activity != null && !activity.isEmpty()) { String activityUpper = activity.toUpperCase(); if (!isValidActivityType(activityUpper)) { return "Invalid activity type. Please use one of: BEACH, ADVENTURE, CULTURAL, RELAXATION, URBAN_EXPLORATION, NATURE, WINTER_SPORTS"; } } if (budget != null && !budget.isEmpty()) { String budgetUpper = budget.toUpperCase(); if (!isValidBudgetCategory(budgetUpper)) { return "Invalid budget category. Please use one of: BUDGET, MODERATE, LUXURY"; } } if (season != null && !season.isEmpty()) { String seasonUpper = season.toUpperCase(); if (!isValidSeason(seasonUpper)) { return "Invalid season. 
Please use one of: SPRING, SUMMER, AUTUMN, WINTER, ALL_YEAR"; } } /* normalize case so the equals() comparisons in getDestinationsByPreference match the uppercase constants */ return getDestinationsByPreference(activity != null ? activity.toUpperCase() : null, budget != null ? budget.toUpperCase() : null, season != null ? season.toUpperCase() : null, familyFriendly); } catch (Exception e) { return "Invalid input. Please check your parameters and try again.\n" + "Activity types: BEACH, ADVENTURE, CULTURAL, RELAXATION, URBAN_EXPLORATION, NATURE, WINTER_SPORTS\n" + "Budget categories: BUDGET, MODERATE, LUXURY\n" + "Seasons: SPRING, SUMMER, AUTUMN, WINTER, ALL_YEAR"; } } /** * Get all available destinations * @return A list of all destinations */ @Tool(description = "Get a list of all available travel destinations") public String getAllDestinations() { return "Here are some popular travel destinations:\n\n" + "📍 Bali, Indonesia\n" + "⭐️ Beautiful beaches with vibrant culture and lush landscapes.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: SUMMER | Family Friendly: Yes\n\n" + "📍 Cancun, Mexico\n" + "⭐️ White sandy beaches with crystal clear waters and vibrant nightlife.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: WINTER | Family Friendly: Yes\n\n" + "📍 Maldives, Maldives\n" + "⭐️ Luxurious overwater bungalows and pristine beaches perfect for relaxation.\n" + "🏷️ Activity: BEACH | Budget: LUXURY | Best Season: ALL_YEAR | Family Friendly: Yes"; } /** * Helper method to get destinations based on preference */ private String getDestinationsByPreference(String activity, String budget, String season, Boolean familyFriendly) { // We'll return some hardcoded results based on the preferences if (activity != null && activity.equals(BEACH)) { return "Here are some beach destinations for you:\n\n" + "📍 Bali, Indonesia\n" + "⭐️ Beautiful beaches with vibrant culture and lush landscapes.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: SUMMER | Family Friendly: Yes\n\n" + "📍 Cancun, Mexico\n" + "⭐️ White sandy beaches with crystal clear waters and vibrant nightlife.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: WINTER | Family Friendly: Yes\n\n" + "📍 Maldives, Maldives\n" + "⭐️ Luxurious overwater bungalows and pristine beaches perfect for relaxation.\n" + "🏷️ Activity: BEACH | Budget: LUXURY | Best Season: ALL_YEAR | Family Friendly: Yes"; } else if (activity != null && activity.equals(CULTURAL)) { return "Here are some cultural destinations for you:\n\n" + "📍 Kyoto, Japan\n" + "⭐️ Ancient temples, traditional gardens, and rich cultural heritage.\n" + "🏷️ Activity: CULTURAL | Budget: MODERATE | Best Season: SPRING | Family Friendly: Yes\n\n" + "📍 Rome, Italy\n" + "⭐️ Historic city with ancient ruins, art, and delicious cuisine.\n" + "🏷️ Activity: CULTURAL | Budget: MODERATE | Best Season: SPRING | Family Friendly: Yes\n\n" + "📍 Prague, Czech Republic\n" + "⭐️ Historic architecture, affordable dining, and rich cultural experiences.\n" + "🏷️ Activity: CULTURAL | Budget: BUDGET | Best Season: SPRING | Family Friendly: Yes"; } else if (budget != null && budget.equals(LUXURY)) { return "Here are some luxury destinations for you:\n\n" + "📍 Maldives, Maldives\n" + "⭐️ Luxurious overwater bungalows and pristine beaches perfect for relaxation.\n" + "🏷️ Activity: BEACH | Budget: LUXURY | Best Season: ALL_YEAR | Family Friendly: Yes\n\n" + "📍 Santorini, Greece\n" + "⭐️ Beautiful sunsets, white-washed buildings, and Mediterranean cuisine.\n" + "🏷️ Activity: RELAXATION | Budget: LUXURY | Best Season: SUMMER | Family Friendly: Yes\n\n" + "📍 Aspen, USA\n" + "⭐️ World-class skiing, snowboarding, and luxurious alpine village.\n" + "🏷️ Activity: WINTER_SPORTS | Budget: LUXURY | Best Season:
WINTER | Family Friendly: No"; } else if (season != null && season.equals(WINTER)) { return "Here are some winter destinations for you:\n\n" + "📍 Aspen, USA\n" + "⭐️ World-class skiing, snowboarding, and luxurious alpine village.\n" + "🏷️ Activity: WINTER_SPORTS | Budget: LUXURY | Best Season: WINTER | Family Friendly: No\n\n" + "📍 Chamonix, France\n" + "⭐️ Epic skiing and snowboarding with stunning Mont Blanc views.\n" + "🏷️ Activity: WINTER_SPORTS | Budget: LUXURY | Best Season: WINTER | Family Friendly: Yes\n\n" + "📍 Cancun, Mexico\n" + "⭐️ White sandy beaches with crystal clear waters and vibrant nightlife.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: WINTER | Family Friendly: Yes"; } else if (familyFriendly != null && familyFriendly) { return "Here are some family-friendly destinations for you:\n\n" + "📍 Bali, Indonesia\n" + "⭐️ Beautiful beaches with vibrant culture and lush landscapes.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: SUMMER | Family Friendly: Yes\n\n" + "📍 Cancun, Mexico\n" + "⭐️ White sandy beaches with crystal clear waters and vibrant nightlife.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: WINTER | Family Friendly: Yes\n\n" + "📍 Kyoto, Japan\n" + "⭐️ Ancient temples, traditional gardens, and rich cultural heritage.\n" + "🏷️ Activity: CULTURAL | Budget: MODERATE | Best Season: SPRING | Family Friendly: Yes"; } else { return "Here are some popular travel destinations:\n\n" + "📍 Bali, Indonesia\n" + "⭐️ Beautiful beaches with vibrant culture and lush landscapes.\n" + "🏷️ Activity: BEACH | Budget: MODERATE | Best Season: SUMMER | Family Friendly: Yes\n\n" + "📍 Kyoto, Japan\n" + "⭐️ Ancient temples, traditional gardens, and rich cultural heritage.\n" + "🏷️ Activity: CULTURAL | Budget: MODERATE | Best Season: SPRING | Family Friendly: Yes\n\n" + "📍 New York City, USA\n" + "⭐️ Iconic skyline, diverse neighborhoods, world-class museums, and entertainment.\n" + "🏷️ Activity: URBAN_EXPLORATION | Budget: LUXURY | Best Season: ALL_YEAR | Family Friendly: Yes"; } } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/java/com/microsoft/mcp/sample/server/McpServerApplication.java ````java package com.microsoft.mcp.sample.server; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.ai.tool.ToolCallbackProvider; import org.springframework.ai.tool.method.MethodToolCallbackProvider; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.Bean; import com.microsoft.mcp.sample.server.service.DestinationService; @SpringBootApplication public class McpServerApplication { private static final Logger logger = LoggerFactory.getLogger(McpServerApplication.class); public static void main(String[] args) { SpringApplication.run(McpServerApplication.class, args); } @Bean public ToolCallbackProvider destinationTools(DestinationService destinationService) { return MethodToolCallbackProvider.builder().toolObjects(destinationService).build(); } } ```` ## File: packages/mcp-servers/destination-recommendation/src/main/resources/application.yml ````yaml # Using spring-ai-starter-mcp-server-streamable-webmvc spring: ai: mcp: server: protocol: STREAMABLE name: streamable-mcp-server version: 1.0.0 type: SYNC instructions: "This streamable server provides real-time notifications" resource-change-notification: true mcp-change-notification: true prompt-change-notification: true streamable-http: mcp-endpoint: 
/mcp keep-alive-interval: 30s ```` ## File: packages/mcp-servers/destination-recommendation/src/main/resources/banner.txt ```` _____ _ _ _ _ | __ \ | | (_) | | (_) | | | | ___ ___| |_ _ _ __ __ _| |_ _ ___ _ __ ___ | | | |/ _ \/ __| __| | '_ \ / _` | __| |/ _ \| '_ \/ __| | |__| | __/\__ \ |_| | | | | (_| | |_| | (_) | | | \__ \ |_____/ \___||___/\__|_|_| |_|\__,_|\__|_|\___/|_| |_|___/ Recommendation Service v1.0 Spring AI MCP-enabled Travel Assistant ```` ## File: packages/mcp-servers/destination-recommendation/.gitignore ```` # Compiled class files *.class # Log files *.log # BlueJ files *.ctxt # Mobile Tools for Java (J2ME) .mtj.tmp/ # Package Files *.jar *.war *.nar *.ear *.zip *.tar.gz *.rar # virtual machine crash logs hs_err_pid* replay_pid* # Maven target/ pom.xml.tag pom.xml.releaseBackup pom.xml.versionsBackup pom.xml.next release.properties dependency-reduced-pom.xml buildNumber.properties .mvn/timing.properties .mvn/wrapper/maven-wrapper.jar # Gradle .gradle/ build/ !gradle/wrapper/gradle-wrapper.jar # IntelliJ IDEA .idea/ *.iws *.iml *.ipr out/ # Eclipse .apt_generated .classpath .factorypath .project .settings .springBeans .sts4-cache bin/ # NetBeans /nbproject/private/ /nbbuild/ /dist/ /nbdist/ /.nb-gradle/ # VS Code .vscode/ !.vscode/settings.json !.vscode/tasks.json !.vscode/launch.json !.vscode/extensions.json # Spring Boot .spring-boot-devtools # Misc .DS_Store Thumbs.db ```` ## File: packages/mcp-servers/destination-recommendation/Dockerfile ```` # Build stage FROM maven:3.9.11-eclipse-temurin-24-noble AS build WORKDIR /app # Copy only the POM file first to cache dependencies COPY pom.xml ./ # Download dependencies - this layer will be cached unless pom.xml changes RUN mvn dependency:go-offline # Now copy the source code (which changes more frequently) COPY src ./src/ # Build the application RUN mvn clean package -DskipTests # Runtime stage FROM eclipse-temurin:25-jdk-noble WORKDIR /app # Copy the built artifact from the build stage COPY --from=build /app/target/destination-server-0.0.1-SNAPSHOT.jar /app/application.jar # Expose the port your application runs on EXPOSE 8080 # Run the application ENTRYPOINT ["java", "-jar", "/app/application.jar"] ```` ## File: packages/mcp-servers/destination-recommendation/pom.xml ````xml 4.0.0 org.springframework.boot spring-boot-starter-parent 3.5.5 com.example destination-server 0.0.1-SNAPSHOT Destination Server Sample Spring Boot application demonstrating MCP client and server usage Apache License, Version 2.0 https://www.apache.org/licenses/LICENSE-2.0 repo org.springframework.ai spring-ai-bom 1.1.0-SNAPSHOT pom import org.springframework.ai spring-ai-starter-mcp-server-webmvc org.springframework.boot spring-boot-starter-actuator org.springframework.boot spring-boot-starter-test test org.springframework.boot spring-boot-starter-webflux test io.projectreactor reactor-test test org.springframework.ai spring-ai-starter-mcp-client-webflux test org.springframework.boot spring-boot-maven-plugin org.apache.maven.plugins maven-compiler-plugin 21 21 21 21 Central Portal Snapshots central-portal-snapshots https://central.sonatype.com/repository/maven-snapshots/ false true spring-milestones Spring Milestones https://repo.spring.io/milestone false spring-snapshots Spring Snapshots https://repo.spring.io/snapshot false ```` ## File: packages/mcp-servers/echo-ping/src/index.ts ````typescript import dotenv from "dotenv"; dotenv.config(); import { meter } from "./instrumentation.js"; import { Server } from 
"@modelcontextprotocol/sdk/server/index.js"; import express, { Request, Response } from "express"; import { EchoMCPServer } from "./server.js"; import { tokenProvider } from "./token-provider.js"; const server = new EchoMCPServer( new Server( { name: "echo-ping-http-server", version: "1.0.0", }, { capabilities: { tools: {}, }, } ) ); const messageMeter = meter.createCounter("message", { description: "Number of messages sent", }); const connectionCount = meter.createCounter("connection", { description: "Number of connections to the server", }); const app = express(); app.use(express.json()); const router = express.Router(); const MCP_ENDPOINT = "/mcp"; // Add logging middleware router.use((req, res, next) => { console.log(`Received ${req.method} request for ${req.originalUrl}`); // log headers console.log("Request Headers:", JSON.stringify(req.headers, null, 2)); // log body if present if (req.body && Object.keys(req.body).length > 0) { console.log("Request Body:", JSON.stringify(req.body, null, 2)); } else { console.log("Request Body: (empty)"); } // log query parameters if (Object.keys(req.query).length > 0) { console.log("Query Parameters:", JSON.stringify(req.query, null, 2)); } else { console.log("Query Parameters: (none)"); } next(); }); // Breaker token authentication middleware router.use((req, res, next) => { console.log(`Received ${req.method} request for ${req.originalUrl}`); const expectedToken = tokenProvider().getToken(); if (!expectedToken) { console.error("MCP_ECHO_PING_ACCESS_TOKEN is not set in environment."); res.status(500).json({ error: "Server misconfiguration." }); return; } const authHeader = (req.headers["authorization"] || req.headers["Authorization"]) as string; if (!authHeader || !authHeader.startsWith("Bearer ")) { console.error( 'Missing or invalid Authorization header. Got "' + authHeader + '"' ); res.status(401).json({ error: "Missing or invalid Authorization header." }); return; } const token = authHeader.substring("Bearer ".length); if (token !== expectedToken) { res.status(401).json({ error: "Invalid breaker token." 
}); return; } console.log(`Successfully authenticated request with bearer token.`); next(); }); router.get("/", (req: Request, res: Response) => { res.status(200).json({ message: "MCP Stateless Streamable HTTP Server is running", endpoint: MCP_ENDPOINT, }); }); router.post(MCP_ENDPOINT, async (req: Request, res: Response) => { messageMeter.add(1, { method: "POST", }); await server.handlePostRequest(req, res); }); router.get(MCP_ENDPOINT, async (req: Request, res: Response) => { connectionCount.add(1, { method: "GET", }); await server.handleGetRequest(req, res); }); app.use("/", router); const PORT = process.env.PORT || 3000; app.listen(PORT, () => { console.log(`MCP Stateless Streamable HTTP Server`); console.log(`MCP endpoint: http://localhost:${PORT}${MCP_ENDPOINT}`); console.log(`Press Ctrl+C to stop the server`); }); process.on("SIGINT", async () => { console.error("Shutting down server..."); await server.close(); process.exit(0); }); ```` ## File: packages/mcp-servers/echo-ping/src/instrumentation.ts ````typescript import { metrics, trace } from "@opentelemetry/api"; import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"; import { OTLPLogExporter } from "@opentelemetry/exporter-logs-otlp-grpc"; import { OTLPMetricExporter } from "@opentelemetry/exporter-metrics-otlp-grpc"; import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc"; import { OTLPExporterConfigBase } from "@opentelemetry/otlp-exporter-base"; import { resourceFromAttributes } from "@opentelemetry/resources"; import { LoggerProvider, SimpleLogRecordProcessor } from "@opentelemetry/sdk-logs"; import { PeriodicExportingMetricReader } from "@opentelemetry/sdk-metrics"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { ATTR_SERVICE_NAME, ATTR_SERVICE_VERSION, } from "@opentelemetry/semantic-conventions"; const otlpEndpoint = process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4317"; const otlpHeaders = process.env.OTEL_EXPORTER_OTLP_HEADERS || "x-otlp-header=header-value"; const otlpServiceName = process.env.OTEL_SERVICE_NAME || "mcp-echo-ping"; const otlpServiceVersion = process.env.OTLP_SERVICE_VERSION || "1.0"; const otlpOptions = { url: otlpEndpoint, headers: otlpHeaders.split(",").reduce((acc, header) => { const [key, value] = header.split("="); acc[key] = value; return acc; }, {} as Record<string, string>), } as OTLPExporterConfigBase; const resource = resourceFromAttributes({ [ATTR_SERVICE_NAME]: otlpServiceName, [ATTR_SERVICE_VERSION]: otlpServiceVersion, }); const sdk = new NodeSDK({ resource, logRecordProcessor: new SimpleLogRecordProcessor( new OTLPLogExporter(otlpOptions) ), traceExporter: new OTLPTraceExporter(otlpOptions), metricReader: new PeriodicExportingMetricReader({ exporter: new OTLPMetricExporter(otlpOptions), }), instrumentations: [getNodeAutoInstrumentations()], }); sdk.start(); const loggerProvider = new LoggerProvider({ resource }); const logExporter = new OTLPLogExporter(otlpOptions); loggerProvider.addLogRecordProcessor(new SimpleLogRecordProcessor(logExporter)); const tracer = trace.getTracer(otlpServiceName, otlpServiceVersion); const meter = metrics.getMeter(otlpServiceName, otlpServiceVersion); const logger = loggerProvider.getLogger(otlpServiceName); function log( message: string, attributes: Record<string, any> = {}, level: string = "INFO" ) { logger.emit({ severityText: level, body: message, attributes: { service: otlpServiceName, version: otlpServiceVersion, ...attributes, }, }); } export { log, meter, tracer }; ```` ## File:
packages/mcp-servers/echo-ping/src/server.ts ````typescript import { Server } from '@modelcontextprotocol/sdk/server/index.js'; import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js'; import { CallToolRequestSchema, JSONRPCError, JSONRPCNotification, ListToolsRequestSchema, LoggingMessageNotification, Notification, } from '@modelcontextprotocol/sdk/types.js'; import { Request, Response } from 'express'; import { randomUUID } from 'node:crypto'; import { EchoTools } from './tools.js'; const JSON_RPC = '2.0'; const JSON_RPC_ERROR = -32603; export class EchoMCPServer { server: Server; constructor(server: Server) { this.server = server; this.setupServerRequestHandlers(); } async close() { console.log('Shutting down server...'); await this.server.close(); console.log('Server shutdown complete.'); } async handleGetRequest(req: Request, res: Response) { console.log('Responded to GET with 405 Method Not Allowed'); res.status(405).json(this.createRPCErrorResponse('Method not allowed.')); } async handlePostRequest(req: Request, res: Response) { console.log(`POST ${req.originalUrl} (${req.ip}) - payload:`, req.body); try { const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined, }); console.log('Connecting transport to server...'); await this.server.connect(transport); console.log('Transport connected. Handling request...'); await transport.handleRequest(req, res, req.body); res.on('close', () => { console.log('Request closed by client'); transport.close(); this.server.close(); }); await this.sendMessages(transport); console.log( `POST request handled successfully (status=${res.statusCode})` ); } catch (error) { console.error('Error handling MCP request:', error); if (!res.headersSent) { res .status(500) .json(this.createRPCErrorResponse('Internal server error.')); console.error('Responded with 500 Internal Server Error'); } } } private setupServerRequestHandlers() { this.server.setRequestHandler(ListToolsRequestSchema, async (_request) => { return { jsonrpc: JSON_RPC, tools: EchoTools, }; }); this.server.setRequestHandler( CallToolRequestSchema, async (request, _extra) => { const args = request.params.arguments; const toolName = request.params.name; const tool = EchoTools.find((tool) => tool.name === toolName); console.log(`Handling CallToolRequest for tool: ${toolName}`); if (!tool) { console.error(`Tool ${toolName} not found.`); return this.createRPCErrorResponse(`Tool ${toolName} not found.`); } try { const result = await tool.execute(args as any); console.log(`Tool ${toolName} executed. Result:`, result); return { jsonrpc: JSON_RPC, content: [ { type: 'text', text: `Tool ${toolName} executed with arguments ${JSON.stringify( args )}. 
Result: ${JSON.stringify(result)}`, }, ], }; } catch (error) { console.error(`Error executing tool ${toolName}:`, error); return this.createRPCErrorResponse( `Error executing tool ${toolName}: ${error}` ); } } ); } private async sendMessages(transport: StreamableHTTPServerTransport) { const message: LoggingMessageNotification = { method: 'notifications/message', params: { level: 'info', data: 'SSE Connection established' }, }; console.log('Sending SSE connection established notification.'); this.sendNotification(transport, message); } private async sendNotification( transport: StreamableHTTPServerTransport, notification: Notification ) { const rpcNotification: JSONRPCNotification = { ...notification, jsonrpc: JSON_RPC, }; console.log(`Sending notification: ${notification.method}`); await transport.send(rpcNotification); } private createRPCErrorResponse(message: string): JSONRPCError { return { jsonrpc: JSON_RPC, error: { code: JSON_RPC_ERROR, message: message, }, id: randomUUID(), }; } } ```` ## File: packages/mcp-servers/echo-ping/src/token-provider.ts ````typescript // This is a sample token provider; in a real application this would be replaced with a more secure implementation export function tokenProvider() { return { getToken: () => { const token = process.env["MCP_ECHO_PING_ACCESS_TOKEN"]; if (!token) { console.error("MCP_ECHO_PING_ACCESS_TOKEN is not set in environment."); throw new Error( "Server misconfiguration: MCP_ECHO_PING_ACCESS_TOKEN is not set." ); } return token; }, }; } ```` ## File: packages/mcp-servers/echo-ping/src/tools.ts ````typescript import { log, tracer, meter } from "./instrumentation.js"; export const EchoTools = [ { name: "echo", description: "Echo back the input values. Useful for testing and debugging.", inputSchema: { type: "object", properties: { text: { type: "string" }, }, required: ["text"], }, outputSchema: { type: "object", properties: { content: { type: "array", items: { type: "object", properties: { type: { type: "string" }, text: { type: "string" } }, }, }, }, }, async execute({ text }: { text: string }) { return tracer.startActiveSpan("echo", async (span) => { log("Received request to echo:", { text }); span.addEvent("echo"); span.end(); return await Promise.resolve({ content: [ { type: "text" as const, text: `Echoed text: ${text} - from the server at ${new Date().toISOString()}`, }, ], }); }); }, }, ]; ```` ## File: packages/mcp-servers/echo-ping/Dockerfile ```` # Build stage FROM node:24-slim AS builder WORKDIR /app COPY package*.json ./ RUN npm i COPY . .
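# Compile the TypeScript sources; the dist output is copied into the production stage below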
RUN npm run build # Production stage FROM node:24-slim AS production WORKDIR /app COPY --from=builder /app/package*.json ./ COPY --from=builder /app/dist ./dist # Remove dev dependencies and cache RUN npm ci --omit=dev && npm cache clean --force # Use a non-root user for security RUN addgroup --system appgroup && adduser --system --ingroup appgroup appuser USER appuser EXPOSE 3000 CMD ["node", "./dist/index.js"] ```` ## File: packages/mcp-servers/itinerary-planning/src/app.py ````python import logging import uvicorn from starlette.responses import HTMLResponse from starlette.routing import Route from mcp_server import mcp logger = logging.getLogger(__name__) # Not necessary for MCP, but demonstrates the inclusion of non-MCP routes async def homepage(request): return HTMLResponse("Itinerary planning MCP server") app = mcp.streamable_http_app() app.router.routes.append(Route("/", endpoint=homepage)) def run(): """Start the Starlette server""" uvicorn.run(app, host="0.0.0.0", port=8000) if __name__ == "__main__": run() ```` ## File: packages/mcp-servers/itinerary-planning/src/mcp_server.py ````python import random import re import uuid from dataclasses import dataclass from datetime import datetime, timedelta from typing import Annotated from faker import Faker from mcp.server.fastmcp import FastMCP from pydantic import Field mcp = FastMCP("Itinerary planning") fake = Faker() @dataclass class Hotel: name: str address: str location: str rating: float price_per_night: float hotel_type: str amenities: list[str] available_rooms: int @dataclass class HotelSuggestions: hotels: list[Hotel] @dataclass class Airport: code: str name: str city: str @dataclass class FlightSegment: flight_number: str from_airport: Airport to_airport: Airport departure: str arrival: str duration_minutes: int @dataclass class FlightConnection: airport_code: str duration_minutes: int @dataclass class Flight: flight_id: str airline: str flight_number: str aircraft: str from_airport: Airport to_airport: Airport departure: str arrival: str duration_minutes: int is_direct: bool price: float currency: str available_seats: int cabin_class: str segments: list[FlightSegment] connection: FlightConnection | None @dataclass class FlightSuggestions: departure_flights: list[Flight] return_flights: list[Flight] def validate_iso_date(date_str: str, param_name: str) -> datetime.date: """ Validates that a string is in ISO format (YYYY-MM-DD) and returns the parsed date. Args: date_str: The date string to validate param_name: Name of the parameter for error messages Returns: The parsed date object Raises: ValueError: If the date is not in ISO format or is invalid """ iso_pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$") if not iso_pattern.match(date_str): raise ValueError(f"{param_name} must be in ISO format (YYYY-MM-DD), got: {date_str}") try: return datetime.strptime(date_str, "%Y-%m-%d").date() except ValueError as e: raise ValueError(f"Invalid {param_name}: {e}") @mcp.tool() async def suggest_hotels( location: Annotated[str, Field(description="Location (city or area) to search for hotels")], check_in: Annotated[str, Field(description="Check-in date in ISO format (YYYY-MM-DD)")], check_out: Annotated[str, Field(description="Check-out date in ISO format (YYYY-MM-DD)")], ) -> HotelSuggestions: """ Suggest hotels based on location and dates. 
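
    Args:
        location: Location (city or area) to search for hotels.
        check_in: Check-in date in ISO format (YYYY-MM-DD).
        check_out: Check-out date in ISO format (YYYY-MM-DD).

    Returns:
        HotelSuggestions with 3-8 randomly generated hotels, sorted by rating (highest first).

    Raises:
        ValueError: If a date is not in ISO format or check_out is not after check_in.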
""" # Validate dates check_in_date = validate_iso_date(check_in, "check_in") check_out_date = validate_iso_date(check_out, "check_out") # Ensure check_out is after check_in if check_out_date <= check_in_date: raise ValueError("check_out date must be after check_in date") # Create realistic mock data for hotels hotel_types = ["Luxury", "Boutique", "Budget", "Business"] amenities = ["Free WiFi", "Pool", "Spa", "Gym", "Restaurant", "Bar", "Room Service", "Parking"] # Generate a rating between 3.0 and 5.0 def generate_rating(): return round(random.uniform(3.0, 5.0), 1) # Generate a price based on hotel type def generate_price(hotel_type): price_ranges = { "Luxury": (250, 600), "Boutique": (180, 350), "Budget": (80, 150), "Resort": (200, 500), "Business": (150, 300), } min_price, max_price = price_ranges.get(hotel_type, (100, 300)) return round(random.uniform(min_price, max_price)) # Generate between 3 and 8 hotels num_hotels = random.randint(3, 8) hotels = [] neighborhoods = [ "Downtown", "Historic District", "Waterfront", "Business District", "Arts District", "University Area", ] for i in range(num_hotels): hotel_type = random.choice(hotel_types) hotel_amenities = random.sample(amenities, random.randint(3, 6)) neighborhood = random.choice(neighborhoods) hotel = Hotel( name=f"{hotel_type} {['Hotel', 'Inn', 'Suites', 'Resort', 'Plaza'][random.randint(0, 4)]}", address=fake.street_address(), location=f"{neighborhood}, {location}", rating=generate_rating(), price_per_night=generate_price(hotel_type), hotel_type=hotel_type, amenities=hotel_amenities, available_rooms=random.randint(1, 15), ) hotels.append(hotel) # Sort by rating to show best hotels first hotels.sort(key=lambda x: x.rating, reverse=True) return HotelSuggestions(hotels=hotels) @mcp.tool() async def suggest_flights( from_location: Annotated[str, Field(description="Departure location (city or airport)")], to_location: Annotated[str, Field(description="Destination location (city or airport)")], departure_date: Annotated[str, Field(description="Departure date in ISO format (YYYY-MM-DD)")], return_date: Annotated[str, Field(description="Return date in ISO format (YYYY-MM-DD)")], ) -> FlightSuggestions: """ Suggest flights based on locations and dates. 
""" # Validate dates dep_date = validate_iso_date(departure_date, "departure_date") ret_date = validate_iso_date(return_date, "return_date") # Ensure return date is after departure date if ret_date <= dep_date: return "Error: return_date must be after departure_date" # Create realistic mock data for flights airlines = [ "SkyWings", "Global Air", "Atlantic Airways", "Pacific Express", "Mountain Jets", "Stellar Airlines", "Sunshine Airways", "Northern Flights", ] aircraft_types = ["Boeing 737", "Airbus A320", "Boeing 787", "Airbus A350", "Embraer E190", "Bombardier CRJ900"] # Generate airport codes based on locations def generate_airport_code(city): # Simple simulation of airport codes # In reality, this would use a database of real airport codes vowels = "AEIOU" consonants = "BCDFGHJKLMNPQRSTVWXYZ" # Use first letter of city if possible first_char = city[0].upper() if first_char in consonants: code = first_char else: code = random.choice(consonants) # Add two random letters, preferring consonants for _ in range(2): if random.random() < 0.7: # 70% chance of consonant code += random.choice(consonants) else: code += random.choice(vowels) return code from_code = generate_airport_code(from_location) to_code = generate_airport_code(to_location) # Generate departure flights departure_flights = [] num_dep_flights = random.randint(3, 7) for _ in range(num_dep_flights): # Generate departure time (between 6 AM and 10 PM) hour = random.randint(6, 22) minute = random.choice([0, 15, 30, 45]) # Convert date to datetime before setting hour and minute dep_time = datetime.combine(dep_date, datetime.min.time()).replace(hour=hour, minute=minute) # Flight duration between 1 and 8 hours flight_minutes = random.randint(60, 480) arr_time = dep_time + timedelta(minutes=flight_minutes) # Determine if this is a direct or connecting flight is_direct = random.random() < 0.6 # 60% chance of direct flight from_airport = Airport( code=from_code, name=f"{from_location} International Airport", city=from_location, ) to_airport = Airport( code=to_code, name=f"{to_location} International Airport", city=to_location, ) # Add connection info for non-direct flights flight_segments = [] connection_airport = None connection_duration_minutes = 0 if not is_direct: # Create a connection point connection_codes = ["ATL", "ORD", "DFW", "LHR", "CDG", "DXB", "AMS", "FRA"] connection_code = random.choice(connection_codes) # Split the flight into segments segment1_duration = round(flight_minutes * random.uniform(0.3, 0.7)) segment2_duration = flight_minutes - segment1_duration connection_time = random.randint(45, 180) # between 45 minutes and 3 hours segment1_arrival = dep_time + timedelta(minutes=segment1_duration) segment2_departure = segment1_arrival + timedelta(minutes=connection_time) flight_segments = [ FlightSegment( flight_number=f"{random.choice('ABCDEFG')}{random.randint(100, 9999)}", from_airport=from_airport, to_airport=Airport( code=connection_code, name=f"{connection_code} International Airport", city=connection_code, ), departure=dep_time.isoformat(), arrival=segment1_arrival.isoformat(), duration_minutes=segment1_duration, ), FlightSegment( flight_number=f"{random.choice('ABCDEFG')}{random.randint(100, 9999)}", from_airport=Airport( code=connection_code, name=f"{connection_code} International Airport", city=connection_code, ), to_airport=to_airport, departure=segment2_departure.isoformat(), arrival=arr_time.isoformat(), duration_minutes=segment2_duration, ), ] connection_airport = connection_code connection_duration_minutes = 
connection_time flight = Flight( flight_id=str(uuid.uuid4())[:8].upper(), airline=random.choice(airlines), flight_number=f"{random.choice('ABCDEFG')}{random.randint(100, 9999)}", aircraft=random.choice(aircraft_types), from_airport=from_airport, to_airport=to_airport, departure=dep_time.isoformat(), arrival=arr_time.isoformat(), duration_minutes=flight_minutes, is_direct=is_direct, price=round(random.uniform(99, 999), 2), currency="USD", available_seats=random.randint(1, 30), cabin_class=random.choice(["Economy", "Premium Economy", "Business", "First"]), segments=flight_segments, connection=FlightConnection( airport_code=connection_airport, duration_minutes=connection_duration_minutes, ) if not is_direct else None, ) departure_flights.append(flight) # Generate return flights if return_date is provided return_flights = [] if ret_date: num_ret_flights = random.randint(3, 7) for _ in range(num_ret_flights): # Similar logic as departure flights but for return hour = random.randint(6, 22) minute = random.choice([0, 15, 30, 45]) # Convert date to datetime before setting hour and minute dep_time = datetime.combine(ret_date, datetime.min.time()).replace(hour=hour, minute=minute) flight_minutes = random.randint(60, 480) arr_time = dep_time + timedelta(minutes=flight_minutes) is_direct = random.random() < 0.6 from_airport=Airport( code=to_code, name=f"{to_location} International Airport", city=to_location ) to_airport=Airport( code=from_code, name=f"{from_location} International Airport", city=from_location ) # Add connection info for non-direct flights flight_segments = [] connection_airport = None connection_duration_minutes = None if not is_direct: connection_codes = ["ATL", "ORD", "DFW", "LHR", "CDG", "DXB", "AMS", "FRA"] connection_code = random.choice(connection_codes) segment1_duration = round(flight_minutes * random.uniform(0.3, 0.7)) segment2_duration = flight_minutes - segment1_duration connection_time = random.randint(45, 180) segment1_arrival = dep_time + timedelta(minutes=segment1_duration) segment2_departure = segment1_arrival + timedelta(minutes=connection_time) flight_segments = [ FlightSegment( flight_number=f"{random.choice('ABCDEFG')}{random.randint(100, 9999)}", from_airport=from_airport, to_airport=Airport( code=connection_code, name=f"{connection_code} International Airport", city=connection_code, ), departure=dep_time.isoformat(), arrival=segment1_arrival.isoformat(), duration_minutes=segment1_duration, ), FlightSegment( flight_number=f"{random.choice('ABCDEFG')}{random.randint(100, 9999)}", from_airport=Airport( code=connection_code, name=f"{connection_code} International Airport", city=connection_code, ), to_airport=to_airport, departure=segment2_departure.isoformat(), arrival=arr_time.isoformat(), duration_minutes=segment2_duration, ), ] connection_airport = connection_code connection_duration_minutes = connection_time flight = Flight( flight_id=str(uuid.uuid4())[:8].upper(), airline=random.choice(airlines), flight_number=f"{random.choice('ABCDEFG')}{random.randint(100, 9999)}", aircraft=random.choice(aircraft_types), from_airport=from_airport, to_airport=to_airport, departure=dep_time.isoformat(), arrival=arr_time.isoformat(), duration_minutes=flight_minutes, is_direct=is_direct, price=round(random.uniform(99, 999), 2), currency="USD", available_seats=random.randint(1, 30), cabin_class=random.choice(["Economy", "Premium Economy", "Business", "First"]), segments=flight_segments, connection=FlightConnection( airport_code=connection_airport, 
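# attached only for connecting flights; the trailing conditional yields None for direct ones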
duration_minutes=connection_duration_minutes, ) if not is_direct else None, ) return_flights.append(flight) # Combine into a single response response = FlightSuggestions( departure_flights=departure_flights, return_flights=return_flights if ret_date else [] ) # Return the flights return response if __name__ == "__main__": mcp.run(transport="sse") ```` ## File: packages/mcp-servers/itinerary-planning/.dockerignore ```` .venv build .ruff_cache ```` ## File: packages/mcp-servers/itinerary-planning/Dockerfile ```` FROM python:3.13-slim-bookworm # The installer requires curl (and certificates) to download the release archive RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates # Download the latest installer ADD https://astral.sh/uv/install.sh /uv-installer.sh # Run the installer then remove it RUN sh /uv-installer.sh && rm /uv-installer.sh # Ensure the installed binary is on the `PATH` ENV PATH="/root/.local/bin/:$PATH" # Copy the project into the image ADD . /app # Sync the project into a new environment, using the frozen lockfile WORKDIR /app RUN uv sync --frozen EXPOSE 8000 CMD ["uv", "run", "start"] ```` ## File: packages/mcp-servers/itinerary-planning/pyproject.toml ````toml [project] name = "itinerary-planning-mcp-server" version = "0.2.0" description = "A working example of an MCP server for itinerary planning." requires-python = ">=3.12" dependencies = [ "httpx>=0.28.1", "mcp[cli]>=1.10.1", "starlette>=0.46.1", "uvicorn>=0.34.0", "faker>=37.1.0" ] [project.scripts] start = "app:run" [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [tool.setuptools] package-dir = {"" = "src"} [tool.ruff] line-length = 120 target-version = "py313" lint.select = ["E", "F", "I", "UP", "A"] lint.ignore = ["D203"] ```` ## File: packages/ui-angular/src/app/chat-conversation/chat-conversation.component.css ````css :host { display: flex; justify-content: center; padding: 1rem; min-height: 100%; box-sizing: border-box; } .message-bubble { transition: opacity 0.3s ease; } .user-message { background-color: hsl(var(--primary)); color: hsl(var(--primary-foreground)); } :host-context(.dark) .timestamp { color: hsl(210 40% 90%); /* Brighter timestamp in dark mode */ opacity: 0.85; /* Slightly higher opacity for better readability */ } :host-context(.dark) *[hlmInput] { border-color: white !important; /* Override the border color in dark mode */ } .timestamp { opacity: 0.7; font-size: 0.75rem; } ```` ## File: packages/ui-angular/src/app/chat-conversation/chat-conversation.component.spec.ts ````typescript import { ComponentFixture, TestBed } from '@angular/core/testing'; import { ChatConversationComponent } from './chat-conversation.component'; describe('ChatConversationComponent', () => { let component: ChatConversationComponent; let fixture: ComponentFixture; beforeEach(async () => { await TestBed.configureTestingModule({ imports: [ChatConversationComponent] }) .compileComponents(); fixture = TestBed.createComponent(ChatConversationComponent); component = fixture.componentInstance; fixture.detectChanges(); }); it('should create', () => { expect(component).toBeTruthy(); }); }); ```` ## File: packages/ui-angular/src/app/components/alert/alert.component.ts ````typescript import { Component } from '@angular/core'; import { HlmAlertDescriptionDirective, HlmAlertDirective, HlmAlertIconDirective, HlmAlertTitleDirective, } from '@spartan-ng/ui-alert-helm'; import { HlmIconDirective } from '@spartan-ng/ui-icon-helm'; import { NgIcon, provideIcons } from 
'@ng-icons/core'; import { lucideBox } from '@ng-icons/lucide'; @Component({ selector: 'alert-preview', standalone: true, imports: [ HlmAlertDirective, HlmAlertDescriptionDirective, HlmAlertIconDirective, HlmAlertTitleDirective, HlmIconDirective, NgIcon, ], providers: [provideIcons({ lucideBox })], template: `
<div hlmAlert>
  <ng-icon hlm hlmAlertIcon name="lucideBox" />
  <h4 hlmAlertTitle>Thinking...</h4>
</div>
`, }) export class AlertComponent {} ```` ## File: packages/ui-angular/src/app/components/theme-toggle/theme-toggle.component.ts ````typescript import { Component } from '@angular/core'; import { ThemeService } from '../../services/theme.service'; import { CommonModule } from '@angular/common'; @Component({ selector: 'app-theme-toggle', standalone: true, imports: [CommonModule], template: ` `, styles: [` button { display: flex; align-items: center; justify-content: center; } `] }) export class ThemeToggleComponent { constructor(public themeService: ThemeService) {} toggleTheme(): void { this.themeService.toggleTheme(); } } ```` ## File: packages/ui-angular/src/app/services/theme.service.ts ````typescript import { Injectable, signal, WritableSignal, effect, PLATFORM_ID, Inject } from '@angular/core'; import { isPlatformBrowser } from '@angular/common'; @Injectable({ providedIn: 'root' }) export class ThemeService { private readonly THEME_KEY = 'ui-theme-preference'; private readonly DARK_CLASS = 'dark'; private HTML_ELEMENT: HTMLElement | null = null; private readonly isBrowser: boolean; // Theme state managed through signals private _isDarkMode: WritableSignal; constructor(@Inject(PLATFORM_ID) private platformId: Object) { this.isBrowser = isPlatformBrowser(this.platformId); if (this.isBrowser) { this.HTML_ELEMENT = document.documentElement; // Initialize with stored preference or use light as default const storedTheme = localStorage.getItem(this.THEME_KEY); // Use stored preference if available, otherwise default to light mode const initialTheme = storedTheme ? storedTheme === 'dark' : false; // Default to light mode this._isDarkMode = signal(initialTheme); // Setup effect to update theme when signal changes effect(() => { this.applyTheme(this._isDarkMode()); }); // TODO: Listen for system preference changes // window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', (e) => { // // Only update if user hasn't explicitly set a preference // if (!localStorage.getItem(this.THEME_KEY)) { // this._isDarkMode.set(e.matches); // } // }); // Apply theme on initialization this.applyTheme(initialTheme); } else { // Default theme signal for server-side rendering - now defaulting to light this._isDarkMode = signal(false); } } // Public getter for theme state get isDarkMode() { return this._isDarkMode.asReadonly(); } // Public method to toggle theme toggleTheme(): void { if (!this.isBrowser) return; const newTheme = !this._isDarkMode(); this._isDarkMode.set(newTheme); localStorage.setItem(this.THEME_KEY, newTheme ? 'dark' : 'light'); } // Public method to explicitly set theme setTheme(dark: boolean): void { if (!this.isBrowser) return; this._isDarkMode.set(dark); localStorage.setItem(this.THEME_KEY, dark ? 'dark' : 'light'); } // Method to apply theme to the HTML element private applyTheme(isDark: boolean): void { if (!this.isBrowser || !this.HTML_ELEMENT) return; if (isDark) { this.HTML_ELEMENT.classList.add(this.DARK_CLASS); } else { this.HTML_ELEMENT.classList.remove(this.DARK_CLASS); } } } ```` ## File: packages/ui-angular/src/app/app.component.html ````html
```` ## File: packages/ui-angular/src/app/app.component.spec.ts ````typescript import { TestBed } from '@angular/core/testing'; import { AppComponent } from './app.component'; describe('AppComponent', () => { beforeEach(async () => { await TestBed.configureTestingModule({ imports: [AppComponent], }).compileComponents(); }); it('should create the app', () => { const fixture = TestBed.createComponent(AppComponent); const app = fixture.componentInstance; expect(app).toBeTruthy(); }); it(`should have the 'ai-travel-agents-ui' title`, () => { const fixture = TestBed.createComponent(AppComponent); const app = fixture.componentInstance; expect(app.title).toEqual('ai-travel-agents-ui'); }); it('should render title', () => { const fixture = TestBed.createComponent(AppComponent); fixture.detectChanges(); const compiled = fixture.nativeElement as HTMLElement; expect(compiled.querySelector('h1')?.textContent).toContain('Hello, ai-travel-agents-ui'); }); }); ```` ## File: packages/ui-angular/src/app/app.component.ts ````typescript import { Component } from '@angular/core'; import { RouterOutlet } from '@angular/router'; import { ChatConversationComponent } from './chat-conversation/chat-conversation.component'; import { ThemeToggleComponent } from './components/theme-toggle/theme-toggle.component'; @Component({ selector: 'app-root', standalone: true, imports: [RouterOutlet, ChatConversationComponent, ThemeToggleComponent], templateUrl: './app.component.html', styleUrl: './app.component.css' }) export class AppComponent { title = 'ai-travel-agents-ui'; } ```` ## File: packages/ui-angular/src/app/app.config.server.ts ````typescript import { provideServerRendering, withRoutes } from '@angular/ssr'; import { mergeApplicationConfig, ApplicationConfig } from '@angular/core'; import { appConfig } from './app.config'; import { serverRoutes } from './app.routes.server'; const serverConfig: ApplicationConfig = { providers: [provideServerRendering(withRoutes(serverRoutes))] }; export const config = mergeApplicationConfig(appConfig, serverConfig); ```` ## File: packages/ui-angular/src/app/app.routes.server.ts ````typescript import { RenderMode, ServerRoute } from '@angular/ssr'; export const serverRoutes: ServerRoute[] = [ { path: '**', renderMode: RenderMode.Prerender } ]; ```` ## File: packages/ui-angular/src/app/app.routes.ts ````typescript import { Routes } from '@angular/router'; export const routes: Routes = []; ```` ## File: packages/ui-angular/src/index.html ````html Azure AI Travel Agents ```` ## File: packages/ui-angular/src/main.ts ````typescript import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app/app.config'; import { AppComponent } from './app/app.component'; bootstrapApplication(AppComponent, appConfig) .catch((err) => console.error(err)); ```` ## File: packages/ui-angular/src/server.ts ````typescript import { AngularNodeAppEngine, createNodeRequestHandler, isMainModule, writeResponseToNodeResponse, } from '@angular/ssr/node'; import express from 'express'; import { dirname, resolve } from 'node:path'; import { fileURLToPath } from 'node:url'; const serverDistFolder = dirname(fileURLToPath(import.meta.url)); const browserDistFolder = resolve(serverDistFolder, '../browser'); const app = express(); const angularApp = new AngularNodeAppEngine(); /** * Example Express Rest API endpoints can be defined here. * Uncomment and define endpoints as necessary. 
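 * Routes registered here run before the static-file and SSR handlers below, because Express matches middleware in registration order.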
* * Example: * ```ts * app.get('/api/**', (req, res) => { * // Handle API request * }); * ``` */ /** * Serve static files from /browser */ app.use( express.static(browserDistFolder, { maxAge: '1y', index: false, redirect: false, }), ); /** * Handle all other requests by rendering the Angular application. */ app.use('/**', (req, res, next) => { angularApp .handle(req) .then((response) => response ? writeResponseToNodeResponse(response, res) : next(), ) .catch(next); }); /** * Start the server if this module is the main entry point. * The server listens on the port defined by the `PORT` environment variable, or defaults to 4000. */ if (isMainModule(import.meta.url)) { const port = process.env['PORT'] || 4000; app.listen(port, () => { console.log(`Node Express server listening on http://localhost:${port}`); }); } /** * Request handler used by the Angular CLI (for dev-server and during build) or Firebase Cloud Functions. */ export const reqHandler = createNodeRequestHandler(app); ```` ## File: packages/ui-angular/src/styles.css ````css /* You can add global styles to this file, and also import other style files */ @import "@angular/cdk/overlay-prebuilt.css"; @tailwind base; @tailwind components; @tailwind utilities; /* Set up variable CSS properties */ :root { --font-family: "Inter", system-ui, sans-serif; --ng-icon__size: 32px; } /* Light mode support */ @media (prefers-color-scheme: light) { :root { --background: 0 0% 100%; --foreground: 240 10% 3.9%; --card: 0 0% 100%; --card-foreground: 240 10% 3.9%; --popover: 0 0% 100%; --popover-foreground: 240 10% 3.9%; --primary: 240 5.9% 10%; --primary-foreground: 0 0% 98%; --secondary: 240 4.8% 95.9%; --secondary-foreground: 240 5.9% 10%; --muted: 240 4.8% 95.9%; --muted-foreground: 240 3.8% 46.1%; --accent: 240 4.8% 95.9%; --accent-foreground: 240 5.9% 10%; --destructive: 0 84.2% 60.2%; --destructive-foreground: 0 0% 98%; --border: 240 5.9% 90%; --input: 240 5.9% 90%; --ring: 240 5.9% 10%; color-scheme: light; } } /* Dark mode class for manual toggling */ .dark { --background: 240 10% 3.9%; --foreground: 0 0% 98%; --card: 240 10% 3.9%; --card-foreground: 0 0% 98%; --popover: 240 10% 3.9%; --popover-foreground: 0 0% 98%; --primary: 0 0% 98%; --primary-foreground: 240 5.9% 10%; --secondary: 240 3.7% 15.9%; --secondary-foreground: 0 0% 98%; --muted: 240 3.7% 15.9%; --muted-foreground: 210 40% 96.1%; /* Increased contrast for better readability */ --accent: 240 3.7% 15.9%; --accent-foreground: 0 0% 98%; --destructive: 0 62.8% 30.6%; --destructive-foreground: 0 0% 98%; --border: 240 3.7% 15.9%; --input: 240 3.7% 15.9%; --ring: 240 4.9% 83.9%; color-scheme: dark; } ```` ## File: packages/ui-angular/.editorconfig ```` # Editor configuration, see https://editorconfig.org root = true [*] charset = utf-8 indent_style = space indent_size = 2 insert_final_newline = true trim_trailing_whitespace = true [*.ts] quote_type = single ij_typescript_use_double_quotes = false [*.md] max_line_length = off trim_trailing_whitespace = false ```` ## File: packages/ui-angular/.gitignore ```` # See https://docs.github.com/get-started/getting-started-with-git/ignoring-files for more about ignoring files. 
# Compiled output /dist /tmp /out-tsc /bazel-out # Node /node_modules npm-debug.log yarn-error.log # IDEs and editors .idea/ .project .classpath .c9/ *.launch .settings/ *.sublime-workspace # Visual Studio Code .vscode/* !.vscode/settings.json !.vscode/tasks.json !.vscode/launch.json !.vscode/extensions.json .history/* # Miscellaneous /.angular/cache .sass-cache/ /connect.lock /coverage /libpeerconnection.log testem.log /typings # System files .DS_Store Thumbs.db ```` ## File: packages/ui-angular/components.json ````json { "componentsPath": "libs/ui" } ```` ## File: packages/ui-angular/Dockerfile ```` FROM node:22-alpine WORKDIR /usr/src/app COPY . /usr/src/app RUN npm install -g @angular/cli RUN npm install CMD ["ng", "serve", "--host", "0.0.0.0"] ```` ## File: packages/ui-angular/Dockerfile.production ```` FROM node:22.16-alpine AS build WORKDIR /app COPY package*.json ./ RUN npm install COPY . . RUN npm run build:production FROM nginx:alpine COPY --from=build /app/dist/app/browser /usr/share/nginx/html RUN mv /usr/share/nginx/html/index.csr.html /usr/share/nginx/html/index.html EXPOSE 80 # Start NGINX server CMD ["nginx", "-g", "daemon off;"] ```` ## File: packages/ui-angular/tailwind.config.js ````javascript /** @type {import('tailwindcss').Config} */ module.exports = { content: [ "./src/**/*.{html,ts}", "./libs/**/*.{html,js,ts}", ], darkMode: 'class', theme: { extend: { colors: { border: "hsl(var(--border))", input: "hsl(var(--input))", ring: "hsl(var(--ring))", background: "hsl(var(--background))", foreground: "hsl(var(--foreground))", primary: { DEFAULT: "hsl(var(--primary))", foreground: "hsl(var(--primary-foreground))", }, secondary: { DEFAULT: "hsl(var(--secondary))", foreground: "hsl(var(--secondary-foreground))", }, destructive: { DEFAULT: "hsl(var(--destructive))", foreground: "hsl(var(--destructive-foreground))", }, muted: { DEFAULT: "hsl(var(--muted))", foreground: "hsl(var(--muted-foreground))", }, accent: { DEFAULT: "hsl(var(--accent))", foreground: "hsl(var(--accent-foreground))", }, popover: { DEFAULT: "hsl(var(--popover))", foreground: "hsl(var(--popover-foreground))", }, card: { DEFAULT: "hsl(var(--card))", foreground: "hsl(var(--card-foreground))", }, }, borderRadius: { lg: "var(--radius)", md: "calc(var(--radius) - 2px)", sm: "calc(var(--radius) - 4px)", }, }, }, plugins: [ require("tailwindcss-animate"), ], } ```` ## File: packages/ui-angular/tsconfig.app.json ````json /* To learn more about Typescript configuration file: https://www.typescriptlang.org/docs/handbook/tsconfig-json.html. */ /* To learn more about Angular compiler options: https://angular.dev/reference/configs/angular-compiler-options. */ { "extends": "./tsconfig.json", "compilerOptions": { "outDir": "./out-tsc/app", "types": [ "node" ] }, "files": [ "src/main.ts", "src/main.server.ts", "src/server.ts" ], "include": [ "src/**/*.d.ts" ] } ```` ## File: packages/ui-angular/tsconfig.spec.json ````json /* To learn more about Typescript configuration file: https://www.typescriptlang.org/docs/handbook/tsconfig-json.html. */ /* To learn more about Angular compiler options: https://angular.dev/reference/configs/angular-compiler-options. */ { "extends": "./tsconfig.json", "compilerOptions": { "outDir": "./out-tsc/spec", "types": [ "jasmine" ] }, "include": [ "src/**/*.spec.ts", "src/**/*.d.ts" ] } ```` ## File: infra/hooks/api/setup.ps1 ````powershell # Install dependencies for the API service Write-Host '>> Installing dependencies for the API service...' 
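# Only install when node_modules is missing so repeated hook runs stay fast.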
$nodeModules = './packages/api/node_modules' if (-not (Test-Path $nodeModules)) { Write-Host 'Installing dependencies for the API service...' npm ci --prefix=packages/api --legacy-peer-deps } else { Write-Host 'Dependencies for the API service already installed.' } ```` ## File: infra/hooks/ui/setup.ps1 ````powershell # Install dependencies for the UI service Write-Host '>> Installing dependencies for the UI service...' $nodeModules = './packages/ui/node_modules' if (-not (Test-Path $nodeModules)) { Write-Host 'Installing dependencies for the UI service...' npm ci --prefix=packages/ui } else { Write-Host 'Dependencies for the UI service already installed.' } ```` ## File: infra/main.bicep ```` targetScope = 'subscription' @minLength(1) @maxLength(64) @description('Name of the environment that can be used as part of naming resource convention') param environmentName string @minLength(1) @description('Primary location for all resources') param location string param apiLangchainJsExists bool @secure() param apiLangchainJsDefinition object param apiLlamaindexTsExists bool @secure() param apiLlamaindexTsDefinition object param apiMafPythonExists bool @secure() param apiMafPythonDefinition object param uiAngularExists bool @secure() param uiAngularDefinition object param itineraryPlanningExists bool @secure() param itineraryPlanningDefinition object param customerQueryExists bool @secure() param customerQueryDefinition object param destinationRecommendationExists bool @secure() param destinationRecommendationDefinition object param echoPingExists bool @secure() param echoPingDefinition object @description('Id of the user or app to assign application roles') param principalId string param isContinuousIntegration bool // Set in main.parameters.json // Tags that should be applied to all resources. // // Note that 'azd-service-name' tags should be applied separately to service host resources.
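// union() merges the shared environment tags with a per-service 'azd-service-name' tag; azd uses that tag to match a service host resource to the corresponding service entry in azure.yaml.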
// Example usage: // tags: union(tags, { 'azd-service-name': '<service name>' }) var tags = { 'azd-env-name': environmentName } // Organize resources in a resource group resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = { name: 'rg-${environmentName}' location: location tags: tags } var orchestratorConfig = { chat: { model: 'gpt-5' version: '2025-08-07' capacity: 50 } embedding: { model: 'text-embedding-3-large' version: '1' dim: '1024' capacity: 10 } sampleAccessTokens: { echo: '123-this-is-a-fake-token-please-use-a-token-provider' } model_provider: 'openai' llm_temperature: '1' llm_max_tokens: '100' top_k: '3' } module resources 'resources.bicep' = { scope: rg name: 'resources' params: { location: location tags: tags principalId: principalId apiLangchainJsExists: apiLangchainJsExists apiLangchainJsDefinition: apiLangchainJsDefinition apiLlamaindexTsExists: apiLlamaindexTsExists apiLlamaindexTsDefinition: apiLlamaindexTsDefinition apiMafPythonExists: apiMafPythonExists apiMafPythonDefinition: apiMafPythonDefinition uiAngularExists: uiAngularExists uiAngularDefinition: uiAngularDefinition orchestratorConfig: orchestratorConfig isContinuousIntegration: isContinuousIntegration itineraryPlanningExists: itineraryPlanningExists itineraryPlanningDefinition: itineraryPlanningDefinition customerQueryExists: customerQueryExists customerQueryDefinition: customerQueryDefinition destinationRecommendationExists: destinationRecommendationExists destinationRecommendationDefinition: destinationRecommendationDefinition echoPingExists: echoPingExists echoPingDefinition: echoPingDefinition } } output AZURE_CONTAINER_REGISTRY_ENDPOINT string = resources.outputs.AZURE_CONTAINER_REGISTRY_ENDPOINT output AZURE_RESOURCE_API_LANGCHAIN_JS_ID string = resources.outputs.AZURE_RESOURCE_API_LANGCHAIN_JS_ID output AZURE_RESOURCE_API_LLAMAINDEX_TS_ID string = resources.outputs.AZURE_RESOURCE_API_LLAMAINDEX_TS_ID output AZURE_RESOURCE_API_MAF_PYTHON_ID string = resources.outputs.AZURE_RESOURCE_API_MAF_PYTHON_ID output AZURE_RESOURCE_UI_ANGULAR_ID string = resources.outputs.AZURE_RESOURCE_UI_ANGULAR_ID output AZURE_RESOURCE_MCP_ITINERARY_PLANNING_ID string = resources.outputs.AZURE_RESOURCE_MCP_ITINERARY_PLANNING_ID output AZURE_RESOURCE_MCP_CUSTOMER_QUERY_ID string = resources.outputs.AZURE_RESOURCE_MCP_CUSTOMER_QUERY_ID output AZURE_RESOURCE_MCP_DESTINATION_RECOMMENDATION_ID string = resources.outputs.AZURE_RESOURCE_MCP_DESTINATION_RECOMMENDATION_ID output AZURE_RESOURCE_MCP_ECHO_PING_ID string = resources.outputs.AZURE_RESOURCE_MCP_ECHO_PING_ID output NG_API_URL_LANGCHAIN_JS string = resources.outputs.NG_API_URL_LANGCHAIN_JS output NG_API_URL_LLAMAINDEX_TS string = resources.outputs.NG_API_URL_LLAMAINDEX_TS output NG_API_URL_MAF_PYTHON string = resources.outputs.NG_API_URL_MAF_PYTHON output AZURE_OPENAI_ENDPOINT string = resources.outputs.AZURE_OPENAI_ENDPOINT output AZURE_OPENAI_DEPLOYMENT string = orchestratorConfig.chat.model output AZURE_OPENAI_API_VERSION string = orchestratorConfig.chat.version // Orchestrator configuration output EMBEDDING_MODEL string = orchestratorConfig.embedding.model output EMBEDDING_DIM string = orchestratorConfig.embedding.dim output AZURE_CLIENT_ID string = resources.outputs.AZURE_CLIENT_ID output AZURE_TENANT_ID string = tenant().tenantId output MCP_ECHO_PING_ACCESS_TOKEN string = orchestratorConfig.sampleAccessTokens.echo ```` ## File: infra/resources.bicep ```` @description('The location used for all deployed resources') param location string = resourceGroup().location @description('Tags
that will be applied to all resources') param tags object = {} param apiLangchainJsExists bool @secure() param apiLangchainJsDefinition object param apiLlamaindexTsExists bool @secure() param apiLlamaindexTsDefinition object param apiMafPythonExists bool @secure() param apiMafPythonDefinition object param uiAngularExists bool @secure() param uiAngularDefinition object param itineraryPlanningExists bool @secure() param itineraryPlanningDefinition object param customerQueryExists bool @secure() param customerQueryDefinition object param destinationRecommendationExists bool @secure() param destinationRecommendationDefinition object param echoPingExists bool @secure() param echoPingDefinition object @description('Id of the user or app to assign application roles') param principalId string @description('The configuration for the orchestrator applications') param orchestratorConfig object = {} param isContinuousIntegration bool var principalType = isContinuousIntegration ? 'ServicePrincipal' : 'User' var abbrs = loadJsonContent('./abbreviations.json') var resourceToken = uniqueString(subscription().id, resourceGroup().id, location) // Monitor application with Azure Monitor module monitoring 'br/public:avm/ptn/azd/monitoring:0.1.0' = { name: 'monitoring' params: { logAnalyticsName: '${abbrs.operationalInsightsWorkspaces}${resourceToken}' applicationInsightsName: '${abbrs.insightsComponents}${resourceToken}' applicationInsightsDashboardName: '${abbrs.portalDashboards}${resourceToken}' location: location tags: tags } } // Container registry module containerRegistry 'br/public:avm/res/container-registry/registry:0.1.1' = { name: 'registry' params: { name: '${abbrs.containerRegistryRegistries}${resourceToken}' location: location tags: tags publicNetworkAccess: 'Enabled' roleAssignments:[ { principalId: apiLangchainJsIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: apiLlamaindexTsIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: apiMafPythonIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: uiAngularIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: itineraryPlanningIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: customerQueryIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: destinationRecommendationIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } { principalId: echoPingIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: 
subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d') } ] } } // Container apps environment module containerAppsEnvironment 'br/public:avm/res/app/managed-environment:0.4.5' = { name: 'container-apps-environment' params: { logAnalyticsWorkspaceResourceId: monitoring.outputs.logAnalyticsWorkspaceResourceId name: '${abbrs.appManagedEnvironments}${resourceToken}' location: location zoneRedundant: false } } module apiLangchainJsIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'apiLangchainJsidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}api-langchain-js-${resourceToken}' location: location } } module apiLangchainJsFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'api-langchain-js-fetch-image' params: { exists: apiLangchainJsExists name: 'api-langchain-js' } } var apiLangchainJsAppSettingsArray = filter(array(apiLangchainJsDefinition.settings), i => i.name != '') var apiLangchainJsSecrets = map(filter(apiLangchainJsAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var apiLangchainJsEnv = map(filter(apiLangchainJsAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module apiLangchainJs 'br/public:avm/res/app/container-app:0.8.0' = { name: 'apiLangchainJs' params: { name: 'api-langchain-js' ingressTargetPort: 4000 corsPolicy: { allowedOrigins: [ 'https://ui-angular.${containerAppsEnvironment.outputs.defaultDomain}' ] allowedMethods: [ 'GET', 'POST' ] } scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(apiLangchainJsSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: apiLangchainJsFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'DEBUG' value: 'true' } { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: apiLangchainJsIdentity.outputs.clientId } { name: 'LLM_PROVIDER' value: 'azure-openai' } { name: 'AZURE_OPENAI_ENDPOINT' value: openAi.outputs.endpoint } { name: 'AZURE_OPENAI_DEPLOYMENT' value: orchestratorConfig.chat.model } { name: 'MCP_ITINERARY_PLANNING_URL' value: 'https://itinerary-planning.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_CUSTOMER_QUERY_URL' value: 'https://customer-query.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_DESTINATION_RECOMMENDATION_URL' value: 'https://destination-recommendation.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_ECHO_PING_URL' value: 'https://echo-ping.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_ECHO_PING_ACCESS_TOKEN' value: orchestratorConfig.sampleAccessTokens.echo } { name: 'PORT' value: '4000' } ], apiLangchainJsEnv, map(apiLangchainJsSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [apiLangchainJsIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: apiLangchainJsIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'api-langchain-js' }) } } module apiLlamaindexTsIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'apiLlamaindexTsidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}api-llamaindex-ts-${resourceToken}' location: location } } module apiLlamaindexTsFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'api-llamaindex-ts-fetch-image' params: { exists: apiLlamaindexTsExists name: 'api-llamaindex-ts' } } var apiLlamaindexTsAppSettingsArray = filter(array(apiLlamaindexTsDefinition.settings), i => i.name != '') var apiLlamaindexTsSecrets = map(filter(apiLlamaindexTsAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var apiLlamaindexTsEnv = map(filter(apiLlamaindexTsAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module apiLlamaindexTs 'br/public:avm/res/app/container-app:0.8.0' = { name: 'apiLlamaindexTs' params: { name: 'api-llamaindex-ts' ingressTargetPort: 4000 corsPolicy: { allowedOrigins: [ 'https://ui-angular.${containerAppsEnvironment.outputs.defaultDomain}' ] allowedMethods: [ 'GET', 'POST' ] } scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(apiLlamaindexTsSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: apiLlamaindexTsFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'DEBUG' value: 'true' } { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: apiLlamaindexTsIdentity.outputs.clientId } { name: 'LLM_PROVIDER' value: 'azure-openai' } { name: 'AZURE_OPENAI_ENDPOINT' value: openAi.outputs.endpoint } { name: 'AZURE_OPENAI_DEPLOYMENT' value: orchestratorConfig.chat.model } { name: 'MCP_ITINERARY_PLANNING_URL' value: 'https://itinerary-planning.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_CUSTOMER_QUERY_URL' value: 'https://customer-query.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_DESTINATION_RECOMMENDATION_URL' value: 'https://destination-recommendation.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_ECHO_PING_URL' value: 'https://echo-ping.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_ECHO_PING_ACCESS_TOKEN' value: orchestratorConfig.sampleAccessTokens.echo } { name: 'PORT' value: '4000' } ], apiLlamaindexTsEnv, map(apiLlamaindexTsSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [apiLlamaindexTsIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: apiLlamaindexTsIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'api-llamaindex-ts' }) } } module apiMafPythonIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'apiMafPythonidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}api-maf-python-${resourceToken}' location: location } } module apiMafPythonFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'api-maf-python-fetch-image' params: { exists: apiMafPythonExists name: 'api-maf-python' } } var apiMafPythonAppSettingsArray = filter(array(apiMafPythonDefinition.settings), i => i.name != '') var apiMafPythonSecrets = map(filter(apiMafPythonAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var apiMafPythonEnv = map(filter(apiMafPythonAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module apiMafPython 'br/public:avm/res/app/container-app:0.8.0' = { name: 'apiMafPython' params: { name: 'api-maf-python' ingressTargetPort: 4000 corsPolicy: { allowedOrigins: [ 'https://ui-angular.${containerAppsEnvironment.outputs.defaultDomain}' ] allowedMethods: [ 'GET', 'POST' ] } scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(apiMafPythonSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: apiMafPythonFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'DEBUG' value: 'true' } { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: apiMafPythonIdentity.outputs.clientId } { name: 'LLM_PROVIDER' value: 'azure-openai' } { name: 'AZURE_OPENAI_ENDPOINT' value: openAi.outputs.endpoint } { name: 'AZURE_OPENAI_DEPLOYMENT' value: orchestratorConfig.chat.model } { name: 'MCP_ITINERARY_PLANNING_URL' value: 'https://itinerary-planning.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_CUSTOMER_QUERY_URL' value: 'https://customer-query.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_DESTINATION_RECOMMENDATION_URL' value: 'https://destination-recommendation.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_ECHO_PING_URL' value: 'https://echo-ping.internal.${containerAppsEnvironment.outputs.defaultDomain}' } { name: 'MCP_ECHO_PING_ACCESS_TOKEN' value: orchestratorConfig.sampleAccessTokens.echo } { name: 'PORT' value: '4000' } ], apiMafPythonEnv, map(apiMafPythonSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [apiMafPythonIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: apiMafPythonIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'api-maf-python' }) } } module uiAngularIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'uiAngularidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}ui-angular-${resourceToken}' location: location } } module uiAngularFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'ui-angular-fetch-image' params: { exists: uiAngularExists name: 'ui-angular' } } var uiAngularAppSettingsArray = filter(array(uiAngularDefinition.settings), i => i.name != '') var uiAngularSecrets = map(filter(uiAngularAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var uiAngularEnv = map(filter(uiAngularAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module uiAngular 'br/public:avm/res/app/container-app:0.8.0' = { name: 'uiAngular' params: { name: 'ui-angular' ingressTargetPort: 80 scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(uiAngularSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: uiAngularFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: uiAngularIdentity.outputs.clientId } { name: 'PORT' value: '80' } ], uiAngularEnv, map(uiAngularSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [uiAngularIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: uiAngularIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'ui-angular' }) } } module itineraryPlanningIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'itineraryPlanningidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}itineraryPlanning-${resourceToken}' location: location } } module itineraryPlanningFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'itineraryPlanning-fetch-image' params: { exists: itineraryPlanningExists name: 'itinerary-planning' } } var itineraryPlanningAppSettingsArray = filter(array(itineraryPlanningDefinition.settings), i => i.name != '') var itineraryPlanningSecrets = map(filter(itineraryPlanningAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var itineraryPlanningEnv = map(filter(itineraryPlanningAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module itineraryPlanning 'br/public:avm/res/app/container-app:0.8.0' = { name: 'itineraryPlanning' params: { name: 'itinerary-planning' ingressTargetPort: 8000 ingressExternal: false stickySessionsAffinity: 'none' ingressTransport: 'http' scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(itineraryPlanningSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: itineraryPlanningFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: itineraryPlanningIdentity.outputs.clientId } { name: 'PORT' value: '8000' } ], itineraryPlanningEnv, map(itineraryPlanningSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [itineraryPlanningIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: itineraryPlanningIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'mcp-itinerary-planning' }) } } module customerQueryIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'customerQueryidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}customerQuery-${resourceToken}' location: location } } module customerQueryFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'customerQuery-fetch-image' params: { exists: customerQueryExists name: 'customer-query' } } var customerQueryAppSettingsArray = filter(array(customerQueryDefinition.settings), i => i.name != '') var customerQuerySecrets = map(filter(customerQueryAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var customerQueryEnv = map(filter(customerQueryAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module customerQuery 'br/public:avm/res/app/container-app:0.8.0' = { name: 'customerQuery' params: { name: 'customer-query' ingressTargetPort: 8080 ingressExternal: false stickySessionsAffinity: 'none' ingressTransport: 'http' scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(customerQuerySecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: customerQueryFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: customerQueryIdentity.outputs.clientId } { name: 'PORT' value: '8080' } ], customerQueryEnv, map(customerQuerySecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [customerQueryIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: customerQueryIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'mcp-customer-query' }) } } module destinationRecommendationIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'destinationRecommendationidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}destinationRecommendation-${resourceToken}' location: location } } module destinationRecommendationFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'destinationRecommendation-fetch-image' params: { exists: destinationRecommendationExists name: 'destination-recommendation' } } var destinationRecommendationAppSettingsArray = filter(array(destinationRecommendationDefinition.settings), i => i.name != '') var destinationRecommendationSecrets = map(filter(destinationRecommendationAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var destinationRecommendationEnv = map(filter(destinationRecommendationAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module destinationRecommendation 'br/public:avm/res/app/container-app:0.8.0' = { name: 'destinationRecommendation' params: { name: 'destination-recommendation' ingressTargetPort: 8080 ingressExternal: false stickySessionsAffinity: 'none' ingressTransport: 'http' scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(destinationRecommendationSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: destinationRecommendationFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: destinationRecommendationIdentity.outputs.clientId } { name: 'PORT' value: '8080' } ], destinationRecommendationEnv, map(destinationRecommendationSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [destinationRecommendationIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: destinationRecommendationIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'mcp-destination-recommendation' }) } } module echoPingIdentity 'br/public:avm/res/managed-identity/user-assigned-identity:0.2.1' = { name: 'echoPingidentity' params: { name: '${abbrs.managedIdentityUserAssignedIdentities}echoPing-${resourceToken}' location: location } } module echoPingFetchLatestImage './modules/fetch-container-image.bicep' = { name: 'echoPing-fetch-image' params: { exists: echoPingExists name: 'echo-ping' } } var echoPingAppSettingsArray = filter(array(echoPingDefinition.settings), i => i.name != '') var echoPingSecrets = map(filter(echoPingAppSettingsArray, i => i.?secret != null), i => { name: i.name value: i.value secretRef: i.?secretRef ?? take(replace(replace(toLower(i.name), '_', '-'), '.', '-'), 32) }) var echoPingEnv = map(filter(echoPingAppSettingsArray, i => i.?secret == null), i => { name: i.name value: i.value }) module echoPing 'br/public:avm/res/app/container-app:0.8.0' = { name: 'echoPing' params: { name: 'echo-ping' ingressTargetPort: 5000 ingressExternal: false stickySessionsAffinity: 'none' ingressTransport: 'http' scaleMinReplicas: 1 scaleMaxReplicas: 1 secrets: { secureList: union([ ], map(echoPingSecrets, secret => { name: secret.secretRef value: secret.value })) } containers: [ { image: echoPingFetchLatestImage.outputs.?containers[?0].?image ?? 
'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest' name: 'main' resources: { cpu: json('0.5') memory: '1.0Gi' } env: union([ { name: 'APPLICATIONINSIGHTS_CONNECTION_STRING' value: monitoring.outputs.applicationInsightsConnectionString } { name: 'AZURE_CLIENT_ID' value: echoPingIdentity.outputs.clientId } { name: 'MCP_ECHO_PING_ACCESS_TOKEN' value: orchestratorConfig.sampleAccessTokens.echo } { name: 'PORT' value: '5000' } ], echoPingEnv, map(echoPingSecrets, secret => { name: secret.name secretRef: secret.secretRef })) } ] managedIdentities:{ systemAssigned: false userAssignedResourceIds: [echoPingIdentity.outputs.resourceId] } registries:[ { server: containerRegistry.outputs.loginServer identity: echoPingIdentity.outputs.resourceId } ] environmentResourceId: containerAppsEnvironment.outputs.resourceId location: location tags: union(tags, { 'azd-service-name': 'mcp-echo-ping' }) } } module openAi 'br/public:avm/res/cognitive-services/account:0.10.2' = { name: 'openai' params: { name: '${abbrs.cognitiveServicesAccounts}${resourceToken}' tags: tags location: location kind: 'AIServices' // kind: 'OpenAI' disableLocalAuth: false customSubDomainName: '${abbrs.cognitiveServicesAccounts}${resourceToken}' publicNetworkAccess: 'Enabled' deployments: [ { name: orchestratorConfig.chat.model model: { format: 'OpenAI' name: orchestratorConfig.chat.model version: orchestratorConfig.chat.version } sku: { capacity: orchestratorConfig.chat.capacity name: 'GlobalStandard' } versionUpgradeOption: 'OnceCurrentVersionExpired' } ] roleAssignments: [ { principalId: principalId principalType: principalType roleDefinitionIdOrName: 'Cognitive Services OpenAI User' } { principalId: apiLangchainJsIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: 'Cognitive Services OpenAI User' } { principalId: apiLlamaindexTsIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: 'Cognitive Services OpenAI User' } { principalId: apiMafPythonIdentity.outputs.principalId principalType: 'ServicePrincipal' roleDefinitionIdOrName: 'Cognitive Services OpenAI User' } ] } } output AZURE_CONTAINER_REGISTRY_ENDPOINT string = containerRegistry.outputs.loginServer output AZURE_RESOURCE_API_LANGCHAIN_JS_ID string = apiLangchainJs.outputs.resourceId output AZURE_RESOURCE_API_LLAMAINDEX_TS_ID string = apiLlamaindexTs.outputs.resourceId output AZURE_RESOURCE_API_MAF_PYTHON_ID string = apiMafPython.outputs.resourceId output AZURE_RESOURCE_UI_ANGULAR_ID string = uiAngular.outputs.resourceId output AZURE_RESOURCE_MCP_ITINERARY_PLANNING_ID string = itineraryPlanning.outputs.resourceId output AZURE_RESOURCE_MCP_CUSTOMER_QUERY_ID string = customerQuery.outputs.resourceId output AZURE_RESOURCE_MCP_DESTINATION_RECOMMENDATION_ID string = destinationRecommendation.outputs.resourceId output AZURE_RESOURCE_MCP_ECHO_PING_ID string = echoPing.outputs.resourceId output AZURE_OPENAI_ENDPOINT string = openAi.outputs.endpoint output NG_API_URL_LANGCHAIN_JS string = 'https://api-langchain-js.${containerAppsEnvironment.outputs.defaultDomain}' output NG_API_URL_LLAMAINDEX_TS string = 'https://api-llamaindex-ts.${containerAppsEnvironment.outputs.defaultDomain}' output NG_API_URL_MAF_PYTHON string = 'https://api-maf-python.${containerAppsEnvironment.outputs.defaultDomain}' output AZURE_CLIENT_ID string = apiLangchainJsIdentity.outputs.clientId ```` ## File: packages/api-langchain-js/src/agents/index.ts ````typescript import { BaseChatModel } from "@langchain/core/language_models/chat_models"; 
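// Overview: each enabled MCP server below is exposed to LangChain as a set of
// DynamicStructuredTool instances and wrapped in its own agent; a LangGraph
// supervisor then routes the user's query to the most suitable agent.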
import { BaseMessage } from "@langchain/core/messages"; import { DynamicStructuredTool } from "@langchain/core/tools"; import { createSupervisor } from "@langchain/langgraph-supervisor"; import { createAgent } from "langchain"; import { McpToolsConfig } from "../tools/index.js"; import { createMcpToolsFromDefinition } from "../tools/mcp-bridge.js"; import { McpServerDefinition } from "../utils/types.js"; // Define the state for our graph export interface AgentState { messages: BaseMessage[]; currentAgent?: string; toolsOutput?: any[]; } export interface AgentConfig { name: string; systemPrompt: string; tools: DynamicStructuredTool[]; model: BaseChatModel; } export type AgentType = typeof createAgent; // Helper function to create MCP tools based on server configuration export const createMcpTools = async (mcpServerConfig: McpServerDefinition): Promise<DynamicStructuredTool[]> => { return createMcpToolsFromDefinition(mcpServerConfig); }; // Function to setup agents with filtered tools export const setupAgents = async ( filteredTools: McpServerDefinition[] = [], model: BaseChatModel ) => { const tools = Object.fromEntries( filteredTools.map((tool) => [tool.id, true]) ); console.log("Filtered tools:", tools); const agents: any[] = []; let mcpTools: DynamicStructuredTool[] = []; const mcpToolsConfig = McpToolsConfig(); // Create agents based on available tools if (tools["echo-ping"]) { const mcpServerConfig = mcpToolsConfig["echo-ping"]; const echoTools = await createMcpTools(mcpServerConfig); const agent = createAgent({ model, name: "EchoAgent", tools: echoTools, systemPrompt: "Echo back the received input. Do not respond with anything else. Always call the tools.", }); agents.push(agent); mcpTools.push(...echoTools); } if (tools["customer-query"]) { const mcpServerConfig = mcpToolsConfig["customer-query"]; const customerTools = await createMcpTools(mcpServerConfig); const agent = createAgent({ model, name: "CustomerQueryAgent", tools: customerTools, systemPrompt: "Assists employees in better understanding customer needs, facilitating more accurate and personalized service. This agent is particularly useful for handling nuanced queries, such as specific travel preferences or budget constraints, which are common in travel agency interactions.", }); agents.push(agent); mcpTools.push(...customerTools); } if (tools["itinerary-planning"]) { const mcpServerConfig = mcpToolsConfig["itinerary-planning"]; const itineraryTools = await createMcpTools(mcpServerConfig); const agent = createAgent({ model, name: "ItineraryPlanningAgent", tools: itineraryTools, systemPrompt: "Creates a travel itinerary based on user preferences and requirements.", }); agents.push(agent); mcpTools.push(...itineraryTools); } if (tools["destination-recommendation"]) { const mcpServerConfig = mcpToolsConfig["destination-recommendation"]; const destinationTools = await createMcpTools(mcpServerConfig); const agent = createAgent({ model, name: "DestinationRecommendationAgent", tools: destinationTools, systemPrompt: "Suggests destinations based on customer preferences and requirements.", }); agents.push(agent); mcpTools.push(...destinationTools); } console.log("Creating supervisor with agents: ", agents.map((a) => a.name)); const supervisor = createSupervisor( __PATCH_LANGGRAPH_SUPERVISOR_V1_BUG__({ agents, llm: model, prompt: "Acts as a triage agent to determine the best course of action for the user's query. If you cannot handle the query, please pass it to the next agent.
If you can handle the query, please do so.", outputMode: "full_history" }), ); console.log("Agents created:", Object.keys(agents)); console.log("All tools count:", mcpTools.length); return { supervisor, agents, mcpTools }; }; function __PATCH_LANGGRAPH_SUPERVISOR_V1_BUG__(config: { agents: any[]; llm: BaseChatModel; prompt?: string; outputMode?: string }) { // Temporary patch for langgraph-supervisor v1 bug where agents are not recognized correctly // See https://github.com/langchain-ai/langgraphjs/issues/1739 config.agents = config.agents.map((agent) => { return { ...agent, name: agent.options.name }; }) return config as any; } ```` ## File: packages/api-langchain-js/src/graph/index.ts ````typescript import { BaseChatModel } from "@langchain/core/language_models/chat_models"; import { BaseMessage, HumanMessage } from "@langchain/core/messages"; import { McpServerDefinition } from "../utils/types.js"; import { AgentState, setupAgents } from "../agents/index.js"; export interface WorkflowState extends AgentState { messages: BaseMessage[]; currentAgent?: string; toolsOutput?: any[]; next?: string; } export class TravelAgentsWorkflow { private llm: BaseChatModel; private agents: any[] = []; private supervisor: any; constructor(llm: BaseChatModel) { this.llm = llm; } async initialize(filteredTools: McpServerDefinition[] = []) { console.log("Initializing Langchain workflow with filtered tools:", filteredTools.map(t => t.id)); // Setup agents and tools const { agents, supervisor } = await setupAgents(filteredTools, this.llm); this.agents = agents; this.supervisor = supervisor; console.log("Langchain workflow initialized successfully"); console.log("Available agents:", Object.keys(this.agents)); } async *run(input: string) { if (Object.keys(this.agents).length === 0) { throw new Error("Workflow not initialized. Call initialize() first."); } // Compile and run const app = this.supervisor.compile(); const agent = "Supervisor"; try { // Create initial state with HumanMessage const messages = [new HumanMessage(input)]; // Stream events from the agent execution using streamEvents const eventStream = app.streamEvents( { messages }, { version: "v2" } ); for await (const event of eventStream) { // Filter and yield relevant events if (event.event === "on_chat_model_stream") { // Stream LLM tokens yield { eventName: "llm_token", data: { agent, chunk: event.data?.chunk, }, }; } else if (event.event === "on_tool_start") { // Tool execution started yield { eventName: "tool_start", data: { agent, tool: event.name, input: event.data?.input, }, }; } else if (event.event === "on_tool_end") { // Tool execution completed yield { eventName: "tool_end", data: { agent, tool: event.name, output: event.data?.output, }, }; } else if (event.event === "on_chain_end" && event.name === "LangGraph") { // Agent completed - yield final response yield { eventName: "agent_complete", data: { agent, messages: event.data?.output?.messages || [], }, }; } else { // Yield other events for debugging yield { eventName: event.event, data: { agent, ...event.data, }, }; } } } catch (error) { console.error("Error in workflow execution:", error); yield { eventName: "workflow_error", data: { agent, error: error instanceof Error ? 
error.message : String(error), }, }; } } } ```` ## File: packages/api-langchain-js/package.json ````json { "name": "@azure-ai-travel-agents/api-langchain-js", "version": "1.0.0", "type": "module", "main": "dist/index.js", "types": "dist/index.d.ts", "scripts": { "start": "tsx --watch src/server.ts", "build": "tsc", "clean": "rm -rf dist" }, "files": [ "dist" ], "dependencies": { "@azure/identity": "^4.13.0", "@langchain/community": "^1.0.0", "@langchain/core": "^1.0.0", "@langchain/langgraph": "^1.0.0", "@langchain/langgraph-supervisor": "^1.0.0", "@langchain/mcp-adapters": "^1.0.0", "@langchain/openai": "^1.0.0", "@modelcontextprotocol/sdk": "^1.10.2", "@opentelemetry/api": "^1.9.0", "@opentelemetry/auto-instrumentations-node": "^0.57.0", "@opentelemetry/exporter-metrics-otlp-grpc": "^0.200.0", "@opentelemetry/exporter-trace-otlp-grpc": "^0.200.0", "@opentelemetry/instrumentation-express": "^0.48.0", "@opentelemetry/instrumentation-http": "^0.200.0", "@opentelemetry/sdk-metrics": "^2.0.0", "@opentelemetry/sdk-node": "^0.200.0", "@opentelemetry/sdk-trace-node": "^2.0.0", "@types/cors": "^2.8.17", "@types/express": "^5.0.1", "cors": "^2.8.5", "dotenv": "^16.4.7", "express": "^5.0.1", "foundry-local-sdk": "^0.3.1", "langchain": "^1.0.0", "openai": "^4.96.0", "zod": "^3.24.2" }, "devDependencies": { "@types/node": "^20.11.24", "tsx": "^4.19.4", "typescript": "^5.3.3" }, "volta": { "node": "22.20.0" } } ```` ## File: packages/api-llamaindex-ts/src/graph/index.ts ````typescript import { ToolCallLLM } from "@llamaindex/core/llms"; import { AgentWorkflow } from "@llamaindex/workflow"; import "@llamaindex/workflow-core"; import { setupAgents } from "../agents/index.js"; import { McpServerDefinition } from "../utils/types.js"; export class TravelAgentsWorkflow { private llm: ToolCallLLM; private agents: any[] = []; private supervisor: AgentWorkflow | null = null; constructor(llm: ToolCallLLM) { this.llm = llm; } async initialize(filteredTools: McpServerDefinition[] = []) { console.log( "Initializing LlamaIndex workflow with filtered tools:", filteredTools.map((t) => t.id) ); // Setup agents and tools const { agents, supervisor } = await setupAgents(filteredTools, this.llm); this.agents = agents; this.supervisor = supervisor; console.log("LlamaIndex workflow initialized successfully"); console.log("Available agents:", Object.keys(this.agents)); } async *run(input: string): AsyncGenerator<{ eventName: string; data: any; }, void, unknown> { if (!this.supervisor) { throw new Error("Supervisor not initialized. Call initialize() first."); } try { const eventStream = this.supervisor.runStream(input); for await (let event of eventStream) { const evt = { ...event, eventName: event.toString(), data: { ...event.data, agent: (event.data as any).currentAgentName, }, }; yield evt; } } catch (error) { console.error("Error in workflow execution:", error); yield { eventName: "error", data: error instanceof Error ?
error.message : String(error), }; } } } ```` ## File: packages/api-llamaindex-ts/src/tools/index.ts ````typescript import { McpServerDefinition } from "./../utils/types.js"; export type McpServerName = | "echo-ping" | "customer-query" | "itinerary-planning" | "destination-recommendation"; const MCP_API_HTTP_PATH = "/mcp"; export const McpToolsConfig = (): { [k in McpServerName]: McpServerDefinition; } => ({ "echo-ping": { config: { url: process.env["MCP_ECHO_PING_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, requestInit: { headers: { "Authorization": "Bearer " + process.env["MCP_ECHO_PING_ACCESS_TOKEN"], } }, useSSETransport: false }, id: "echo-ping", name: "Echo Test", }, "customer-query": { config: { url: process.env["MCP_CUSTOMER_QUERY_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, useSSETransport: false }, id: "customer-query", name: "Customer Query", }, "itinerary-planning": { config: { url: process.env["MCP_ITINERARY_PLANNING_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, useSSETransport: false }, id: "itinerary-planning", name: "Itinerary Planning", }, "destination-recommendation": { config: { url: process.env["MCP_DESTINATION_RECOMMENDATION_URL"] + MCP_API_HTTP_PATH, type: "http", verbose: true, useSSETransport: false }, id: "destination-recommendation", name: "Destination Recommendation", }, }); ```` ## File: packages/api-maf-python/src/orchestrator/tools/tool_config.py ````python """MCP Tool Configuration following TypeScript implementation patterns.""" from typing import Literal, TypedDict from src.config import Settings # MCP Server Names matching TypeScript implementation McpServerName = Literal[ "echo-ping", "customer-query", "itinerary-planning", "destination-recommendation", ] MCP_API_HTTP_PATH = "/mcp" class MCPServerConfig(TypedDict): """MCP Server Configuration.""" url: str type: Literal["http", "sse"] verbose: bool class MCPServerDefinition(TypedDict): """MCP Server Definition.""" config: MCPServerConfig id: McpServerName name: str def get_mcp_tools_config() -> dict[McpServerName, MCPServerDefinition]: """ Get MCP tools configuration following TypeScript implementation pattern. 
Mirrors packages/api/src/orchestrator/*/tools/index.ts Returns: Dictionary mapping server names to their configurations """ settings = Settings() return { "echo-ping": { "config": { "url": f"{settings.mcp_echo_ping_url}{MCP_API_HTTP_PATH}", "type": "http", "verbose": True, }, "id": "echo-ping", "name": "Echo Test", }, "customer-query": { "config": { "url": f"{settings.mcp_customer_query_url}{MCP_API_HTTP_PATH}", "type": "http", "verbose": True, }, "id": "customer-query", "name": "Customer Query", }, "itinerary-planning": { "config": { "url": f"{settings.mcp_itinerary_planning_url}{MCP_API_HTTP_PATH}", "type": "http", "verbose": True, }, "id": "itinerary-planning", "name": "Itinerary Planning", }, "destination-recommendation": { "config": { "url": f"{settings.mcp_destination_recommendation_url}{MCP_API_HTTP_PATH}", "type": "http", "verbose": True, }, "id": "destination-recommendation", "name": "Destination Recommendation", }, } # Export singleton instance MCP_TOOLS_CONFIG = get_mcp_tools_config() ```` ## File: packages/mcp-servers/echo-ping/package.json ````json { "name": "mcp-echo-ping", "type": "module", "license": "MIT", "author": "Microsoft Corporation", "version": "1.0.0", "description": "A simple echo tool for the Model Context Protocol (MCP).", "keywords": [ "model", "context", "protocol", "mcp", "agent", "echo" ], "main": "dist/index.js", "bin": { "mcp-echo-ping": "dist/index.js" }, "scripts": { "start": "tsx --watch src/index.ts", "build": "tsc" }, "files": [ "dist" ], "dependencies": { "@modelcontextprotocol/sdk": "^1.12.1", "@opentelemetry/api": "^1.9.0", "@opentelemetry/auto-instrumentations-node": "^0.57.0", "@opentelemetry/exporter-metrics-otlp-grpc": "^0.200.0", "@opentelemetry/exporter-trace-otlp-grpc": "^0.200.0", "@opentelemetry/instrumentation-express": "^0.48.0", "@opentelemetry/instrumentation-http": "^0.200.0", "@opentelemetry/sdk-metrics": "^2.0.0", "@opentelemetry/sdk-node": "^0.200.0", "@opentelemetry/sdk-trace-node": "^2.0.0", "dotenv": "^16.5.0", "express": "^5.0.1", "openai": "^6.6.0", "type": "^2.7.3", "zod": "^3.23.8" }, "devDependencies": { "@types/express": "^5.0.0", "@types/node": "^20.11.24", "shx": "^0.3.4", "tsx": "^4.19.4", "typescript": "^5.3.3" } } ```` ## File: packages/ui-angular/src/app/chat-conversation/chat-conversation.component.ts ````typescript import { CommonModule, JsonPipe } from '@angular/common'; import { ChangeDetectionStrategy, Component, effect, ElementRef, HostListener, OnInit, signal, viewChild, viewChildren, } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { NgIcon, provideIcons } from '@ng-icons/core'; import { lucideBot, lucideBrain, lucideCheck, lucideRefreshCw, lucideSendHorizontal, } from '@ng-icons/lucide'; import { BrnAlertDialogImports } from '@spartan-ng/brain/alert-dialog'; import { BrnSelectImports } from '@spartan-ng/brain/select'; import { BrnSeparatorImports } from '@spartan-ng/brain/separator'; import { HlmAlertImports } from '@spartan-ng/helm/alert'; import { HlmAlertDialogImports } from '@spartan-ng/helm/alert-dialog'; import { HlmButtonImports } from '@spartan-ng/helm/button'; import { HlmSelectImports } from '@spartan-ng/helm/select'; import { HlmBadgeImports } from '@spartan-ng/helm/badge'; import { HlmToasterImports } from '@spartan-ng/helm/sonner'; import { HlmCardImports } from '@spartan-ng/helm/card'; import { HlmIcon } from '@spartan-ng/helm/icon'; import { BrnPopoverImports } from '@spartan-ng/brain/popover'; import { HlmFormFieldImports } from '@spartan-ng/helm/form-field'; 
import { HlmInputImports } from '@spartan-ng/helm/input'; import { HlmLabelImports } from '@spartan-ng/helm/label'; import { HlmPopoverImports } from '@spartan-ng/helm/popover'; import { HlmScrollAreaImports } from '@spartan-ng/helm/scroll-area'; import { HlmSeparatorImports } from '@spartan-ng/helm/separator'; import { HlmSwitch } from '@spartan-ng/helm/switch'; import { AccordionPreviewComponent } from '../components/accordion/accordion.component'; import { SkeletonPreviewComponent } from '../components/skeleton-preview/skeleton-preview.component'; import { ApiService, ChatEvent, ChatMessage } from '../services/api.service'; import { ChatService } from './chat-conversation.service'; const SAMPLE_PROMPT_1 = `Hello! I'm planning a trip to Iceland and would like your expertise to create a custom itinerary. Please use your destination planning tools and internal resources to suggest a day-by-day plan based on: • Top must-see natural sites (glaciers, waterfalls, geothermal spots, etc.) • Unique local experiences (culture, food, hidden gems) • Efficient travel routes and realistic timing • A mix of adventure and relaxation I'm aiming for an itinerary that balances scenic exploration with comfort. Feel free to tailor recommendations based on the best time to visit and local logistics. Thank you!`; const SAMPLE_PROMPT_2 = `Hi there! I'd love help planning a trip to Iceland. I'm looking for destination suggestions and a full itinerary tailored to an unforgettable experience. Please use your planning tools and destination insights to recommend: • Where to go and why • What to do each day (including any unique or off-the-beaten-path experiences) • Best ways to get around and where to stay I'm open to all kinds of adventures—whether it's chasing waterfalls, soaking in hot springs, or discovering small Icelandic towns. An MCP-informed, creative itinerary would be amazing!`; const SAMPLE_PROMPT_3 = `I'm planning a trip to Morocco and would appreciate a complete, MCP-assisted itinerary. Please use your travel planning systems to recommend key destinations, daily activities, and a logical route. I'm looking for a balanced experience that includes cultural landmarks, natural scenery, and time to relax. Efficient travel logistics and seasonal considerations would be great to include. Travel Dates: as soon as possible. Starting Point: from Paris, France. Duration: 10 days.
Budget: 5000 euros.`; @Component({ selector: 'app-chat-conversation', standalone: true, imports: [ CommonModule, FormsModule, NgIcon, JsonPipe, HlmButtonImports, HlmInputImports, HlmFormFieldImports, HlmCardImports, HlmScrollAreaImports, HlmIcon, HlmBadgeImports, HlmPopoverImports, BrnPopoverImports, HlmAlertImports, BrnAlertDialogImports, HlmAlertDialogImports, HlmToasterImports, HlmSeparatorImports, BrnSeparatorImports, HlmLabelImports, HlmSwitch, AccordionPreviewComponent, SkeletonPreviewComponent, BrnSelectImports, HlmSelectImports, ], providers: [ provideIcons({ lucideBrain, lucideBot, lucideSendHorizontal, lucideRefreshCw, lucideCheck, }), ], templateUrl: './chat-conversation.component.html', styleUrl: './chat-conversation.component.css', changeDetection: ChangeDetectionStrategy.OnPush, }) export class ChatConversationComponent implements OnInit { agents = signal<{}>({}); availableApiUrls = signal<{ label: string; url: string, isOnline: boolean }[]>([]); eot = viewChild<ElementRef>('eot'); agentMessages = viewChildren<ElementRef>('agentMessages'); samplePrompts = [SAMPLE_PROMPT_1, SAMPLE_PROMPT_2, SAMPLE_PROMPT_3]; messages: ChatMessage[] = []; selectedApiUrl = signal(''); selectedApiUrlChange = effect(() => { this.chatService.setApiUrl(this.selectedApiUrl()); }); constructor(public chatService: ChatService) { this.chatService.messagesStream.subscribe((messages) => { this.messages = messages; if (messages.length === 0) return; setTimeout(() => { this.scrollToBottom(); }, 0); }); } async ngOnInit() { this.resetChat(); this.availableApiUrls.set(await this.chatService.fetchAvailableApiUrls()); this.selectedApiUrl.set(this.availableApiUrls().find(api => api.isOnline)?.url || this.availableApiUrls()[0]?.url || ''); await this.chatService.fetchAvailableTools(); } @HostListener('window:keyup.shift.enter', ['$event']) sendMessage(event: any) { event.preventDefault(); this.chatService.sendMessage(event); } onApiUrlChange(url: string) { this.selectedApiUrl.set(url); this.chatService.setApiUrl(url); } printAgentsGraph(evt: ChatEvent) { const tools = evt.data.output?.update?.messages.at(1)?.kwargs?.response_metadata?.tools; if (tools) { return ' → ' + tools.map((t: any) => t.name).join(' → '); } return ''; } scrollToBottom() { this.eot()?.nativeElement.scrollIntoView({ behavior: 'auto', }); } toggleTool() {} cancelReset(ctx: any) { ctx.close(); } confirmReset(ctx: any) { ctx.close(); this.resetChat(); } private resetChat() { this.chatService.resetChat(); } } ```` ## File: packages/ui-angular/src/app/chat-conversation/chat-conversation.service.ts ````typescript import { Injectable, signal } from '@angular/core'; import { marked } from 'marked'; import { toast } from 'ngx-sonner'; import { BehaviorSubject } from 'rxjs'; import { ApiService, ChatEvent, ChatMessage, Tools, } from '../services/api.service'; @Injectable({ providedIn: 'root', }) export class ChatService { userMessage = signal(''); private _lastMessageContent$ = new BehaviorSubject(''); lastMessageContent$ = this._lastMessageContent$.asObservable(); messagesStream = new BehaviorSubject<ChatMessage[]>([]); private messagesBuffer: ChatMessage[] = []; private agentEventsBuffer: ChatEvent[] = []; isLoading = signal(false); tools = signal<Tools[]>([]); agent = signal<string | null>(null); assistantMessageInProgress = signal(false); agentMessageBuffer: string = ''; constructor(private apiService: ApiService) {} async fetchAvailableTools() { const toolsResult = await this.apiService.fetchAvailableTools(); if (toolsResult) { this.tools.set(toolsResult.tools); } } async fetchAvailableApiUrls() { return
await this.apiService.getAvailableApiUrls();
  }

  setApiUrl(url: string) {
    this.apiService.setApiUrl(url);
  }

  async sendMessage(event: Event) {
    if ((event as KeyboardEvent).shiftKey) {
      return;
    }

    const messageText = this.userMessage();
    if (!messageText.trim()) return;

    this.messagesBuffer.push({
      role: 'user',
      content: messageText,
      reasoning: [],
      timestamp: new Date(),
    });
    this.userMessage.set('');
    this.isLoading.set(true);
    this.assistantMessageInProgress.set(false);

    this.apiService
      .stream(
        messageText,
        this.tools().filter((tool) => tool.selected)
      )
      .subscribe({
        next: (state) => {
          switch (state.type) {
            case 'START': {
              this.agentEventsBuffer = [];
              const message: ChatMessage = {
                role: 'assistant',
                content: '',
                reasoning: [],
                timestamp: new Date(),
              };
              this.messagesBuffer.push(message);
              this.messagesStream.next(this.messagesBuffer);
              this._lastMessageContent$.next('');
              break;
            }
            case 'END':
              this.updateAndNotifyAgentChatMessageState('', {
                metadata: {
                  events: this.agentEventsBuffer,
                },
              });
              break;
            case 'MESSAGE':
              this.processAgentEvents(state.event);
              break;
            case 'ERROR':
              this.showErrorMessage(state.error);
              this.isLoading.set(false);
              break;
            default:
              break;
          }
        },
        error: (error) => {
          this.showErrorMessage(error);
          this.isLoading.set(false);
        },
      });
  }

  showErrorMessage(error: unknown) {
    const err = (error as { error?: Error }).error ?? (error as Error);
    toast.error('Oops! Something went wrong.', {
      duration: 10000,
      description: err.message,
      action: {
        label: 'Close',
        onClick: () => console.log('Closed'),
      },
    });
  }

  private processAgentEvents(state?: ChatEvent) {
    if (state && state.type === 'metadata') {
      this.agent.set(state.data?.agent || null);
      this.agentEventsBuffer.push(state);

      let message = '';
      // MAF events
      if (typeof state.data?.message === 'string') {
        message += state.data.message + '\n';
      }

      let delta: string =
        state.data?.delta || // LlamaIndex.TS event
        message || // Microsoft Agent Framework (MAF) event
        '';

      const eventType =
        typeof state.event === 'string' ? state.event : state.event?.['type'];

      switch (eventType) {
        // LlamaIndex events
        case 'llamaindex-start':
        // Microsoft Agent Framework (MAF) events
        case 'WorkflowStarted':
        case 'OrchestratorUserTask':
        case 'OrchestratorInstruction':
          this.updateAndNotifyAgentChatMessageState(delta, {
            metadata: {
              events: this.agentEventsBuffer,
            },
          });
          this.assistantMessageInProgress.set(false);
          break;
        // LlamaIndex events
        case 'llamaindex-stop':
        // Microsoft Agent Framework (MAF) events
        case 'Complete':
        // LangChain events
        case 'agent_complete':
          if (state.event === 'llamaindex-stop') {
            // The LlamaIndex llamaindex-stop event contains the final message content
            delta = state.data?.message?.content;
          }
          this.updateAndNotifyAgentChatMessageState(delta, {
            metadata: {
              events: this.agentEventsBuffer,
            },
          });
          this.assistantMessageInProgress.set(false);
          break;
        // LlamaIndex events
        case 'AgentToolCallResult': {
          let messageState = {};
          if (typeof state.data.raw === 'string') {
            messageState = {
              reasoning: [
                {
                  content: state.data.raw,
                },
              ],
            };
          }
          this.updateAndNotifyAgentChatMessageState(delta, messageState);
          break;
        }
        // LlamaIndex events
        case 'AgentOutput':
        case 'AgentInput':
        case 'AgentSetup':
        case 'AgentStepEvent':
        case 'AgentToolCall':
        case 'ToolResultsEvent':
        case 'ToolCallsEvent':
          this.updateAndNotifyAgentChatMessageState(delta, {
            metadata: {
              events: this.agentEventsBuffer,
            },
          });
          break;
        // Note: the following events are sent very frequently (per token)
        // LlamaIndex events
        case '4':
        // Microsoft Agent Framework (MAF) events
        case 'AgentDelta':
        // LangChain events
        case 'llm_token':
          if (state.event === 'llm_token') {
            const chunk = state.data?.chunk;
            const content = chunk?.kwargs?.content || [];
            if (Array.isArray(content)) {
              delta = content.map((c) => c.text).join('');
            }
          }
          if (delta.trim()) {
            this.isLoading.set(false);
            // this.assistantMessageInProgress.set(true);
            // this.agentMessageStream.next(this.agentMessageBuffer);
            this.agentEventsBuffer.push(state);
            this.updateAndNotifyAgentChatMessageState(delta, {
              metadata: {
                events: this.agentEventsBuffer,
              },
            });
          }
          break;
      }
    }
  }

  async updateAndNotifyAgentChatMessageState(
    delta: string,
    state?: Partial<ChatMessage>
  ) {
    const lastMessage = this.messagesBuffer[this.messagesBuffer.length - 1];
    if (lastMessage?.role === 'assistant') {
      lastMessage.content += delta;
      lastMessage.metadata = {
        ...lastMessage.metadata,
        ...state?.metadata,
        events: state?.metadata?.events,
      };
      lastMessage.reasoning = [
        ...(lastMessage.reasoning || []),
        ...(state?.reasoning || []),
      ];
      lastMessage.timestamp = new Date();
      this.messagesStream.next(this.messagesBuffer);

      const md = marked.setOptions({});
      const htmlContent = md.parse(lastMessage.content);
      this._lastMessageContent$.next(await htmlContent);
    }
  }

  resetChat() {
    this.userMessage.set('');
    this._lastMessageContent$.next('');
    this.messagesBuffer = [];
    this.messagesStream.next(this.messagesBuffer);
    this.agentEventsBuffer = [];
    this.isLoading.set(false);
    this.assistantMessageInProgress.set(false);
    this.agent.set(null);
  }
}
````

## File: packages/ui-angular/src/app/components/accordion/accordion.component.ts
````typescript
import { Component, input } from '@angular/core';
import { NgIcon, provideIcons } from '@ng-icons/core';
import { lucideBrain, lucideCheck, lucideChevronDown } from '@ng-icons/lucide';
import { HlmAccordionImports } from '@spartan-ng/helm/accordion';
import { HlmIconImports } from '@spartan-ng/helm/icon';

@Component({
  selector: 'accordion-preview',
  standalone: true,
  imports: [HlmAccordionImports, HlmIconImports, NgIcon],
  viewProviders: [provideIcons({ lucideChevronDown, lucideCheck, lucideBrain })],
  template: `
  `,
  styles: [
    `
      hlm-accordion-content[data-state='open'] {
        display: block;
      }
    `,
  ],
})
export class AccordionPreviewComponent {
  isOpened = input(false);
  title = input('');
  icon = input('');
}
````

## File: packages/ui-angular/src/app/components/skeleton-preview/skeleton-preview.component.ts
````typescript
import { Component } from '@angular/core';
import { HlmSkeletonImports } from '@spartan-ng/helm/skeleton';

@Component({
  selector: 'skeleton-preview',
  standalone: true,
  imports: [HlmSkeletonImports],
  template: `
  `,
})
export class SkeletonPreviewComponent {}
````

## File: packages/ui-angular/src/app/services/api.service.ts
````typescript
import {
  HttpClient,
  HttpDownloadProgressEvent,
  HttpEvent,
  HttpEventType,
} from '@angular/common/http';
import { inject, Injectable, NgZone } from '@angular/core';
import {
  BehaviorSubject,
  catchError,
  distinct,
  filter,
  startWith,
  switchMap,
} from 'rxjs';
import { environment } from '../../environments/environment';

export type ServerID =
  | 'echo-ping'
  | 'customer-query'
  | 'itinerary-planning'
  | 'destination-recommendation';

export type Tools = {
  id: ServerID;
  name: string;
  url: string;
  reachable: boolean;
  tools: object[];
  selected: boolean;
};

// Interface for SSE event types
export interface ChatEvent {
  type: 'metadata' | 'error' | 'end';
  agent?: string;
  event?: string | { [key: string]: any };
  data?: any;
  message?: string;
  statusCode?: number;
  agentName?: string;
  kind?: string;
}

export type ChatEventErrorType = 'client' | 'server' | 'general' | undefined;

export interface ChatStreamState {
  id?: number;
  event: ChatEvent;
  type: 'START' | 'END' | 'ERROR' | 'MESSAGE';
  error?: {
    type: ChatEventErrorType;
    message: string;
    statusCode: number;
  };
}

export interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
  timestamp: Date;
  metadata?: {
    events?: ChatEvent[] | null;
  };
  reasoning: {
    content: string;
  }[];
}

@Injectable({
  providedIn: 'root',
})
export class ApiService {
  ngZone = inject(NgZone);
  private readonly http = inject(HttpClient);
  private apiUrl = '';
  chatStreamState = new BehaviorSubject<Partial<ChatStreamState>>({});

  // Track both position and incomplete JSON
  private lastProcessedIndex = 0;
  private incompleteJsonBuffer = '';

  async getAvailableApiUrls(): Promise<
    { label: string; url: string; isOnline: boolean }[]
  > {
    return [
      {
        ...environment.apiLangChainJsServer,
        isOnline: await this.checkApiHealth(environment.apiLangChainJsServer.url),
      },
      {
        ...environment.apiLlamaIndexTsServer,
        isOnline: await this.checkApiHealth(environment.apiLlamaIndexTsServer.url),
      },
      {
        ...environment.apiMafPythonServer,
        isOnline: await this.checkApiHealth(environment.apiMafPythonServer.url),
      },
    ];
  }

  private async checkApiHealth(url: string): Promise<boolean> {
    const healthUrl = `${url}/api/health`;
    try {
      const response = await fetch(healthUrl, {
        method: 'GET',
        headers: {
          'Content-Type': 'application/json',
        },
      });
      return response.ok;
    } catch (error) {
      console.error(`Health check failed for ${healthUrl}:`, error);
      return false;
    }
  }

  setApiUrl(url: string) {
    this.apiUrl = url;
  }

  async initializeApiUrlToDefault() {
    const apiUrls = await this.getAvailableApiUrls();
    const onlineApi = apiUrls.find((api) => api.isOnline);
    if (onlineApi) {
      this.apiUrl = onlineApi.url;
    } else if (apiUrls.length > 0) {
      this.apiUrl = apiUrls[0].url;
    } else {
      throw new Error('No API URLs are configured.');
    }
  }

  async fetchAvailableTools(): Promise<{ tools: Tools[] } | void> {
    if (!this.apiUrl) {
      await this.initializeApiUrlToDefault();
    }
    try {
      const response = await fetch(`${this.apiUrl}/api/tools`, {
        method: 'GET',
        headers: {
          'Content-Type': 'application/json',
        },
      });
      if (!response.ok) {
        const { error } = await response.json();
        return this.handleApiError(
          new Error(error || 'An error occurred while fetching tools'),
          response.status
        );
      }
      return await response.json();
    } catch (error) {
      return this.handleApiError(error, 0);
    }
  }

  stream(message: string, tools: Tools[]) {
    // Reset trackers for each new stream
    this.lastProcessedIndex = 0;
    this.incompleteJsonBuffer = '';

    return this.http
      .post(
        `${this.apiUrl}/api/chat`,
        { message, tools },
        {
          responseType: 'text',
          observe: 'events',
          reportProgress: true,
        }
      )
      .pipe(
        filter(
          (event: HttpEvent<string>): boolean =>
            event.type === HttpEventType.DownloadProgress ||
            event.type === HttpEventType.Response
        ),
        switchMap((event: HttpEvent<string>): any => {
          // NOTE: partialText is cumulative! It contains all the data received
          // so far, not just the new chunk. We need to track what we've
          // already processed and only handle the new data.
          const fullText =
            (event as HttpDownloadProgressEvent).partialText! || '';

          // Extract only the NEW data
          const newData = fullText.substring(this.lastProcessedIndex);
          this.lastProcessedIndex = fullText.length;

          if (!newData.trim()) {
            return [];
          }

          // Combine with any incomplete JSON from the previous chunk
          const dataToProcess = this.incompleteJsonBuffer + newData;

          // Split by double newlines
          const parts = dataToProcess.split(/\n\n+/);

          // The last part might be incomplete, save it for next time
          // (unless this is the final Response event)
          if (event.type !== HttpEventType.Response) {
            this.incompleteJsonBuffer = parts.pop() || '';
          } else {
            this.incompleteJsonBuffer = '';
          }

          return parts.map((jsonValue: string) => {
            try {
              const parsedData = JSON.parse(
                jsonValue.replace(/data:\s+/, '').trim()
              );
              return {
                type:
                  event.type === HttpEventType.Response ? 'END' : 'MESSAGE',
                event: parsedData,
                id: Date.now(),
              };
            } catch (error) {
              this.handleApiError(error, 0);
              return null;
            }
          });
        }),
        distinct(),
        catchError((error) => {
          this.handleApiError(error, 0);
          throw error;
        }),
        filter((state) => state !== null),
        startWith({ type: 'START', id: Date.now() })
      );
  }

  private handleApiError(error: unknown, statusCode: number) {
    console.error('Fetch error:', error);
    const errorType: ChatEventErrorType = 'general';
    this.chatStreamState.next({
      id: Date.now(),
      type: 'ERROR',
      error: {
        type: errorType,
        message: (error as Error).toString(),
        statusCode,
      },
    });
  }
}
````

## File: packages/ui-angular/src/app/app.config.ts
````typescript
import { provideHttpClient, withFetch } from '@angular/common/http';
import { ApplicationConfig, provideZoneChangeDetection } from '@angular/core';
import { provideRouter } from '@angular/router';

import { routes } from './app.routes';

export const appConfig: ApplicationConfig = {
  providers: [
    provideZoneChangeDetection({ eventCoalescing: true }),
    provideRouter(routes),
    // provideClientHydration(withEventReplay()), // seems to cause issues with hydration in some cases on initial app load
    provideHttpClient(withFetch()),
  ],
};
````

## File: packages/ui-angular/src/environments/environment.development.ts
````typescript
export const environment = {
  production: false,
  apiLangChainJsServer: {
    label: 'Local Langchain.js API',
    url: 'http://localhost:4000',
  },
  apiLlamaIndexTsServer: {
    label: 'Local LlamaIndex TS API',
    url: 'http://localhost:4001',
  },
  apiMafPythonServer: {
    label: 'Local MAF Python API',
    url: 'http://localhost:4010',
  },
};
````

## File: packages/ui-angular/src/environments/environment.ts
````typescript
export const environment = {
  production: true,
  apiLangChainJsServer: {
    label: 'Langchain.js',
    url: import.meta.env.NG_API_URL_LANGCHAIN_JS,
  },
  apiLlamaIndexTsServer: {
    label: 'LlamaIndex TS',
    url: import.meta.env.NG_API_URL_LLAMAINDEX_TS,
  },
  apiMafPythonServer: {
    label: 'MAF Python',
    url: import.meta.env.NG_API_URL_MAF_PYTHON,
  },
};
````

## File: packages/ui-angular/src/env.d.ts
````typescript
declare interface Env {
  readonly NODE_ENV: string;
  readonly NG_API_URL_LANGCHAIN_JS: string;
  readonly NG_API_URL_LLAMAINDEX_TS: string;
  readonly NG_API_URL_MAF_PYTHON: string;
}

declare interface ImportMeta {
  readonly env: Env;
}
````

## File: packages/ui-angular/src/main.server.ts
````typescript
import {
  BootstrapContext,
  bootstrapApplication,
} from '@angular/platform-browser';
import { AppComponent } from './app/app.component';
import { config } from './app/app.config.server';

const bootstrap = (context: BootstrapContext) =>
  bootstrapApplication(AppComponent, config, context);

export default bootstrap;
````

## File: packages/ui-angular/src/app/chat-conversation/chat-conversation.component.html
````html

AI Travel Agent Chat

Clear chat session?

This will clear all messages from the current chat session and start a new one. This action cannot be undone.

@if (messages.length === 0) {
@for(prompt of samplePrompts; track prompt) {

{{ prompt }}

}
} @for (message of messages; track message) {
@if (message.role === 'user') {

{{ message.content }}

} @else if (message.role === 'assistant') {
@if(message.metadata?.events?.at(0)?.kind !== 'llamaindex-ts') { @for (evt of message.metadata?.events; track $index) { @if((evt.event !== 'AgentStream' && evt.event !== 'AgentDelta' && evt.event !== 'llm_token')) {
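<!-- Note: per-token streaming events (AgentStream, AgentDelta, llm_token) are excluded from this event log; the chat service appends their deltas to the assistant message content instead. -->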
@if(evt.agent) { Agent {{ evt.agent }}
Event {{ evt.event }}
} @if(evt.data?.tool) { Tools {{ evt.data.tool }}
Args {{ evt.data?.input || "{}" | json }} }
@if(evt.data?.output && printAgentsGraph(evt)) { Agents {{ (evt.data?.output?.goto || "") + printAgentsGraph(evt) || "-" }} }
{{ evt.data?.input?.messages?.at(-1)?.kwargs?.content || "" }} {{ evt.data?.output?.update?.messages?.at(-1)?.kwargs?.content || "" }}
} }
}
@if ($last && chatService.isLoading()) { } @else { @for(reason of message.reasoning; track reason) {

{{ reason.content }}

}

} }
{{ message.timestamp | date : "short" }}
}
AI-generated content may be incorrect

Available Agents and MCP tools

Choose which agents to use

@for(tool of chatService.tools(); track tool) {
}

Orchestration API

@for( api of availableApiUrls(); track api) { {{ api.label }} @if(!api.isOnline) { (Offline) } }
````

## File: infra/hooks/ui/setup.sh
````bash
#!/bin/bash
# Install dependencies for the UI service
printf ">> Installing dependencies for the UI (Angular) service...\n"

if [ ! -d ./packages/ui-angular/node_modules ]; then
  printf "Installing dependencies for the UI service...\n"
  npm ci --prefix=./packages/ui-angular
  status=$?
  if [ $status -ne 0 ]; then
    printf "UI dependencies installation failed with exit code $status. Exiting.\n"
    exit $status
  fi
else
  printf "Dependencies for the UI (Angular) service already installed.\n"
fi
````

## File: packages/ui-angular/package.json
````json
{
  "name": "azure-ai-travel-agents-ui",
  "version": "1.0.0",
  "scripts": {
    "ng": "ng",
    "start": "NG_APP_ENV=development ng serve",
    "build": "NG_APP_ENV=development ng build --configuration development",
    "build:docker": "NG_APP_ENV=docker ng build --configuration development",
    "build:production": "NG_APP_ENV=production ng build --configuration production",
    "watch": "NG_APP_ENV=development ng build --watch --configuration development",
    "test": "NG_APP_ENV=development ng test",
    "serve:ssr": "node dist/app/server/server.mjs"
  },
  "private": true,
  "dependencies": {
    "@angular/cdk": "^20.2.9",
    "@angular/common": "^20.3.6",
    "@angular/compiler": "^20.3.6",
    "@angular/core": "^20.3.6",
    "@angular/forms": "^20.3.6",
    "@angular/platform-browser": "^20.3.6",
    "@angular/platform-browser-dynamic": "^20.3.6",
    "@angular/platform-server": "^20.3.6",
    "@angular/router": "^20.3.6",
    "@angular/ssr": "^20.3.6",
    "@ng-icons/core": "^29.10.0",
    "@ng-icons/lucide": ">=29.0.0",
    "@spartan-ng/brain": "^0.0.1-alpha.532",
    "class-variance-authority": "^0.7.0",
    "clsx": "^2.1.1",
    "embla-carousel-angular": "^20.0.0",
    "express": "^4.18.2",
    "marked": "^15.0.12",
    "ngx-scrollbar": ">=16.0.0",
    "ngx-sonner": "^3.0.0",
    "postcss": "^8.5.3",
    "rxjs": "~7.8.0",
    "tslib": "^2.3.0",
    "zone.js": "~0.15.0"
  },
  "devDependencies": {
    "@angular-devkit/build-angular": "^20.3.6",
    "@angular/cli": "^20.3.6",
    "@angular/compiler-cli": "^20.3.6",
    "@ngx-env/builder": "^19.0.4",
    "@spartan-ng/cli": "^0.0.1-alpha.532",
    "@types/express": "^4.17.17",
    "@types/jasmine": "~5.1.0",
    "@types/node": "^18.18.0",
    "jasmine-core": "~5.6.0",
    "karma": "~6.4.0",
    "karma-chrome-launcher": "~3.2.0",
    "karma-coverage": "~2.2.0",
    "karma-jasmine": "~5.1.0",
    "karma-jasmine-html-reporter": "~2.1.0",
    "tailwind-merge": "^2.5.2",
    "tailwindcss": "^3.4.17",
    "tailwindcss-animate": "^1.0.6",
    "typescript": "~5.8.3"
  },
  "volta": {
    "node": "22.20.0"
  }
}
````

## File: infra/hooks/api/setup.sh
````bash
#!/bin/bash
# Install dependencies for the Langchain.js API service
printf ">> Installing dependencies for the Langchain.js API service...\n"

if [ ! -d ./packages/api-langchain-js/node_modules ]; then
  printf "Installing dependencies for the Langchain.js API service...\n"
  npm ci --prefix=./packages/api-langchain-js --legacy-peer-deps
  status=$?
  if [ $status -ne 0 ]; then
    printf "API dependencies installation failed with exit code $status. Exiting.\n"
    exit $status
  fi
else
  printf "Dependencies for the Langchain.js API service already installed.\n"
fi

# Install dependencies for the Llamaindex.TS API service
printf ">> Installing dependencies for the Llamaindex.TS API service...\n"

if [ ! -d ./packages/api-llamaindex-ts/node_modules ]; then
  printf "Installing dependencies for the Llamaindex.TS API service...\n"
  npm ci --prefix=./packages/api-llamaindex-ts --legacy-peer-deps
  status=$?
  if [ $status -ne 0 ]; then
    printf "API dependencies installation failed with exit code $status. Exiting.\n"
    exit $status
  fi
else
  printf "Dependencies for the Llamaindex.TS API service already installed.\n"
fi

# Install dependencies for the MAF API service
# (run in a subshell so the working directory is unchanged for the steps below)
printf ">> Installing dependencies for the MAF API service...\n"
(
  cd ./packages/api-maf-python/ && \
  python3 -m venv .venv && \
  source .venv/bin/activate && \
  pip install --upgrade pip && \
  pip install -r requirements.txt
)

# Enable Docker Desktop Model Runner
printf ">> Enabling Docker Desktop Model Runner...\n"
docker desktop enable model-runner --tcp 12434

printf ">> Pulling Docker model...\n"
docker model pull ai/phi4:14B-Q4_0
````

## File: infra/hooks/mcp/setup.ps1
````powershell
# This script builds and sets up the MCP containers for Windows PowerShell.

##########################################################################
# MCP Tools
##########################################################################
$tools = @('echo-ping', 'customer-query', 'destination-recommendation', 'itinerary-planning')

Write-Host '>> Creating .env file for the MCP servers...'
foreach ($tool in $tools) {
    $envSample = "./packages/mcp-servers/$tool/.env.sample"
    $envFile = "./packages/mcp-servers/$tool/.env"
    $envDockerFile = "./packages/mcp-servers/$tool/.env.docker"
    if (Test-Path $envSample) {
        Write-Host "Creating .env file for $tool..."
        if (-not (Test-Path $envFile)) {
            Copy-Item $envSample $envFile
            Add-Content $envFile "# File automatically generated on $(Get-Date)"
            Add-Content $envFile "# See .env.sample for more information"
        }
        if (-not (Test-Path $envDockerFile)) {
            Copy-Item $envSample $envDockerFile
            Add-Content $envDockerFile "# File automatically generated on $(Get-Date)"
            Add-Content $envDockerFile "# See .env.sample for more information"
        }
    }
    else {
        Write-Host "No .env.sample found for $tool, skipping..."
    }
}

# Enable Docker Desktop Model Runner
Write-Host 'Enabling Docker Desktop Model Runner...'
docker desktop enable model-runner --tcp 12434
docker model pull ai/phi4:14B-Q4_0

# Build and start the MCP server containers in detached mode.
# Note: keep the service names as an array so PowerShell expands them into
# separate arguments; Join-String would produce a single space-joined argument.
Write-Host '>> Building MCP servers with Docker Compose...'
$composeServices = $tools | ForEach-Object { "mcp-$_" }
docker compose -f docker-compose.yml up --build -d $composeServices
````

## File: infra/hooks/postprovision.ps1
````powershell
# This script is executed after the Azure Developer CLI (azd) provisioning step.
# It sets up the environment for the AI Travel Agents application, including creating .env files,
# installing dependencies, and preparing the MCP tools.
# Note: this script is executed at the root of the project directory
Write-Host "Running post-deployment script for AI Travel Agents application..."

##########################################################################
# API Services (LangChain.js, LlamaIndex.TS, MAF Python)
##########################################################################
Write-Host ">> Creating .env files for API services..."
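
# Note: azd env get-value prints the value followed by a trailing newline;
# piping through Out-String and Trim() below strips that newline before the
# value is embedded in the generated .env files.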
# Get shared Azure OpenAI endpoint
$AZURE_OPENAI_ENDPOINT = azd env get-value AZURE_OPENAI_ENDPOINT | Out-String | ForEach-Object { $_.Trim() }

# Function to create API .env file
function Create-ApiEnvFile {
    param(
        [string]$ApiPath,
        [string]$ServiceName,
        [int]$Port = 4000
    )

    $apiEnvPath = "$ApiPath/.env"
    if (-not (Test-Path $apiEnvPath)) {
        "# File automatically generated on $(Get-Date)" | Out-File $apiEnvPath
        "# See .env.sample for more information" | Add-Content $apiEnvPath
        "" | Add-Content $apiEnvPath
        "AZURE_OPENAI_ENDPOINT=$AZURE_OPENAI_ENDPOINT" | Add-Content $apiEnvPath
        "" | Add-Content $apiEnvPath
        "LLM_PROVIDER=azure-openai" | Add-Content $apiEnvPath
        "" | Add-Content $apiEnvPath
        "AZURE_OPENAI_DEPLOYMENT=gpt-5" | Add-Content $apiEnvPath
        "" | Add-Content $apiEnvPath
        "MCP_CUSTOMER_QUERY_URL=http://localhost:8080" | Add-Content $apiEnvPath
        "MCP_DESTINATION_RECOMMENDATION_URL=http://localhost:5002" | Add-Content $apiEnvPath
        "MCP_ITINERARY_PLANNING_URL=http://localhost:5003" | Add-Content $apiEnvPath
        "MCP_ECHO_PING_URL=http://localhost:5004" | Add-Content $apiEnvPath
        "MCP_ECHO_PING_ACCESS_TOKEN=123-this-is-a-fake-token-please-use-a-token-provider" | Add-Content $apiEnvPath
        "" | Add-Content $apiEnvPath
        "PORT=$Port" | Add-Content $apiEnvPath
        "" | Add-Content $apiEnvPath
        "OTEL_SERVICE_NAME=$ServiceName" | Add-Content $apiEnvPath
        "OTEL_EXPORTER_OTLP_ENDPOINT=http://aspire-dashboard:18889" | Add-Content $apiEnvPath
        "OTEL_EXPORTER_OTLP_HEADERS=header-value" | Add-Content $apiEnvPath
        Write-Host "Created .env file for $ServiceName"
    }

    # Set overrides for docker environment
    $apiEnvDockerPath = "$ApiPath/.env.docker"
    if (-not (Test-Path $apiEnvDockerPath)) {
        "# File automatically generated on $(Get-Date)" | Out-File $apiEnvDockerPath
        "# See .env.sample for more information" | Add-Content $apiEnvDockerPath
        "" | Add-Content $apiEnvDockerPath
        "MCP_CUSTOMER_QUERY_URL=http://customer-query:8080" | Add-Content $apiEnvDockerPath
        "MCP_DESTINATION_RECOMMENDATION_URL=http://destination-recommendation:5002" | Add-Content $apiEnvDockerPath
        "MCP_ITINERARY_PLANNING_URL=http://itinerary-planning:5003" | Add-Content $apiEnvDockerPath
        "MCP_ECHO_PING_URL=http://echo-ping:5004" | Add-Content $apiEnvDockerPath
        Write-Host "Created .env.docker file for $ServiceName"
    }
}

# Create .env files for all API orchestrators
Create-ApiEnvFile -ApiPath "./packages/api-langchain-js" -ServiceName "api-langchain-js" -Port 4000
Create-ApiEnvFile -ApiPath "./packages/api-llamaindex-ts" -ServiceName "api-llamaindex-ts" -Port 4000
Create-ApiEnvFile -ApiPath "./packages/api-maf-python" -ServiceName "api-maf-python" -Port 8000

# Install dependencies for TypeScript API services
$tsApiServices = @("api-langchain-js", "api-llamaindex-ts")
foreach ($service in $tsApiServices) {
    Write-Host ">> Installing dependencies for $service service..."
    if (-not (Test-Path "./packages/$service/node_modules")) {
        Write-Host "Installing dependencies for $service service..."
        npm ci --prefix=./packages/$service --legacy-peer-deps
    }
    else {
        Write-Host "Dependencies for $service service already installed."
    }
}

##########################################################################
# UI (Angular)
##########################################################################
Write-Host ">> Creating .env file for the UI service..."

$uiEnvPath = "./packages/ui-angular/.env"
if (-not (Test-Path $uiEnvPath)) {
    "# File automatically generated on $(Get-Date)" | Out-File $uiEnvPath
    "# See .env.sample for more information" | Add-Content $uiEnvPath
    "" | Add-Content $uiEnvPath

    # Get provisioned API URLs (if available)
    try {
        $NG_API_URL_LANGCHAIN_JS = azd env get-value NG_API_URL_LANGCHAIN_JS 2>$null | Out-String | ForEach-Object { $_.Trim() }
        $NG_API_URL_LLAMAINDEX_TS = azd env get-value NG_API_URL_LLAMAINDEX_TS 2>$null | Out-String | ForEach-Object { $_.Trim() }
        $NG_API_URL_MAF_PYTHON = azd env get-value NG_API_URL_MAF_PYTHON 2>$null | Out-String | ForEach-Object { $_.Trim() }
    }
    catch {
        # Ignore errors if env values don't exist yet
    }

    "# Available provisioned API endpoints:" | Add-Content $uiEnvPath
    "# LangChain.js: $NG_API_URL_LANGCHAIN_JS" | Add-Content $uiEnvPath
    if ($NG_API_URL_LLAMAINDEX_TS) { "# LlamaIndex.TS: $NG_API_URL_LLAMAINDEX_TS" | Add-Content $uiEnvPath }
    if ($NG_API_URL_MAF_PYTHON) { "# MAF Python: $NG_API_URL_MAF_PYTHON" | Add-Content $uiEnvPath }
}

# Install dependencies for the UI service
Write-Host ">> Installing dependencies for the UI service..."
if (-not (Test-Path "./packages/ui-angular/node_modules")) {
    Write-Host "Installing dependencies for the UI service..."
    npm ci --prefix=./packages/ui-angular
}
else {
    Write-Host "Dependencies for the UI service already installed."
}

##########################################################################
# MCP Tools
##########################################################################
$tools = @('echo-ping', 'customer-query', 'destination-recommendation', 'itinerary-planning')

Write-Host ">> Creating .env file for the MCP servers..."
foreach ($tool in $tools) {
    $toolPath = "./packages/mcp-servers/$tool"
    $envSample = "$toolPath/.env.sample"
    $envFile = "$toolPath/.env"
    $envDockerFile = "$toolPath/.env.docker"
    if (Test-Path $envSample) {
        Write-Host "Creating .env file for $tool..."
        if (-not (Test-Path $envFile)) {
            Copy-Item $envSample $envFile
            "# File automatically generated on $(Get-Date)" | Add-Content $envFile
            "# See .env.sample for more information" | Add-Content $envFile
        }
        if (-not (Test-Path $envDockerFile)) {
            Copy-Item $envSample $envDockerFile
            "# File automatically generated on $(Get-Date)" | Add-Content $envDockerFile
            "# See .env.sample for more information" | Add-Content $envDockerFile
        }

        # Install dependencies for the tool service
        Write-Host ">> Installing dependencies for $tool service..."
        if (-not (Test-Path "$toolPath/node_modules")) {
            npm ci --prefix=$toolPath
        }
        else {
            Write-Host "Dependencies for $tool service already installed."
        }
    }
    else {
        Write-Host "No .env.sample found for $tool, skipping..."
    }
}

# Enable Docker Desktop Model Runner
docker desktop enable model-runner --tcp 12434

# Build and start the MCP server containers in detached mode.
# The compose services are named mcp-<tool>; keep them as an array so each
# name is passed as a separate argument.
Write-Host ">> Building MCP servers with Docker Compose..."
$toolServices = $tools | ForEach-Object { "mcp-$_" }
docker compose -f docker-compose.yml up --build -d $toolServices
````

## File: infra/hooks/postprovision.sh
````bash
#!/bin/bash
## This script is executed after the Azure Developer CLI (azd) provisioning step
# It sets up the environment for the AI Travel Agents application, including creating .env files,
# installing dependencies, and preparing the MCP tools.
# Note: this script is executed at the root of the project directory
echo "Running post-deployment script for AI Travel Agents application..."
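
# Usage note: like the PowerShell variant above, this hook assumes it runs
# from the repository root, e.g.:
#   ./infra/hooks/postprovision.sh
# azd invokes it automatically as a postprovision hook.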
##########################################################################
# API Services (LangChain.js, LlamaIndex.TS, MAF Python)
##########################################################################
echo ">> Creating .env files for API services..."

# Get shared Azure OpenAI endpoint
AZURE_OPENAI_ENDPOINT=$(azd env get-value AZURE_OPENAI_ENDPOINT)

# Function to create API .env file
create_api_env_file() {
  local api_path=$1
  local service_name=$2
  local port=$3

  if [ ! -f "$api_path/.env" ]; then
    echo "# File automatically generated on $(date)" > "$api_path/.env"
    echo "# See .env.sample for more information" >> "$api_path/.env"
    echo "" >> "$api_path/.env"
    echo "AZURE_OPENAI_ENDPOINT=\"$AZURE_OPENAI_ENDPOINT\"" >> "$api_path/.env"
    echo "" >> "$api_path/.env"
    echo "LLM_PROVIDER=azure-openai" >> "$api_path/.env"
    echo "" >> "$api_path/.env"
    echo "AZURE_OPENAI_DEPLOYMENT=gpt-5" >> "$api_path/.env"
    echo "" >> "$api_path/.env"
    echo "MCP_CUSTOMER_QUERY_URL=http://localhost:8080" >> "$api_path/.env"
    echo "MCP_DESTINATION_RECOMMENDATION_URL=http://localhost:5002" >> "$api_path/.env"
    echo "MCP_ITINERARY_PLANNING_URL=http://localhost:5003" >> "$api_path/.env"
    echo "MCP_ECHO_PING_URL=http://localhost:5004" >> "$api_path/.env"
    echo "MCP_ECHO_PING_ACCESS_TOKEN=123-this-is-a-fake-token-please-use-a-token-provider" >> "$api_path/.env"
    echo "" >> "$api_path/.env"
    echo "PORT=$port" >> "$api_path/.env"
    echo "" >> "$api_path/.env"
    echo "OTEL_SERVICE_NAME=$service_name" >> "$api_path/.env"
    echo "OTEL_EXPORTER_OTLP_ENDPOINT=http://aspire-dashboard:18889" >> "$api_path/.env"
    echo "OTEL_EXPORTER_OTLP_HEADERS=header-value" >> "$api_path/.env"
    echo "Created .env file for $service_name"
  fi

  # Set overrides for docker environment
  if [ ! -f "$api_path/.env.docker" ]; then
    echo "# File automatically generated on $(date)" > "$api_path/.env.docker"
    echo "# See .env.sample for more information" >> "$api_path/.env.docker"
    echo "" >> "$api_path/.env.docker"
    echo "MCP_CUSTOMER_QUERY_URL=http://customer-query:8080" >> "$api_path/.env.docker"
    echo "MCP_DESTINATION_RECOMMENDATION_URL=http://destination-recommendation:5002" >> "$api_path/.env.docker"
    echo "MCP_ITINERARY_PLANNING_URL=http://itinerary-planning:5003" >> "$api_path/.env.docker"
    echo "MCP_ECHO_PING_URL=http://echo-ping:5004" >> "$api_path/.env.docker"
    echo "Created .env.docker file for $service_name"
  fi
}

# Create .env files for all API orchestrators
create_api_env_file "./packages/api-langchain-js" "api-langchain-js" "4000"
create_api_env_file "./packages/api-llamaindex-ts" "api-llamaindex-ts" "4000"
create_api_env_file "./packages/api-maf-python" "api-maf-python" "8000"

# Install dependencies for TypeScript API services
for service in "api-langchain-js" "api-llamaindex-ts"; do
  echo ">> Installing dependencies for $service service..."
  if [ ! -d "./packages/$service/node_modules" ]; then
    echo "Installing dependencies for $service service..."
    npm ci --prefix=./packages/$service --legacy-peer-deps
  else
    echo "Dependencies for $service service already installed."
  fi
done

##########################################################################
# UI (Angular)
##########################################################################
echo ">> Creating .env file for the UI service..."

if [ ! -f ./packages/ui-angular/.env ]; then
  echo "# File automatically generated on $(date)" > ./packages/ui-angular/.env
  echo "# See .env.sample for more information" >> ./packages/ui-angular/.env
  echo "" >> ./packages/ui-angular/.env

  # Get provisioned API URLs (if available)
  NG_API_URL_LANGCHAIN_JS=$(azd env get-value NG_API_URL_LANGCHAIN_JS 2>/dev/null || echo "")
  NG_API_URL_LLAMAINDEX_TS=$(azd env get-value NG_API_URL_LLAMAINDEX_TS 2>/dev/null || echo "")
  NG_API_URL_MAF_PYTHON=$(azd env get-value NG_API_URL_MAF_PYTHON 2>/dev/null || echo "")

  echo "# Available provisioned API endpoints:" >> ./packages/ui-angular/.env
  echo "# LangChain.js: $NG_API_URL_LANGCHAIN_JS" >> ./packages/ui-angular/.env
  [ -n "$NG_API_URL_LLAMAINDEX_TS" ] && echo "# LlamaIndex.TS: $NG_API_URL_LLAMAINDEX_TS" >> ./packages/ui-angular/.env
  [ -n "$NG_API_URL_MAF_PYTHON" ] && echo "# MAF Python: $NG_API_URL_MAF_PYTHON" >> ./packages/ui-angular/.env
fi

# Install dependencies for the UI service
echo ">> Installing dependencies for the UI service..."
if [ ! -d ./packages/ui-angular/node_modules ]; then
  echo "Installing dependencies for the UI service..."
  npm ci --prefix=./packages/ui-angular
else
  echo "Dependencies for the UI service already installed."
fi

##########################################################################
# MCP Tools
##########################################################################
echo ">> Setting up MCP tools..."
if [ -f ./infra/hooks/mcp/setup.sh ]; then
  echo "Executing MCP tools setup script..."
  ./infra/hooks/mcp/setup.sh
  mcp_status=$?
  if [ $mcp_status -ne 0 ]; then
    echo "MCP tools setup failed with exit code $mcp_status. Exiting."
    exit $mcp_status
  fi
else
  echo "MCP tools setup script not found. Skipping MCP tools setup."
fi
````

## File: infra/hooks/mcp/setup.sh
````bash
#!/bin/bash
# This script builds and sets up the MCP containers.

##########################################################################
# MCP Tools
##########################################################################
# Tool directory names under ./packages/mcp-servers; the matching Docker
# Compose services are named mcp-<tool>.
tools="echo-ping customer-query destination-recommendation itinerary-planning"

printf ">> Creating .env file for the MCP servers...\n"
# for each tool, copy the .env.sample (if it exists) to .env and .env.docker (don't overwrite existing .env files)
for tool in $tools; do
  if [ -f ./packages/mcp-servers/$tool/.env.sample ]; then
    printf "Creating .env file for $tool...\n"
    if [ ! -f ./packages/mcp-servers/$tool/.env ]; then
      cp ./packages/mcp-servers/$tool/.env.sample ./packages/mcp-servers/$tool/.env
      printf "# File automatically generated on $(date)\n" >> ./packages/mcp-servers/$tool/.env
      printf "# See .env.sample for more information\n" >> ./packages/mcp-servers/$tool/.env
    fi
    # Create .env.docker file if it doesn't exist
    if [ ! -f ./packages/mcp-servers/$tool/.env.docker ]; then
      cp ./packages/mcp-servers/$tool/.env.sample ./packages/mcp-servers/$tool/.env.docker
      printf "# File automatically generated on $(date)\n" >> ./packages/mcp-servers/$tool/.env.docker
      printf "# See .env.sample for more information\n" >> ./packages/mcp-servers/$tool/.env.docker
    fi
  else
    printf "No .env.sample found for $tool, skipping...\n"
  fi
done

# Build and start the MCP server containers in detached mode
printf ">> Building MCP servers with Docker Compose...\n"
services=""
for tool in $tools; do
  services="$services mcp-$tool"
done
docker compose -f docker-compose.yml up --build -d $services
````
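
For reference, here is a minimal standalone sketch (not a file from this repository) of the cumulative-progress parsing technique used by `ApiService.stream()` in `packages/ui-angular/src/app/services/api.service.ts`. The helper name `parseSseChunks` and the sample payloads are illustrative assumptions, not code that exists in the repo:

````typescript
// Illustrative sketch: Angular's HttpDownloadProgressEvent.partialText is
// cumulative, so a streaming parser must remember how far it has already
// read and buffer any trailing, half-received JSON until the next chunk.
type ParserState = { lastProcessedIndex: number; incompleteJsonBuffer: string };

function parseSseChunks(
  state: ParserState,
  fullText: string, // all text received so far (cumulative)
  isFinal: boolean // true on the final HTTP Response event
): unknown[] {
  // Only look at the bytes that have not been processed yet
  const newData = fullText.substring(state.lastProcessedIndex);
  state.lastProcessedIndex = fullText.length;
  if (!newData.trim()) return [];

  // Prepend any incomplete fragment from the previous chunk, then split
  // messages on blank lines (the SSE-style framing used above)
  const parts = (state.incompleteJsonBuffer + newData).split(/\n\n+/);
  state.incompleteJsonBuffer = isFinal ? '' : parts.pop() || '';

  return parts
    .map((part) => {
      try {
        return JSON.parse(part.replace(/data:\s+/, '').trim());
      } catch {
        return null; // malformed or empty fragment, skipped
      }
    })
    .filter((event) => event !== null);
}

// Example: a message split across two progress events.
const state: ParserState = { lastProcessedIndex: 0, incompleteJsonBuffer: '' };
console.log(parseSseChunks(state, 'data: {"a":1}\n\ndata: {"b"', false)); // [ { a: 1 } ]
console.log(parseSseChunks(state, 'data: {"a":1}\n\ndata: {"b":2}\n\n', true)); // [ { b: 2 } ]
````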