---
name: inference-latency-profiler
description: |
  Inference Latency Profiler - Auto-activating skill for ML Deployment.
  Triggers on: inference latency profiler.
  Part of the ML Deployment skill category.
allowed-tools: Read, Write, Edit, Bash, Grep
version: 1.0.0
license: MIT
author: Jeremy Longshore
---

# Inference Latency Profiler

## Purpose

This skill provides automated assistance for inference latency profiling tasks within the ML Deployment domain.

## When to Use

This skill activates automatically when you:

- Mention "inference latency profiler" in your request
- Ask about inference latency profiling patterns or best practices
- Need help with ML deployment tasks covering model serving, MLOps pipelines, monitoring, or production optimization

## Capabilities

- Provides step-by-step guidance for profiling inference latency
- Follows industry best practices and patterns
- Generates production-ready code and configurations
- Validates outputs against common standards

## Example Triggers

- "Help me with inference latency profiler"
- "Set up an inference latency profiler"
- "How do I implement an inference latency profiler?"

## Related Skills

Part of the **ML Deployment** skill category.

Tags: mlops, serving, inference, monitoring, production
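
## Example Sketch: Basic Latency Profiling

The skill description above does not prescribe a specific profiling approach. As a minimal sketch of the kind of code this skill might generate, the snippet below times a generic `predict_fn` callable with standard-library tools only and reports mean and tail latencies. The function name, parameters (`num_warmup`, `num_runs`), and returned keys are illustrative assumptions, not part of this skill's defined API.

```python
import time
import statistics
from typing import Any, Callable, Dict, List


def profile_inference_latency(
    predict_fn: Callable[[Any], Any],  # assumption: any callable wrapping model inference
    sample_input: Any,
    num_warmup: int = 10,
    num_runs: int = 100,
) -> Dict[str, float]:
    """Measure per-request latency for a single-input predict callable.

    Warm-up iterations are discarded so one-time costs (lazy initialization,
    JIT compilation, cache population) do not skew the measured distribution.
    """
    # Warm-up runs: results and timings are discarded.
    for _ in range(num_warmup):
        predict_fn(sample_input)

    # Timed runs, recorded in milliseconds.
    latencies_ms: List[float] = []
    for _ in range(num_runs):
        start = time.perf_counter()
        predict_fn(sample_input)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    latencies_ms.sort()
    # statistics.quantiles with n=100 returns the 99 percentile cut points.
    q = statistics.quantiles(latencies_ms, n=100)
    return {
        "mean_ms": statistics.fmean(latencies_ms),
        "p50_ms": q[49],
        "p95_ms": q[94],
        "p99_ms": q[98],
        "max_ms": latencies_ms[-1],
    }


# Hypothetical usage against a served model:
#   stats = profile_inference_latency(lambda x: model.predict(x), sample_input=batch)
#   print(stats)
```

Reporting p95/p99 alongside the mean is the usual practice for serving workloads, since tail latency, not average latency, is what breaches SLOs under load.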