---
name: Confidence Scoring
description: See the main Model Explainability skill for comprehensive coverage of confidence scoring and calibration.
---

# Confidence Scoring

This skill is covered in detail in the main **Model Explainability** skill. Please refer to:

`44-ai-governance/model-explainability/SKILL.md`

That skill covers:

- SHAP and LIME for feature importance
- Confidence scoring and interpretation
- Calibration techniques
- Explainability for different model types
- LLM-specific explainability
- Presenting explanations to users
- Tools (SHAP, LIME, InterpretML, Captum)
- Real-world explainability examples

For confidence-specific topics, also see:

- Confidence thresholds in `44-ai-governance/human-approval-flows`
- Model risk management in `44-ai-governance/model-risk-management`

---

## Related Skills

* `44-ai-governance/model-explainability` (Main skill)
* `44-ai-governance/human-approval-flows`
* `44-ai-governance/model-risk-management`
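As a quick orientation before reading the referenced skills, here is a minimal sketch of confidence-threshold routing, the pattern covered in `44-ai-governance/human-approval-flows`. The function name, routing labels, and threshold values are illustrative assumptions, not taken from that skill:

```python
def route_by_confidence(confidence: float,
                        auto_threshold: float = 0.9,
                        review_threshold: float = 0.6) -> str:
    """Route a model prediction based on its confidence score.

    Thresholds here are placeholders; real values should be chosen
    per the model-risk-management and human-approval-flows skills.
    """
    if confidence >= auto_threshold:
        return "auto-approve"   # high confidence: act automatically
    if confidence >= review_threshold:
        return "human-review"   # medium confidence: escalate to a human
    return "reject"             # low confidence: do not act


print(route_by_confidence(0.95))  # auto-approve
print(route_by_confidence(0.72))  # human-review
print(route_by_confidence(0.30))  # reject
```

Note that raw model confidence is only meaningful for routing if the model is well calibrated, which is why the calibration material in the main skill should be read first.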