{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction to Using LangChain with MLflow\n",
"\n",
"Welcome to this interactive tutorial designed to introduce you to [LangChain](https://python.langchain.com/docs/get_started/introduction) and its integration with MLflow. This tutorial is structured as a notebook to provide a hands-on, practical learning experience with the simplest and most core features of LangChain."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"Download this Notebook
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### What You Will Learn\n",
"\n",
"- **Understanding LangChain**: Get to know the basics of LangChain and how it is used in developing applications powered by language models.\n",
"- **Chains in LangChain**: Explore the concept of `chains` in LangChain, which are sequences of actions or operations orchestrated to perform complex tasks.\n",
"- **Integration with MLflow**: Learn how LangChain integrates with MLflow, a platform for managing the machine learning lifecycle, including logging, tracking, and deploying models.\n",
"- **Practical Application**: Apply your knowledge to build a LangChain chain that acts like a sous chef, focusing on the preparation steps of a recipe.\n",
"\n",
"### Background on LangChain\n",
"\n",
"LangChain is a Python-based framework that simplifies the development of applications using language models. It is designed to enhance context-awareness and reasoning in applications, allowing for more sophisticated and interactive functionalities.\n",
"\n",
"### What is a Chain?\n",
"\n",
"- **Chain Definition**: In LangChain, a `chain` refers to a series of interconnected components or steps designed to accomplish a specific task. \n",
"- **Chain Example**: In our tutorial, we'll create a chain that simulates a sous chef's role in preparing ingredients and tools for a recipe.\n",
"\n",
"### Tutorial Overview\n",
"\n",
"In this tutorial, you will:\n",
"\n",
"1. **Set Up LangChain and MLflow**: Initialize and configure both LangChain and MLflow.\n",
"2. **Create a Sous Chef Chain**: Develop a LangChain chain that lists ingredients, describes preparation techniques, organizes ingredient staging, and details cooking implements preparation for a given recipe.\n",
"3. **Log and Load the Model**: Utilize MLflow to log the chain model and then load it for prediction.\n",
"4. **Run a Prediction**: Execute the chain to see how it would prepare a restaurant dish for a specific number of customers.\n",
"\n",
"By the end of this tutorial, you will have a solid foundation in using LangChain with MLflow and an understanding of how to construct and manage chains for practical applications.\n",
"\n",
"Let's dive in and explore the world of LangChain and MLflow!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"\n",
"In order to get started with this tutorial, we're going to need a few things first. \n",
"\n",
"1. An OpenAI API Account. You can [sign up here](https://platform.openai.com/login?launch) to get access in order to start programatically accessing one of the leading highly sophisticated LLM services on the planet.\n",
"2. An OpenAI API Key. You can access this once you've created an account by navigating [to the API keys page](https://platform.openai.com/api-keys).\n",
"3. The OpenAI SDK. It's [available on PyPI](https://pypi.org/project/openai/) here. For this tutorial, we're going to be using version 0.28.1 (the last release prior to the 1.0 release).\n",
"4. The LangChain package. You can [find it here on PyPI](https://pypi.org/project/langchain/). \n",
"\n",
"#### Notebook compatibility\n",
"\n",
"With rapidly changing libraries such as `langchain`, examples can become outdated rather quickly and will no longer work. For the purposes of demonstration, here are the critical dependencies that are recommended to use to effectively run this notebook:\n",
"\n",
"| Package | Version |\n",
"|:--------------------|:------------|\n",
"| langchain | **0.1.16** |\n",
"| lanchain-community | **0.0.33** |\n",
"| langchain-openai | **0.0.8** |\n",
"| openai | **1.12.0** |\n",
"| tiktoken | **0.6.0** |\n",
"| mlflow | **2.12.1** |\n",
"\n",
"If you attempt to execute this notebook with different versions, it may function correctly, but it is recommended to use the precise versions above to ensure that your code executes properly.\n",
"\n",
"To install the dependent packages simply run:\n",
"\n",
"```bash\n",
"pip install openai==1.12.0 tiktoken==0.6.0 langchain==0.1.16 langchain-openai==0.0.33 langchain-community==0.0.33 mlflow==2.12.1\n",
"```\n",
"\n",
"> NOTE: This tutorial does not support openai<1 and is not guaranteed to work with versions of langchain<1.16.0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### API Key Security Overview\n",
"\n",
"API keys, especially for SaaS Large Language Models (LLMs), are as sensitive as financial information due to their connection to billing.\n",
"\n",
"If you're interested in learning more about an alternative MLflow solution that securely manages your access keys, [read about MLflow AI Gateway here](https://www.mlflow.org/docs/latest/llms/deployments/index.html).\n",
"\n",
"#### Essential Practices:\n",
"\n",
"- **Confidentiality**: Always keep API keys private.\n",
"- **Secure Storage**: Prefer environment variables or secure services.\n",
"- **Frequent Rotation**: Regularly update keys to avoid unauthorized access.\n",
"\n",
"#### Configuring API Keys\n",
"\n",
"For secure usage, set API keys as environment variables.\n",
"\n",
"**macOS/Linux**:\n",
"Refer to [Apple's guide on using environment variables in Terminal](https://support.apple.com/en-gb/guide/terminal/apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac) for detailed instructions.\n",
"\n",
"**Windows**:\n",
"Follow the steps outlined in [Microsoft's documentation on environment variables](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7.4).\n"
]
},
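{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are running this notebook interactively and prefer not to hard-code your key, the following minimal sketch (using Python's built-in `getpass`) sets the environment variable for the current kernel session only. It is an illustrative pattern, not a required step of this tutorial:\n",
"\n",
"```python\n",
"import os\n",
"from getpass import getpass\n",
"\n",
"# Prompt for the key at runtime so it never appears in the notebook source.\n",
"# The variable only lives for the current kernel session; nothing is written to disk.\n",
"if \"OPENAI_API_KEY\" not in os.environ:\n",
"    os.environ[\"OPENAI_API_KEY\"] = getpass(\"Enter your OpenAI API key: \")\n",
"```"
]
},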
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"import mlflow\n",
"\n",
"assert \"OPENAI_API_KEY\" in os.environ, \"Please set the OPENAI_API_KEY environment variable.\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **NOTE: If you'd like to use Azure OpenAI with LangChain, you need to install `openai>=1.10.0` and `langchain-openai>=0.0.6`, as well as to specify the following credentials and parameters:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# NOTE: Only run this cell if you are using Azure interfaces with OpenAI. If you have a direct account with\n",
"# OpenAI, ignore this cell.\n",
"\n",
"from langchain_openai import AzureOpenAI, AzureOpenAIEmbeddings\n",
"\n",
"# Set this to `azure`\n",
"os.environ[\"OPENAI_API_TYPE\"] = \"azure\"\n",
"# The API version you want to use: set this to `2023-05-15` for the released version.\n",
"os.environ[\"OPENAI_API_VERSION\"] = \"2023-05-15\"\n",
"assert (\n",
" \"AZURE_OPENAI_ENDPOINT\" in os.environ\n",
"), \"Please set the AZURE_OPENAI_ENDPOINT environment variable. It is the base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.\"\n",
"assert (\n",
" \"OPENAI_API_KEY\" in os.environ\n",
"), \"Please set the OPENAI_API_KEY environment variable. It is the API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.\"\n",
"\n",
"azure_openai_llm = AzureOpenAI(\n",
" deployment_name=\"\",\n",
" model_name=\"gpt-4o-mini\",\n",
")\n",
"azure_openai_embeddings = AzureOpenAIEmbeddings(\n",
" azure_deployment=\"\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Configuring the OpenAI Completions Model in LangChain\n",
"\n",
"In this section of the tutorial, we have configured the OpenAI model with specific parameters suitable for generating language completions. We're using a Completions model, not ChatCompletions, which means each request is independent, and the entire prompt needs to be included every time to generate a response.\n",
"\n",
"#### Understanding the Completions Model\n",
"\n",
"- **Completions Model**: This model does not maintain contextual information across requests. It's ideal for tasks where each request is standalone and doesn't depend on past interactions. Offers flexibility for a variety of non-conversational applications.\n",
"\n",
"- **No Contextual Memory**: The lack of memory of previous interactions means the model is best suited for one-off requests or scenarios where continuity of the conversation is not required.\n",
"\n",
"- **Comparisons with the ChatCompletions Model Type**: Tailored for conversational AI, maintaining context across multiple exchanges for a continuous conversation. Suitable for chatbots or applications where dialogue history is crucial.\n",
"\n",
"In this tutorial, we use the Completions model for its simplicity and effectiveness in handling individual, independent requests, aligning with our tutorial's focus on preparation steps before cooking.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0.1, max_tokens=1000)"
]
},
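{
"cell_type": "markdown",
"metadata": {},
"source": [
"For contrast, if your use case called for the ChatCompletions-style interface discussed above, you would instantiate a chat model instead. The following is a minimal sketch under the `langchain-openai` version pinned earlier; the model name is only an example, and this snippet is not needed for the sous chef chain built below:\n",
"\n",
"```python\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Chat-oriented counterpart to the Completions model used in this tutorial.\n",
"# Shown for comparison only; the rest of the notebook uses `llm` defined above.\n",
"chat_llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0.1, max_tokens=1000)\n",
"```"
]
},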
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Explanation of the Template Instruction for Sous Chef Simulation\n",
"\n",
"In this part of the tutorial, we have crafted a detailed prompt template that simulates the role of a fine dining sous chef. This template is designed to guide the LangChain model in preparing for a dish, focusing exclusively on the [mise-en-place](https://en.wikipedia.org/wiki/Mise_en_place) process.\n",
"\n",
"#### Breakdown of the Template Instruction\n",
"\n",
"- **Sous Chef Roleplay**: The prompt places the language model in the role of a sous chef, emphasizing meticulous preparation.\n",
"- **Task Outline**:\n",
" 1. **List the Ingredients**: Instructs the model to itemize all necessary ingredients for a given dish.\n",
" 2. **Preparation Techniques**: Asks the model to describe necessary techniques for ingredient preparation, such as cutting and processing.\n",
" 3. **Ingredient Staging**: Requires the model to provide detailed staging instructions for each ingredient, considering the sequence and timing of use.\n",
" 4. **Cooking Implements Preparation**: Guides the model to list and prepare all cooking tools required for the dish's preparation phase.\n",
"\n",
"- **Scope Limitation**: The template is explicitly designed to stop at the preparation stage, avoiding the actual cooking process. It focuses on setting up everything needed for the chef to begin cooking.\n",
"\n",
"- **Dynamic Inputs**: The template is adaptable to different recipes and customer counts, as indicated by placeholders `{recipe}` and `{customer_count}`.\n",
"\n",
"This template instruction is a key component of the tutorial, demonstrating how to leverage LangChain declaring instructive prompts with parametrized features geared toward single-purpose completions-style applications.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"template_instruction = (\n",
" \"Imagine you are a fine dining sous chef. Your task is to meticulously prepare for a dish, focusing on the mise-en-place process.\"\n",
" \"Given a recipe, your responsibilities are: \"\n",
" \"1. List the Ingredients: Carefully itemize all ingredients required for the dish, ensuring every element is accounted for. \"\n",
" \"2. Preparation Techniques: Describe the techniques and operations needed for preparing each ingredient. This includes cutting, \"\n",
" \"processing, or any other form of preparation. Focus on the art of mise-en-place, ensuring everything is perfectly set up before cooking begins.\"\n",
" \"3. Ingredient Staging: Provide detailed instructions on how to stage and arrange each ingredient. Explain where each item should be placed for \"\n",
" \"efficient access during the cooking process. Consider the timing and sequence of use for each ingredient. \"\n",
" \"4. Cooking Implements Preparation: Enumerate all the cooking tools and implements needed for each phase of the dish's preparation. \"\n",
" \"Detail any specific preparation these tools might need before the actual cooking starts and describe what pots, pans, dishes, and \"\n",
" \"other tools will be needed for the final preparation.\"\n",
" \"Remember, your guidance stops at the preparation stage. Do not delve into the actual cooking process of the dish. \"\n",
" \"Your goal is to set the stage flawlessly for the chef to execute the cooking seamlessly.\"\n",
" \"The recipe you are given is for: {recipe} for {customer_count} people. \"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Constructing the LangChain Chain\n",
"\n",
"We start by setting up a `PromptTemplate` in LangChain, tailored to our sous chef scenario. The template is designed to dynamically accept inputs like the recipe name and customer count. Then, we initialize an `LLMChain` by combining our OpenAI language model with the prompt template, creating a chain that can simulate the sous chef's preparation process.\n",
"\n",
"### Logging the Chain in MLflow\n",
"\n",
"With the chain ready, we proceed to log it in MLflow. This is done within an MLflow run, which not only logs the chain model under a specified name but also tracks various details about the model. The logging process ensures that all aspects of the chain are recorded, allowing for efficient version control and future retrieval."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate(\n",
" input_variables=[\"recipe\", \"customer_count\"],\n",
" template=template_instruction,\n",
")\n",
"chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
"mlflow.set_experiment(\"Cooking Assistant\")\n",
"\n",
"with mlflow.start_run():\n",
" model_info = mlflow.langchain.log_model(chain, \"langchain_model\")"
]
},
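{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optionally, you can attach an input example when logging so that MLflow can infer and record a model signature alongside the chain. A minimal sketch of the same logging call with this extra argument (the example payload is illustrative):\n",
"\n",
"```python\n",
"# Optional variant of the logging call above: the input example lets MLflow\n",
"# infer a signature that is displayed with the model in the UI.\n",
"with mlflow.start_run():\n",
"    model_info = mlflow.langchain.log_model(\n",
"        chain,\n",
"        \"langchain_model\",\n",
"        input_example={\"recipe\": \"boeuf bourguignon\", \"customer_count\": \"4\"},\n",
"    )\n",
"```"
]
},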
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we navigate to the MLflow UI, we'll see our logged LangChain model.\n",
"\n",
"\n",
"![Our LangChain Model in the MLflow UI](https://i.imgur.com/CeCeyp2.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading the Model and Predicting with MLflow\n",
"\n",
"In this part of our tutorial, we demonstrate the practical application of the logged LangChain model using MLflow. We load the model and run a prediction for a specific dish, showcasing the model's ability to assist in culinary preparation.\n",
"\n",
"#### Model Loading and Execution\n",
"\n",
"After logging our LangChain chain with MLflow, we proceed to load the model using MLflow's `pyfunc.load_model` function. This step is crucial as it brings our previously logged model into an executable state.\n",
"\n",
"We then input a specific recipe along with the customer count into our model. In this case, we use the recipe for \"boeuf bourginon\" and specify that it's for 12 customers. The model, acting as a sous chef, processes this information and generates detailed preparation instructions.\n",
"\n",
"#### Output from the Model\n",
"\n",
"The model's output provides a comprehensive guide on preparing \"boeuf bourginon,\" covering several critical aspects:\n",
"\n",
"- **Ingredients List**: A detailed enumeration of all necessary ingredients, quantified and tailored for the specified number of customers.\n",
"- **Preparation Techniques**: Step-by-step instructions on how to prepare each ingredient, following the principles of mise-en-place.\n",
"- **Ingredient Staging**: Guidance on how to organize and stage the ingredients, ensuring efficient access and use during the cooking process.\n",
"- **Cooking Implements Preparation**: Instructions on preparing the necessary cooking tools and implements, from pots and pans to bowls and colanders.\n",
"\n",
"This example demonstrates the power and utility of combining LangChain and MLflow in a practical scenario. It highlights how such an integration can effectively translate complex requirements into actionable steps, aiding in tasks that require precision and careful planning.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"1. Ingredients:\n",
"- 2 pounds beef chuck, cut into 1-inch cubes\n",
"- 6 slices of bacon, diced\n",
"- 2 tablespoons olive oil\n",
"- 1 onion, diced\n",
"- 2 carrots, diced\n",
"- 2 cloves of garlic, minced\n",
"- 1 tablespoon tomato paste\n",
"- 1 bottle of red wine\n",
"- 2 cups beef broth\n",
"- 1 bouquet garni (thyme, bay leaf, parsley)\n",
"- 1 pound pearl onions, peeled\n",
"- 1 pound mushrooms, quartered\n",
"- Salt and pepper to taste\n",
"- Chopped parsley for garnish\n",
"\n",
"2. Preparation Techniques:\n",
"- Cut the beef chuck into 1-inch cubes and set aside.\n",
"- Dice the bacon and set aside.\n",
"- Peel and dice the onion and carrots.\n",
"- Mince the garlic cloves.\n",
"- Prepare the bouquet garni by tying together a few sprigs of thyme, a bay leaf, and a few sprigs of parsley with kitchen twine.\n",
"- Peel the pearl onions and quarter the mushrooms.\n",
"\n",
"3. Ingredient Staging:\n",
"- Place the beef cubes in a bowl and season with salt and pepper.\n",
"- In a large Dutch oven, heat the olive oil over medium-high heat.\n",
"- Add the diced bacon and cook until crispy.\n",
"- Remove the bacon from the pot and set aside.\n",
"- In the same pot, add the seasoned beef cubes and cook until browned on all sides.\n",
"- Remove the beef from the pot and set aside.\n",
"- In the same pot, add the diced onion and carrots and cook until softened.\n",
"- Add the minced garlic and cook for an additional minute.\n",
"- Stir in the tomato paste and cook for another minute.\n",
"- Add the beef and bacon back into the pot.\n",
"- Pour in the red wine and beef broth.\n",
"- Add the bouquet garni and bring to a simmer.\n",
"- Cover the pot and let it simmer for 2 hours, stirring occasionally.\n",
"- After 2 hours, add the pearl onions and mushrooms to the pot.\n",
"- Continue to simmer for an additional hour, or until the beef is tender.\n",
"- Remove the bouquet garni and discard.\n",
"- Taste and adjust seasoning with salt and pepper if needed.\n",
"- Garnish with chopped parsley before serving.\n",
"\n",
"4. Cooking Implements Preparation:\n",
"- Large Dutch oven or heavy-bottomed pot\n",
"- Kitchen twine\n",
"- Cutting board\n",
"- Chef's knife\n",
"- Wooden spoon\n",
"- Measuring cups and spoons\n",
"- Bowls for prepped ingredients\n",
"- Tongs for handling meat\n",
"- Ladle for serving\n",
"- Serving dishes for the final dish.\n"
]
}
],
"source": [
"loaded_model = mlflow.pyfunc.load_model(model_info.model_uri)\n",
"\n",
"dish1 = loaded_model.predict({\"recipe\": \"boeuf bourginon\", \"customer_count\": \"4\"})\n",
"\n",
"print(dish1[0])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Ingredients:\n",
"- 2 cups all-purpose flour\n",
"- 2 teaspoons baking powder\n",
"- 1/2 teaspoon salt\n",
"- 2 eggs\n",
"- 1 1/2 cups water\n",
"- 1/2 head cabbage, thinly sliced\n",
"- 1/2 cup green onions, thinly sliced\n",
"- 1/2 cup carrots, grated\n",
"- 1/2 cup red bell pepper, thinly sliced\n",
"- 1/2 cup cooked shrimp, chopped\n",
"- 1/2 cup cooked bacon, chopped\n",
"- 1/2 cup pickled ginger, chopped\n",
"- 1/2 cup tenkasu (tempura flakes)\n",
"- 1/2 cup mayonnaise\n",
"- 1/4 cup okonomiyaki sauce\n",
"- 1/4 cup katsuobushi (dried bonito flakes)\n",
"- Vegetable oil for cooking\n",
"\n",
"Preparation Techniques:\n",
"1. In a large mixing bowl, combine the flour, baking powder, and salt.\n",
"2. In a separate bowl, beat the eggs and water together.\n",
"3. Slowly pour the egg mixture into the flour mixture, stirring until well combined.\n",
"4. Set the batter aside to rest for 10 minutes.\n",
"5. Thinly slice the cabbage, green onions, and red bell pepper.\n",
"6. Grate the carrots.\n",
"7. Chop the cooked shrimp, bacon, and pickled ginger.\n",
"8. Prepare the tenkasu, mayonnaise, okonomiyaki sauce, and katsuobushi.\n",
"\n",
"Ingredient Staging:\n",
"1. Place the sliced cabbage, green onions, carrots, red bell pepper, shrimp, bacon, and pickled ginger in separate bowls.\n",
"2. Arrange the tenkasu, mayonnaise, okonomiyaki sauce, and katsuobushi in small dishes.\n",
"3. Set up a large griddle or non-stick pan for cooking the okonomiyaki.\n",
"\n",
"Cooking Implements Preparation:\n",
"1. Make sure the griddle or pan is clean and dry.\n",
"2. Heat the griddle or pan over medium heat.\n",
"3. Have a spatula, tongs, and a large plate ready for flipping and serving the okonomiyaki.\n",
"4. Prepare a large plate or platter for serving the finished okonomiyaki.\n",
"\n",
"Remember, mise-en-place is key to a successful dish. Make sure all ingredients are prepped and ready to go before starting the cooking process. Happy cooking!\n"
]
}
],
"source": [
"dish2 = loaded_model.predict({\"recipe\": \"Okonomiyaki\", \"customer_count\": \"12\"})\n",
"\n",
"print(dish2[0])"
]
},
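{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the loaded `pyfunc` model returns a list of generations, you can also pass several inputs in a single call. The sketch below is illustrative: the dishes are arbitrary examples, and it assumes batched inputs behave the same way as the single-dictionary calls used above:\n",
"\n",
"```python\n",
"# Hypothetical batched call: each dictionary mirrors the single-input payloads above.\n",
"batch_inputs = [\n",
"    {\"recipe\": \"ratatouille\", \"customer_count\": \"6\"},\n",
"    {\"recipe\": \"pad thai\", \"customer_count\": \"2\"},\n",
"]\n",
"\n",
"batch_results = loaded_model.predict(batch_inputs)\n",
"\n",
"for result in batch_results:\n",
"    print(result)\n",
"    print(\"-\" * 40)\n",
"```"
]
},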
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Conclusion\n",
"\n",
"In the final step of our tutorial, we execute another prediction using our LangChain model. This time, we explore the preparation for \"Okonomiyaki,\" a Japanese dish, for 12 customers. This demonstrates the model's adaptability and versatility across various cuisines.\n",
"\n",
"#### Additional Prediction with the Loaded Model\n",
"\n",
"The model processes the input for \"Okonomiyaki\" and outputs detailed preparation steps. This includes listing the ingredients, explaining the preparation techniques, guiding ingredient staging, and detailing the required cooking implements, showcasing the model's capability to handle diverse recipes with precision.\n",
"\n",
"#### What We've Learned\n",
"\n",
"- **Model Versatility**: The tutorial highlighted the LangChain framework for assembling component parts of a basic LLM application, chaining a specific instructional prompt to a Completions-style LLM.\n",
"- **MLflow's Role in Model Management**: The integration of LangChain with MLflow demonstrated effective model lifecycle management, from creation and logging to prediction execution.\n",
"\n",
"### Closing Thoughts\n",
"\n",
"This tutorial offered an insightful journey through creating, managing, and utilizing a LangChain model with MLflow for culinary preparation. It showcased the practical applications and adaptability of LangChain in complex scenarios. We hope this experience has provided valuable knowledge and encourages you to further explore and innovate using LangChain and MLflow in your projects. Happy coding!\n",
"\n",
"### What's next?\n",
"\n",
"To continue learning about the capabilities of MLflow and LangChain in more complex examples, we encourage you to continue your learning with [the additional LangChain tutorials](https://www.mlflow.org/docs/latest/llms/langchain/index.html#advanced-tutorials)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}