{ "cells": [ { "cell_type": "markdown", "id": "cbae6dd2-2d8e-48cf-bebc-fc6e49868e91", "metadata": {}, "source": [ "# SageMaker + Astra DB, integration example" ] }, { "cell_type": "markdown", "id": "661f23b6-9a44-4787-bd82-0416aeed9a52", "metadata": {}, "source": [ "Use an LLM and an Embedding model from Amazon SageMaker and a Vector Store from [Astra DB](https://docs.datastax.com/en/astra/home/astra.html) to run a simple RAG-based application.\n", "\n", "In this notebook, you will:\n", "- either deploy Embedding model and LLM, or connect to existing ones in SageMaker, and see them in action;\n", "- Connect with Astra DB and create a Vector Store in it;\n", "- populate it with example \"pretend entomology\" information;\n", "- run an AI-powered entomology assistant to help identification of field insect observations.\n", "\n", "> Note: this notebook is designed to run within Amazon SageMaker Studio. See [this page](https://awesome-astra.github.io/docs/pages/aiml/aws/aws-sagemaker/) for more information and references." ] }, { "cell_type": "markdown", "id": "183c0fe8-998d-4594-b010-95cf0c7f3dd1", "metadata": {}, "source": [ "## General setup\n", "\n", "_Note: you may see some dependency-resolution error in the output from `pip` here. Do not pay too much attention: the rest of this notebook will work just fine._" ] }, { "cell_type": "code", "execution_count": null, "id": "dd118138-60cc-406f-af74-a8dca0a5868d", "metadata": { "tags": [] }, "outputs": [], "source": [ "!pip install --upgrade pip\n", "!pip install --quiet \\\n", " \"langchain-astradb>=0.3.3\" \\\n", " \"langchain>=0.2,<0.3\" \\\n", " \"sagemaker>=2,<3\" \\\n", " \"datasets>=2.16.1\" \\\n", " \"jupyter-ai-magics>=2.19.1\" \\\n", " \"faiss-cpu\"\n", "\n", "# (don't worry about the last two lines above: these fix a temporary version-clash issue with JupyterLab's preinstalled image)" ] }, { "cell_type": "code", "execution_count": null, "id": "7db92cd9-3999-4a6f-ba8d-b3a7d8f17093", "metadata": { "tags": [] }, "outputs": [], "source": [ "import json\n", "import getpass\n", "from typing import Dict, List, Optional, Any\n", "\n", "import boto3\n", "\n", "from datasets import load_dataset\n", "\n", "from sagemaker.jumpstart.model import JumpStartModel\n", "from sagemaker.session import Session\n", "from sagemaker import image_uris, model_uris\n", "from sagemaker.predictor import Predictor\n", "from sagemaker.model import Model\n", "from sagemaker.utils import name_from_base\n", "from sagemaker.base_serializers import JSONSerializer\n", "from sagemaker.base_deserializers import JSONDeserializer" ] }, { "cell_type": "code", "execution_count": null, "id": "0fa3d240-c51e-47c1-934d-3aafae1b68c2", "metadata": {}, "outputs": [], "source": [ "from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n", "\n", "from langchain_community.embeddings import SagemakerEndpointEmbeddings\n", "from langchain_community.embeddings.sagemaker_endpoint import EmbeddingsContentHandler\n", "from langchain_community.llms import SagemakerEndpoint\n", "from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n", "\n", "from langchain_astradb import AstraDBVectorStore" ] }, { "cell_type": "code", "execution_count": null, "id": "024ac83b-dfa4-4f6a-994d-eb45ed2d1fe6", "metadata": { "tags": [] }, "outputs": [], "source": [ "boto3_sm_client = boto3.client('runtime.sagemaker')\n", "region_name = boto3.Session().region_name\n", "\n", "sagemaker_session = Session()\n", "aws_role = sagemaker_session.get_caller_identity_arn()" ] }, { "cell_type": 
"markdown", "id": "2cc084f2-480d-4953-9dc3-ffd36e305d45", "metadata": {}, "source": [ "#### Define a custom predictor ser/des\n", "\n", "Prepare a function that specializes the default SageMaker \"Predictor\": this will come handy a few times when working around the `Model` objects.\n", "\n", "> In some cases one can pass a `Model` out of the box, but for these models you want to specify usage of\n", "> JSON serialization/deserialization when interacting with the endpoints." ] }, { "cell_type": "code", "execution_count": null, "id": "05a3e589-e11e-473b-9a15-3c9cc3773658", "metadata": { "tags": [] }, "outputs": [], "source": [ "def my_json_predictor(*pargs, **kwargs):\n", " return Predictor(\n", " serializer=JSONSerializer(),\n", " deserializer=JSONDeserializer(),\n", " *pargs,\n", " **kwargs,\n", " )" ] }, { "cell_type": "markdown", "id": "cb92fb7b-1f89-4ae0-94db-3df2ea337f87", "metadata": {}, "source": [ "## Embedding model, setup\n", "\n", "Here you can choose between a model already deployed in the UI and a programmatic deploy throug the SageMaker SDK." ] }, { "cell_type": "code", "execution_count": null, "id": "ada861aa-c567-41cf-ae04-d11e0e6a78cf", "metadata": { "tags": [] }, "outputs": [], "source": [ "emb_endpoint_supplied = False\n", "\n", "emb_endpoint_name = input(\"Enter the *embedding model* endpoint name if already deployed (leave empty if deploying with SDK):\").strip()\n", "\n", "if emb_endpoint_name == \"\":\n", " print(f\"\\n{'*' * 101}\")\n", " print(\"*** INFO: the embedding model will be deployed programmatically, as no endpoint name was provided. **\")\n", " print(\"*** Re-run this cell and supply the endpoint name if this is incorrect. **\")\n", " print(f\"{'*' * 101}\")\n", "else:\n", " emb_endpoint_supplied = True" ] }, { "cell_type": "markdown", "id": "cec4fbc2-a3d1-48ff-b5cd-45ddda2d78a3", "metadata": {}, "source": [ "The following cell will perform a programmatic deployment of a JumpStart model through the SageMaker SDK.\n", "\n", "Note that it will do nothing else than print a message, instead, if the embedding model endpoint has been given already, i.e. if the model has been deployed beforehand through the UI." ] }, { "cell_type": "markdown", "id": "16c00c54-e78f-4203-8a95-3e1b86d7bacf", "metadata": {}, "source": [ "#### This is the actual deploy step.\n", "\n", "(in case of programmatic deploy, that is).\n", "\n", "> _Note: this cell may take even **ten minutes** to complete. You may check the SageMaker Studio 'endpoints' tab while this is running._" ] }, { "cell_type": "code", "execution_count": null, "id": "ee4a18a9-0a83-4ca8-a099-2f3484b1343b", "metadata": {}, "outputs": [], "source": [ "if not emb_endpoint_supplied:\n", " emb_model_id = \"huggingface-textembedding-gpt-j-6b\"\n", " emb_endpoint_name = name_from_base(emb_model_id)\n", " emb_instance_type = \"ml.g5.24xlarge\"\n", " emb_model_version = \"1.0.1\"\n", " emb_model = JumpStartModel(\n", " model_id=emb_model_id,\n", " model_version=emb_model_version,\n", " instance_type=emb_instance_type,\n", " predictor_cls=my_json_predictor,\n", " )\n", " print(f\"Deploying (endpoint name = '{emb_endpoint_name}') ...\")\n", " emb_predictor = emb_model.deploy(\n", " initial_instance_count=1,\n", " instance_type=emb_instance_type,\n", " endpoint_name=emb_endpoint_name,\n", " )\n", " print(f\"\\nDeploy completed.\")\n", "\n", " # a quick test that the model is behaving\n", " emb_test_result = json.dumps(emb_predictor.predict({\"text_inputs\": [\"I am here!\", \"So do I.\"]}))\n", " print(f\"\\nEmb. 
model functional test resulted in: '{emb_test_result[:50]}...'\")\n", "\n", "    print(f\"\\nSetting the emb. endpoint name to '{emb_endpoint_name}'\")\n", "    emb_endpoint_name = emb_predictor.endpoint_name\n", "else:\n", "    print(\"(nothing to do in this case)\")" ] }, { "cell_type": "markdown", "id": "aa75f8f8-2fd5-434d-ad8d-6fcaa2d792b1", "metadata": {}, "source": [ "## Embedding model, LangChain setup\n", "\n", "To be able to work with the shape of the input and output specific to _this_ embedding model, you need to create and supply a suitable `EmbeddingsContentHandler` when instantiating the LangChain abstraction for the SageMaker embedding:" ] }, { "cell_type": "code", "execution_count": null, "id": "0f48e98a-90b0-4550-8f72-28d7e9ad2879", "metadata": { "tags": [] }, "outputs": [], "source": [ "class SageMakerGPTJ6BContentHandler(EmbeddingsContentHandler):\n", "    content_type = \"application/json\"\n", "    accepts = \"application/json\"\n", "\n", "    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:\n", "        input_encoded = json.dumps({\n", "            \"text_inputs\": inputs,\n", "            **model_kwargs,\n", "        }).encode(\"utf-8\")\n", "        return input_encoded\n", "\n", "    def transform_output(self, output: bytes) -> List[List[float]]:\n", "        \"\"\"\n", "        `output` is actually a botocore.response.StreamingBody object in our case\n", "        \"\"\"\n", "        response_json = json.loads(output.read().decode(\"utf-8\"))\n", "        return response_json[\"embedding\"]\n", "\n", "\n", "emb_content_handler = SageMakerGPTJ6BContentHandler()\n", "\n", "embeddings = SagemakerEndpointEmbeddings(\n", "    endpoint_name=emb_endpoint_name,\n", "    region_name=region_name,\n", "    content_handler=emb_content_handler,\n", ")" ] }, { "cell_type": "markdown", "id": "76691705-8794-480c-8fce-807adfc65bf7", "metadata": {}, "source": [ "### Embedding model, test invocation through LangChain\n", "\n", "As a simple test, check that the model returns vectors normalized to unit norm:" ] }, { "cell_type": "code", "execution_count": null, "id": "c851524f-8e62-42b5-897b-68bbeb5b5333", "metadata": { "tags": [] }, "outputs": [], "source": [ "vector1 = embeddings.embed_query(\"Hello, SageMaker\")\n", "vectors = embeddings.embed_documents([\"Can you embed multiple sentences at once?\", \"Sure, you can.\"])\n", "\n", "print(f\"Vector dimensionality: {len(vector1)}\")\n", "\n", "print(f\"Squared norm of 'vector1': {sum(x*x for x in vector1):.4f}\")\n", "\n", "print(\"Squared norms of 'vectors':\")\n", "for i, v in enumerate(vectors):\n", "    print(f\"  [{i}] squared norm = {sum(x*x for x in v):.4f}\")" ] }, { "cell_type": "markdown", "id": "3c9b6900-8e84-48d2-9a4d-1f1112ca1e33", "metadata": {}, "source": [ "## LLM, setup\n", "\n", "Here you can choose between a model already deployed through the UI and a programmatic deploy through the SageMaker SDK." ] }, { "cell_type": "code", "execution_count": null, "id": "f35f11c9-3d4f-4f90-917d-b067ea431fc4", "metadata": { "tags": [] }, "outputs": [], "source": [ "llm_endpoint_supplied = False\n", "\n", "llm_endpoint_name = input(\"Enter the *LLM* endpoint name if already deployed (leave empty if deploying with SDK):\").strip()\n", "\n", "if llm_endpoint_name == \"\":\n", "    print(f\"\\n{'*' * 89}\")\n", "    print(\"*** INFO: the LLM will be deployed programmatically, as no endpoint name was provided. **\")\n", "    print(\"*** Re-run this cell and supply the endpoint name if this is incorrect. 
**\")\n", " print(f\"{'*' * 89}\")\n", "else:\n", " llm_endpoint_supplied = True" ] }, { "cell_type": "markdown", "id": "dc628aad-ba67-41d8-84a9-0ad026ecbdab", "metadata": {}, "source": [ "The following cell works similarly to the embedding model deployment seen earlier:" ] }, { "cell_type": "markdown", "id": "87d0ee7e-1d19-4b74-b393-785f1fc5c27a", "metadata": {}, "source": [ "#### This is the actual deploy step.\n", "\n", "(in case of programmatic deploy, that is).\n", "\n", "> _Note: this cell may take even **twenty minutes or so** to complete. You may check the SageMaker Studio 'endpoints' tab while this is running._" ] }, { "cell_type": "code", "execution_count": null, "id": "2d41d49d-dabb-47b9-bdfb-ed112af97465", "metadata": { "tags": [] }, "outputs": [], "source": [ "if not llm_endpoint_supplied:\n", " llm_model_id = \"meta-textgeneration-llama-2-70b-f\"\n", " llm_endpoint_name = name_from_base(llm_model_id)\n", " llm_instance_type = \"ml.g5.48xlarge\"\n", " llm_model_version = \"3.0.2\"\n", " llm_model = JumpStartModel(\n", " model_id=llm_model_id,\n", " model_version=llm_model_version,\n", " instance_type=llm_instance_type,\n", " predictor_cls=my_json_predictor,\n", " )\n", "\n", " print(f\"Deploying (endpoint name = '{llm_endpoint_name}') ...\")\n", " llm_predictor = llm_model.deploy(\n", " initial_instance_count=1,\n", " instance_type=llm_instance_type,\n", " endpoint_name=llm_endpoint_name,\n", " accept_eula=True,\n", " )\n", " print(f\"\\nDeploy completed.\")\n", "\n", " # a quick test that the model is behaving\n", " llm_test_result = json.dumps(llm_predictor.predict(\n", " {\"inputs\": \"Write a short three-stanzas poem about ichneumonid wasps.\", \"parameters\": {\"max_new_tokens\": 256}},\n", " ))\n", " print(f\"\\nLLM model functional test resulted in: '{llm_test_result[:70]}...'\")\n", " \n", " print(f\"\\nSetting the LLM endpoint name to '{llm_endpoint_name}'\")\n", " llm_endpoint_name = llm_predictor.endpoint_name\n", "else:\n", " print(\"(nothing to do in this case)\")" ] }, { "cell_type": "markdown", "id": "49314b60-596d-4ecd-9337-d66efef2dbb7", "metadata": {}, "source": [ "## LLM, LangChain setup\n", "\n", "Similarly as what was done for the embedding model, you need to provide a \"Content Handler\" tailored to the specific signature of this LLM." 
] }, { "cell_type": "code", "execution_count": null, "id": "47243fc9-b633-4cb0-86c6-cafeff6757eb", "metadata": {}, "outputs": [], "source": [ "from langchain_community.llms import SagemakerEndpoint\n", "from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n", "\n", "class ContentHandler(LLMContentHandler):\n", " content_type = \"application/json\"\n", " accepts = \"application/json\"\n", "\n", " def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:\n", " input_str = json.dumps({\"inputs\": prompt, \"parameters\": model_kwargs})\n", " return input_str.encode(\"utf-8\")\n", "\n", " def transform_output(self, output: bytes) -> str:\n", " response_json = json.loads(output.read().decode(\"utf-8\"))\n", " return response_json[\"generated_text\"]\n", "\n", "content_handler = ContentHandler()\n", "\n", "llm=SagemakerEndpoint(\n", " endpoint_name=llm_endpoint_name,\n", " # credentials_profile_name=\"credentials-profile-name\",\n", " region_name=region_name,\n", " model_kwargs={\"max_new_tokens\": 3072, \"top_p\": 0.4, \"temperature\": 0.001},\n", " endpoint_kwargs={\n", " \"CustomAttributes\": \"accept_eula=true\",\n", " },\n", " content_handler=content_handler,\n", ")" ] }, { "cell_type": "markdown", "id": "5e083aab-14b6-4272-83b2-cf9d18bf7395", "metadata": {}, "source": [ "_A note about the `endpoint_kwargs` parameter._\n", "\n", "As mentioned earlier, for this model each LLM call must carry a special header to signal acceptance of the EULA. This is accomplished,\n", "at the LangChain level, by passing this parameter when creating the `SagemakerEndpoint` instance. For reference, you can check how this parameter\n", "is used within the LangChain code ([check the code](https://github.com/langchain-ai/langchain/blob/7db6aabf65e70811e40ee6f2e1ba8e0425ba81c9/libs/langchain/langchain/llms/sagemaker_endpoint.py#L359C23-L359C39)).\n", "Essentially the EULA acceptance flag is passed down to the underlying `boto3` library, whose `invoke_endpoint` method accepts the `CustomAttributes` parameter\n", "([check the docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime/client/invoke_endpoint.html#invoke-endpoint))." 
] }, { "cell_type": "markdown", "id": "65ee60c6-b47a-4fd0-b665-0b87974a9ca9", "metadata": {}, "source": [ "### LLM, test invocation through LangChain" ] }, { "cell_type": "code", "execution_count": null, "id": "01ecc091-01f9-40b6-8d96-d6d40e7e2cf3", "metadata": {}, "outputs": [], "source": [ "print(llm.invoke(\"Summarize the differences between insects and scorpions in less than ten words.\").strip())" ] }, { "cell_type": "markdown", "id": "85b69f6f-6bc8-460a-90ee-e00e93fe4d2a", "metadata": {}, "source": [ "## Vector store on Astra DB" ] }, { "cell_type": "markdown", "id": "1c948036-b0f3-4005-859c-3ddda4b3b15f", "metadata": {}, "source": [ "In this section, first provide the credentials to the Astra DB instance, used later to create the LangChain vector store:" ] }, { "cell_type": "code", "execution_count": null, "id": "928c948b-65a6-4935-9055-e21269013144", "metadata": { "tags": [] }, "outputs": [], "source": [ "ASTRA_DB_API_ENDPOINT = input(\"Enter your Astra DB API endpoint ('https://...astra.datastax.com'):\")\n", "ASTRA_DB_APPLICATION_TOKEN = getpass.getpass(\"Enter your Astra DB Token ('AstraCS:...'):\")\n", "desired_keyspace = input(\"Enter your Astra DB namespace (leave empty if default):\")\n", "if desired_keyspace:\n", " ASTRA_DB_KEYSPACE = desired_keyspace\n", "else:\n", " ASTRA_DB_KEYSPACE = None" ] }, { "cell_type": "markdown", "id": "c41f98d5-a4e6-4b98-946b-c60417a98bd5", "metadata": {}, "source": [ "Now a vector store is created, ready for use:" ] }, { "cell_type": "code", "execution_count": null, "id": "833b848b-65bb-45f6-be81-4c5fcdb76056", "metadata": { "tags": [] }, "outputs": [], "source": [ "astra_v_store = AstraDBVectorStore(\n", " token=ASTRA_DB_APPLICATION_TOKEN,\n", " api_endpoint=ASTRA_DB_API_ENDPOINT,\n", " namespace=ASTRA_DB_KEYSPACE,\n", " collection_name=\"sagemaker_demo_v_store\",\n", " embedding=embeddings,\n", ")" ] }, { "cell_type": "markdown", "id": "fbb2229a-936c-4904-83e4-665571bdc999", "metadata": {}, "source": [ "A small example dataset is loaded through HuggingFace. You can print a sample item to get an idea of its structure." 
] }, { "cell_type": "code", "execution_count": null, "id": "98a48730-7bf3-4104-9175-957856e529c9", "metadata": { "tags": [] }, "outputs": [], "source": [ "sample_dataset = load_dataset(\"datastax/entomology\")[\"train\"]\n", "\n", "def _shorten(dct): return {k: v if len(v) < 40 else v[:40]+\"...\" for k, v in dct.items()}\n", "\n", "print(f\"Loaded {len(sample_dataset)} entries\")\n", "print(\"Example entry:\")\n", "print(\"\\n\".join(\n", " f\" {l}\" for l in json.dumps(_shorten(sample_dataset[19]), indent=4).split(\"\\n\")\n", "))" ] }, { "cell_type": "markdown", "id": "16f3f28e-7748-451c-b01b-e11b76148f2b", "metadata": {}, "source": [ "The dataset is prepared for insertion in the vector store:\n", "\n", "_(Note: Care is taken of calculating IDs deterministically to avoid accidental creation of duplicates in case the `add_texts` cell is run repeatedly.)_" ] }, { "cell_type": "code", "execution_count": null, "id": "e2addd01-b3ae-4375-b6a3-14f569f402db", "metadata": { "tags": [] }, "outputs": [], "source": [ "texts = [entry[\"description\"] for entry in sample_dataset]\n", "metadatas = [\n", " {\n", " \"name\": entry[\"name\"],\n", " \"order\": entry[\"order\"],\n", " }\n", " for entry in sample_dataset\n", "]\n", "ids = [entry[\"name\"].lower().replace(\" \", \"_\") for entry in sample_dataset]\n", "\n", "print(f\"Example from `texts`:\\n \\\"{texts[19][:40]}...\\\"\")\n", "print(f\"Example from `metadatas`:\\n {metadatas[19]}\")\n", "print(f\"Example from `ids`:\\n \\\"{ids[19]}\\\"\")" ] }, { "cell_type": "markdown", "id": "06d65238-89ad-4008-926e-d169342ca474", "metadata": {}, "source": [ "This is where the writes take place (and the embedding vectors are calculated for each item in `texts`):" ] }, { "cell_type": "code", "execution_count": null, "id": "5c0982a0-ae91-4e98-a6a3-5690f0026e12", "metadata": { "tags": [] }, "outputs": [], "source": [ "inserted_ids = astra_v_store.add_texts(texts=texts, metadatas=metadatas, ids=ids)\n", "\n", "print(f\"Inserted: {', '.join(inserted_ids)[:80]}... 
({len(inserted_ids)} items)\")" ] }, { "cell_type": "markdown", "id": "87d46e04-e5f4-4056-8250-95db5315f3af", "metadata": {}, "source": [ "## Set up the full pipeline" ] }, { "cell_type": "markdown", "id": "5ab0a7d4-1af2-44b4-8d77-306de3678700", "metadata": {}, "source": [ "### Retrieval part\n", "\n", "Package the search part of the flow in a handy function:" ] }, { "cell_type": "code", "execution_count": null, "id": "a2cc4fee-db34-42db-9555-dec8d619fbf4", "metadata": { "tags": [] }, "outputs": [], "source": [ "def find_similar_entries(observation, k=3, order=None):\n", "    if order:\n", "        md = {\"order\": order}\n", "    else:\n", "        md = {}\n", "    documents = astra_v_store.similarity_search(observation, k=k, filter=md)\n", "    return documents" ] }, { "cell_type": "code", "execution_count": null, "id": "fe8156ed-84cc-44f9-925c-0b19973ac022", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(find_similar_entries(\"Long wings with brown spots, flies erratically, thin legs\", k=2, order=\"Odonata\"))" ] }, { "cell_type": "markdown", "id": "16ca6752-4921-49e4-a0dc-f82aa6229031", "metadata": {}, "source": [ "### Generation part" ] }, { "cell_type": "code", "execution_count": null, "id": "212b236a-b035-42c8-a579-678e6663057f", "metadata": {}, "outputs": [], "source": [ "PROMPT_TEMPLATE = \"\"\"\n", "[INST] <<SYS>>\n", "You are an expert entomologist tasked with helping specimen identification in the field.\n", "You are given relevant excerpts from an invertebrate textbook along with my field observation.\n", "Your task is to compare my observation with the textbook excerpts and come to an identification,\n", "explaining why you came to that conclusion and stating the degree of certainty.\n", "Only use the information provided in the textbook excerpts and in my observation to come to your conclusion!\n", "Be sure to provide, in your verdict, the species' Order together with the full Latin name.\n", "Keep it short and informal, not like a letter, do not start with 'Dear User' or similar,\n", "do not sign your communication.\n", "\n", "TEXTBOOK CANDIDATE MATCHES:\n", "{candidates}\n", "\n", "<</SYS>>\n", "\n", "Here is my observation:\n", "{observation}\n", "\n", "Please assist me in the identification. 
[/INST]\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "id": "8a105442-b995-4b8a-a72f-7d54fa8edf6a", "metadata": {}, "outputs": [], "source": [ "def describe_candidates(matches):\n", "    return \"\\n\".join([\n", "        f\"Candidate species {i+1}: '{doc.metadata['name']}' (order: {doc.metadata['order']})\\nDescription: {doc.page_content}\\n\"\n", "        for i, doc in enumerate(matches)\n", "    ])\n", "\n", "def format_prompt(observation, candidates):\n", "    return PROMPT_TEMPLATE.format(observation=observation, candidates=candidates)" ] }, { "cell_type": "code", "execution_count": null, "id": "f1592e09-c3e3-4f46-99b3-cab7727c13e8", "metadata": {}, "outputs": [], "source": [ "candidates = describe_candidates(find_similar_entries(\"Long wings with brown spots, flies erratically, thin legs\", k=2, order=\"Odonata\"))\n", "print(candidates)" ] }, { "cell_type": "code", "execution_count": null, "id": "92eb2839-c62c-4dd8-9213-1158da929567", "metadata": {}, "outputs": [], "source": [ "print(format_prompt(observation=\"I saw a certain bug!\", candidates=candidates))" ] }, { "cell_type": "code", "execution_count": null, "id": "448cfea3-d05a-4344-987b-54d2c94de49f", "metadata": {}, "outputs": [], "source": [ "def identify_and_suggest(observation, order=None):\n", "    matches = find_similar_entries(observation, k=3, order=order)\n", "    candidates_text = describe_candidates(matches)\n", "    prompt = format_prompt(\n", "        observation=observation,\n", "        candidates=candidates_text,\n", "    )\n", "    return llm.invoke(prompt).strip()" ] }, { "cell_type": "markdown", "id": "4f573031-731d-4482-97fb-52f242f138e7", "metadata": {}, "source": [ "### Putting it all to the test" ] }, { "cell_type": "code", "execution_count": null, "id": "d5ca2e2a-4dbf-48a1-a25b-6bdf30874449", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(identify_and_suggest(\"A large butterfly with pointed wing tips and a yellow spot in the middle of each wing.\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "e486732b-2c7b-49bf-b472-bda4802dbcc1", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(identify_and_suggest(\"I found a nondescript brown bug with small wings, dark elytra and sturdy antennae in a meadow.\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "b0e32bbd-bc24-4f32-8095-35980b633b9f", "metadata": { "tags": [] }, "outputs": [], "source": [ "print(identify_and_suggest(\"What looked like a leaf was in fact moving! It startled me greatly. But I'm not sure it's an insect, I did not see antennae. What was it?\"))" ] }, { "cell_type": "markdown", "id": "a8234b34-7cae-4185-bf39-79b4e65c17ac", "metadata": {}, "source": [ "### The \"final app\":" ] }, { "cell_type": "markdown", "id": "bf8e7ba2-8e8b-42d1-af05-9355a4cc6e73", "metadata": {}, "source": [ "The loop below is a simple \"app\" to repeatedly interact with the entomology assistant:\n", "\n", "- Try it with simple observations such as _I found a strange bug in the library, whose appearance was that of an old piece of paper. What was it?_\n", "- Enter an empty input to end the cell."
] }, { "cell_type": "code", "execution_count": null, "id": "9bb8adbe-6dfc-4b05-aa18-eb662bb98343", "metadata": { "tags": [] }, "outputs": [], "source": [ "while True:\n", " observation = input(\"\\n=============================\\nEnter your field observation: \").strip()\n", " if observation:\n", " print(\"-----------------------------\")\n", " result = identify_and_suggest(observation)\n", " print(f\"Result ==> {result}\")\n", " else:\n", " print(\"(no input)\")\n", " break\n", " \n", "print(\"\\n========\\nGoodbye.\")" ] }, { "cell_type": "markdown", "id": "ffd9cb4e-b00f-46ae-8076-e310e6398531", "metadata": {}, "source": [ "## Appendix: non-LangChain model tests\n", "\n", "The code below is not part of the main LangChain-based application, but shows how you can use the SageMaker endpoints at lower abstraction layers than LangChain, namely by calling directly the boto3 or the SageMaker SDK primitives. Note that in the latter case, if you have deployed the model in the SageMaker UI, you will have to construct a `Predictor` object manually.\n", "\n", "_These non-LangChain idioms are important in themselves, as they open the way to a richer set of possibilities for integrating Astra DB with Amazon SageMaker._" ] }, { "cell_type": "markdown", "id": "5c5bf686-bc78-4bb1-96c4-89ce71200bd9", "metadata": {}, "source": [ "### Embedding model, test invocation through boto3" ] }, { "cell_type": "code", "execution_count": null, "id": "4df5ac27-0ff8-4110-ad1d-222255dc49ba", "metadata": { "tags": [] }, "outputs": [], "source": [ "encoded_body = json.dumps(\n", " {\n", " \"text_inputs\": [\n", " \"Can you invoke a SageMaker embedding model from boto3 directly?\",\n", " \"Wait and see...\"\n", " ]\n", " }\n", ").encode(\"utf-8\")\n", "\n", "response = boto3_sm_client.invoke_endpoint(\n", " EndpointName=emb_endpoint_name,\n", " Body=encoded_body,\n", " ContentType='application/json',\n", " Accept='application/json',\n", ")\n", "\n", "response_body = response['Body']\n", "read_body = response_body.read()\n", "response_json = json.loads(read_body.decode())\n", "\n", "# This is a list 2 lists, each made of 4096 floats:\n", "embedding_vectors = response_json['embedding']\n", "\n", "print(f\"Returned {len(embedding_vectors)} embedding vectors.\")\n", "print(f\"Each is made of {len(embedding_vectors[0])} float values.\")\n", "print(f\" The first one starts with: {str(embedding_vectors[0])[:80]}...\")" ] }, { "cell_type": "markdown", "id": "9985f22e-5874-4692-8149-6eadb59121d6", "metadata": {}, "source": [ "### Embedding model, test invocation through SageMaker SDK" ] }, { "cell_type": "code", "execution_count": null, "id": "a4896578-466a-4856-a22d-bf705255291c", "metadata": { "tags": [] }, "outputs": [], "source": [ "if emb_endpoint_supplied:\n", " emb_predictor = my_json_predictor(emb_endpoint_name)\n", "else:\n", " # `emb_predictor` was already created as part of the deploy-from-code procedure\n", " pass\n", "\n", "response_json = emb_predictor.predict(\n", " {\"text_inputs\": [\n", " \"Can you show me how to use the SageMaker SDK directly for embeddings?\",\n", " \"Let me look at the docs...\"\n", " ]\n", " }\n", ")\n", "\n", "# This is a list 2 lists, each made of 4096 floats:\n", "embedding_vectors = response_json[\"embedding\"]\n", "\n", "print(f\"Returned {len(embedding_vectors)} embedding vectors.\")\n", "print(f\"Each is made of {len(embedding_vectors[0])} float values.\")\n", "print(f\" The first one starts with: {str(embedding_vectors[0])[:80]}...\")" ] }, { "cell_type": "markdown", "id": 
"877e0476-8418-4804-9d2c-5bed720bb5ee", "metadata": {}, "source": [ "### LLM, test invocation through boto3\n", "\n", "For this particular model, the `inputs` field is a string. In this case it is a simple string, juts a piece of text. The particular encoding required to provide system/assistant/user exchanges can be found in the \"Test inference\" tab of your deployed endpoint, looking for the (Python) programmatic example.\n", "\n", "Note how the EULA acceptance is passed in this case ([reference](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html))." ] }, { "cell_type": "code", "execution_count": null, "id": "34d2f9c7-3fc1-4841-ac17-9690eedccd89", "metadata": { "tags": [] }, "outputs": [], "source": [ "sample_question = (\"Answer witty and in less than 20 words: what would \"\n", " \"Heidegger do if he were suddenly transported on the Moon?\")\n", "\n", "encoded_body = json.dumps({\n", " \"inputs\": sample_question,\n", " \"parameters\": {\n", " \"max_new_tokens\": 256,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.6\n", " },\n", "}).encode(\"utf-8\")\n", "\n", "response = boto3_sm_client.invoke_endpoint(\n", " EndpointName=llm_endpoint_name,\n", " Body=encoded_body,\n", " ContentType='application/json',\n", " Accept='application/json',\n", " # This is required for each invocation of this model:\n", " CustomAttributes='accept_eula=true',\n", ")\n", "response_body = response['Body']\n", "read_body = response_body.read()\n", "response_json = json.loads(read_body.decode())\n", "\n", "print(f\"Full response:\\n\")\n", "print(json.dumps(response_json, indent=4))" ] }, { "cell_type": "markdown", "id": "669341f5-835e-4959-afcb-66b6aa51708c", "metadata": {}, "source": [ "### LLM, test invocation through SageMaker SDK\n", "\n", "Note how the EULA acceptance is passed in this case ([reference](https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html))." ] }, { "cell_type": "code", "execution_count": null, "id": "5bb12751-debb-402e-b1b7-8618dc5b2bc8", "metadata": { "tags": [] }, "outputs": [], "source": [ "if llm_endpoint_supplied:\n", " llm_predictor = my_json_predictor(llm_endpoint_name)\n", "else:\n", " # `llm_predictor` was already created as part of the deploy-from-code procedure\n", " pass\n", "\n", "\n", "response_json = llm_predictor.predict(\n", " {\n", " \"inputs\": sample_question,\n", " \"parameters\": {\n", " \"max_new_tokens\": 256,\n", " \"top_p\": 0.9,\n", " \"temperature\": 0.6\n", " },\n", " },\n", " custom_attributes='accept_eula=true',\n", ")\n", "\n", "print(f\"Full response:\\n\")\n", "print(json.dumps(response_json, indent=4))" ] }, { "cell_type": "markdown", "id": "28a07ca8-33a5-4e43-be7e-5d78ad9e3942", "metadata": {}, "source": [ "## (Optional) Astra DB cleanup\n", "\n", "If you want to deallocate all resources used in the demo, besides going through the [AWS side of the operation](https://awesome-astra.github.io/docs/pages/aiml/aws/aws-sagemaker/#cleanup), you might want to delete the vector collection on Astra DB used throughout this example. 
To do so, simply run the following:" ] }, { "cell_type": "code", "execution_count": null, "id": "a85f8fc4-5d5b-4d84-b933-3a0a837aef89", "metadata": {}, "outputs": [], "source": [ "astra_v_store.delete_collection()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.14" } }, "nbformat": 4, "nbformat_minor": 5 }