{ "cells": [ { "cell_type": "markdown", "id": "f64b92eb", "metadata": {}, "source": [ "# Re-programming to Python of LLM- and Chatbook packages\n", "\n", "## Introduction\n", "\n", "In this computational document (converted into a Markdown and/or blog post) I would like to proclaim my efforts to re-program the Large Language Models (LLM) \n", "Raku packages into Python packages. \n", "\n", "\n", "I heavily borrowed use case ideas and functionality designs from LLM works of Wolfram Research, Inc. (WRI), see [SW1, SW2]. \n", "Hence, opportunistically, I am also going to include comparisons with Wolfram Language (WL) (aka Mathematica.)" ] }, { "cell_type": "markdown", "id": "93312c2a", "metadata": {}, "source": [ "## Why doing this?\n", "\n", "Here is a list of reasons why I did the Raku-to-Python reprogramming:\n", "\n", "- I am mostly doing that kind re-programmings for getting new perspectives and -- more importantly -- to review and evaluate the underlying software architecture of the packages.\n", " - Generally speaking, my Raku packages are not used by others much, hence re-programming to any other language is a fairly good way to review and evaluate them.\n", "- Since I, sort of, \"do not care\" about Python, I usually try to make only \"advanced\" Minimal Viable Products (MVPs) in Python. \n", " - Hence, the brainstorming perspective of removing \"the fluff\" from the Raku packages.\n", "- Of course, an \"advanced MVP\" has a set of fairly useful functionalities. \n", " - If the scope of the package is small, I can make its Python translation as advanced (or better) than the corresponding Raku package.\n", "- Good, useful documentation is essential, hence:\n", " - I usually write \"complete enough\" (often \"extensive\") documentation of the Raku packages I create and publish.\n", " - The Raku documentation is of course a good start for the corresponding Python documentation.\n", " - ...and a way to review and evaluate it.\n", "- In the re-programming of the Raku LLM packages, I used a Raku Jupyter Chatbook for translation of Raku code into Python code.\n", " - In other words: I used LLMs to reprogram LLM interaction software.\n", " - That, of course, is a typical application of the principle [\"eat your own dog food.\"](https://en.wikipedia.org/wiki/Eating_your_own_dog_food)\n", "- I also used a Raku chatbook to write the Python-centric article \"Workflows with LLM functions\", [AAn3py].\n", "- The \"data package\" [\"LLM::Prompts\"](https://raku.land/zef:antononcube/LLM::Prompts) provides ≈200 prompts -- it is beneficial to have those prompts in other programming languages.\n", " - The usefulness of chat cells in chatbooks is greatly enhanced with the prompt expansion provided by \"LLM::Prompts\", [AAv2].\n", " - It was instructive to reprogram into Python the corresponding Domain Specific Language (DSL) for prompt specifications.\n", " - Again, an LLM interaction in a chatbook was used to speed-up the re-programming." 
] }, { "cell_type": "markdown", "id": "a978a612", "metadata": {}, "source": [ "### Article structure\n", "\n", "- **Big picture use case warm-up** \n", " Mind-map for LLMs and flowchart for chatbooks\n", "- **Tabulated comparisons** \n", " Quicker overview, clickable entries\n", "- **LLM functions examples** \n", " Fundamental in order to \"manage\" LLMs\n", "- **LLM prompts examples** \n", " Tools for pre-conditioning and bias (of LLMs)\n", "- **Chatbook multi-cell chats** \n", " Must have for LLMs\n", "- **Observations, remarks, and conclusions** \n", " Could be used to start the article with...\n", "- **Future plans** \n", " Missing functionalities " ] }, { "cell_type": "markdown", "id": "6e353cdf", "metadata": {}, "source": [ "------\n", "\n", "## Big picture warm-up" ] }, { "cell_type": "markdown", "id": "0fb1813e", "metadata": {}, "source": [ "### Mind-map\n", "\n", "Here is a mind-map aimed at assisting in understanding and evaluating the discussed LLM functionalities in this document:" ] }, { "cell_type": "code", "execution_count": 25, "id": "94159912", "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#% mermaid\n", "mindmap\n", " **LLM
functionalities**\n", " LLM services access\n", " ::icon(fa fa-cloud)\n", " ChatGPT\n", " DALL-E\n", " ::icon(fas fa-image)\n", " PaLM\n", "\n", " LLM functions\n", " ::icon(fas fa-box-open)\n", "\n", " LLM chat objects\n", " ::icon(fa fa-comments)\n", "\n", " LLM prompts\n", " ::icon(fa fa-terminal)\n", "\n", " Chatbooks\n", " ::icon(fa fa-book-open)\n", " Direct access of LLM services\n", " Notebook-wide chats" ] }, { "cell_type": "markdown", "id": "c519b5bb", "metadata": {}, "source": [ "\n", "### Primary use case \n", "\n", "A ***primary use case*** for LLMs in Raku is the following:\n", "\n", "> A Raku \"chat notebook solution\" -- ***chatbook*** -- that allows convenient access to LLM services and facilitates multiple multi-cell chat-interactions with LLMs.\n", "\n", "***We are interested in other types of workflows, but they would be either readily available or easy to implement if the primary use case is developed, tested, and documented.***\n", "\n", "An expanded version of the use-case formulation can be as follows:\n", "\n", "> The Raku chatbook solution aims to provide a user-friendly interface for interacting with LLM (Large Language Model) services and offers seamless integration for managing multiple multi-cell chats with LLMs. The key features of this solution include:\n", ">\n", "> 1. **Direct Access to LLM Services:** \n", " The notebook solution provides a straightforward way to access LLM services without the need for complex setup or configuration. Users can easily connect to their preferred LLM service provider and start utilizing their language modeling capabilities.\n", ">\n", "> 2. **Easy Creation of Chat Objects:** \n", " The solution allows users to effortlessly create chat objects within the notebook environment. These chat objects serve as individual instances for conducting conversations with LLMs and act as containers for storing chat-related information.\n", ">\n", "> 3. **Simple Access and Invocation of Chat Cells:** \n", " Users can conveniently access and invoke chat cells within the notebook solution. Chat cells represent individual conversation steps or inputs given to the LLM. Users can easily interact with the LLM by adding, modifying, or removing chat cells.\n", ">\n", "> 4. **Native Support for Multi-Cell Chats:** \n", " The notebook solution offers native support for managing multi-cell chats per chat object. Users can organize their conversations into multiple cells, making it easier to structure and navigate through complex dialogues. The solution ensures that the context and history of each chat object are preserved throughout the conversation.\n", "\n" ] }, { "cell_type": "markdown", "id": "00f60e1d", "metadata": {}, "source": [ "Here is a flowchart that outlines the solution derived with the Raku LLM packages discussed below:" ] }, { "cell_type": "code", "execution_count": 2, "id": "95bc622d", "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#% mermaid\n", "flowchart LR\n", " OpenAI{{OpenAI}}\n", " PaLM{{PaLM}}\n", " LLMFunc[[LLM::Functions]]\n", " LLMProm[[LLM::Prompts]]\n", " CODB[(Chat objects)]\n", " PDB[(Prompts)]\n", " CCell[/Chat cell/]\n", " CRCell[/Chat result cell/]\n", " CIDQ{Chat ID
specified?}\n", " CIDEQ{Chat ID
exists in DB?}\n", " RECO[Retrieve existing
chat object]\n", " COEval[Message
evaluation]\n", " PromParse[Prompt
DSL spec parsing]\n", " KPFQ{Known
prompts
found?}\n", " PromExp[Prompt
expansion]\n", " CNCO[Create new
chat object]\n", " CIDNone[\"Assume chat ID
is 'NONE'\"] \n", " subgraph Chatbook frontend \n", " CCell\n", " CRCell\n", " end\n", " subgraph Chatbook backend\n", " CIDQ\n", " CIDEQ\n", " CIDNone\n", " RECO\n", " CNCO\n", " CODB\n", " end\n", " subgraph Prompt processing\n", " PDB\n", " LLMProm\n", " PromParse\n", " KPFQ\n", " PromExp \n", " end\n", " subgraph LLM interaction\n", " COEval\n", " LLMFunc\n", " PaLM\n", " OpenAI\n", " end\n", " CCell --> CIDQ\n", " CIDQ --> |yes| CIDEQ\n", " CIDEQ --> |yes| RECO\n", " RECO --> PromParse\n", " COEval --> CRCell\n", " CIDEQ -.- CODB\n", " CIDEQ --> |no| CNCO\n", " LLMFunc -.- CNCO -.- CODB\n", " CNCO --> PromParse --> KPFQ\n", " KPFQ --> |yes| PromExp\n", " KPFQ --> |no| COEval\n", " PromParse -.- LLMProm \n", " PromExp -.- LLMProm\n", " PromExp --> COEval \n", " LLMProm -.- PDB\n", " CIDQ --> |no| CIDNone\n", " CIDNone --> CIDEQ\n", " COEval -.- LLMFunc\n", " LLMFunc <-.-> OpenAI\n", " LLMFunc <-.-> PaLM" ] }, { "cell_type": "markdown", "id": "ef71d63f", "metadata": {}, "source": [ "The flowchart represents the process for handling chat requests in the Raku chat notebook solition [\"Jupyter::Chatbook\"](https://raku.land/zef:antononcube/Jupyter::Chatbook), [AAp4p6]. \n", "(Also, for Python's [\"JupyterChatbook\"](https://pypi.org/project/JupyterChatbook/), [AAp4py].)\n", "\n", "1. When a chat request is received, the system checks if a Chat IDentifier (Chat ID) is specified. \n", " - If it is, the system verifies if the Chat ID exists in the Chat Objects Database (CODB). \n", " - If the Chat ID exists, the system retrieves the existing chat object from the database. \n", " - Otherwise, a new chat object is created.\n", "\n", "2. Next, the system parses the DSL spec of the prompt, which defines the structure and behavior of the desired response. \n", " - The parsed prompt spec is then checked against the Known Prompts Database (PDB) to determine if any known prompts match the spec. \n", " - If a match is found, the prompt is expanded, modifying the behavior or structure of the response accordingly.\n", "\n", "3. Once the prompt is processed, the system evaluates the chat message using the underlying LLM function. \n", " - This involves interacting with the OpenAI and PaLM models. \n", " - The LLM function generates a response based on the chat message and the prompt.\n", "\n", "4. The generated response is then displayed in the Chat Result Cell (CRCell) in the chat interface. \n", " - The system also updates the Chat Objects Database (CODB) to store the chat history and other relevant information.\n", "\n", "Throughout this process, various components such as the frontend interface, backend logic, prompt processing, and LLM interaction work together to provide an interactive chat experience in the *chatbook*.\n", "\n", "**Remark:** The flowchart and explanations are also relevant to a large degree for WL's chatbook solution, [SW2.]" ] }, { "cell_type": "markdown", "id": "f6982204", "metadata": {}, "source": [ "------\n", "\n", "## Tabulated comparisons\n", "\n", "In this section we put into tables corresponding packages of Raku, Python, Wolfram Language. Similarly, corresponding demonstration videos are also tabulated." ] }, { "cell_type": "markdown", "id": "8852ed22", "metadata": {}, "source": [ "### Primary LLM packages\n", "\n", "We can say that the Raku packages \n", "[\"LLM::Functions\"](https://raku.land/zef:antononcube/LLM::Functions) and \n", "[\"LLM::Prompts\"](https://raku.land/zef:antononcube/LLM::Prompts) \n", "adopted the LLM designs by Wolfram Research, Inc. 
(WRI);\n", "see [SW1, SW2].\n", "\n", "Here is a table with links to:\n", "- \"Primary\" LLM Raku packages\n", "- Corresponding Python packages\n", "- Corresponding Wolfram Language (WL) paclets and prompt repository" ] }, { "cell_type": "markdown", "id": "cbe61c99", "metadata": {}, "source": [ "| What? | Raku | Python | WL |\n", "|----------------------|---------------------------|-------------------------|-------------------------------------------|\n", "| OpenAI access | [WWW::OpenAI][] | [openai][] | [OpenAILink][] |\n", "| PaLM access | [WWW::PaLM][] | [google-generativeai][] | [PaLMLink][] |\n", "| LLM functions | [LLM::Functions][] | [LLMFunctionObjects][] | [LLMFunctions][WL-LLMFunctions] |\n", "| LLM prompts | [LLM::Prompts][] | [LLMPrompts][] | [Wolfram Prompt Repository][WPR] |\n", "| Chatbook | [Jupyter::Chatbook][] | [JupyterChatbook][] | [Chatbook][WL-Chatbook] | \n", "| Find textual answers | [ML::FindTextualAnswer][] | [LLMFunctionObjects][] | [FindTextualAnswer][WL-FindTextualAnswer] |\n", "\n", "[WWW::OpenAI]: https://raku.land/zef:antononcube/WWW::OpenAI\n", "[openai]: https://pypi.org/project/openai/\n", "[OpenAILink]: https://resources.wolframcloud.com/PacletRepository/resources/ChristopherWolfram/OpenAILink/\n", "\n", "[WWW::PaLM]: https://raku.land/zef:antononcube/WWW::PaLM\n", "[google-generativeai]: https://pypi.org/project/google-generativeai/\n", "[PaLMLink]: https://resources.wolframcloud.com/PacletRepository/resources/AntonAntonov/PaLMLink/\n", "\n", "[LLM::Functions]: https://raku.land/zef:antononcube/LLM::Functions\n", "[LLMFunctionObjects]: https://pypi.org/project/LLMFunctionObjects\n", "[WL-LLMFunctions]: https://resources.wolframcloud.com/PacletRepository/resources/Wolfram/LLMFunctions/\n", "\n", "[LLM::Prompts]: https://raku.land/zef:antononcube/LLM::Prompts\n", "[LLMPrompts]: https://pypi.org/project/LLMPrompts\n", "[WPR]: https://resources.wolframcloud.com/PromptRepository/\n", "\n", "[Jupyter::Chatbook]: https://raku.land/zef:antononcube/Jupyter::Chatbook\n", "[JupyterChatbook]: https://pypi.org/project/JupyterChatbook\n", "[WL-Chatbook]: https://resources.wolframcloud.com/PacletRepository/resources/Wolfram/Chatbook/\n", "\n", "[ML::FindTextualAnswer]: https://raku.land/zef:antononcube/ML::FindTextualAnswer\n", "[WL-FindTextualAnswer]: https://reference.wolfram.com/language/ref/FindTextualAnswer.html" ] }, { "cell_type": "markdown", "id": "abbc6afd", "metadata": {}, "source": [ "**Remark:** There is a plethora of Python packages dealing with LLMs and extending \n", "Jupyter notebooks with access to LLM services.\n", "\n", "**Remark:** Finding of Textual Answers (FTA) was the primary motivator to implement the Raku package \"LLM::Functions\". FTA is a fundamental functionality for the [NLP Template Engine](https://github.com/antononcube/NLP-Template-Engine) used to generate *correct*, executable code for different computational sub-cultures. See [AAp4wl, AAv5]."
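, "\n", "\n", "To make the FTA workflow concrete, here is a minimal sketch that uses only the `llm_function` creator of the Python package \"LLMFunctionObjects\" demonstrated below. (In Raku this functionality is provided directly by the function `llm-find-textual-answer`; the prompt wording in the lambda is just an illustration.)\n", "\n", "```python\n", "from LLMFunctionObjects import *\n", "\n", "# Make a finding-textual-answers function: the LLM is asked to answer\n", "# a question using only the supplied text.\n", "fta = llm_function(lambda text, question: f\"Given the text: {text}\\nAnswer concisely: {question}\")\n", "\n", "print(fta(\"Raku was renamed from Perl 6 in October 2019.\", \"When was Raku renamed?\"))\n", "```"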
] }, { "cell_type": "markdown", "id": "0af8d329", "metadata": {}, "source": [ "### Secondary LLM packages \n", "\n", "The \"secondary\" LLM Raku packages -- inspired from working with the \"primary\" LLM packages -- are\n", "[\"Text::SubParsers\"](https://raku.land/zef:antononcube/Text::SubParsers) and\n", "[\"Data::Translators\"](https://raku.land/zef:antononcube/Data::Translators).\n", "\n", "Also, while using LLMs, conveniently and opportunistically is used the package [\"Data::TypeSystem\"](https://raku.land/zef:antononcube/Data::TypeSystem).\n", "\n", "Here is a table of the Raku-Python correspondence:" ] }, { "cell_type": "markdown", "id": "1a849da7", "metadata": {}, "source": [ "| Post processing of LLM results | Raku | Python | WL |\n", "|----------------------------------|---------------------------|--------------------------------|------------------------|\n", "| Extracting text elements | [Text::SubParsers][] | part of [LLMFunctionObjects][] | - |\n", "| Shapes and types | [Data::TypeSystem][] | [DataTypeSystem][] | - |\n", "| Converting to texts formats | [Data::Translators][] | - | - | \n", "| Magic arguments parsing | [Getopt::Long::Grammar][] | [argparse][] | - |\n", "| Copy to clipboard | [Clipboard][] | [pyperclip][] et al. | [CopyToClipboard][WL-CopyToClipboard] |\n", "\n", "[Text::SubParsers]: https://raku.land/zef:antononcube/Text::SubParsers\n", "[Data::Translators]: https://raku.land/zef:antononcube/Data::Translators\n", "[Data::TypeSystem]: https://raku.land/zef:antononcube/Data::TypeSystem\n", "[Getopt::Long::Grammar]: https://raku.land/zef:antononcube/Getopt::Long::Grammar\n", "[Clipboard]: https://raku.land/zef:antononcube/Clipboard\n", "\n", "[LLMFunctionObjects]: https://pypi.org/project/LLMFunctionObjects\n", "[DataTypeSystem]: https://pypy.org/project/DataTypeSystem\n", "[argparse]: https://docs.python.org/3/library/argparse.html\n", "[pyperclip]: https://pypi.org/project/pyperclip/\n", "\n", "[WL-CopyToClipboard]: https://reference.wolfram.com/language/ref/CopyToClipboard.html" ] }, { "cell_type": "markdown", "id": "88bdb231", "metadata": {}, "source": [ "### Introductory videos\n", "\n", "Here is a table of introduction and guide videos for using chatbooks:" ] }, { "cell_type": "markdown", "id": "26007616", "metadata": {}, "source": [ "| What | Raku | Python | WL |\n", "|------|------|--------|------|\n", "| Direct LLM
services access | [Jupyter Chatbook LLM cells demo (Raku)][v1]
(5 min)| [Jupyter Chatbook LLM cells demo (Python)][v2]
(4.8 min) | [OpenAIMode demo (Mathematica)][v3]
(6.5 min) |\n", "| Multi-cell chat | [Jupyter Chatbook multi cell LLM chats teaser (Raku)][v4]
(4.2 min) | [Jupyter Chatbook multi cell LLM chats teaser (Python)][v5]
(4.5 min) | [Chat Notebooks bring the power of Notebooks to LLMs][v6]
(57 min) |\n", "\n", "[v1]: https://www.youtube.com/watch?v=cICgnzYmQZg\n", "[v2]: https://youtu.be/WN3N-K_Xzz8\n", "[v3]: https://www.youtube.com/watch?v=htUIOqcS9uA\n", "[v4]: https://www.youtube.com/watch?v=wNpIGUAwZB8\n", "[v5]: https://www.youtube.com/watch?v=8pv0QRGc7Rw\n", "[v6]: https://www.youtube.com/watch?v=e11ePElnqRE" ] }, { "cell_type": "markdown", "id": "d5596120", "metadata": {}, "source": [ "\n", "------\n", "\n", "## LLM functions\n", "\n", "In this section we show examples of creation and invocation of LLM functions.\n", "\n", "Because the name \"LLMFunctions\" was *approximately* taken on PyPI.org, I used the name \"LLMFunctionObjects\" for the Python package.\n", "\n", "That name is, actually, more faithful to the design and implementation of the Python package -- the creator function `llm_function` produces [function objects](https://github.com/antononcube/Python-packages/blob/main/LLMFunctionObjects/LLMFunctionObjects/Functor.py) (or functors) that have the `__call__` magic method.\n", "\n", "Since the LLM-function functionalities are fundamental, I Python-localized the LLM-workflows notebooks I previously created for both Raku and WL. \n", "Here are links to all three notebooks:\n", "\n", "- [\"Workflows with LLM functions (in Raku)\"](https://community.wolfram.com/groups/-/m/t/2982320) \n", "- [\"Workflows with LLM functions (in WL)\"](https://community.wolfram.com/groups/-/m/t/2983602) \n", "- [\"Workflows with LLM functions (in Python)\"](https://community.wolfram.com/groups/-/m/t/3027072) \n", "\n", "### Raku\n", "\n", "Here we create an LLM function:" ] }, { "cell_type": "code", "execution_count": 1, "id": "474c51d5", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-> **@args, *%args { #`(Block|2358575708296) ... }" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "my &f1 = llm-function({\"What is the $^a of the country $^b?\"});" ] }, { "cell_type": "markdown", "id": "c17f1d3f", "metadata": {}, "source": [ "Here is an example invocation of the LLM function:" ] }, { "cell_type": "code", "execution_count": 2, "id": "4db1d594", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\n", "\n", "The official ISO 3166-1 alpha-2 code for the People’s Republic of China is CN. The corresponding alpha-3 code is CHN." ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "&f1('GDB', 'China')" ] }, { "cell_type": "markdown", "id": "49a7457d", "metadata": {}, "source": [ "Here is another one:" ] }, { "cell_type": "code", "execution_count": 3, "id": "9b0a94a3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\n", "\n", "As of July 2020, the population of China is estimated to be 1,439,323,776."
] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "&f1( | )" ] }, { "cell_type": "markdown", "id": "e748904f", "metadata": {}, "source": [ "### Python\n", "\n", "Here is the corresponding Python definition and invocation of the Raku LLM function above:" ] }, { "cell_type": "code", "execution_count": 7, "id": "faf4414c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "The GDB (Gross Domestic Product) of China in 2020 was approximately $15.42 trillion USD.\n" ] } ], "source": [ "from LLMFunctionObjects import *\n", "\n", "f1 = llm_function(lambda a, b: f\"What is the {a} of the country {b}?\")\n", "\n", "print(\n", " f1('GDB', 'China')\n", ")\n" ] }, { "cell_type": "markdown", "id": "198a76f0", "metadata": {}, "source": [ "------\n", "\n", "## LLM prompts\n", "\n", "The package [\"LLM::Prompts\"](https://raku.land/zef:antononcube/LLM::Prompts) provides ≈200 prompts. \n", "The prompts are taken from [Wolfram Prompt Repository (WPR)](https://resources.wolframcloud.com/PromptRepository/) and [Google's generative AI prompt gallery](https://developers.generativeai.google/prompt-gallery). (Most of the prompts are from WPR.)\n", "\n", "Both the Raku and Python prompt packages provide prompt expansion using a simple DSL described on [SW2]." ] }, { "cell_type": "markdown", "id": "64d8f30b", "metadata": {}, "source": [ "### Raku\n", "\n", "Here is an example of prompt spec expansion:" ] }, { "cell_type": "code", "execution_count": null, "id": "d2b522dc", "metadata": { "is_executing": true }, "outputs": [], "source": [ "my $pewg = llm-prompt-expand(\"@EmailWriter Hi! What do you do? #Translated|German\")" ] }, { "cell_type": "markdown", "id": "b4db8ebf", "metadata": {}, "source": [ "Here the prompt above is used to generate an email (in German) for work-leave:" ] }, { "cell_type": "code", "execution_count": 26, "id": "b3ef7367", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\n", "\n", "Sehr geehrte Damen und Herren,\n", " \n", "Ich schreibe Ihnen, um meine Abwesenheit vom Arbeitsplatz für eine Konferenz bekannt zu geben. Ich werde die nächsten zwei Tage nicht im Büro sein, da ich an der Konferenz teilnehmen werde. Während meiner Abwesenheit werde ich meine Aufgaben durch meine Kollegen erledigen lassen.\n", "\n", "Falls Sie weitere Informationen benötigen, können Sie mich jederzeit kontaktieren.\n", "\n", "Vielen Dank für Ihr Verständnis.\n", "\n", "Freundliche Grüße,\n", "[Your name]" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm-synthesize([$pewg, \"Write a letter for leaving work in order to go to a conference.\"])" ] }, { "cell_type": "markdown", "id": "71b4cbee", "metadata": {}, "source": [ "### Python \n", "\n", "The Python implementation is completely analogous:" ] }, { "cell_type": "code", "execution_count": 2, "id": "97352bf5", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Given a topic, write emails in a concise, professional manner.\n", " Hi! What do you do? Respond to the prompts only in German. Do not use any language other than German.\n", "DO NOT TRANSLATE THE INPUT.\n", "Provide answers to questions, or follow up on statements. Do not translate the input.\n" ] } ], "source": [ "from LLMPrompts import *\n", "\n", "print(\n", " llm_prompt_expand(\"@EmailWriter Hi! What do you do? 
#Translated|German\")\n", ")" ] }, { "cell_type": "markdown", "id": "e1be381b", "metadata": {}, "source": [ "------\n", "\n", "## Chatbook cells\n", "\n", "#### Raku\n", "\n", "Creation:" ] }, { "cell_type": "code", "execution_count": 27, "id": "7e3b2413", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "A Jedi Master, I am. Known as Yoda, am I. How may I assist you, young one?" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#%chat yoda1, prompt='@Yoda'\n", "Hi! Who are you?" ] }, { "cell_type": "markdown", "id": "4644632f", "metadata": {}, "source": [ "Alternative creation via chat meta cell:" ] }, { "cell_type": "code", "execution_count": 19, "id": "2514cf23", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Chat object created with ID : yoda2.\n", "Expanded prompt:\n", "⎡You are Yoda. \n", "Respond to ALL inputs in the voice of Yoda from Star Wars. \n", "Be sure to ALWAYS use his distinctive style and syntax. Vary sentence length.⎦" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#%chat yoda2 prompt\n", "@Yoda" ] }, { "cell_type": "markdown", "id": "7ea0ac64", "metadata": {}, "source": [ "Here is an interaction with the first chat object:" ] }, { "cell_type": "code", "execution_count": 28, "id": "5042c604", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Many students, I have taught. Countless, they are. Young Padawans, eager to learn the ways of the Force. Lightsaber, you ask about? Green, it is. The color of wisdom and tranquility, it represents." ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#%chat yoda1\n", "How many students did you teach? What is the color of your light saber?" ] }, { "cell_type": "markdown", "id": "8590bdaf", "metadata": {}, "source": [ "\n", "### Python\n", "\n", "Here we load the extension [\"JuptyerChatbook\"]():" ] }, { "cell_type": "code", "execution_count": 2, "id": "e6ff7632", "metadata": {}, "outputs": [], "source": [ "%load_ext JupyterChatbook" ] }, { "cell_type": "markdown", "id": "de951a78", "metadata": {}, "source": [ "Creation using a magic argument for prompt (and prompt expansion):" ] }, { "cell_type": "code", "execution_count": 3, "id": "dba6a673", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Ah, greetings, my dear interlocutor! I am none other than the Mad Hatter, at your service. Delighted to make your acquaintance, I must say. Pray tell, have you ever attended a tea party? Oh, the joyous revelry that ensues! But I digress, for I am here to engage in whimsical banter with you. So, what brings you to my peculiar corner of Wonderland?" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%chat -i mad --prompt='@MadHatter'\n", "Hi! Who are you?" ] }, { "cell_type": "code", "execution_count": 4, "id": "057ef401", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Ah, Alice! A delightful young lass, indeed. Curiosity personified, she is. But alas, I must inform you that I have not seen her lately. She tends to wander off into the most peculiar of places, you see. Perhaps she has found herself in the company of the Cheshire Cat or engaged in a riddle with the Queen of Hearts. Oh, the adventures she embarks upon! But fret not, my friend, for tea time shall surely bring her back. Would you care to join me for a cuppa while we await her return?" 
] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%chat -i mad\n", "I am looking for Alice..." ] }, { "cell_type": "markdown", "id": "a3f5ffe0", "metadata": {}, "source": [ "------\n", "\n", "## Observations, remarks, and conclusions\n", "\n", "- The Python package for LLM services access provided a significant jump-start of the reprogramming endeavors.\n", "- Much easier to program Jupyter chatbook cells in Python\n", " - [\"IPython\"](https://ipython.readthedocs.io/en/stable/config/custommagics.html) facilitates extensions with custom magics in a very streamlined way.\n", " - Not very documented, though -- I had look up concrete implementations in GitHub to figure out:\n", " - Argument parsing details\n", " - [Chat cells testing](https://github.com/mgaitan/fortran_magic/blob/master/test/test_f2py.py)\n", " - Placing of cell-evaluation results\n", "- Figuring out (in Python) the prompt expansion DSL parsing and actions took longer than expected.\n", " - Although, I \"knew what I was doing\" and I used LLM facilitation of the Raku to Python translation.\n", " - Basically, I had to read and understand the Python way of using regexes. (Sigh...)\n", "- For some reason, embedding Mermaid-JS diagrams in Python notebooks is not that easy.\n", "- Making chat cells tests for Python chatbooks is much easier than for Raku chatbooks.\n", "- Parsing of Python-Jupyter magic cell arguments is both more restricted and more streamlined than Raku-Jupyter.\n", "- In Python it was much easier and more obvious (to me) to figure out how to program creation and usage LLM function objects and make them behave like functions than implement the Raku LLM-function anonymous (pure, lambda) function solution.\n", " - Actually, it is my TOD list, see below.\n", "- Copying to clipboard was already implemented in Python (and reliably working) for multiple platforms.\n", "- Working Python code is much more often obtained Raku code when using LLMs.\n", " - Hence, Python chatbooks could be seen preferable by some.\n", "- My primary use-case was not chatbooks, but finding textual answers in order to re-implement the NLP Template Engine from WL to Raku.\n", " - I have done that to a large degree -- see [\"ML::NLPTemplateEngine\"](https://github.com/antononcube/Raku-ML-NLPTemplateEngine).\n", " - Working on the \"workhorse\" function `llm-find-textual-answer` made me look up WRI's approach to creation of LLM functions and corresponding configurations and evaluators; see [SW1].\n", "- Quite a few fragments of this document were created with via LLM chats:\n", " - Initial version of the comparison tables from \"linear\" Markdown lists with links\n", " - The extended primary use formulation\n", " - The narration of the flowchart\n", "- I did not just copy-and-pasted the those LLM generated fragments -- I read then in full and edited them too! 
\n", "\n" ] }, { "cell_type": "markdown", "id": "69731bbd", "metadata": {}, "source": [ "------\n", "\n", "## Future plans\n", "\n", "### Both\n", "\n", "- Chatbooks can have magic specs (and corresponding cells) for:\n", " - DeppL\n", " - ProdGDT\n", "- A video with comprehensive (long) discussion of multi-cell chats.\n", "\n", "### Python\n", "\n", "- Documenting how LLM-generated images can be converted into image objects (and further manipulated image-wise.)\n", "\n", "### Raku \n", "\n", "- Make Python chatbooks re-runnable as Raku chatbooks.\n", " - This requires the parsing of Python-style magics.\n", "- Implement LLM function objects (functors) in Raku.\n", " - In conjunction of the anonymous functions implementation.\n", " - Which one is used is specified with an option.\n", "- Figure out how to warn users for \"almost right, yet wrong\" chat cell magic specs.\n", "- Implement copy-to-clipboard for Linux and Windows.\n", " - I have put rudimentary code for that, but actual implementation and testing for Linux and Windows are needed.\n" ] }, { "cell_type": "markdown", "id": "49bf8ace", "metadata": {}, "source": [ "------\n", "\n", "## References\n", "\n", "### Articles\n", "\n", "[SW1] Stephen Wolfram,\n", "[\"The New World of LLM Functions: Integrating LLM Technology into the Wolfram Language\"](https://writings.stephenwolfram.com/2023/05/the-new-world-of-llm-functions-integrating-llm-technology-into-the-wolfram-language/),\n", "(2023),\n", "[Stephen Wolfram Writings](https://writings.stephenwolfram.com).\n", "\n", "[SW2] Stephen Wolfram,\n", "[\"Introducing Chat Notebooks: Integrating LLMs into the Notebook Paradigm\"](https://writings.stephenwolfram.com/2023/06/introducing-chat-notebooks-integrating-llms-into-the-notebook-paradigm/),\n", "(2023),\n", "[Stephen Wolfram Writings](https://writings.stephenwolfram.com).\n", "\n", "### Notebooks\n", "\n", "[AAn1p6] Anton Antonov,\n", "[\"Workflows with LLM functions (in Raku)\"](https://community.wolfram.com/groups/-/m/t/2982320),\n", "(2023),\n", "[community.wolfram.com](https://community.wolfram.com/).\n", "\n", "[AAn1wl] Anton Antonov,\n", "[\"Workflows with LLM functions (in WL)\"](https://community.wolfram.com/groups/-/m/t/2983602),\n", "(2023),\n", "[community.wolfram.com](https://community.wolfram.com/).\n", "\n", "[AAn1py] Anton Antonov,\n", "[\"Workflows with LLM functions (in Python)\"](),\n", "(2023),\n", "[community.wolfram.com](https://community.wolfram.com/).\n", "\n", "### Python packages\n", "\n", "[AAp1py] Anton Antonov,\n", "[LLMFunctions Python package](),\n", "(2023),\n", "[PyPI.org/antononcube](https://pypi.org/user/antononcube/).\n", "\n", "[AAp2py] Anton Antonov,\n", "[LLMPrompts Python package](),\n", "(2023),\n", "[PyPI.org/antononcube](https://pypi.org/user/antononcube/).\n", "\n", "[AAp3py] Anton Antonov,\n", "[DataTypeSystem Python package](https://pypi.org/project/DataTypeSystem/),\n", "(2023),\n", "[PyPI.org/antononcube](https://pypi.org/user/antononcube/).\n", "\n", "[AAp4py] Anton Antonov,\n", "[JupyterChatbook Python package](https://pypi.org/project/JupyterChatbook/),\n", "(2023),\n", "[PyPI.org/antononcube](https://pypi.org/user/antononcube/).\n", "\n", "### Raku packages\n", "\n", "[AAp1p6] Anton Antonov,\n", "[LLM::Functions Raku package](https://raku.land/zef:antononcube/ML::FindTextualAnswer),\n", "(2023),\n", "[raku.land/antononcube](https://raku.land/zef:antononcube).\n", "\n", "[AAp2p6] Anton Antonov,\n", "[LLMPrompts Raku package](https://raku.land/zef:antononcube/LLM::Prompts),\n", 
"(2023),\n", "[raku.land/antononcube](https://raku.land/zef:antononcube).\n", "\n", "[AAp3p6] Anton Antonov,\n", "[Data::TypeSystem Raku package](https://raku.land/zef:antononcube/ML::TypeSystem),\n", "(2023),\n", "[raku.land/antononcube](https://raku.land/zef:antononcube).\n", "\n", "[AAp4p6] Anton Antonov,\n", "[Jupyter::Chatbook Raku package](https://raku.land/zef:antononcube/Jupyter::Chatbook),\n", "(2023),\n", "[raku.land/antononcube](https://raku.land/zef:antononcube).\n", "\n", "[AAp5p6] Anton Antonov,\n", "[ML::FindTextualAnswer Raku package](https://raku.land/zef:antononcube/ML::FindTextualAnswer),\n", "(2023),\n", "[raku.land/antononcube](https://raku.land/zef:antononcube).\n", "\n", "#### Wolfram Language paclets\n", "\n", "[WRIp1] Wolfram Research Inc.,\n", "[LLMFunctions paclet](https://resources.wolframcloud.com/PacletRepository/resources/Wolfram/LLMFunctions/),\n", "(2023)\n", "[Wolfram Paclet Repository](https://resources.wolframcloud.com/PacletRepository/).\n", "\n", "[WRIr1] Wolfram Research Inc.,\n", "[Wolfram Prompt Repository](https://resources.wolframcloud.com/PromptRepository/).\n", "\n", "[AAp4wl] Anton Antonov,\n", "[NLPTemplateEngine paclet](https://resources.wolframcloud.com/PacletRepository/resources/AntonAntonov/NLPTemplateEngine/),\n", "(2023)\n", "[Wolfram Paclet Repository](https://resources.wolframcloud.com/PacletRepository/).\n", "\n", "### Videos\n", "\n", "[AAv1] Anton Antonov,\n", "[\"Jupyter Chatbook LLM cells demo (Raku)\"](https://www.youtube.com/watch?v=cICgnzYmQZg),\n", "(2023),\n", "[YouTube/@AAA4Prediction](https://www.youtube.com/@AAA4Prediction).\n", "\n", "[AAv2] Anton Antonov,\n", "[\"Jupyter Chatbook multi-cell LLM chats demo (Raku)\"](https://youtu.be/WN3N-K_Xzz8),\n", "(2023),\n", "[YouTube/@AAA4Prediction](https://www.youtube.com/@AAA4Prediction).\n", "\n", "[AAv3] Anton Antonov,\n", "[\"Jupyter Chatbook LLM cells demo (Python)\"](https://www.youtube.com/watch?v=WN3N-K_Xzz8),\n", "(2023),\n", "[YouTube/@AAA4Prediction](https://www.youtube.com/@AAA4Prediction).\n", "\n", "[AAv4] Anton Antonov,\n", "[\"Jupyter Chatbook multi cell LLM chats teaser (Python)\"](https://www.youtube.com/watch?v=8pv0QRGc7Rw),\n", "(2023),\n", "[YouTube/@AAA4Prediction](https://www.youtube.com/@AAA4Prediction).\n", "\n", "[AAv5] Anton Antonov,\n", "[\"Simplified Machine Learning Workflows Overview (Raku-centric)](https://www.youtube.com/watch?v=p3iwPsc6e74),\n", "(2023),\n", "[YouTube/@AAA4Prediction](https://www.youtube.com/@AAA4Prediction).\n" ] } ], "metadata": { "jupytext": { "cell_metadata_filter": "-all", "main_language": "python", "notebook_metadata_filter": "-all" }, "kernelspec": { "display_name": "RakuChatbook", "language": "raku", "name": "raku" }, "language_info": { "file_extension": ".raku", "mimetype": "text/plain", "name": "raku", "version": "6.d" } }, "nbformat": 4, "nbformat_minor": 5 }