| { |
| "cells": [ |
| { |
| "cell_type": "markdown", |
| "id": "61278d3e-958b-49ee-94b8-2ee1f5e58773", |
| "metadata": {}, |
| "source": [ |
| "# Basic\n", |
| "## Setup" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "46cb1e2e-5527-4acf-a1d0-5085eb621f4e", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "import uuid\n", |
| "from typing import Tuple\n", |
| "\n", |
| "import openai # replace with your favorite LLM client library\n", |
| "\n", |
| "from burr.core import action, State, ApplicationBuilder, when, persistence" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "406b410d-0091-4368-ad08-b24220f728dd", |
| "metadata": {}, |
| "source": [ |
| "## Define Actions\n", |
| "\n", |
| "We define two actions:\n", |
| "1. `human_input` -- this is the first one, it accepts a prompt from the outside and adds it to the state\n", |
| "2. `ai_response` -- this is the second one, it takes the prompt + chat history and queries OpenAI.\n", |
| "\n", |
| "While this is just a pass-through to OpenAI (and is thus a little pointless) -- its a start. We'll be adding more later." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "206222f0-245b-4941-8976-11f9dcc7fd37", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "@action(reads=[], writes=[\"prompt\", \"chat_history\"])\n", |
| "def human_input(state: State, prompt: str) -> Tuple[dict, State]:\n", |
| " \"\"\"Pulls human input from the outside world and massages it into a standard chat format.\n", |
| " Note we're adding it into the chat history (with an `append` operation). This \n", |
| " is just for convenience of reference -- we could easily just store the chat history\n", |
| " and access it.\n", |
| " \"\"\"\n", |
| " \n", |
| " chat_item = {\n", |
| " \"content\": prompt,\n", |
| " \"role\": \"user\"\n", |
| " }\n", |
| " # return the prompt as the result\n", |
| " # put the prompt in state and update the chat_history\n", |
| " return (\n", |
| " {\"prompt\": prompt}, \n", |
| " state.update(prompt=prompt).append(chat_history=chat_item)\n", |
| " )\n", |
| "\n", |
| "@action(reads=[\"chat_history\"], writes=[\"response\", \"chat_history\"])\n", |
| "def ai_response(state: State) -> Tuple[dict, State]:\n", |
| " \"\"\"Queries OpenAI with the chat. You could easily use langchain, etc... to handle this,\n", |
| " but we wanted to keep it simple to demonstrate\"\"\"\n", |
| " client = openai.Client() # replace with your favorite LLM client library\n", |
| " content = client.chat.completions.create(\n", |
| " model=\"gpt-3.5-turbo\",\n", |
| " messages=state[\"chat_history\"],\n", |
| " ).choices[0].message.content\n", |
| " chat_item = {\n", |
| " \"content\": content,\n", |
| " \"role\": \"assistant\"\n", |
| " }\n", |
| " # return the response as the result\n", |
| " # put the response in state and update the chat history\n", |
| " return (\n", |
| " {\"response\": content}, \n", |
| " state.update(response=content).append(chat_history=chat_item)\n", |
| " )" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "d3798307-6137-40ea-9818-471acafe3e1a", |
| "metadata": {}, |
| "source": [ |
| "## Create the app\n", |
| "\n", |
| "We create our app by adding our actions (they're `**kwargs`, the name of the action is the key), then adding transitions. \n", |
| "In this case, the transitions are simple -- we just go in a loop. Why a loop? We want to be able to continue the chat. Although the control flow\n", |
| "will pause (move from the application to the caller) after every `ai_response` call, we want to keep state, etc..." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "6e75d7ae-c9b0-4e58-812d-3da3e9d5226d", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "app = (\n", |
| " ApplicationBuilder().with_actions(\n", |
| " human_input=human_input,\n", |
| " ai_response=ai_response\n", |
| " ).with_transitions(\n", |
| " (\"human_input\", \"ai_response\"),\n", |
| " (\"ai_response\", \"human_input\")\n", |
| " ).with_state(chat_history=[])\n", |
| " .with_entrypoint(\"human_input\")\n", |
| " .build()\n", |
| ")\n", |
| "app.visualize(output_file_path=\"digraph_initial\", format=\"png\")" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "7cae8a7c-b696-44c6-bf32-babca8fdbe4f", |
| "metadata": {}, |
| "source": [ |
| "## Run the app\n", |
| "\n", |
| "To run the app, we call the `.run` function, passing in a stopping condition. In this case, we want it to halt after `ai_response`. \n", |
| "It returns the action it ran, the result it got, and the state. We use te state variable of response to print out the output, although in a react-like frontend system we may elect to return the entire chat history and render it all for the user." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "3bac2b00-bcb0-41e1-8e34-722e303a2cff", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "final_action, result, state = app.run(\n", |
| " halt_after=[\"ai_response\"], \n", |
| " inputs={\"prompt\" : \"Who was Aaron Burr?\"}\n", |
| ")\n", |
| "print(state['response'])" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "119f2ce1-a4e0-429d-a617-754c92f343cc", |
| "metadata": {}, |
| "source": [ |
| "## Add a decision making step\n", |
| "\n", |
| "Let's add a step to check if the prompt is \"safe\". In this case OpenAI does this automatically, so we're going to simulate it by marking it as unsafe it the word \"unsafe\" is in the response.\n", |
| "We're going to add one step that checks for safety, and another that drafts a response in the case that it is unsafe.\n", |
| "\n", |
| "Then, we're going to add a \"conditional\" transition, allowing us to respond differently depending on the value of `safe`. " |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "69a2ef61-75c3-4a23-9c0e-87dcf1bfbf6b", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "@action(reads=[\"prompt\"], writes=[\"safe\"])\n", |
| "def safety_check(state: State) -> Tuple[dict, State]:\n", |
| " safe = \"unsafe\" not in state[\"prompt\"]\n", |
| " return {\"safe\": safe}, state.update(safe=safe)\n", |
| "\n", |
| "\n", |
| "@action(reads=[], writes=[\"response\", \"chat_history\"])\n", |
| "def unsafe_response(state: State) -> Tuple[dict, State]:\n", |
| " content = \"I'm sorry, my overlords have forbidden me to respond.\"\n", |
| " new_state = (\n", |
| " state\n", |
| " .update(response=content)\n", |
| " .append(\n", |
| " chat_history={\"content\": content, \"role\": \"assistant\"})\n", |
| " )\n", |
| " return {\"response\": content}, new_state" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "e5480950-916d-45b7-ab70-fdbb8865de8d", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "safe_app = (\n", |
| " ApplicationBuilder().with_actions(\n", |
| " human_input=human_input,\n", |
| " ai_response=ai_response,\n", |
| " safety_check=safety_check,\n", |
| " unsafe_response=unsafe_response\n", |
| " ).with_transitions(\n", |
| " (\"human_input\", \"safety_check\"),\n", |
| " (\"safety_check\", \"unsafe_response\", when(safe=False)),\n", |
| " (\"safety_check\", \"ai_response\", when(safe=True)),\n", |
| " ([\"unsafe_response\", \"ai_response\"], \"human_input\"),\n", |
| " ).with_state(chat_history=[])\n", |
| " .with_entrypoint(\"human_input\")\n", |
| " .build()\n", |
| ")\n", |
| "safe_app.visualize(output_file_path=\"digraph_safe\", include_conditions=True)" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "3c102eaf-4e4b-4bf3-ad60-7078bfe499d8", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "action, result, state = safe_app.run(\n", |
| " halt_after=[\"ai_response\", \"unsafe_response\"], \n", |
| " inputs={\"prompt\": \"Who was Aaron Burr, sir (unsafe)?\"}\n", |
| ")\n", |
| "print(state[\"response\"])" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "002fe5f3-1076-40a8-92b4-7345aa96a2b7", |
| "metadata": {}, |
| "source": [ |
| "## Tracking\n", |
| "\n", |
| "OK, now let's interact with telemetry! All we have to do is add `with_tracker` call. We can also just grab the `builder` from the prior app and start where we left off. We'll run on quite a few prompts to test this out:" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "acb2d83b-de78-4b5a-9514-4e1e2633bcd5", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "app_with_tracker = (\n", |
| " ApplicationBuilder().with_actions(\n", |
| " human_input=human_input,\n", |
| " ai_response=ai_response,\n", |
| " safety_check=safety_check,\n", |
| " unsafe_response=unsafe_response\n", |
| " ).with_transitions(\n", |
| " (\"human_input\", \"safety_check\"),\n", |
| " (\"safety_check\", \"unsafe_response\", when(safe=False)),\n", |
| " (\"safety_check\", \"ai_response\", when(safe=True)),\n", |
| " ([\"unsafe_response\", \"ai_response\"], \"human_input\"),\n", |
| " ).with_state(chat_history=[])\n", |
| " .with_entrypoint(\"human_input\")\n", |
| " .with_tracker(\n", |
| " \"local\", project=\"demo_getting_started\"\n", |
| " ).build()\n", |
| ")" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "e7a6d792-e872-4bf1-a9f7-c0d545481859", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "app_with_tracker.uid" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "4497d659-ca22-45ca-a085-0510903b6906", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "for prompt in [\n", |
| " \"Who was Aaron Burr, sir?\",\n", |
| " \"Who was Aaron Burr, sir (unsafe)?\",\n", |
| " \"If you had ml/ai libraries called 'Hamilton' and 'Burr', what would they do?\",\n", |
| " \"Who was Aaron Burr, sir?\",\n", |
| " \"Who was Aaron Burr, sir (unsafe)?\",\n", |
| " \"If you had ml/ai libraries called 'Hamilton' and 'Burr', what would they do?\",\n", |
| "]:\n", |
| " action_we_ran, result, state = app_with_tracker.run(\n", |
| " halt_after=[\"ai_response\", \"unsafe_response\"], \n", |
| " inputs={\"prompt\": prompt}\n", |
| " )" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "48c70315-32cd-4845-92a9-a40738322310", |
| "metadata": {}, |
| "source": [ |
| "## Tracking Server\n", |
| "\n", |
| "You can run the tracking server by running `burr` in the terminal. If you want to see it live, you can run the subsequence cell (which does some magic to run it for you). If the embedding in the notebook gets annoying, navigate to the link to the UI, outputted by the next cell. \n", |
| "\n", |
| "Press the refresh button (🔄) by \"Live\" to watch it live. Run the above cell to query." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "d96936ef-fb09-4443-be9c-56d5b7ecae69", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "from IPython.display import Markdown\n", |
| "url = f\"[Link to UI](http://localhost:7241/project/demo_getting_started/{app_with_tracker.uid})\"\n", |
| "Markdown(url)" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "fa4e1d34-4eb8-4d1c-9708-6269612b7209", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "import os\n", |
| "import time\n", |
| "from IPython.display import HTML\n", |
| "get_ipython().system = os.system\n", |
| "!burr --no-open > /dev/null 2>&1 &\n", |
| "time.sleep(3) # quick trick to wait for the server to start\n", |
| "url = f\"http://localhost:7241/project/demo_getting_started/{app_with_tracker.uid}\"\n", |
| "iframe = f'<iframe src=\"{url}\" width=\"100%\" height=\"1000px\"></iframe>'\n", |
| "display(HTML(iframe))" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "id": "d4a6591c-fb54-4211-8808-c4aab0e38040", |
| "metadata": {}, |
| "source": [ |
| "## Persistence\n", |
| "\n", |
| "While the tracking we showed above does do storage/persistence, Burr has a host of other capabilities to help with state persistence and reloading.\n", |
| "\n", |
| "The use case is this -- you have quite a few simultaneous conversations, each with their own state/assigned to their own users. You want to be able to store them, pause them when the user logs off, and reload them when the user logs back on. You can do this with persisters. There are two interfaces to them:\n", |
| "\n", |
| "1. A set of pre-build persisters (postgres, sqllite3, redis, etc...) that you can use\n", |
| "2. A custom persistor class that you write\n", |
| "\n", |
| "To add a persistor, you have to tell it to load from a state (`.initialize(...)`) on the builder, and tell it to save to a state (`.with_state_persister`).\n", |
| "\n", |
| "More about persistence [here](https://burr.apache.org/concepts/state-persistence/)." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "53f25edd-d483-4c41-b716-95ac3b0cbcc4", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "app_id = f\"unique_app_id_{uuid.uuid4()}\" # unique app ID -- we create it here but this will be generated for you\n", |
| "partition_key = \"new_burr_user\" # this can be anything. In a chatbot it will " |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "468d4d53-9343-48b3-ba1f-651bba3e1a3c", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "# we're going to be creating this multiple times to demonstrate so let's stick it in a function\n", |
| "def create_persistable_app():\n", |
| " sqllite_persister = persistence.SQLLitePersister(db_path=\"./sqlite.db\", table_name=\"burr_state\")\n", |
| " sqllite_persister.initialize()\n", |
| " return (\n", |
| " ApplicationBuilder().with_actions(\n", |
| " human_input=human_input,\n", |
| " ai_response=ai_response,\n", |
| " safety_check=safety_check,\n", |
| " unsafe_response=unsafe_response\n", |
| " ).with_transitions(\n", |
| " (\"human_input\", \"safety_check\"),\n", |
| " (\"safety_check\", \"unsafe_response\", when(safe=False)),\n", |
| " (\"safety_check\", \"ai_response\", when(safe=True)),\n", |
| " ([\"unsafe_response\", \"ai_response\"], \"human_input\"),\n", |
| " ).initialize_from(\n", |
| " initializer=sqllite_persister,\n", |
| " resume_at_next_action=True,\n", |
| " default_state={\"chat_history\": []},\n", |
| " default_entrypoint=\"human_input\"\n", |
| " ).with_state_persister(sqllite_persister)\n", |
| " .with_identifiers(app_id=app_id, partition_key=partition_key)\n", |
| " .with_tracker(\n", |
| " \"local\", project=\"demo_getting_started\"\n", |
| " ).build()\n", |
| " )" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "ec967861-133a-4151-a3e0-ac5238bc67f5", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "app_initial = create_persistable_app()\n", |
| "action, result, state = app_initial.run(\n", |
| " halt_after=[\"ai_response\", \"unsafe_response\"], \n", |
| " inputs={\"prompt\": \"Who was Aaron Burr, sir?\"}\n", |
| ")\n", |
| "for item in state['chat_history']:\n", |
| " print(item['role'] + ':' + item['content'] + '\\n')" |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "id": "dbd18904-2407-41e0-8151-e17bba3824ef", |
| "metadata": {}, |
| "outputs": [], |
| "source": [ |
| "del app_initial\n", |
| "app_reloaded = create_persistable_app()\n", |
| "\n", |
| "action, result, state = app_reloaded.run(\n", |
| " halt_after=[\"ai_response\", \"unsafe_response\"], \n", |
| " inputs={\"prompt\": \"Who was Alexander Hamilton?\"}\n", |
| ")\n", |
| "for item in state['chat_history']:\n", |
| " print(item['role'] + ':' + item['content'] + '\\n')" |
| ] |
| } |
| ], |
| "metadata": { |
| "kernelspec": { |
| "display_name": "Python 3 (ipykernel)", |
| "language": "python", |
| "name": "python3" |
| }, |
| "language_info": { |
| "codemirror_mode": { |
| "name": "ipython", |
| "version": 3 |
| }, |
| "file_extension": ".py", |
| "mimetype": "text/x-python", |
| "name": "python", |
| "nbconvert_exporter": "python", |
| "pygments_lexer": "ipython3", |
| "version": "3.10.4" |
| } |
| }, |
| "nbformat": 4, |
| "nbformat_minor": 5 |
| } |