Commit cb8838c

gustavocidornelas authored and whoseoyster committed
docs: add LangGraph notebook example
1 parent b89396c commit cb8838c

File tree: 1 file changed (+390 -0 lines)

examples/tracing/langgraph/langgraph_tracing.ipynb
Lines changed: 390 additions & 0 deletions
@@ -0,0 +1,390 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2722b419",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/langgraph/langgraph_tracing.ipynb)\n",
    "\n",
    "\n",
    "# <a id=\"top\">LangGraph tracing</a>\n",
    "\n",
    "This notebook illustrates how to use Openlayer's callback handler to monitor LangGraph workflows."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75c2a473",
   "metadata": {},
   "source": [
    "## 1. Set the environment variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3f4fa13",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# OpenAI env variables\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
    "\n",
    "# Openlayer env variables\n",
    "os.environ[\"OPENLAYER_API_KEY\"] = \"YOUR_OPENLAYER_API_KEY_HERE\"\n",
    "os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9758533f",
   "metadata": {},
   "source": [
    "## 2. Instantiate the `OpenlayerHandler`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e60584fa",
   "metadata": {},
   "outputs": [],
   "source": [
    "from openlayer.lib.integrations import langchain_callback\n",
    "\n",
    "openlayer_handler = langchain_callback.OpenlayerHandler()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72a6b954",
   "metadata": {},
   "source": [
    "## 3. Use LangGraph\n",
    "\n",
    "### 3.1 Simple chatbot example"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76a350b4",
   "metadata": {},
   "source": [
    "We can start with a simple chatbot example similar to the one in the [LangGraph quickstart](https://langchain-ai.github.io/langgraph/tutorials/get-started/1-build-basic-chatbot/).\n",
    "\n",
    "The idea is to pass the `openlayer_handler` as a callback to the LangGraph graph. After running the graph,\n",
    "you'll be able to see the traces in the Openlayer platform."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cc351618",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Annotated\n",
    "from typing_extensions import TypedDict\n",
    "\n",
    "from langgraph.graph import StateGraph\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.messages import HumanMessage\n",
    "from langgraph.graph.message import add_messages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4595c63b",
   "metadata": {},
   "outputs": [],
   "source": [
    "class State(TypedDict):\n",
    "    # Messages have the type \"list\". The `add_messages` function in the annotation defines how this state key should be updated\n",
    "    # (in this case, it appends messages to the list, rather than overwriting them)\n",
    "    messages: Annotated[list, add_messages]\n",
    "\n",
    "graph_builder = StateGraph(State)\n",
    "\n",
    "llm = ChatOpenAI(model=\"gpt-4o\", temperature=0.2)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00a6fa80",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The chatbot node function takes the current State as input and returns an updated messages list. This is the basic pattern for all LangGraph node functions.\n",
    "def chatbot(state: State):\n",
    "    return {\"messages\": [llm.invoke(state[\"messages\"])]}\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a36e5160",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add a \"chatbot\" node. Nodes represent units of work. They are typically regular Python functions.\n",
    "graph_builder.add_node(\"chatbot\", chatbot)\n",
    "\n",
    "# Add an entry point. This tells our graph where to start its work each time we run it.\n",
    "graph_builder.set_entry_point(\"chatbot\")\n",
    "\n",
    "# Set a finish point. This instructs the graph: \"any time this node is run, you can exit.\"\n",
    "graph_builder.set_finish_point(\"chatbot\")\n",
    "\n",
    "# To be able to run our graph, call \"compile()\" on the graph builder. This creates a \"CompiledGraph\" we can invoke on our state.\n",
    "graph = graph_builder.compile()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "deef517e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pass the openlayer_handler as a callback to the LangGraph graph. After running the graph,\n",
    "# you'll be able to see the traces in the Openlayer platform.\n",
    "for s in graph.stream({\"messages\": [HumanMessage(content=\"What is the meaning of life?\")]},\n",
    "                      config={\"callbacks\": [openlayer_handler]}):\n",
    "    print(s)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c049c8fa",
   "metadata": {},
   "source": [
    "### 3.2 Multi-agent example\n",
    "\n",
    "Now, we're going to use a more complex example. The principle, however, remains the same: pass the `openlayer_handler` as a callback to the LangGraph graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "213fc402",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Annotated\n",
    "from datetime import datetime\n",
    "\n",
    "from langchain.tools import Tool\n",
    "from langchain_community.tools import WikipediaQueryRun\n",
    "from langchain_community.utilities import WikipediaAPIWrapper\n",
    "\n",
    "# Define a tool that searches Wikipedia\n",
    "wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())\n",
    "\n",
    "# Define a new tool that returns the current datetime\n",
    "datetime_tool = Tool(\n",
    "    name=\"Datetime\",\n",
    "    func=lambda x: datetime.now().isoformat(),\n",
    "    description=\"Returns the current datetime\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c76c8935",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.messages import BaseMessage, HumanMessage\n",
    "\n",
    "\n",
    "def create_agent(llm: ChatOpenAI, system_prompt: str, tools: list):\n",
    "    # Each worker node will be given a name and some tools.\n",
    "    prompt = ChatPromptTemplate.from_messages(\n",
    "        [\n",
    "            (\n",
    "                \"system\",\n",
    "                system_prompt,\n",
    "            ),\n",
    "            MessagesPlaceholder(variable_name=\"messages\"),\n",
    "            MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
    "        ]\n",
    "    )\n",
    "    agent = create_openai_tools_agent(llm, tools, prompt)\n",
    "    executor = AgentExecutor(agent=agent, tools=tools)\n",
    "    return executor\n",
    "\n",
    "def agent_node(state, agent, name):\n",
    "    result = agent.invoke(state)\n",
    "    return {\"messages\": [HumanMessage(content=result[\"output\"], name=name)]}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f626e7f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser\n",
    "\n",
    "members = [\"Researcher\", \"CurrentTime\"]\n",
    "system_prompt = (\n",
    "    \"You are a supervisor tasked with managing a conversation between the\"\n",
    "    \" following workers: {members}. Given the following user request,\"\n",
    "    \" respond with the worker to act next. Each worker will perform a\"\n",
    "    \" task and respond with their results and status. When finished,\"\n",
    "    \" respond with FINISH.\"\n",
    ")\n",
    "# Our team supervisor is an LLM node. It just picks the next agent to process and decides when the work is completed\n",
    "options = [\"FINISH\"] + members\n",
    "\n",
    "# Using OpenAI function calling can make output parsing easier for us\n",
    "function_def = {\n",
    "    \"name\": \"route\",\n",
    "    \"description\": \"Select the next role.\",\n",
    "    \"parameters\": {\n",
    "        \"title\": \"routeSchema\",\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"next\": {\n",
    "                \"title\": \"Next\",\n",
    "                \"anyOf\": [\n",
    "                    {\"enum\": options},\n",
    "                ],\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\"next\"],\n",
    "    },\n",
    "}\n",
    "\n",
    "# Create the prompt using ChatPromptTemplate\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", system_prompt),\n",
    "        MessagesPlaceholder(variable_name=\"messages\"),\n",
    "        (\n",
    "            \"system\",\n",
    "            \"Given the conversation above, who should act next?\"\n",
    "            \" Or should we FINISH? Select one of: {options}\",\n",
    "        ),\n",
    "    ]\n",
    ").partial(options=str(options), members=\", \".join(members))\n",
    "\n",
    "llm = ChatOpenAI(model=\"gpt-4o\")\n",
    "\n",
    "# Construction of the chain for the supervisor agent\n",
    "supervisor_chain = (\n",
    "    prompt\n",
    "    | llm.bind_functions(functions=[function_def], function_call=\"route\")\n",
    "    | JsonOutputFunctionsParser()\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ec307b80",
   "metadata": {},
   "outputs": [],
   "source": [
    "import operator\n",
    "import functools\n",
    "from typing import Sequence, TypedDict\n",
    "\n",
    "from langgraph.graph import END, START, StateGraph\n",
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "\n",
    "\n",
    "# The agent state is the input to each node in the graph\n",
    "class AgentState(TypedDict):\n",
    "    # The annotation tells the graph that new messages will always be added to the current state\n",
    "    messages: Annotated[Sequence[BaseMessage], operator.add]\n",
    "    # The 'next' field indicates where to route to next\n",
    "    next: str\n",
    "\n",
    "# Add the research agent using the create_agent helper function\n",
    "research_agent = create_agent(llm, \"You are a web researcher.\", [wikipedia_tool])\n",
    "research_node = functools.partial(agent_node, agent=research_agent, name=\"Researcher\")\n",
    "\n",
    "# Add the time agent using the create_agent helper function\n",
    "currenttime_agent = create_agent(llm, \"You can tell the current time at\", [datetime_tool])\n",
    "currenttime_node = functools.partial(agent_node, agent=currenttime_agent, name=\"CurrentTime\")\n",
    "\n",
    "workflow = StateGraph(AgentState)\n",
    "\n",
    "# Add the worker and supervisor nodes. Nodes represent units of work. They are typically regular Python functions.\n",
    "workflow.add_node(\"Researcher\", research_node)\n",
    "workflow.add_node(\"CurrentTime\", currenttime_node)\n",
    "workflow.add_node(\"supervisor\", supervisor_chain)\n",
    "\n",
    "# We want our workers to ALWAYS \"report back\" to the supervisor when done\n",
    "for member in members:\n",
    "    workflow.add_edge(member, \"supervisor\")\n",
    "\n",
    "# Conditional edges usually contain \"if\" statements to route to different nodes depending on the current graph state.\n",
    "# These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.\n",
    "conditional_map = {k: k for k in members}\n",
    "conditional_map[\"FINISH\"] = END\n",
    "workflow.add_conditional_edges(\"supervisor\", lambda x: x[\"next\"], conditional_map)\n",
    "\n",
    "# Add an entry point. This tells our graph where to start its work each time we run it.\n",
    "workflow.add_edge(START, \"supervisor\")\n",
    "\n",
    "# To be able to run our graph, call \"compile()\" on the graph builder. This creates a \"CompiledGraph\" we can invoke on our state.\n",
    "graph_2 = workflow.compile()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08e35ae9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pass the openlayer_handler as a callback to the LangGraph graph. After running the graph,\n",
    "# you'll be able to see the traces in the Openlayer platform.\n",
    "for s in graph_2.stream({\"messages\": [HumanMessage(content=\"How does photosynthesis work?\")]},\n",
    "                        config={\"callbacks\": [openlayer_handler]}):\n",
    "    print(s)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "16acecc2",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "callback-improvements",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
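
Note that the notebook has no install cell, so running it (for example via the Colab badge) requires the dependencies to be installed first. A minimal sketch, with package names inferred from the imports above rather than stated anywhere in the commit:

    # Hypothetical install cell: package names inferred from the notebook's imports
    %pip install openlayer langgraph langchain langchain-openai langchain-community wikipedia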
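
Since the compiled graph is a LangChain runnable, the Openlayer callback could also be bound once with with_config instead of being passed on every stream call. A sketch of that variant, assuming the same graph and openlayer_handler objects defined in the notebook:

    # Bind the callback once; subsequent invoke/stream calls on traced_graph are traced.
    traced_graph = graph.with_config({"callbacks": [openlayer_handler]})
    traced_graph.invoke({"messages": [HumanMessage(content="What is the meaning of life?")]})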
