LangGraph Explained from Scratch
If you’ve spent some time working with Large Language Models, you’ve probably built a Chain before. In LangChain, this usually means moving from user input to a prompt, then to the LLM, and finally to the output. That flow only ever moves forward, so a Chain can never revisit an earlier step. LangGraph is a library that removes this limitation. It takes us from Chains, which are Directed Acyclic Graphs (DAGs), to Graphs, where loops and cycles are possible. This is key for building real Agents: systems that can reason, loop, and make decisions on their own. In this article, I’ll walk you through a complete guide to LangGraph from the ground up.
LangGraph Explained
In this guide, we’ll build a simple agent from scratch using LangGraph. You won’t need any paid APIs like OpenAI. Everything will run on your local machine using open-source tools.
Before we start coding in Python, let’s break down the main idea. Graph Theory might sound complex, but you use this kind of thinking all the time. Think of a team working together on a project:
- The State, like a whiteboard, is the shared context. Everyone in the room checks the whiteboard to see the project’s current status. In LangGraph, the State is a dictionary object that gets passed around.
- The Nodes are like workers, each handling a specific task. For example, one node could be “Generate Code” and another could be “Review Code.”
- The Edges represent the flow of work. After “Generate Code,” do we move to “Finish” or to “Review Code”? If the review doesn’t pass, we go back to “Generate Code.”
This ability to loop back to a previous step based on a decision is what makes LangGraph unique.
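As a preview, those three ideas map one-to-one onto LangGraph's API. The names below (TeamState, review) are illustrative stand-ins; we'll build the real versions step by step in this guide:

from typing import TypedDict
from langgraph.graph import StateGraph, END

# The whiteboard: shared state every worker can read and update.
class TeamState(TypedDict):
    status: str

# A worker: a plain function that reads the state and returns an update.
def review(state: TeamState):
    return {"status": "reviewed"}

board = StateGraph(TeamState)     # the whiteboard's structure
board.add_node("review", review)  # one worker, one task
board.set_entry_point("review")   # who starts
board.add_edge("review", END)     # the flow of work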
Prerequisites to Get Started with LangGraph
To make this easy for everyone, we’ll use Ollama. With Ollama, you can run powerful models like Llama 3 or Mistral right on your laptop.
First, download and install Ollama from ollama.com. After installing, open your terminal and pull a model:
ollama pull llama3
Next, install the Python libraries. We’ll need the LangGraph library and the LangChain integration for Ollama:
pip install langgraph langchain langchain-ollama
We’re going to build a system that sorts a user’s question into categories:
- If the user asks about Math, we send it to a Math Node.
- If the user talks about anything else, we send it to a General Chat Node.
This shows how Conditional Edges let us make decisions without needing complicated tools.
Step 1: Define the State
The State is simply a Python class, specifically a TypedDict. It keeps track of everything that has happened so far:
from typing import Annotated, List, TypedDict
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph.message import add_messages

# The State dictates what data flows through the graph.
# Here, we just track the list of messages in the conversation.
# The add_messages reducer tells LangGraph to append new messages
# to this list instead of overwriting it when a node returns an update.
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], add_messages]
    category: str  # We will store the classification here (Math vs General)
In a regular script, intermediate results live in local variables and are easy to lose track of. In LangGraph, the AgentState sticks around for the whole run: each node receives the current state, returns its updates, and LangGraph merges them in before passing the state along.
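To make the merge behaviour concrete, here is a small sketch (with made-up values) of roughly what happens when a node returns an update. Plain keys like category are simply replaced; keys with a reducer, like our messages list, are appended to instead:

# Hypothetical illustration of how a node's return value updates the state.
state: AgentState = {
    "messages": [HumanMessage(content="What is 2 + 2?")],
    "category": "",
}

# A node never mutates the state directly; it returns a partial update.
update = {"category": "Math"}

# For plain keys, LangGraph effectively performs this merge for you:
state = {**state, **update}
print(state["category"])  # -> "Math"; "messages" is left untouched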
Step 2: Initialise the Local LLM
Now, let’s connect to the Ollama instance running on your computer:
from langchain_ollama import ChatOllama
# Ensure you have run `ollama pull llama3` in your terminal
llm = ChatOllama(model="llama3", temperature=0)
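Before wiring up the graph, it's worth a quick sanity check that the model responds. This assumes the Ollama server is running in the background (it starts automatically with the desktop app, or via ollama serve):

# A one-off test call; if this hangs or errors, check that Ollama is running.
reply = llm.invoke("Reply with the single word: ready")
print(reply.content)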
Step 3: Define the Nodes (The Workers)
Next, we’ll create three functions, which act as our Nodes:
- Categoriser: Decides what the input is about.
- Math Expert: Handles math questions.
- General Chat: Handles casual conversation.
# Node 1: The Categorizer
def categorize_input(state: AgentState):
    """
    Analyzes the user's last message and decides if it's 'Math' or 'General'.
    """
    last_message = state["messages"][-1].content

    # We ask the LLM to classify strictly.
    prompt = f"""
    You are a router. Classify the following input as either 'Math' or 'General'.
    Return ONLY the word 'Math' or 'General'. Do not add punctuation.
    Input: {last_message}
    """

    response = llm.invoke(prompt)
    category = response.content.strip()

    # Update the state with the category
    return {"category": category}
# Node 2: The Math Expert
def handle_math(state: AgentState):
    print("--- 🧮 Entering Math Node ---")
    last_message = state["messages"][-1].content
    response = llm.invoke(f"You are a mathematician. Solve this simply: {last_message}")
    # We append the AI's response to the message history
    return {"messages": [response]}
# Node 3: The General Chat
def handle_general(state: AgentState):
    print("--- 💬 Entering General Chat Node ---")
    last_message = state["messages"][-1].content
    response = llm.invoke(f"You are a helpful assistant. Reply to: {last_message}")
    return {"messages": [response]}
In LangGraph, each node function returns a dictionary containing only the keys it wants to change. LangGraph merges that partial update into the shared State, using the add_messages reducer to append to the message history.
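Because a node is just a Python function, you can try one in isolation before building the graph. A quick check of the categorizer might look like this (the expected output assumes the model follows the classification prompt):

# Build a throwaway state and call the node directly.
test_state: AgentState = {
    "messages": [HumanMessage(content="What is 12 squared?")],
    "category": "",
}
print(categorize_input(test_state))  # expected: {'category': 'Math'}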
Step 4: Define the Logic
Now we’ll write a function that checks the category in the state and tells the graph what to do next:
def routing_logic(state: AgentState):
    # Local models sometimes add stray casing or punctuation despite the
    # strict prompt, so we normalise before comparing.
    category = state["category"].strip().lower()
    # If the category is Math, go to 'math_node'
    if category.startswith("math"):
        return "math_node"
    # Otherwise, go to 'general_node'
    return "general_node"
Step 5: Build the Graph
This is where we connect everything:
from langgraph.graph import StateGraph, END

# 1. Initialize the Graph with our State structure
workflow = StateGraph(AgentState)

# 2. Add the Nodes
workflow.add_node("categorizer", categorize_input)
workflow.add_node("math_node", handle_math)
workflow.add_node("general_node", handle_general)

# 3. Set the Entry Point
# When the graph starts, the first worker to touch the whiteboard is the 'categorizer'
workflow.set_entry_point("categorizer")

# 4. Add Conditional Edges
# After 'categorizer' runs, look at 'routing_logic' to decide where to go next.
workflow.add_conditional_edges(
    "categorizer",   # From this node...
    routing_logic,   # ...run this logic...
    {                # ...and map the output to a node.
        "math_node": "math_node",
        "general_node": "general_node"
    }
)

# 5. Add Normal Edges
# After math or general chat, we are done. Go to END.
workflow.add_edge("math_node", END)
workflow.add_edge("general_node", END)

# 6. Compile
app = workflow.compile()
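If you want to double-check the wiring, a compiled graph can render itself. The snippet below prints a Mermaid diagram definition, which you can paste into any Mermaid viewer:

# Print a Mermaid description of the compiled graph to verify the structure.
print(app.get_graph().draw_mermaid())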
Step 6: Running the Agent
Now it’s time to test what we’ve built. We’ll try out two different user inputs:
# Test 1: A Math Question
print("\n--- TEST 1: Math ---")
inputs_1 = {"messages": [HumanMessage(content="What is 55 multiplied by 10?")]}

# Stream the output to see the steps
for event in app.stream(inputs_1):
    for key, value in event.items():
        print(f"Finished running: {key}")

# Test 2: A Casual Greeting
print("\n--- TEST 2: General ---")
inputs_2 = {"messages": [HumanMessage(content="Tell me a fun fact about history.")]}

for event in app.stream(inputs_2):
    for key, value in event.items():
        print(f"Finished running: {key}")
(env) (base) kishna@MacBook-Pro aiagent % python lang.py
--- TEST 1: Math ---
Finished running: categorizer
--- 🧮 Entering Math Node ---
Finished running: math_node
--- TEST 2: General ---
Finished running: categorizer
--- 💬 Entering General Chat Node ---
Finished running: general_node
(env) (base) kishna@MacBook-Pro aiagent %
In Test 1, the console shows that the categorizer finished and the graph then moved to math_node. In Test 2, the categorizer runs again, and this time the graph routes to general_node.
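Notice the trace only shows which nodes ran, not what the model actually said. To read the final answer, call invoke() instead of stream() and pull the last message out of the returned state:

# invoke() runs the whole graph and returns the final state.
final_state = app.invoke(inputs_1)
print(final_state["messages"][-1].content)  # the Math Node's answer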
Closing Thoughts
What we built might look simple: a few if/else statements could do the job. But imagine the Math Node tries to solve a problem, finds a mistake, and needs to try again. In a regular Python script, this logic gets messy quickly. With LangGraph, a retry is just an edge that points back to an earlier node.
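As a closing sketch, here is what such a self-correcting loop could look like. The generate and review nodes are hypothetical stubs standing in for real LLM calls, and the pass/fail condition is faked so the example runs on its own:

from typing import TypedDict
from langgraph.graph import StateGraph, END

# Hypothetical state for a generate-review loop.
class LoopState(TypedDict):
    attempts: int
    passed: bool

def generate_code(state: LoopState):
    # Stand-in for an LLM call that writes code.
    return {"attempts": state["attempts"] + 1}

def review_code(state: LoopState):
    # Stand-in for an LLM review; pretend it passes on the second attempt.
    return {"passed": state["attempts"] >= 2}

def review_router(state: LoopState):
    # Loop back to 'generate' until the review passes.
    return END if state["passed"] else "generate"

loop = StateGraph(LoopState)
loop.add_node("generate", generate_code)
loop.add_node("review", review_code)
loop.set_entry_point("generate")
loop.add_edge("generate", "review")
loop.add_conditional_edges("review", review_router, {"generate": "generate", END: END})

retry_app = loop.compile()
print(retry_app.invoke({"attempts": 0, "passed": False}))
# -> {'attempts': 2, 'passed': True} after looping back once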
Now you have the basics to build agents that can code, research, and fix their own mistakes, all running on your own computer.