Alright, let's translate theory into practice. Having explored the fundamental building blocks of LangChain (Models, Prompts, and Output Parsers), we will now assemble them to create a simple, functional application. This hands-on exercise will solidify your understanding of how these components interact within a basic workflow using the LangChain Expression Language (LCEL).
Our goal is to build a small application that takes a topic as input and generates a short, humorous tweet about it.
Before starting, ensure you have installed `langchain` and a provider-specific library such as `langchain_openai`, `langchain_anthropic`, or `langchain_community` for interacting with other models (e.g., via the Hugging Face Hub). If you haven't already, install them:

pip install langchain langchain_openai python-dotenv
# or other provider libraries as needed

You will also need an API key for your chosen provider. Store it in a `.env` file and load it as an environment variable. For example, if using OpenAI, your `.env` file might contain:

OPENAI_API_KEY="your_api_key_here"

Make sure your Python script can access this variable, perhaps using a library like `python-dotenv`.

We will construct our application piece by piece.
The Model: First, we need to instantiate the language model we want to use. This object serves as the interface to the underlying LLM service. We'll use OpenAI's chat model in this example, but you can substitute it with others supported by LangChain.
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
# Load environment variables (ensure your OPENAI_API_KEY is set)
load_dotenv()
# Initialize the LLM
# We use a temperature of 0.7 for a balance between creativity and coherence
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
Here, `ChatOpenAI` represents the connection to the specific model (`gpt-3.5-turbo`). The `temperature` parameter influences the randomness of the output; lower values make it more deterministic, higher values make it more creative.
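As mentioned, other providers drop in the same way. A minimal sketch, assuming `langchain_anthropic` is installed and an `ANTHROPIC_API_KEY` is set in your environment (the model name is illustrative):

from langchain_anthropic import ChatAnthropic

# Only the model object changes; prompts, parsers, and chains stay the same
llm = ChatAnthropic(model="claude-3-haiku-20240307", temperature=0.7)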
The Prompt Template: Next, we define how we want to ask the LLM to perform the task. We'll use a `ChatPromptTemplate` to structure our request, clearly stating the desired format and incorporating the user's input topic.
from langchain_core.prompts import ChatPromptTemplate
# Define the prompt structure
prompt_template = ChatPromptTemplate.from_messages([
("system", "You are a witty assistant that generates short, funny tweets."),
("human", "Generate a tweet (max 140 chars) about: {topic}")
])
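Before wiring the template into a chain, you can inspect what it produces on its own. A quick sketch using the `prompt_template` defined above:

# Formatting the template fills in {topic} without calling any model
messages = prompt_template.format_messages(topic="procrastinating squirrels")
for message in messages:
    print(type(message).__name__, "->", message.content)
# SystemMessage -> You are a witty assistant that generates short, funny tweets.
# HumanMessage -> Generate a tweet (max 140 chars) about: procrastinating squirrels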
This template uses two message types:

- `system`: Provides context or instructions about the AI's role.
- `human`: Represents the user's input, including the placeholder `{topic}`, which will be filled dynamically.

The Output Parser: The LLM typically returns its response within a specific object structure (like an `AIMessage`). Often, we just want the plain text content. `StrOutputParser` handles this extraction for us.
from langchain_core.output_parsers import StrOutputParser
# Initialize the output parser
output_parser = StrOutputParser()
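To see the parser in isolation, here is a minimal sketch: given a chat message object, it simply returns the message's string content.

from langchain_core.messages import AIMessage

# The parser extracts the plain text from a message object
print(output_parser.invoke(AIMessage(content="Hello!")))  # prints: Hello!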
Chaining Components with LCEL: LangChain Expression Language (LCEL) allows us to elegantly connect these components using the pipe operator (`|`). This operator passes the output of one component as the input to the next, creating a processing pipeline.
# Create the chain using LCEL
tweet_chain = prompt_template | llm | output_parser
This line defines our application's workflow: the input data first goes to the `prompt_template` to format the request, then the formatted prompt is sent to the `llm`, and finally the `llm`'s output is processed by the `output_parser` to get the final string result.
Data flow through the simple LangChain Expression Language (LCEL) chain.
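To make this data flow concrete, the chain is equivalent to invoking each component by hand and passing each result to the next. A step-by-step sketch using the objects defined above:

# Step-by-step equivalent of: prompt_template | llm | output_parser
formatted_prompt = prompt_template.invoke({"topic": "procrastinating squirrels"})
ai_message = llm.invoke(formatted_prompt)  # returns an AIMessage
tweet = output_parser.invoke(ai_message)   # extracts the plain string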
Running the Chain: Now we can invoke our chain with a specific topic.
# Define the input topic
input_data = {"topic": "procrastinating squirrels"}
# Invoke the chain and get the result
generated_tweet = tweet_chain.invoke(input_data)
# Print the result
print("Generated Tweet:")
print(generated_tweet)
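Because every LCEL chain exposes the same runnable interface, `invoke` is not the only way to call it. A brief sketch of two other invocation styles (the topics shown are arbitrary examples):

# Stream the tweet chunk by chunk as it is generated
for chunk in tweet_chain.stream({"topic": "procrastinating squirrels"}):
    print(chunk, end="", flush=True)

# Or generate tweets for several topics in a single call
tweets = tweet_chain.batch([{"topic": "coffee"}, {"topic": "mondays"}])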
Here is the complete Python script combining all the steps:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

def main():
    # Load environment variables from .env file
    # Ensure your OPENAI_API_KEY is set in the .env file
    load_dotenv()
    if os.getenv("OPENAI_API_KEY") is None:
        print("Error: OPENAI_API_KEY environment variable not set.")
        print("Please create a .env file with OPENAI_API_KEY='your_api_key_here'")
        return

    # 1. Initialize the LLM
    # Using temperature=0.7 for creative but coherent output
    print("Initializing LLM...")
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

    # 2. Define the Prompt Template
    print("Defining prompt template...")
    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a witty assistant that generates short, funny tweets (max 140 characters)."),
        ("human", "Generate a tweet about: {topic}")
    ])

    # 3. Initialize the Output Parser
    print("Initializing output parser...")
    output_parser = StrOutputParser()

    # 4. Create the chain using the LCEL pipe operator
    print("Creating the processing chain...")
    tweet_chain = prompt_template | llm | output_parser

    # 5. Define the input and invoke the chain
    input_topic = "procrastinating squirrels"
    print(f"\nGenerating tweet for topic: '{input_topic}'...")
    input_data = {"topic": input_topic}

    try:
        generated_tweet = tweet_chain.invoke(input_data)

        # Print the result
        print("\n--- Generated Tweet ---")
        print(generated_tweet)
        print("-----------------------")
        print(f"Length: {len(generated_tweet)} characters")
    except Exception as e:
        print(f"\nAn error occurred: {e}")
        print("Please check your API key, internet connection, and model availability.")

if __name__ == "__main__":
    main()
When you run this script (assuming your environment is correctly set up), it will:

- Load your API key from the `.env` file.
- Initialize the model, prompt template, and output parser, and assemble them into a chain.
- Invoke the chain with the topic `"procrastinating squirrels"`.
- Print the generated tweet and its length.

The output might look something like this (the exact text will vary due to the nature of LLMs):
Initializing LLM...
Defining prompt template...
Initializing output parser...
Creating the processing chain...
Generating tweet for topic: 'procrastinating squirrels'...
--- Generated Tweet ---
My plan for world domination via nut hoarding is solid. Execution starts... right after this nap. And maybe one more acorn. #SquirrelLife
-----------------------
Length: 137 characters
This example demonstrates the core workflow. You can easily modify it:

- Change the `input_topic` variable to generate tweets about different subjects.
- Adjust the `system` or `human` messages in the `ChatPromptTemplate` to change the desired style or task (see the sketch after this list).
- Modify the `temperature` parameter in `ChatOpenAI` to see how it affects output creativity.
- Replace `ChatOpenAI` with a different LangChain model integration.
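As a sketch of the second modification, here is the template with an added, hypothetical `style` variable; every placeholder in the template becomes a required input key at invoke time:

# Hypothetical second placeholder "style"; both keys must be supplied
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a witty assistant that generates short, funny tweets."),
    ("human", "Generate a {style} tweet (max 140 chars) about: {topic}")
])
tweet_chain = prompt_template | llm | output_parser
print(tweet_chain.invoke({"topic": "procrastinating squirrels", "style": "sarcastic"}))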
This practical exercise forms the basis for building more sophisticated applications. In the next chapter, we will explore how to create more complex sequences and introduce agents that can use tools to interact with the world.