LangChain provides a uniform interface for interacting with a wide variety of language models, abstracting away the API differences between providers like OpenAI, Anthropic, or Hugging Face. This standardization lets you switch between models with minimal code changes. At the heart of this abstraction are two primary types of models: LLMs and Chat Models. Understanding their differences is the first step toward building effective applications.

### The LLM Interface: A Direct Text-in, Text-out Model

The `LLM` class represents the most straightforward interaction with a language model. It operates on a simple text-in, text-out basis: you provide a single string as a prompt, and the model returns a single string as its completion. This interface is best suited for models designed for text completion, such as OpenAI's legacy `text-davinci-003`. Consider this interface a direct function call where the input and output are both simple text.

```python
# Make sure to install the necessary packages:
# pip install langchain-openai
import os

from langchain_openai import OpenAI

# Set your API key
# os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# Instantiate the model. The legacy text-davinci-003 model has been
# retired, so we explicitly specify a current model that supports
# this completion-style interface.
llm = OpenAI(model="gpt-3.5-turbo-instruct")

# Prepare the input string
prompt = "What is the capital of Japan?"

# Invoke the model
response = llm.invoke(prompt)
print(response)

# Expected output:
# The capital of Japan is Tokyo.
```

The `invoke` method sends the prompt to the model's API and returns the generated text directly. This approach is efficient for single-turn tasks like summarization, translation, or answering a factual question that needs no conversational context.

### The ChatModel Interface: Managing Structured Conversations

In contrast, the `ChatModel` interface is designed for more complex, conversational interactions. Most modern, capable models, such as GPT-4 or Claude 3, are optimized for a chat-like format. Instead of a single string, a `ChatModel` takes a list of message objects as input. This structure allows the model to understand the context of a conversation, including who said what.

There are three main types of messages:

- `SystemMessage`: Sets the overall tone and instructions for the AI. It is like giving the model a persona or a set of rules to follow throughout the conversation, and it is usually the first message in the list.
- `HumanMessage`: Represents input from the user.
- `AIMessage`: Represents a previous response from the model itself. By including `AIMessage` objects in the input, you provide the model with the conversation's history.

Let's see how this works with `ChatOpenAI`, the chat-specific integration for OpenAI models.

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

# Instantiate the chat model
chat = ChatOpenAI(model="gpt-4o")

# Prepare the list of messages
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Hello, how are you?"),
]

# Invoke the model
response = chat.invoke(messages)
print(response)

# Expected output:
# AIMessage(content='Bonjour, comment allez-vous ?', response_metadata=...)

# To get the text content, access the .content attribute
print(response.content)

# Expected output:
# Bonjour, comment allez-vous ?
```

Notice that the output is not a string but an `AIMessage` object. This object contains the string content in its `.content` attribute, along with other useful metadata from the API response. This structured input and output makes `ChatModel` the preferred choice for building chatbots, multi-step reasoning agents, or any application where dialogue history matters.
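The translation example above is a single turn, but the real payoff of the message list is history. The following sketch extends the pattern to a second turn by replaying an earlier exchange as an `AIMessage`; the geography persona and the questions are illustrative choices, while the message classes and the `invoke` call are the same ones shown above.

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-4o")

# Replay the first exchange as history, then ask a follow-up question
# whose meaning depends on that history ("its" refers to Tokyo).
messages = [
    SystemMessage(content="You are a concise geography assistant."),  # illustrative persona
    HumanMessage(content="What is the capital of Japan?"),
    AIMessage(content="The capital of Japan is Tokyo."),  # the model's earlier reply
    HumanMessage(content="What is its population?"),
]

response = chat.invoke(messages)
print(response.content)
```

In a running chatbot, you would append the returned `AIMessage` and each new `HumanMessage` to this list after every turn, so the model always sees the full dialogue.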
### Comparing the Two Interfaces

The distinction between these two model types reflects the evolution of LLMs: early models were primarily text completers, while modern models are fine-tuned as conversational assistants. The following diagram illustrates the different data flows for each interface.

```dot
digraph G {
    rankdir=TB;
    graph [fontname="Arial", splines=ortho];
    node [shape=box, style="rounded,filled", fontname="Arial", fillcolor="#e9ecef"];
    edge [fontname="Arial"];

    subgraph cluster_llm {
        label = "LLM Interface";
        style = "rounded";
        bgcolor = "#f8f9fa";
        llm_input [label="Input\n(string)", shape=note, fillcolor="#a5d8ff"];
        llm_model [label="LLM\n(e.g., gpt-3.5-turbo-instruct)", fillcolor="#ced4da"];
        llm_output [label="Output\n(string)", shape=note, fillcolor="#96f2d7"];
        llm_input -> llm_model -> llm_output;
    }

    subgraph cluster_chat {
        label = "ChatModel Interface";
        style = "rounded";
        bgcolor = "#f8f9fa";
        chat_input [label="Input\n([SystemMessage,\nHumanMessage,\nAIMessage])", shape=note, fillcolor="#a5d8ff"];
        chat_model [label="ChatModel\n(e.g., ChatOpenAI)", fillcolor="#ced4da"];
        chat_output [label="Output\n(AIMessage)", shape=note, fillcolor="#96f2d7"];
        chat_input -> chat_model -> chat_output;
    }
}
```

*The LLM interface processes a simple string, while the ChatModel interface handles a structured list of messages, making it suitable for conversational context.*

Here is a summary of the differences:

| Feature | LLM Interface | ChatModel Interface |
| --- | --- | --- |
| Input format | A single string of text. | A list of message objects (`SystemMessage`, `HumanMessage`, etc.). |
| Output format | A single string of text. | An `AIMessage` object containing content and metadata. |
| Common use | Simple text completion, summarization, single-turn Q&A. | Conversational agents, multi-turn dialogue, role-playing scenarios. |
| Underlying models | Optimized for text completion (e.g., `gpt-3.5-turbo-instruct`). | Optimized for instruction-following and dialogue (e.g., `gpt-4o`, `claude-3-sonnet`). |

As a general rule, you should default to ChatModels, as they are the standard for most modern, high-performing language models.

With a clear understanding of how to communicate with a model, the next step is to master how you formulate your instructions. We will now turn our attention to PromptTemplates, which allow you to construct dynamic and reusable inputs for these models.