
My LangChain AI Projects Got a Boost with LCEL

📖 10 min read•1,855 words•Updated Apr 16, 2026

Alright agntbox fam, Nina here, and let me tell you, my coffee pot is working overtime this week. Why? Because I’ve been knee-deep in something that’s been subtly but significantly shifting how I approach my personal AI projects: LangChain Expression Language (LCEL). You know how it is – you get used to a certain way of doing things, and then something comes along that makes you smack your forehead and say, “Why didn’t I think of that?!”

LCEL isn’t brand new, but it’s still gaining traction, and I’ve noticed a lot of folks in the forums are either just dipping their toes in or are still a bit intimidated by it. I was in that second camp for a while, I’ll admit. My initial reaction was, “Another layer? Do I really need this complexity?” But after a few frustrating hours debugging a particularly tangled LangChain sequence – a RAG pipeline that felt like untangling a ball of yarn after a cat had its way with it – I decided to give LCEL a proper shot.

And holy moly, what a difference. This isn’t just about making your LangChain code look prettier; it’s about making it more maintainable, more composable, and frankly, a lot less prone to those head-desk moments when you’re trying to figure out where your data went wrong. So, if you’re still writing your LangChain chains like it’s 2023, grab another coffee (or your beverage of choice), because we’re diving into why LCEL is your new best friend for building robust AI applications.

My Journey from Spaghetti Code to Streamlined Chains

Before LCEL, my LangChain chains often looked something like this:


from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# My old way (simplified for brevity, but you get the idea)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    ("user", "{question}")
])
model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

# The "chain"
def old_style_chain(question):
    formatted_prompt = prompt.format_messages(question=question)
    response = model.invoke(formatted_prompt)
    parsed_response = output_parser.parse(response.content)  # Had to manually parse sometimes
    return parsed_response

# And then imagine adding retrievers, multiple prompts, formatting steps...

It worked, sure. But as soon as I needed to add a retriever, or introduce conditional logic, or pass the output of one component as input to another in a slightly non-linear way, things got messy. Fast. I’d end up with nested calls, temporary variables everywhere, and a debugging process that felt like playing a game of “Where’s Waldo?” with my data.

The turning point for me was a project where I was building a customer support chatbot that needed to query a vector database, summarize the results, and then use that summary to answer the user’s question, all while maintaining a conversational history. My initial attempt was a sprawling Python script. Every time I added a new step, it felt like I was patching a leaky boat. That’s when I seriously looked at LCEL, and the concept of “piping” really clicked.

What Exactly is LCEL, Anyway?

At its core, LangChain Expression Language is a way to compose LangChain components into chains using a simple, intuitive syntax – specifically, the | operator. Think of it like the pipe operator in your Unix shell, but for AI components. You take the output of one component and “pipe” it directly as input to the next.

It embraces functional programming principles, making your chains feel more like data pipelines than imperative scripts. Each component in an LCEL chain is essentially a callable that takes an input and returns an output. This might sound overly simple, but its power lies in its composability and the way it handles input/output types.

Key Benefits I’ve Actually Experienced:

  • Readability: My chains are now much easier to follow. The flow of data is explicit, left-to-right. No more guessing what’s going where.
  • Modularity: Each component is a distinct unit. Need to swap out your LLM? Just change that one component in the chain. Need a different retriever? Same deal.
  • Debugging: This is a big one. Because each step is clearly defined, if something goes wrong, it’s usually much easier to pinpoint which part of the chain is causing the issue. LangChain’s built-in tracing (like with LangSmith) becomes even more powerful with LCEL.
  • Streaming: LCEL chains support streaming out of the box, which is fantastic for user experience, especially with longer LLM responses.
  • Parallelism: For parts of your chain that can run independently (e.g., fetching multiple pieces of context concurrently), LCEL makes it easy to specify parallel execution.

LCEL in Action: Practical Examples

Let’s rewrite that simple example using LCEL:


from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# The new, improved way with LCEL
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    ("user", "{question}")
])
model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

# The chain, beautifully piped
chain = prompt | model | output_parser

# Now, invoke it!
response = chain.invoke({"question": "What is the capital of France?"})
print(response)

See? So much cleaner! The prompt takes the dictionary {"question": "..."} and formats it into messages; the model receives those messages and returns an AI message; and finally, the output_parser extracts the string content. Each component knows what to expect and what to produce.

Example 2: Adding a Retriever to Your Chain

This is where LCEL really starts to shine. Let’s build a simple RAG (Retrieval-Augmented Generation) chain. For this, we’ll need a dummy retriever, but imagine this could be pulling from a vector store like Pinecone or Chroma.


from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# Dummy retriever for demonstration
def dummy_retriever(question_dict):
    question = question_dict["question"]
    # In a real scenario, this would query a vector DB
    if "Paris" in question:
        return "Paris is the capital and most populous city of France."
    elif "AI" in question:
        return "Artificial intelligence (AI) is intelligence demonstrated by machines."
    else:
        return "No specific context found for your query."

# Define your components
retriever = RunnablePassthrough.assign(context=dummy_retriever)  # We'll explain RunnablePassthrough in a sec
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI assistant. Use the following context to answer the question:\n\n{context}"),
    ("user", "{question}")
])
model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

# The RAG chain!
rag_chain = retriever | prompt | model | output_parser

# Let's test it
print("Query 1:", rag_chain.invoke({"question": "Tell me about Paris."}))
print("Query 2:", rag_chain.invoke({"question": "What is AI?"}))
print("Query 3:", rag_chain.invoke({"question": "Tell me about cars."}))

Okay, let’s break down that RunnablePassthrough.assign(context=dummy_retriever) part. This is a super handy LCEL construct. When you have a chain, sometimes you want to add a new piece of information (like context from our retriever) without losing the original input (the question). RunnablePassthrough lets the original input flow through, and .assign() lets you add new keys to the dictionary being passed along.

So, {"question": "..."} goes into retriever. The retriever calls dummy_retriever with the question, gets the context, and then retriever outputs {"question": "...", "context": "..."}. This dictionary then flows into the prompt, which correctly fills both placeholders. Brilliant!

Example 3: Handling Multiple Inputs with RunnableParallel

What if you need to fetch information from two different sources simultaneously and then combine them? Enter RunnableParallel. I used this recently when building a tool that needed to both summarize a web page AND extract key entities from it, then combine those two outputs for a final LLM call. Running them sequentially was too slow.


from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# Simulate two different "retrievers"
def get_summary(text_input):
    return f"Summary of the text: {text_input['text'][:50]}..."

def get_keywords(text_input):
    return f"Keywords: important, key, {text_input['text'].split()[0]}"

# Define the parallel processing step
# The output of this will be a dictionary like {"summary": "...", "keywords": "..."}
parallel_info = RunnableParallel(
    summary=get_summary,
    keywords=get_keywords
)

# The final prompt that uses both pieces of info
final_prompt = ChatPromptTemplate.from_messages([
    ("system", "Based on the summary and keywords, provide a concise overview."),
    ("user", "Summary: {summary}\nKeywords: {keywords}\nOriginal Text: {text}")
])

model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

# The full chain
full_processing_chain = (
    RunnablePassthrough.assign(
        processed_info=parallel_info  # Adds the parallel results under the key 'processed_info'
    )
    | {
        "summary": lambda x: x["processed_info"]["summary"],
        "keywords": lambda x: x["processed_info"]["keywords"],
        "text": lambda x: x["text"]  # Pass original text through
    }
    | final_prompt
    | model
    | output_parser
)

# Test with some example text
sample_text = "The quick brown fox jumps over the lazy dog. This sentence is often used for typing tests and demonstrates all letters of the English alphabet."
result = full_processing_chain.invoke({"text": sample_text})
print(result)

Okay, that might look a bit more complex, but let’s unpack it.
The RunnableParallel runs get_summary and get_keywords in parallel. Each of these receives the original input dictionary (in this case, {"text": sample_text}).
The output of parallel_info will be {"summary": "...", "keywords": "..."}.

Then, the RunnablePassthrough.assign(processed_info=parallel_info) step takes the original input {"text": "..."} and adds a new key "processed_info" whose value is the output of parallel_info. So we end up with something like {"text": "...", "processed_info": {"summary": "...", "keywords": "..."}}.

The next dictionary in the pipe, { "summary": lambda x: x["processed_info"]["summary"], ... }, is a clever way to reshape the dictionary for the final_prompt. It pulls out the specific keys that final_prompt expects (summary, keywords, text) from the larger input dictionary. This reshaping is incredibly flexible for manipulating data flow.

Actionable Takeaways for Your Next AI Project

If you’re still on the fence about LCEL, here’s my plea to you:

  1. Start Simple: Don’t try to refactor your most complex chain first. Take a simple prompt-model-parser chain and convert it. Get a feel for the pipe syntax.
  2. Embrace RunnablePassthrough: This component is your best friend for maintaining original inputs while adding new data to your chain. Use RunnablePassthrough.assign() liberally.
  3. Understand Input/Output Types: LCEL works best when you clearly understand what each component expects as input and what it produces as output. LangChain’s documentation is getting really good at specifying these.
  4. Use LangSmith: If you’re using LangSmith for tracing, LCEL chains make the traces so much cleaner and easier to debug. You can visually see the data flowing from one step to the next.
  5. Think in Pipelines, Not Scripts: Shift your mindset from writing a sequence of imperative commands to designing a data flow. Each component transforms the data and passes it along.

LCEL isn’t just syntactic sugar; it’s a fundamental shift in how you build with LangChain. It makes your AI applications more robust, easier to understand, and a joy to extend. Trust me, your future self (and anyone else who has to read your code) will thank you. Now go forth and pipe your way to better AI chains!

Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.
