
My LCEL Experience: Streamlining AI App Development

📖 11 min read · 2,083 words · Updated Apr 1, 2026

Hey there, agntbox readers! Nina here, back with another dive into the ever-shifting world of AI tools. Today, we’re not just looking at a tool; we’re looking at something that feels like it just landed from the future, but in a totally approachable way. We’re talking about the LangChain Expression Language (LCEL), and specifically how it’s making my life as someone who builds and tests AI applications a whole lot easier and more structured.

I remember just a couple of years ago, building anything more complex than a single-turn prompt with an LLM felt like trying to duct-tape together a Rube Goldberg machine. You’d pass the output of one function into the input of another, handle error states manually, and pray your data types lined up. It worked, mostly, but it was clunky. LangChain came along and offered a framework, which was a huge step forward. But even then, chaining components together still had a bit of a boilerplate feel, especially when you wanted to do something custom or slightly outside the pre-built chains.

Enter LCEL. When I first started seeing chatter about it late last year, I admit I was a little skeptical. Another abstraction? Did we really need more layers? But after spending a solid few weeks building out a fairly complex internal content summarization and categorization tool for agntbox using it, I’m a convert. LCEL isn’t just another feature; it’s a fundamental shift in how you compose LLM applications within the LangChain ecosystem. It’s like going from writing raw SQL queries for every database interaction to using a really well-designed ORM – you still have control, but the common patterns are just… smoother.

LCEL: More Than Just Chaining

So, what is LCEL? At its core, it’s a way to declaratively compose runnable sequences. Think of it as a set of rules and primitives that allow you to string together LLMs, prompt templates, parsers, and custom functions in a highly flexible and efficient manner. It’s designed to be:

  • Composable: You can combine small, independent components into larger, more complex ones.
  • Streamable: It supports asynchronous operations and streaming, which is fantastic for real-time applications.
  • Parallelizable: It handles concurrent execution of components gracefully.
  • Inspectable: You can trace the execution of your chains, which is a lifesaver for debugging.
  • Portable: Once you define a runnable, it can be invoked in a variety of ways.

The “Expressions Language” part might sound a bit intimidating, but it’s really about using Python’s native operators (like | for piping) in a specific way to build these sequences. It feels very Pythonic once you get the hang of it.
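To make the pipe idea concrete before we touch any real LLMs, here’s a toy sketch (pure Python, nothing from LangChain) of roughly what `|` buys you: each component implements `__or__`, so `a | b` returns a new component that feeds `a`’s output into `b`. The `Step` class below is my own illustration of the mechanism, not the real `Runnable` implementation.

```python
class Step:
    """Toy stand-in for an LCEL runnable: wraps a function and supports `|`."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # `self | other` composes: run self first, pipe the result into other
        return Step(lambda x: other.invoke(self.fn(x)))


# Three tiny "components" chained with the pipe operator
template = Step(lambda topic: f"Summarize: {topic}")
fake_llm = Step(lambda prompt: prompt.upper())
parser = Step(lambda text: text.strip())

chain = template | fake_llm | parser
print(chain.invoke("LCEL"))  # SUMMARIZE: LCEL
```

Once you see the chain as “a function built out of smaller functions,” the rest of LCEL is mostly conveniences layered on top of this composition trick.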

My “Aha!” Moment: Building a Content Classifier

Let me tell you about the project that really sold me on LCEL. Here at agntbox, we get a ton of submissions for articles, and sometimes, the initial categorization isn’t quite right, or an article might touch on multiple topics. My goal was to build a tool that could take an article draft, summarize it, and then suggest primary and secondary categories from a predefined list, along with a confidence score. Before LCEL, I would have probably written a few different functions:

  1. A function to get the summary.
  2. A function to get the categories based on the summary.
  3. Error handling and data parsing for each step.

It would have worked, but imagine wanting to swap out the summarization model, or add an extra step to check for plagiarism before categorization. Each change would have meant digging into a specific function and potentially breaking something else.

With LCEL, the whole process felt like assembling LEGOs. Let’s walk through a simplified version of how I built it.

Step 1: The Basic Components

First, I defined my LLM (I’m using OpenAI’s gpt-4-turbo for this, but you could easily swap it). Then, I needed a prompt for summarization and another for categorization.


from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser, JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field

# My LLM
llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

# 1. Summarization Prompt
summary_prompt = ChatPromptTemplate.from_template(
    "Please provide a concise summary of the following article for internal review. "
    "Focus on the main arguments and key takeaways, aiming for about 150-200 words.\n\nArticle: {article_content}"
)

# 2. Categorization Prompt
# Define the expected output format for categorization
class CategorySuggestions(BaseModel):
    primary_category: str = Field(description="The most relevant primary category.")
    secondary_category: str | None = Field(description="A relevant secondary category, if applicable.")
    confidence_score: float = Field(description="A confidence score between 0.0 and 1.0 for the primary category.")

category_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert content classifier. Given a summary and a list of available categories, "
     "suggest the best primary and secondary categories. Also provide a confidence score for the primary category. "
     "The categories are: AI Tools, Machine Learning, Data Science, Software Development, Cloud Computing, Cybersecurity, Robotics, Ethics in AI."),
    ("human", "Summary: {summary}\n\nSuggest categories and a confidence score in JSON format.")
])

# Output parsers
str_parser = StrOutputParser()
json_parser = JsonOutputParser(pydantic_object=CategorySuggestions)

Notice how I’m already thinking about structured output for the categorization step using Pydantic and JsonOutputParser. This is where LCEL really shines – making it easy to enforce data integrity across steps.

Step 2: Composing the Summary Chain

The first part of my tool is getting a summary. With LCEL, this is super clean:


summary_chain = (
    summary_prompt
    | llm
    | str_parser
)

This sequence reads almost like English: “Take the summary_prompt, pipe its output to the llm, and then pipe the LLM’s output through the str_parser.” Simple, right?

Step 3: Composing the Categorization Chain

The categorization needs the summary as input, so it’s a bit different. I wanted to feed the output of the summary chain into the category chain. LCEL provides ways to do this using dictionaries for inputs.


categorization_chain = (
    {"summary": summary_chain}  # Pass the output of summary_chain as 'summary'
    | category_prompt
    | llm
    | json_parser
)

Here, {"summary": summary_chain} is a runnable that takes the input for the whole process (the article content), passes it to the summary_chain, and then maps the result to a key named "summary". This dictionary is then passed as input to category_prompt.
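If the dictionary syntax feels magical, here’s a rough pure-Python model of what LCEL does with it (my own sketch, not LangChain internals): every value in the dict is invoked with the same input, and the results are collected under the corresponding keys before being handed to the next step.

```python
def run_mapping(mapping, inp):
    """Toy model of LCEL's dict coercion: invoke every value with the
    same input and collect the results under the dict's keys."""
    return {key: fn(inp) for key, fn in mapping.items()}


# Stand-in for summary_chain: any callable taking the original input
fake_summary_chain = lambda article: f"Summary of: {article[:20]}..."

step_input = run_mapping(
    {"summary": fake_summary_chain},
    "A long article about LCEL and pipelines.",
)
print(step_input)  # {'summary': 'Summary of: A long article about...'}
```

The resulting dictionary is exactly the shape `category_prompt` expects, which is why the real chain can pipe it straight in.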

Step 4: Putting It All Together

Now, I wanted to run both the summarization and categorization. The categorization depends on the summary, so it’s not parallel. But what if I wanted to do something else in parallel with the categorization (e.g., check for keywords)? For this particular flow, it’s sequential, but LCEL makes even complex branching clear.

For my content classifier, I decided to keep it simple and just combine the steps sequentially, but making sure the overall output was a combined dictionary of both results:


from langchain_core.runnables import RunnablePassthrough

# A runnable that takes article_content and returns a dictionary with summary and categorization
full_classifier_chain = (
    RunnablePassthrough.assign(summary=summary_chain)
    | RunnablePassthrough.assign(categorization=category_prompt | llm | json_parser)
)

Let’s break that down:

  • RunnablePassthrough.assign(summary=summary_chain): This takes the initial input (article_content) and passes it through. It also runs summary_chain with that same input and assigns its result to a new key called "summary". So, the output of this first stage is {"article_content": "...", "summary": "..."}.
  • The | then pipes this dictionary to the next RunnablePassthrough.assign.
  • RunnablePassthrough.assign(categorization=category_prompt | llm | json_parser): This takes the existing dictionary (which now has article_content and summary), passess the whole thing through, and runs the categorization step on it. One subtlety: I deliberately don’t reuse categorization_chain here, because its first element ({"summary": summary_chain}) would recompute the summary and pay for a second LLM call. The dictionary already contains the summary, so category_prompt can read the "summary" key directly, and its parsed output is assigned to the "categorization" key.

The final output of full_classifier_chain would be a dictionary like: {"article_content": "...", "summary": "...", "categorization": {"primary_category": "...", ...}}.

To use it:


article_draft = """
The latest advancements in large language models are pushing the boundaries of what's possible in conversational AI. Researchers at XYZ Labs recently unveiled 'Synthetica-7', a model capable of generating highly nuanced and contextually aware responses, even in long-form discussions. Unlike previous iterations that struggled with maintaining coherence over several turns, Synthetica-7 employs a novel attention mechanism that prioritizes long-range dependencies. This breakthrough has significant implications for customer service chatbots, educational tutors, and even creative writing assistants. The model was trained on a massive corpus of diverse text, including scientific papers, fiction, and technical manuals, allowing it to adapt its tone and style dynamically. Furthermore, XYZ Labs has open-sourced a smaller, fine-tuned version for academic research, fostering a collaborative environment for further innovation in the field. Ethical considerations around AI bias and misuse were also a central focus during its development, with extensive testing conducted to mitigate potential harmful outputs.
"""

# Invoke the chain
result = full_classifier_chain.invoke({"article_content": article_draft})
print(result["summary"])
print(result["categorization"])

This setup is incredibly powerful. If I wanted to add a step to check the article’s tone, I could just add another RunnablePassthrough.assign with a new prompt and LLM call. If I wanted to switch from OpenAI to Cohere, it’s a single line change at the llm definition. The modularity is fantastic.

Beyond the Basics: Parallelism and Fallbacks

LCEL isn’t just for simple sequential chains. It offers powerful ways to handle more complex scenarios:

  • Parallel Execution: Use RunnableParallel to run multiple components at the same time and combine their outputs into a dictionary. This is great for, say, getting multiple perspectives from different LLMs or running a classification alongside a summarization where neither depends on the other.
  • Fallbacks: The .with_fallbacks() method allows you to define alternative runnables to try if the primary one fails. This is huge for building more resilient applications, maybe trying a cheaper, faster model first and falling back to a more expensive, robust one if the first fails or gives a low-confidence output.
  • Custom Functions: You can easily integrate any Python function into your LCEL chain using RunnableLambda. This is how I’d add a custom pre-processing step or a post-processing validation.
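Of those three, fallbacks are the easiest to demystify without touching an LLM. I haven’t wired this into the classifier yet, but conceptually `.with_fallbacks()` behaves like the plain-Python sketch below. This is my own approximation of the pattern; the real method also lets you control which exception types trigger the fallback.

```python
def with_fallbacks(primary, fallbacks):
    """Toy version of LCEL's .with_fallbacks(): try each callable in
    order and return the first result that doesn't raise."""
    def invoke(x):
        last_err = None
        for fn in [primary, *fallbacks]:
            try:
                return fn(x)
            except Exception as err:
                last_err = err
        raise last_err
    return invoke


def cheap_model(prompt):
    # Pretend the cheap, fast model is flaky today
    raise RuntimeError("rate limited")


def robust_model(prompt):
    return f"robust answer to: {prompt}"


resilient = with_fallbacks(cheap_model, [robust_model])
print(resilient("classify this article"))  # robust answer to: classify this article
```

The cheap model fails, the robust one quietly takes over, and the caller never notices; that is the whole value proposition of fallbacks in a production chain.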

For example, if I wanted to get the summary and also check for keywords from the original article simultaneously, I could do:


from langchain_core.runnables import RunnableParallel

keyword_extraction_prompt = ChatPromptTemplate.from_template(
    "Extract 5-10 key topics or keywords from the following article. List them as comma-separated values.\n\nArticle: {article_content}"
)

keyword_chain = (
    keyword_extraction_prompt
    | llm
    | str_parser
    | (lambda x: [kw.strip() for kw in x.split(',')])  # Custom parsing for keywords
)

# Run summary and keyword extraction in parallel
parallel_analysis = RunnableParallel(
    summary=summary_chain,
    keywords=keyword_chain
)

# This will run both 'summary_chain' and 'keyword_chain' concurrently
# result_parallel = parallel_analysis.invoke({"article_content": article_draft})
# print(result_parallel)

This RunnableParallel makes complex workflows feel intuitive and performant. You’re not waiting for one LLM call to finish before starting another if they don’t depend on each other.

Actionable Takeaways for Your AI Projects

So, after playing with LCEL extensively, here’s what I’d recommend if you’re building LLM-powered applications:

  1. Start Simple, Build Up: Don’t try to architect your entire application with LCEL from day one. Begin with a single, clear chain (like a prompt -> LLM -> parser). As you get comfortable, introduce more complex elements like parallel execution or fallbacks.
  2. Embrace Structured Output: Use Pydantic models with JsonOutputParser or similar methods. This makes your chains much more reliable, as you’re enforcing a contract on the LLM’s output. It drastically reduces parsing errors downstream.
  3. Think in Runnables: Every component in LCEL is a “runnable.” This includes prompts, LLMs, parsers, and even custom Python functions. Understanding this helps you see how everything can fit together.
  4. Debug with Tracing: LangChain provides tracing tools (like LangSmith) that integrate seamlessly with LCEL. Use them! Being able to visualize the execution path and inputs/outputs at each step is invaluable when something goes wrong.
  5. Consider Streaming: If you’re building interactive applications (like chatbots), LCEL’s support for streaming results can significantly improve user experience by showing partial responses as they’re generated.
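On takeaway 2, here’s a stdlib-only illustration of why a schema contract helps, using `dataclasses` instead of Pydantic so the mechanics are visible. The idea is the same as JsonOutputParser with a Pydantic model: the parse step fails loudly the moment the model’s output drifts from the agreed shape, instead of letting malformed data leak downstream. Names like `parse_category_json` are mine, purely for illustration.

```python
import json
from dataclasses import dataclass


@dataclass
class CategoryResult:
    primary_category: str
    confidence_score: float


def parse_category_json(raw: str) -> CategoryResult:
    """Fail loudly if the LLM's output doesn't match the contract."""
    data = json.loads(raw)
    result = CategoryResult(
        primary_category=data["primary_category"],
        confidence_score=float(data["confidence_score"]),
    )
    if not 0.0 <= result.confidence_score <= 1.0:
        raise ValueError(f"confidence out of range: {result.confidence_score}")
    return result


good = parse_category_json('{"primary_category": "AI Tools", "confidence_score": 0.92}')
print(good.primary_category)  # AI Tools
```

A malformed response (missing key, confidence of 1.5, and so on) raises immediately at the parsing boundary, which is exactly where you want to catch it.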

LCEL isn’t just a minor update; it’s a fundamental improvement to how we build with LangChain. It makes building robust, modular, and performant LLM applications feel less like a hackathon and more like structured software engineering. If you’ve been on the fence about diving deeper into LangChain or felt overwhelmed by its earlier complexities, now is definitely the time to give LCEL a serious look. It’s made my development workflow smoother, and I’m confident it can do the same for yours.

That’s all for this one, folks! Happy building, and I’ll catch you next time here at agntbox.com.


Written by Jake Chen

Software reviewer and AI tool expert. Independently tests and benchmarks AI products. No sponsored reviews — ever.
