Agents

Agents decide and act. Pipelines chain steps together. Workflows fire on events. Orchestration coordinates all three. This is where AI systems stop being tools and start being autonomous.

Three Types of AI Automation

Not every automated system is an agent. The distinctions matter because they determine how much autonomy you're granting, how much control you retain, and where things can go wrong.

Agents

An agent receives a goal and decides how to achieve it. It chooses which tools to call, in what order, and adapts its approach based on what comes back. The key property is autonomy — the agent makes decisions at runtime that weren't predetermined by a developer.

Example: a support agent that reads a ticket, decides whether to look up the customer's account, search the knowledge base, or escalate — all without being told which to do.
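The support-agent example above can be sketched as a loop: decide, act, observe, repeat. This is a minimal illustration, not a production implementation — the tool names are hypothetical, and the `decide` function stands in for the LLM call that would make the choice at runtime.

```python
def lookup_account(ticket):
    # Hypothetical tool: fetch the customer's account record.
    return {"account": f"account-for-{ticket['customer']}"}

def search_kb(ticket):
    # Hypothetical tool: search the knowledge base.
    return {"articles": ["reset-password-guide"]}

def escalate(ticket):
    # Hypothetical tool: hand the ticket to a human.
    return {"escalated": True}

TOOLS = {"lookup_account": lookup_account,
         "search_kb": search_kb,
         "escalate": escalate}

def decide(ticket, history):
    # Stand-in for the LLM's runtime tool choice; here, keyword rules.
    taken = {action for action, _ in history}
    if "refund" in ticket["text"] and "escalate" not in taken:
        return "escalate"
    if "lookup_account" not in taken:
        return "lookup_account"
    if "password" in ticket["text"] and "search_kb" not in taken:
        return "search_kb"
    return "done"

def run_agent(ticket, max_steps=5):
    # The execution path emerges at runtime; only tools and a step
    # budget are fixed by the developer.
    history = []
    for _ in range(max_steps):
        action = decide(ticket, history)
        if action == "done":
            break
        history.append((action, TOOLS[action](ticket)))
    return [action for action, _ in history]

print(run_agent({"customer": "acme", "text": "I can't reset my password"}))
```

Note that nothing outside `decide` encodes the order of tool calls — that's the autonomy property in miniature.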

Pipelines

A pipeline has a fixed sequence of steps. Data flows in, gets processed through each stage, and comes out the other end. There's no decision-making at runtime — the steps and their order are defined in advance. Pipelines are predictable, testable, and easy to debug.

Example: an intake pipeline that extracts text from a PDF, chunks it, generates embeddings, and writes them to a vector store. Same steps every time.
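Contrast that with the agent loop: a pipeline is just function composition in a fixed order. The sketch below uses toy stand-ins (the "extraction" and "embedding" steps are placeholders, not real PDF or model calls) to show the shape.

```python
def extract_text(doc):
    # Stand-in for real PDF text extraction.
    return doc["text"]

def chunk(text, size=20):
    # Split text into fixed-size chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunks):
    # Toy embedding: one-dimensional vector per chunk.
    return [[float(len(c))] for c in chunks]

def run_pipeline(doc, store):
    # Same steps, same order, every run — no decisions at runtime.
    text = extract_text(doc)
    chunks = chunk(text)
    vectors = embed(chunks)
    store.extend(zip(chunks, vectors))  # write to the "vector store"
    return len(vectors)

store = []
n = run_pipeline({"text": "x" * 45}, store)
print(n)  # 45 characters in 20-character chunks -> 3
```

Because the control flow is static, each stage can be unit-tested in isolation — the property that makes pipelines predictable and easy to debug.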

Workflows

A workflow triggers on an event and executes a defined sequence of actions, often with conditional branching. Unlike pipelines, workflows respond to external signals — a new email arrives, a form is submitted, a threshold is crossed. They're the connective tissue between systems.

Example: when a new lead fills out a form, enrich the data via an API, score it with a model, and route it to the right salesperson based on the score.
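That lead-routing workflow can be sketched as an event handler with one predefined branch. The enrichment and scoring functions are hypothetical stand-ins for the API and model calls; the point is that the branch condition is fixed in advance, even though the data flowing through it is not.

```python
def enrich(lead):
    # Stand-in for a data-enrichment API call.
    return {**lead, "company_size": 500}

def score(lead):
    # Stand-in for a scoring model.
    return 90 if lead["company_size"] > 100 else 40

def route(lead, lead_score):
    # The condition is defined in advance — reactive, not autonomous.
    return "senior-sales" if lead_score > 80 else "sales-pool"

def on_form_submitted(event):
    # Triggered by an external signal: a form submission event.
    lead = enrich(event["lead"])
    return route(lead, score(lead))

print(on_form_submitted({"lead": {"email": "a@example.com"}}))
```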

The Orchestration Layer

Orchestration sits above agents, pipelines, and workflows. It's the layer that coordinates between them — deciding which system handles what, routing data between them, and managing failure.

In simple systems, you might have a single agent or a single pipeline. But production systems almost always need multiple components working together. An agent might kick off a pipeline. A workflow might spawn an agent. A pipeline might trigger a workflow when it finishes. Something needs to manage that coordination.

That's what the orchestration layer does. It handles four things:

Coordination

Determining which agent, pipeline, or workflow should handle a given task. Routing incoming requests to the right system based on intent, context, or load.

Routing

Passing data between components. When an agent finishes its work, the orchestrator sends the output to the next step — whether that's another agent, a pipeline, or an external system.
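One common way to express this is a declared route table: when a component finishes, the orchestrator looks up where its output goes next. The component names below are hypothetical; this is a sketch of the idea, not a specific orchestration framework.

```python
# Each key is a component that just finished; each value is the next hop.
ROUTES = {
    "agent:support": "pipeline:ticket_log",
    "pipeline:ticket_log": "external:crm",
}

def route_output(source, payload, handlers):
    # Send a finished component's output to its declared next handler.
    target = ROUTES.get(source)
    if target is None:
        return None  # terminal component: nowhere left to route
    return handlers[target](payload)

handlers = {
    "pipeline:ticket_log": lambda p: {"logged": p},
    "external:crm": lambda p: {"crm_record": p},
}
print(route_output("agent:support", {"ticket": 1}, handlers))
```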

Failure Handling

Retries, fallbacks, and graceful degradation. When an agent times out or a pipeline step fails, the orchestrator decides what happens next — retry, use a fallback, or escalate to a human.
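The retry-then-fallback-then-escalate ladder can be sketched in a few lines. This is a simplified illustration — real orchestrators add backoff schedules, timeouts, and alerting, and the `flaky_agent` here just simulates a failing step.

```python
import time

def with_retries(step, fallback, retries=3, delay=0.0):
    # Try the primary step a few times...
    for _ in range(retries):
        try:
            return step()
        except Exception:
            time.sleep(delay)  # back-off between retries (zero here for brevity)
    # ...then the fallback...
    try:
        return fallback()
    except Exception:
        # ...and if that also fails, escalate to a human.
        return "escalate-to-human"

def flaky_agent():
    raise TimeoutError("agent timed out")  # simulated failure

print(with_retries(flaky_agent, lambda: "fallback-answer"))
```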

State Management

Keeping track of where a multi-step process is. Which steps have completed, what data has been gathered, and what still needs to happen. This is especially critical for long-running processes.
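At its simplest, state management is a run record: which steps are done, what each produced, and what comes next. A minimal sketch (in practice this record would be persisted so long-running processes survive restarts):

```python
class ProcessState:
    """Tracks progress through a multi-step process."""

    def __init__(self, steps):
        self.steps = list(steps)   # the defined step sequence
        self.completed = {}        # step name -> output data

    def record(self, step, output):
        # Mark a step as completed and keep what it produced.
        self.completed[step] = output

    def next_step(self):
        # First step in the sequence that hasn't completed yet.
        for step in self.steps:
            if step not in self.completed:
                return step
        return None  # the whole process is finished

state = ProcessState(["extract", "chunk", "embed"])
state.record("extract", "raw text")
print(state.next_step())  # chunk
```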

What's the Difference?

The lines between these concepts can blur. Here are the practical distinctions I use when designing systems for clients.

Agents choose what to do dynamically. They receive a goal and figure out the steps themselves. An agent interacting with MCP tools might call a database lookup, then a web search, then a calculation — all based on what it learned from the previous step. The developer defines the tools and constraints, not the execution path.

Pipelines have predetermined steps. The developer defines exactly what happens and in what order. There's no AI decision-making about the process itself — though individual steps might use AI (like a summarisation step in a document processing pipeline). Pipelines are the right choice when you know the process won't vary.

Workflows trigger on events. They're similar to pipelines in having defined steps, but they're reactive — they start when something happens. A workflow might include conditional logic (if the score is above 80, do X; otherwise, do Y), but the conditions are defined in advance.

Most real systems use a mix. I rarely build something that's purely one type. A typical production system might use a workflow to handle incoming events, an agent to make decisions about how to process them, and a pipeline to execute the actual data transformation. The orchestration layer ties it all together.
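Here's that mix in miniature: a workflow entry point fires on an event, an agent-style decision chooses how to process it, and a fixed pipeline does the transformation. Everything here is a hypothetical stand-in (the "agent" is a rule where a real system would use an LLM), but the division of labour is the point.

```python
def summarise_pipeline(doc):
    # Fixed pipeline: truncate-as-summary stand-in.
    return {"summary": doc["text"][:30]}

def index_pipeline(doc):
    # Fixed pipeline: chunk-count stand-in for indexing.
    return {"indexed": True, "chunks": len(doc["text"]) // 20 + 1}

PIPELINES = {"summarise": summarise_pipeline, "index": index_pipeline}

def agent_decide(doc):
    # Agent: a runtime decision (an LLM call in a real system).
    return "summarise" if len(doc["text"]) > 100 else "index"

def on_document_received(event):
    # Workflow: fires on an incoming event, then delegates.
    doc = event["doc"]
    choice = agent_decide(doc)
    return choice, PIPELINES[choice](doc)

choice, result = on_document_received({"doc": {"text": "short note"}})
print(choice)  # index
```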

Pairs With

Agents don't work alone. Here's how they connect to the other building blocks in a production system.

MCP

Agents decide what to do. MCP gives them the tools to do it. The Model Context Protocol is the open standard that lets agents connect to databases, APIs, file systems, and external services — without hardcoding integrations for every tool.

Context

Agents draw from RAG and memory to make informed decisions. Without context, an agent is just guessing. With a well-structured context layer — retrieval-augmented generation, conversation memory, system prompts — agents make decisions grounded in your actual data.

Safety

Guardrails and human-in-the-loop checkpoints constrain what agents can do. The more autonomy you grant an agent, the more important it is to define boundaries — what it's allowed to do, what requires approval, and what's off-limits entirely.
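A simple way to encode those boundaries is an action policy: every action the agent proposes is allowed, routed for approval, or denied. The action names below are illustrative; the useful property is the default-deny stance for anything not explicitly listed.

```python
# Policy over agent actions: allowed, needs human approval, or off-limits.
POLICY = {
    "read_db": "allow",
    "send_email": "approve",      # human-in-the-loop checkpoint
    "delete_records": "deny",
}

def check_action(action):
    # Unknown actions are denied by default — safer than default-allow.
    return POLICY.get(action, "deny")

print(check_action("send_email"))  # approve
```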

Observability

Logging and monitoring track every decision an agent makes. When an agent acts autonomously, you need a clear record of what it decided, why, and what the outcome was. Observability turns a black box into something you can audit, debug, and improve.
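The audit trail can be as simple as one structured record per decision: what the agent saw, what it chose, and what happened. The field names here are illustrative; real systems would persist these records and attach trace identifiers.

```python
import time

def log_decision(log, step, inputs, choice, outcome):
    # Append one auditable record per agent decision.
    log.append({
        "ts": time.time(),   # when the decision was made
        "step": step,        # position in the agent's run
        "inputs": inputs,    # what the agent saw
        "choice": choice,    # what it decided to do
        "outcome": outcome,  # what came back
    })

log = []
log_decision(log, 1, {"ticket": "T-42"}, "search_kb", {"hits": 3})
print(log[0]["choice"])  # search_kb
```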

Need help building your agent system?

I design and build agent architectures — from single-purpose agents to multi-agent orchestration systems. If you're figuring out where agents fit in your stack, I can help.