
AWS Build Diaries: EventBridge vs Lambda to Lambda

 
pearlwilson
HPE Pro

From “One Ticket = One Lambda” to Event-Driven Architecture

I was tasked with building a sizing engine for cluster t-shirt sizes that had to be integrated into our web app, and I needed a single DynamoDB table holding sizing details from multiple data sources. I didn’t really design this system upfront; I built it the way a lot of us do, one ticket at a time.

The goal sounded simple enough: get data into Amazon DynamoDB. But the data wasn’t coming from just one place. It was scattered across OneView, an Oracle database, Prism Central, and even Excel files stored in S3.

And every time a new requirement showed up, I handled it the same way:

I’d pick up the ticket, write a new AWS Lambda, and move on.

That pattern worked, until it didn’t.

Because before I realized it, my architecture wasn’t something I had designed; it was something that had been built from a growing pile of tickets.

One ticket. One Lambda. Over and over again.

First Implementation: One Lambda per Data Source

It started clean.

Lambda A → OneView → DynamoDB

Lambda B → Oracle → DynamoDB

Lambda C → Prism → DynamoDB

Each Lambda:

  • Pulled data from its source
  • Transformed it
  • Wrote to DynamoDB
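
In code, each of those Lambdas was a small variation on the same shape. A minimal sketch, not the real implementation: the fetch helper, the transform rules, and the "ClusterSizing" table name here are all illustrative.

```python
def transform(raw: dict) -> dict:
    """Normalize one raw OneView record into the DynamoDB item shape."""
    return {
        "cluster_id": raw["id"],
        "tshirt_size": raw.get("size", "M"),  # default size is an assumption
        "source": "oneview",
    }

def handler(event, context):
    records = fetch_oneview()  # source-specific fetch, defined elsewhere
    import boto3  # imported here so the sketch stays importable without AWS
    table = boto3.resource("dynamodb").Table("ClusterSizing")
    with table.batch_writer() as batch:
        for raw in records:
            batch.put_item(Item=transform(raw))
```

Lambda B and Lambda C looked almost identical, with only the fetch and transform swapped out, which is exactly where the duplication crept in later.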

This felt modular. Even elegant.

Why This Approach Worked (At First)

  • Clear ownership per data source
  • Easy to implement per ticket
  • Independent deployments

If a OneView change came in, I only touched Lambda A.

The Cracks Started Showing

The system didn’t break; it changed, and over time patterns started repeating:

  1. Duplicate Logic Everywhere

Each Lambda had its own version of:

  • Validation
  • Transformation
  • Error handling

Same logic. Copied three times. Then five.

  2. Inconsistent Data Handling
  • Oracle data was cleaned differently than OneView
  • Excel uploads had slightly different validation rules

The same entity in DynamoDB started looking different depending on its source.

  3. Cross-Source Dependencies Appeared

New requirements came in:

  • “Combine Oracle + OneView data before writing”
  • “Enrich Prism data with values from S3 Excel”
  • “Run a full refresh across all systems daily”

Now things got complicated. I couldn’t keep following my “follow the instructions on the ticket” rule anymore.

Second Implementation: Connecting the Lambdas

To handle dependencies, I started linking Lambdas together, calling one Lambda from another.

Lambda A → Lambda B → Lambda C → DynamoDB

Or sometimes:

Lambda A + Lambda B → Lambda C → DynamoDB

Now Lambdas weren’t independent anymore.

They had to:

  • Know about each other
  • Wait for each other
  • Handle each other’s failures
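
Chained together, Lambda A ended up with code along these lines (the function name "lambda-b" and the payload shape are placeholders, not the real ones):

```python
import json

def build_invoke_payload(records: list) -> bytes:
    """Serialize the records Lambda A hands off to the next Lambda."""
    return json.dumps({"records": records}).encode()

def handler(event, context):
    import boto3  # imported here so the sketch stays importable without AWS
    boto3.client("lambda").invoke(
        FunctionName="lambda-b",           # hard-coded knowledge of B
        InvocationType="RequestResponse",  # synchronous: A waits for B
        Payload=build_invoke_payload(event.get("records", [])),
    )
```

The two comments are the whole problem in miniature: the caller has to know the callee by name, and it sits idle, still billed, while the callee runs.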

Third Implementation: Merge Everything

At this point, I asked the obvious question:

“Why not just have one Lambda that does everything?”

It seemed like a way to:

  • Eliminate duplication
  • Centralize logic
  • Simplify orchestration

But this introduced a different set of problems.

Why a Single Lambda Didn’t Work

With all sources combined, the function started looking like:

def handler(event, context):
    source = event.get("source")  # which upstream system triggered this run
    if source == "oneview":
        fetch_oneview()
    elif source == "oracle":
        fetch_oracle()
    elif source == "prism":
        fetch_prism()
    elif source == "excel":
        fetch_excel()
    else:
        raise ValueError(f"unknown source: {source}")

What went wrong:

  • The function kept growing with every new ticket
  • Conditional logic exploded
  • A fix for one source risked breaking another
  • Testing became harder
  • Deployments became riskier

The system was no longer modular; it was tightly coupled in one place.

Fourth and Final Implementation: Thinking in Events, Not Functions

The real breakthrough came when I stopped asking:

“Which Lambda should call the next one?”

And instead asked:

“What just happened in the system?”

That’s when I introduced Amazon EventBridge.

The Final Design: Event-Driven

Instead of direct calls, everything became event-based:

Event: OneViewDataFetched → Processing

Event: OracleDataFetched → Processing

Event: PrismDataFetched → Processing

Event: ExcelDataUploaded → Processing

Then:

Event: DataProcessed → Enrichment

Event: DataReady → Write to DynamoDB

Each step:

  • Listens for events
  • Does its job
  • Emits a new event
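
Each of those steps reduces to the same small pattern: do the work, then publish a new event. A sketch, assuming a custom bus named "sizing-pipeline" (the bus, source, and detail-type names here are mine, not necessarily the real ones):

```python
import json

def make_event(detail_type: str, detail: dict) -> dict:
    """Build one EventBridge entry for put_events."""
    return {
        "Source": "sizing.pipeline",
        "DetailType": detail_type,
        "Detail": json.dumps(detail),  # Detail must be a JSON string
        "EventBusName": "sizing-pipeline",
    }

def handler(event, context):
    records = process(event)  # this step's actual work, defined elsewhere
    import boto3  # imported here so the sketch stays importable without AWS
    boto3.client("events").put_events(
        Entries=[make_event("DataProcessed", {"count": len(records)})]
    )
```

Nothing in this handler knows who consumes DataProcessed; that routing lives entirely in EventBridge rules.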

What Changed

No More “One Ticket = One Lambda” Chaos

I still created Lambdas, but now they were based on responsibilities, not data sources.

Logic Became Centralized (Without Tight Coupling)

  • Transformation logic lived in one place
  • Enrichment in another
  • Storage in another

No duplication, no giant function.

Adding New Sources Became Easy

New ticket?

  • Emit a DataFetched event
  • Plug into the pipeline

No need to touch existing Lambdas.
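
The reason nothing downstream changes is that the processing rule matches on the event shape, not on the producer. A sketch of such a rule, using EventBridge content-filtering suffix matching (the bus, source, and naming convention are assumptions for illustration):

```python
import json

def data_fetched_pattern() -> dict:
    """Event pattern matching any *DataFetched event, whatever the source system."""
    return {
        "source": ["sizing.pipeline"],
        "detail-type": [{"suffix": "DataFetched"}],
    }

def subscribe(rule_name: str) -> None:
    import boto3  # imported here so the sketch stays importable without AWS
    boto3.client("events").put_rule(
        Name=rule_name,
        EventPattern=json.dumps(data_fetched_pattern()),
        State="ENABLED",
    )
```

A new source only has to emit an event whose detail-type ends in "DataFetched", and the existing rule picks it up.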

Scheduling Became Trivial

Using EventBridge, I added a daily trigger to refresh all data.

No cron jobs. No extra services.

Just a scheduled event kicking off the pipeline.
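
For reference, that daily trigger is just an EventBridge rule with a schedule expression instead of an event pattern. A sketch; the rule name, hour, and target ARN are placeholders:

```python
def daily_refresh_rule(hour_utc: int = 2) -> dict:
    """Arguments for events.put_rule: fire once a day at hour_utc UTC."""
    return {
        "Name": "daily-sizing-refresh",
        "ScheduleExpression": f"cron(0 {hour_utc} * * ? *)",
        "State": "ENABLED",
    }

def create_schedule(target_arn: str) -> None:
    import boto3  # imported here so the sketch stays importable without AWS
    events = boto3.client("events")
    events.put_rule(**daily_refresh_rule())
    events.put_targets(
        Rule="daily-sizing-refresh",
        Targets=[{"Id": "refresh-entrypoint", "Arn": target_arn}],
    )
```

The target is whatever Lambda kicks off the first DataFetched events; everything after that flows through the same rules as any other run.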

The Real Lesson

At the start, my architecture followed my tickets.

One requirement → One Lambda.

But over time, I realized:

Systems shouldn’t be designed around tickets - they should be designed around events.

Final Thought

I didn’t adopt EventBridge because my Lambdas were wrong.

I adopted it because my system outgrew the idea that functions should call each other.

Once I started treating everything as events, the architecture stopped fighting me - and started scaling with me.

 

I'm an HPE employee.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
yashwanth_k
HPE Pro

Re: AWS Build Diaries: EventBridge vs Lambda to Lambda

Well written blog!