
šŸ‰šŸ”„ When Systems Grow Faster Than Meaning

(What Rejekts revealed when you listen between the talks)


I went to the Rejekts conference for the talks.


Kubernetes.

AI.

Platform engineering.

Observability.


But the real story wasn’t in any single talk.


It was in what they all had in common.


The Red Thread: We Keep Adding Layers to Avoid Facing the Same Problem

Every talk, in a different way, was solving this:

ā€œHow do we manage the complexity we already created?ā€

Not reduce it.

Not question it.


Manage it.


Sveltos → Manage clusters of clusters

Hypervisors → Add isolation to orchestration

Container runtimes → Mix paradigms in one system

OpenTelemetry → Make fragmented systems observable

AI loops → Manage systems we no longer fully control


Different topics.


Same pattern.


We Didn’t Lose Control All At Once

We lost it… layer by layer.


First:

We abstracted hardware


Then:

We abstracted infrastructure


Then:

We abstracted deployment


Then:

We abstracted observability


And now:

We are abstracting decision-making (AI)


Each step made sense.


Each step solved a real problem.


From virtual machines…

to containers…

to Kubernetes…

to multi-cluster orchestration…


We even brought hardware back into the picture (GPUs, TPUs) because abstraction alone wasn’t enough anymore.


But together?

We created systems where cause and effect are no longer visible


And This Is Where It Becomes Human

Because the real issue isn’t tooling.


It’s that the system itself is no longer intuitively understandable


🧠 Our brains were not designed for this


Humans build understanding through:


šŸ’š direct cause → effect

šŸ’š fast feedback loops

šŸ’š patterns we can recognize and feel


But modern systems break that.


Instead we deal with:


šŸ–¤ indirect signals

šŸ–¤ delayed feedback

šŸ–¤ invisible dependencies

šŸ–¤ multiple layers of abstraction


🧠 What happens then?

Cognitive load doesn’t just increase.

It compounds


Because now the brain has to:


šŸ”„ hold multiple system models at once

šŸ”„ simulate what might be happening

šŸ”„ constantly switch context

šŸ”„ fill in gaps with assumptions


And neuroscience is clear:


šŸ‰ working memory is limited

šŸ‰ context switching is expensive

šŸ‰ uncertainty increases stress


So what do engineers experience?


ā€œI need more observabilityā€
ā€œI don’t fully trust what I seeā€
ā€œSomething feels off… but I can’t pinpoint itā€

That’s not a skill issue.


That’s cognitive overload


Sveltos Was a Signal of This Shift

It exposed something deeper:


Platform engineering is no longer about enabling teams

It is about containing complexity


The promise:

šŸ’š one control plane

šŸ’š automated reactions

šŸ’š simplified multi-cluster management


a dream of simplicity


The reality (hidden in the talk):


šŸ–¤ teams are overwhelmed by tooling

šŸ–¤ platform teams struggle to keep up

šŸ–¤ training gaps remain

šŸ–¤ resistance doesn’t disappear


Because the real issue isn’t tooling.

It’s that the system itself is no longer intuitively understandable


OpenTelemetry Showed the Next Layer

It exposed the same pattern, one level up.


We created:

šŸ”„ logs

šŸ”„ metrics

šŸ”„ traces


Then needed:

a system to connect them


Then realized:

even if data is standardized… understanding is not


You can move data anywhere.


But you cannot move:

šŸ‰ context

šŸ‰ mental models

šŸ‰ meaning


And here’s the subtle trap:

standardization gives the feeling of control


So what did we do?


šŸ”„ dashboards for dashboards

šŸ”„ pipelines for pipelines

šŸ”„ visibility for systems no one fully sees


Not because we’re doing it wrong.

Because intuition no longer scales with the system


The Kubernetes Talks Said It Without Saying It

This line stood out:

Kubernetes is not built for multi-tenancy


And yet…

we are forcing it to be


So what do we do?


šŸ”„ add hypervisors

šŸ”„ add runtime layers

šŸ”„ add virtualization inside orchestration

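One concrete form this layering takes is a Kubernetes RuntimeClass that routes pods into hypervisor-isolated runtimes. A minimal sketch, assuming a Kata Containers handler named `kata` is installed on the nodes (the handler name depends on your runtime setup):

```yaml
# RuntimeClass tells Kubernetes which low-level runtime to use.
# Pods that opt in run inside lightweight VMs for hypervisor
# isolation: a virtualization layer inside the orchestrator.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata           # the name pods will reference
handler: kata          # must match the CRI runtime handler on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload
spec:
  runtimeClassName: kata   # opt this pod into the extra isolation layer
  containers:
    - name: app
      image: nginx
```

Every pod that opts in now crosses one more boundary between the YAML you write and the process that actually runs.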

We didn’t question the foundation.

We adapted reality around it


And that has a cost:


šŸ‰ more teams

šŸ‰ more coordination

šŸ‰ more cognitive overhead


And Then AI Walked In

The RALPH loop talk said:

AI thinks it completed the task… but it didn’t

Everyone laughed.


But that was the most honest moment of the day.


Because our systems already behave like that:


šŸ–¤ deployments succeed but value is unclear

šŸ–¤ dashboards are green but users struggle

šŸ–¤ pipelines run but no one questions why


AI didn’t introduce this problem.

It revealed it


🧠 The Memory Parallel No One Talked About

AI systems have context windows.


When they fill up:


šŸ–¤ information gets compressed

šŸ–¤ details get dropped

šŸ–¤ earlier context is rewritten


To keep going, AI:

šŸ–¤ approximates

šŸ–¤ fills gaps

šŸ–¤ optimizes for completion
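The mechanism described above can be sketched as a toy sliding window. This is my simplification for illustration: real systems count tokens with a tokenizer and often compress or summarize rather than simply dropping the oldest context.

```python
from collections import deque

# Hypothetical budget; real context windows are measured in tokens,
# not words, and are far larger.
MAX_TOKENS = 12

class ContextWindow:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.messages = deque()

    def add(self, message: str):
        self.messages.append(message)
        # When the window fills up, the earliest context is dropped:
        # the system keeps going, but with an approximated history.
        while self._size() > self.max_tokens:
            self.messages.popleft()

    def _size(self) -> int:
        return sum(len(m.split()) for m in self.messages)

window = ContextWindow(MAX_TOKENS)
window.add("deploy service A")
window.add("service A failed on node 3")
window.add("retry with more memory")
window.add("now investigate node 3 logs")

# The failure report is already gone; any reasoning about "service A"
# must now be reconstructed from what remains.
print(list(window.messages))
# ['retry with more memory', 'now investigate node 3 logs']
```

The follow-up instructions survive while the fact that motivated them is silently lost, which is exactly how both models and overloaded engineers end up confidently acting on an incomplete picture.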


Humans do the same.


Under cognitive pressure:


šŸ–¤ working memory overloads

šŸ–¤ details fade

šŸ–¤ we simplify reality

šŸ–¤ we rely on patterns instead of precision


We start:


šŸ–¤ assuming instead of knowing

šŸ–¤ taking shortcuts instead of reasoning

šŸ–¤ reacting instead of understanding



šŸ‰ The Parallel Is Uncomfortable

AI hallucinates under pressure.


Humans… approximate under pressure.


And both are trying to do the same thing:


keep the system moving forward

even when full understanding is no longer possible


The Hidden Shift No One Said Out Loud

We used to build systems we understood.


Now we build systems we:


šŸ‰ observe

šŸ‰ orchestrate

šŸ‰ react to


But don’t fully grasp end-to-end.


And that changes leadership.


Because the question is no longer:

ā€œCan we build it?ā€

But:

ā€œCan humans still understand, trust, and operate what we built?ā€


This Is Why Everything Feels Harder

Not because engineers got worse.


Not because tools are bad.


But because:

the gap between action and understanding has widened


And when that gap grows:


šŸ–¤ feedback loops weaken

šŸ–¤ ownership blurs

šŸ–¤ confidence erodes


So we compensate.


With:

šŸ”„ more tools

šŸ”„ more layers

šŸ”„ more automation


Until We Reach This Point

Where we are now:


building systems

to manage systems

to understand systems

to control systems


And somewhere in that stack…


meaning gets diluted


The Most Important Learning From Rejekts

Not Kubernetes.

Not Sveltos.

Not OpenTelemetry.

Not AI.


But this:

Complexity is no longer just technical


It is:

šŸ‰ cognitive

šŸ‰ human

šŸ‰ systemic


What Leaders Can Do: Turning Toward the Light

We don’t need to remove all complexity.


We can’t.


But we can change how we relate to it.


šŸ’š Design for human understanding, not just system performance


Ask: Does this make the system easier to understand… or just easier to run?


šŸ”„ Shorten the distance between action and meaning

Make cause → effect visible again.

Clarity reduces cognitive load faster than any tool.


šŸ‰ Create space for thinking, not just reacting

Because under pressure:

both humans and AI approximate instead of understand


And Maybe That’s Why Rejekts Mattered

Because in the middle of all this complexity…


Rejekts did something simple.


It wasn’t:

šŸ–¤ another platform

šŸ–¤ another abstraction

šŸ–¤ another layer


It was:

ā¤ļøā€šŸ”„ people

ā¤ļøā€šŸ”„ showing up

ā¤ļøā€šŸ”„ sharing

ā¤ļøā€šŸ”„ connecting


A community event.

Built in a few months.

By volunteers.


Not perfect.


But real.


And that mirrored everything this day revealed.


In a world where:

šŸ–¤ systems are layered

šŸ–¤ feedback is delayed

šŸ–¤ understanding is stretched


Rejekts brought something back:

šŸ’š direct interaction

šŸ’š immediate feedback

šŸ’š shared understanding


Gratitude

Thank you to:

šŸ’ššŸ² the organizers

šŸ’ššŸ² the volunteers

šŸ’ššŸ² the speakers

šŸ’ššŸ² the sponsors


You didn’t just create a conference.

You created a space where meaning could catch up again


šŸ‰šŸ”„ If This Resonates…

This is exactly the work I’ve been exploring deeper in:


šŸ“˜ The Leadership Leap: Now Without Crash Landings, where I break down how leaders can reconnect:

ā¤ļøā€šŸ”„ systems to value

ā¤ļøā€šŸ”„ people to purpose

ā¤ļøā€šŸ”„ complexity to clarity


And in my programs, like Leadership Landing and Team Accelerator, we go further:

translating these insights into

real team practices, real decisions, real impact


Because this isn’t about rejecting complexity.


šŸ‰ It’s about leading it

without losing ourselves in it


Final Thought

We will keep building.

We will keep scaling.


But we can choose this:

systems that expand human capability

instead of systems that quietly exhaust it


And if we keep creating spaces like Rejekts…

meaning won’t get lost in the system


Because we will carry it

together šŸ’ššŸ”„šŸ‰


