The Most Dangerous Part of AI Is Not the Technology
- Sarah Gruneisen

Tonight I went to a meetup hosted by Tergos: Cloud DevOps Meet AI. Damn, what a great meetup! Great food, great conversations!
And an engaging, knowledgeable speaker who will soon be moving to Australia!
“We stepped off the plane, and in that first moment my daughter said: we have to live here.”
“It is like the USA in the 1990s.”
On the surface, it was a technical evening.
Cloud. DevOps. AI. Infrastructure. Models. Automation. Incident response. Deployment gates. Chaos engineering. Neural nets. GPUs. Backpropagation. Runbooks becoming action. Systems learning patterns. Systems making decisions.
But beneath all of that?
This was not really a talk about technology.
It was a talk about power.
⚡ And power always reveals something
There was a moment tonight that stayed with me.
A slide.
A sentence.
Almost casually said:
We are moving from infrastructure as code to infrastructure that decides.
And I felt it.
Because this is not just a technical shift.
This is a shift in who decides.
From systems that execute to systems that interpret
The meetup opened with a question:
What happens when systems behave exactly as designed, but not as expected?
We’ve always dealt with this as engineers and leaders.
We design systems.
We define rules.
We build processes.
And then reality doesn’t follow.
But now something is different.
We are no longer just building systems that execute our intent.
We are building systems that:
🔥 interpret signals
🔥 prioritize actions
🔥 suggest decisions
🔥 and sometimes… act
From runbooks to action
One example made it very tangible.
Runbooks.
Those documents nobody reads until everything is on fire 🔥
Now imagine:
🐲 AI reads the logs
🐲 AI finds the root cause
🐲 AI maps it to the runbook
🐲 AI proposes or executes the fix
And the engineer?
Reviews.
Approves.
Supervises.
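To make the shape of that loop concrete, here’s a tiny toy sketch. Every name in it is invented for illustration, no real tool works like this, and the human gate is the whole point:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    root_cause: str
    fix: str

# Toy stand-ins for the AI steps. In reality these would call a model
# and your observability stack; here they are hard-coded for illustration.
def diagnose(logs: list[str]) -> str:
    return "connection pool exhausted" if any("timeout" in line for line in logs) else "unknown"

def match_runbook(root_cause: str) -> Proposal:
    runbook = {"connection pool exhausted": "restart the pool with a higher max size"}
    return Proposal(root_cause, runbook.get(root_cause, "no runbook entry: page a human"))

def engineer_approves(p: Proposal) -> bool:
    # The human gate: nothing executes without an explicit "yes".
    answer = input(f"Root cause: {p.root_cause}\nProposed fix: {p.fix}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"

logs = ["12:00 ERROR upstream timeout", "12:01 ERROR upstream timeout"]
proposal = match_runbook(diagnose(logs))
print("Executing:" if engineer_approves(proposal) else "Escalating:", proposal.fix)
```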
That sounds efficient.
But it triggered something deeper for me.
Because I’ve seen this pattern before.
The Formula X trap: approval without understanding
In many organizations today, we already have a version of this.
Managers who need to approve decisions
without fully understanding what they are approving.
Not because they are incapable.
But because the system requires it.
So what happens?
They sign.
They trust.
They move things forward.
Because someone has to take accountability.
And here is the uncomfortable truth:
That doesn’t increase safety.
It often blocks flow.
If you’ve read Formula X, you’ll recognize this immediately.
High-performing organizations don’t add approval layers.
They remove blockers.
They place decisions where understanding lives.
Because every approval without understanding creates:
🖤 friction
🖤 delay
🖤 false confidence
🖤 and a bottleneck disguised as control
Now we are rebuilding this pattern with AI
Look at what’s emerging:
🔥 AI suggests fixes
🔥 AI reviews code
🔥 AI gates deployments
🔥 AI proposes actions
And humans?
Approve.
Do you see the risk?
We are recreating the same anti-pattern:
Approval without understanding
Accountability without true ownership
But now it’s more subtle.
Because instead of:
“I don’t understand this system”
It becomes:
“The system suggested this”
Probability is not wisdom
Earlier in the talk, there was a deep technical explanation.
AI doesn’t know.
It predicts.
It gives the most likely answer based on patterns.
And most of the time…
That works.
But:
Likely ≠ right
Confident ≠ true
Fast ≠ thoughtful
So what happens when:
humans approve what they don’t understand,
based on systems that also don’t truly understand?
That’s not intelligence.
That’s layered probability.
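Here’s a deliberately small toy to feel what that means. The numbers are made up, and nothing here resembles how a real model works internally:

```python
# The system returns the single most likely answer,
# even when most of the probability mass disagrees with it.
answers = {
    "restart the service": 0.40,
    "roll back the deploy": 0.35,
    "scale up the pool": 0.25,
}

best = max(answers, key=answers.get)
print(best, answers[best])  # "restart the service" 0.4: sounds decisive,
                            # yet 60% of the mass points somewhere else
```

Likely is still just likely.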
The speed illusion
Here’s where it gets dangerous.
In traditional organizations:
Approval without understanding → slows everything down
In AI-driven systems:
Approval without understanding can do two things:
🔥 slow everything down, because people hesitate
or
🔥🔥 accelerate everything, because people stop questioning
And that second one?
That’s where mistakes scale.
Fast.
Bias doesn’t disappear. It scales.
There was a simple example in the talk:
Generate a female boxer and a man holding a sign.
The system struggled.
Why?
Because it hasn’t seen that pattern enough.
AI doesn’t invent.
It recombines.
Which means:
It inherits our biases
and scales them
Now combine that with:
Approval without understanding
You don’t just get bias.
You get unquestioned bias at scale.
The illusion of control
For years, we’ve been optimizing for:
Predictable
Observable
Repeatable
Infrastructure as code gave us a sense of control.
“If we define it well, it will behave.”
Now we are entering a world where:
Even if we define it perfectly…
It will still behave in ways we didn’t expect.
Because it learns.
Because it adapts.
Because it interprets.
And suddenly:
Control is no longer about defining everything.
It’s about understanding what you cannot fully predict.
The real danger is not AI
Let me say this clearly.
The danger is not that AI becomes too powerful.
The danger is that humans become too passive.
Because tonight I saw something underneath all the innovation:
Relief.
Relief that something else can decide.
Relief that something else can suggest.
Relief that something else can carry part of the weight.
And that is where things shift.
And then I realized: this is where it can go right
Because I’ve actually designed this differently.
In my work, I designed:
AI as an Engineering Value Multiplier.
Not to replace engineers.
Not to remove responsibility.
But to reduce cognitive load
and improve decision quality.
The idea was simple:
AI should help with:
💚 understanding complexity
💚 navigating systems
💚 surfacing insights
💚 supporting better decisions
Not just:
faster output
more automation
more actions
Because that’s the trap.
If AI only accelerates doing…
We risk losing thinking.
But if AI reduces cognitive load…
Engineers don’t become passive.
They become more capable.
And I saw another interesting idea at a meetup a few weeks ago.
You can even let AI grade itself.
Not as a replacement for human judgment.
But as an additional layer to reduce cognitive load.
Imagine:
🐲 AI proposes a solution
🐲 AI evaluates its own confidence
🐲 AI highlights uncertainty or edge cases
🐲 AI suggests where human attention is most needed
Now the human is not reviewing everything equally.
They are focusing where it matters.
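A minimal sketch of what that routing could look like. The confidence score, the threshold, and every name here are assumptions for illustration; a self-grade is itself just another prediction, so treat it as a hint, not a verdict:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    fix: str
    confidence: float  # the model's own estimate, 0.0 to 1.0
    edge_cases: list[str] = field(default_factory=list)  # flagged by the model itself

def route(s: Suggestion, threshold: float = 0.8) -> str:
    # High self-confidence and no flagged edge cases: a light skim.
    # Anything else: this is where human attention is most needed.
    if s.confidence >= threshold and not s.edge_cases:
        return "light-touch review"
    return "deep human review"

print(route(Suggestion("bump pool size to 50", 0.92)))                           # light-touch review
print(route(Suggestion("rewrite retry logic", 0.55, ["idempotency unclear"])))   # deep human review
```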
That’s a very different system.
Not:
🖤 human vs AI
But:
💚 AI supporting better human judgment
And that’s where this becomes powerful.
Because we’re not removing responsibility.
We’re guiding attention.
Engineers have more space to:
💚 understand
💚 question
💚 challenge
💚 decide
And that’s the difference.
💬 The question I left with
As our systems become more autonomous…
Are we removing blockers?
Or are we rebuilding them in a more sophisticated form?
Because that answer will shape not just our systems.
It will shape our leadership.
And I see this pattern often already, even without AI.
A team does the work.
The expertise sits close to the work.
The context sits close to the work.
But then something has to move upward for approval anyway.
Why?
So someone higher up can “take accountability.”
But often they do not really understand the decision deeply enough to improve it.
So they either block it, slow it down, or sign it off with limited insight.
That does not create flow.
It breaks it.
That does not create trust.
It signals the opposite.
It says:
“I need control even where I do not hold understanding.”
And if we bring that same habit into AI, we will make the same mistake again.
Only faster.
Only at scale.
So how do we lead with AI in a Formula X way?
Not by adding more oversight.
Not by turning humans into sign-off machines.
Not by accelerating decisions without understanding.
That is not trust.
That is delay disguised as accountability.
But also not by removing responsibility.
We lead by designing systems where:
❤️🔥 AI reduces cognitive load
❤️🔥 humans deepen understanding
❤️🔥 and decisions stay with those who truly understand the context
A Formula X approach to AI leadership would look different
It would start with trust.
Not blind trust in AI.
Trust in people.
Trust in expertise.
Trust in those closest to the work.
That means leaders do not use AI to centralize more control.
They use it to remove friction.
They ask:
💚 Which approvals are truly needed?
💚 Which approvals exist only to protect hierarchy?
💚 Where are people waiting for permission from someone who cannot really improve the decision?
💚 Where are we slowing down flow in the name of accountability, while actually weakening ownership?
That is where leadership has to get honest.
Because AI can easily become another excuse to add layers:
“Let’s have the model suggest it.”
“Then let’s have the engineer review it.”
“Then let’s have the manager approve it.”
“Then let’s have architecture sign off.”
“Then let’s have governance validate it.”
And suddenly the promised speed of AI disappears into the same old swamp of mistrust.
Not because the technology failed.
Because leadership did.
So what should leaders remove?
If we lead this well, we remove:
Unneeded approvals.
Rubber-stamp reviews.
Escalations that add no insight.
Accountability theater.
Decision layers that exist only because someone feels safer when their name is attached.
But this is where nuance matters.
Because removing approvals does not mean removing responsibility.
And this is where many organizations get it wrong.
We cannot be irresponsible.
Not with systems that:
🖤 can be wrong
🖤 can amplify bias
🖤 can act faster than we can fully comprehend
So the goal is not:
🔥 less control
The goal is:
🔥🔥 better placed control
And that’s a very different thing.
What do we keep and strengthen?
We keep:
Clear guardrails.
Explicit ownership.
Defined decision boundaries.
Fast feedback loops.
The ability to challenge AI output, and the expectation to do so.
Because trust does not mean:
“just let it run”
Trust means:
“I know where responsibility sits and I trust the people there to act, and to question.”
A Formula X approach to AI leadership
A Formula X approach does not say:
“remove all approvals”
It says:
🔥 remove approvals that do not add understanding
🔥🔥 and strengthen responsibility where understanding exists
So instead of:
More layers
More sign-offs
More distance from the work
We design for:
💚 decisions close to the work
💚 accountability that matches context
💚 leaders who enable, not block
💚 systems that support judgment, not replace it
Because here is the real risk:
Not that AI makes decisions.
But that humans stop engaging with them critically.
Trust without awareness is not trust
If we blindly approve AI:
We become passive.
If we over-control AI:
We become the bottleneck.
So the balance is this:
💚 Trust people more than process
💚 Trust judgment more than hierarchy
💚 Trust AI as input, not authority
What trust looks like with AI
Leading with trust in an AI-shaped world means:
The people closest to the work can act.
The people with the deepest context can decide.
AI supports, but does not replace, judgment.
And leaders?
Design systems that make this possible.
Not by standing above the flow.
But by shaping it.
❤️🔥 When do we move without approval?
❤️🔥 When do we slow down intentionally?
❤️🔥 Where must humans intervene?
❤️🔥 Where is AI allowed to act?
That clarity is leadership.
Not control.
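One way to make that clarity real: write the boundaries down as explicit policy instead of leaving them as tribal knowledge. A toy sketch, with invented actions and rules:

```python
# Each action declares whether AI may initiate it and whether a human gate applies.
POLICY = {
    "restart stateless service": {"ai_may_act": True,  "human_gate": False},
    "roll back deploy":          {"ai_may_act": True,  "human_gate": True},
    "change database schema":    {"ai_may_act": False, "human_gate": True},
}

def may_proceed_unattended(action: str, ai_initiated: bool) -> bool:
    # Unknown actions default to the safest rule: no AI, human required.
    rule = POLICY.get(action, {"ai_may_act": False, "human_gate": True})
    if ai_initiated and not rule["ai_may_act"]:
        return False
    return not rule["human_gate"]

print(may_proceed_unattended("restart stateless service", ai_initiated=True))  # True
print(may_proceed_unattended("roll back deploy", ai_initiated=True))           # False: human gate
```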
🐉 Dragon wisdom
A dragon does not remove responsibility to move faster.
A dragon removes what blocks flow
while protecting awareness.
Because trust without awareness
is not trust.
It is risk disguised as speed.
Final reflection
The future is not about whether AI can make decisions.
It is about whether we can design systems where:
💚 trust is real
💚 responsibility is clear
💚 and understanding is not lost
If we use AI to multiply approvals, we will multiply friction.
If we remove responsibility, we multiply risk.
If we remove understanding, we multiply mistakes.
But if we:
Remove unnecessary blockers
Place responsibility where understanding lives
And stay consciously engaged with what AI suggests
Then we might finally build something better.
Not a system where humans disappear.
A system where trust becomes real.