Day Four at Xebia: The Moment the Conversation Became Real
- Sarah Gruneisen

By day four of onboarding at Xebia, something shifted again.
The first days had not primarily been about AI.
Not really.
They had been about:
- culture
- identity
- belonging
- uncertainty
- transformation
- growth
- adaptation
- psychological safety
The first days felt deeply human.
We talked about what it means to join an organization at a time when almost nobody can confidently say:
"I fully understand what the next five years will look like."
There was honesty in the room.
And that honesty mattered.
Because I have worked in enough organizations to know how rare it is for leaders and experts to openly admit uncertainty.
Especially in technology.
Especially in consulting.
Especially during disruption.
But by day four…
the conversation changed shape.
Not away from humanity.
Into the machinery underneath the transformation itself.
And suddenly we were no longer only talking about:
becoming
We were standing inside:
acceleration
The room no longer felt theoretical
Earlier onboarding days still held some emotional distance from the technology itself.
Day four removed that distance.
Now we were:
- building MCP servers (Model Context Protocol, giving AI secure "bridges" to tools, databases, and systems it normally cannot access; a minimal sketch follows this list)
- spawning sub-agents (smaller AI workers focused on specific tasks)
- discussing multi-agent systems (multiple AI agents collaborating in parallel)
- creating worktrees (isolated copies of the same codebase so parallel AI agents do not overwrite each other)
- orchestrating workflows (coordinating how humans, tools, and AI systems interact)
- experimenting with shell automation (using AI directly from the command line)
- debating token windows (the memory/context limits AI models can handle at once)
- discussing hallucinations (AI confidently generating incorrect information)
- testing orchestration patterns (different ways of coordinating AI agents and tasks)
- discussing context pollution (when too much irrelevant AI context reduces quality)
- exploring review workflows (humans increasingly reviewing AI-generated work instead of writing everything manually)
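To make the MCP point a bit more concrete, here is roughly what one of those "bridges" looks like. This is a minimal sketch, assuming the official Python MCP SDK and its FastMCP helper; the server name, tool, and bookstore data are invented for illustration, not what we actually built in the room.

```python
# Minimal MCP server sketch (assumes the Python MCP SDK, installed via `pip install mcp`).
# It exposes a single tool that an AI agent can discover and call over the protocol.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bookstore")  # hypothetical server name, for illustration only

# Stand-in for a database the model normally has no direct access to.
BOOKS = {
    "fantasy": ["The Dragon's Ledger", "Ember and Ink"],
    "non-fiction": ["Orchestrating Agents"],
}

@mcp.tool()
def list_books(genre: str) -> list[str]:
    """Return the titles stocked for a given genre."""
    return BOOKS.get(genre.lower(), [])

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so a client or agent can connect
```

And the worktree idea from the same list, sketched as git commands driven from Python. The agent names and branches are hypothetical; the underlying `git worktree add` command is standard git, and it is what keeps parallel agents from stepping on each other's files.

```python
# Give each parallel agent its own isolated checkout of the same repository,
# on its own branch, so concurrent edits never overwrite one another.
import subprocess

AGENT_BRANCHES = {  # hypothetical agent-to-branch assignments
    "agent-ui": "feature/bookstore-ui",
    "agent-api": "feature/bookstore-api",
}

for agent, branch in AGENT_BRANCHES.items():
    subprocess.run(
        ["git", "worktree", "add", f"../{agent}", "-b", branch],
        check=True,
    )
```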


And the energy in the room became fascinating.
Because the room simultaneously felt:
- excited
- playful
- intellectually curious
while also becoming:
- deeply reflective
- cautious
- philosophical
- uneasy
That contradiction stayed with me the rest of the evening.
The dragon in the room nobody could ignore anymore
Somewhere during the discussions, the emotional center of gravity changed.
Many moments felt almost magical.
- Someone generated a functioning interface in minutes.
- Someone conversationally interacted with structured data.
- Someone built workflows that previously would have required far more setup.
- Someone demonstrated parallel AI agents working simultaneously on isolated worktrees.
- Someone casually explained they maintain multiple AI accounts because token windows are now becoming operational bottlenecks.
- And at one point, I created a dragon-themed bookstore directly from a database connection. Snicker.
And everyone laughed.
But underneath the laughter, something else was spreading quietly through the room:
these systems are no longer "toys"
And I do not think society has emotionally caught up to that reality yet.
Because many public conversations still frame AI as:
- a helper
- a chatbot
- a writing assistant
- a productivity tool
But what I witnessed during day four at Xebia looked far more significant.
Not complete replacement.
Not Artificial General Intelligence.
Not science fiction.
Something more subtle.
And perhaps far more disruptive.
A new operational layer forming underneath knowledge work itself.
Because AI is not magic.
And despite how sudden this moment feels to many people…
AI itself is not new.
Recommendation engines.
Search ranking.
Fraud detection.
Predictive text.
Probability models.
Statistical pattern recognition.
Much of modern AI is still fundamentally built on probabilities, predictions, training data, biases, and patterns.
In many ways, these systems increasingly mirror humanity itself:
- our knowledge
- our creativity
- our assumptions
- our blind spots
- our historical biases
- our contradictions
What changed recently was not suddenly "creating intelligence."
What changed was scale.
Computing power expanded.
Data exploded.
Models became larger.
Interfaces became accessible.
Context windows grew.
Tooling matured.
The dragon did not suddenly appear.
We simply started feeding it enough fire.
The moment the room collectively realized something important
One sentence changed the emotional tone of the room:
"Typing was never the hard part."
That sentence hit harder than many technical demonstrations combined.
Because many engineers already know this instinctively.
The hard part was never:
- syntax
- semicolons
- remembering commands
- writing boilerplate
- scaffolding structures
The hard part was always:
- understanding systems
- understanding people
- understanding tradeoffs
- understanding ambiguity
- understanding consequences
- understanding architecture
- understanding business value
- understanding organizational dynamics
- understanding when NOT to build something
AI is becoming extraordinarily good at reducing implementation friction.
But reducing implementation friction does not create wisdom.
And the more the day progressed…
the more obvious that distinction became.
AI does not remove complexity
It relocates it.
This became one of my strongest insights from the day.
For years, organizations focused heavily on:
"How do we build faster?"
Now the bottleneck is beginning to move toward:
"How do we think clearly enough to build the right things?"
That is a radically different problem.
Because when implementation becomes dramatically cheaper:
- weak ideas scale faster
- unclear priorities scale faster
- bad architecture scales faster
- shallow thinking scales faster
- technical debt scales faster
- organizational dysfunction scales faster
AI amplifies.
That is its nature.
And amplification without clarity becomes dangerous quickly.
MCPs, agents, workflows… and what they actually revealed
Technically, the day covered an enormous amount.
We discussed:
- MCPs (Model Context Protocols)
- APIs versus skills
- shell automation
- hooks
- sub-agents
- multi-agent systems
- worktree isolation
- fork modes
- background execution
- orchestration patterns
- agent coordination
- GitHub integrations
- context windows
- token pressure
- review systems
- automation pipelines
But underneath the technical details…
the conversations were actually about something much more human.
The room kept circling around one core question:
"How do intelligent systems coordinate without collapsing into chaos?"
And honestly…
that is not only a software question anymore.
It is an organizational question.
A leadership question.
Because modern organizations already resemble fragmented multi-agent systems:
- partial information
- competing priorities
- isolated workstreams
- coordination overhead
- duplicated effort
- communication bottlenecks
- conflicting incentives
AI is not inventing these problems.
It is exposing them.
Faster.
One of the most fascinating tensions of the day
At one point, the room began discussing productivity research around AI.
And this part was incredibly important.
Because emotionally, AI feels like massive acceleration.
You genuinely feel superhuman at moments.
Especially when:
- scaffolding
- debugging
- onboarding
- interface generation
- repetitive tasks
- documentation
- exploration
- architecture navigation
suddenly become dramatically easier.
But then came the tension.
Research discussed during the sessions suggested something fascinating:
Developers often feel dramatically more productive than measurable output improvements actually show.
That observation stuck with me deeply.
Because AI changes not only output.
It changes perception.
And perception influences leadership decisions.
Which means organizations may begin making very large strategic assumptions based on emotional acceleration rather than measured value.
That distinction matters enormously.
Especially for leaders.
Because:
- excitement can distort prioritization
- velocity can create illusion
- polished output can disguise shallow thinking
- speed can hide fragility
And many organizations are not yet mature enough to distinguish those things well.
The junior engineer question
This became one of the deepest discussions of the day.
Because AI clearly helps junior engineers tremendously.
That part was undeniable.
Faster onboarding.
Faster debugging.
Faster exploration.
Faster scaffolding.
Less blank-page paralysis.
And honestly?
That part is beautiful.
Watching people become empowered faster is wonderful!!!
But then the room moved into a much harder question:
If AI removes too much struggle too early… how do people develop deep intuition?
That question lingered heavily.
Because many senior engineers became senior through:
- painful debugging
- production failures
- edge cases
- repetition
- broken systems
- architectural mistakes
- years of pattern recognition development
Not because they memorized syntax.
But because they suffered through systems deeply enough to understand them.
So what happens if future engineers increasingly interact with abstraction layers instead of raw friction?
The room did not fully answer that question.
And honestly…
I appreciated that.
Because pretending certainty here would have felt dishonest.
The room slowly became philosophical
This was one of the most interesting emotional arcs of the day.
The deeper the technical conversations became…
the more philosophical the room became too.
People began wrestling with:
- dependency
- automation
- identity
- learning
- craftsmanship
- trust
- expertise
- ownership
- quality
- responsibility
One trainer said something that stayed with me:
"You become less of a coder and more of a reviewer."
And the room laughed.
But it was uncomfortable laughter.
Because underneath it was grief.
Not dramatic grief.
Subtle grief.
The kind that appears when people quietly realize:
part of their identity may be changing
And I think many technology conversations underestimate that emotional layer entirely.
Because engineers are not only producing code.
Many are expressing:
- mastery
- creativity
- logic
- identity
- problem-solving
- craftsmanship
through the act of building. That's fun! That's energizing!!! It's why we rebel against becoming coding monkeys!
So when AI begins changing the act of building itselfā¦
people feel that psychologically.
Even if they cannot fully articulate it yet.
One of the most important side conversations of the day
Somewhere between discussions about workflows, coordination, and automation…
the room shifted toward diversity and communication.
Not performatively.
Practically.
Because different people genuinely experience technical spaces differently.
Not only women.
Not only neurodivergent people.
Not only cultural minorities.
Broader than that.
Different people:
- notice different risks
- process ambiguity differently
- communicate differently
- recognize emotional undercurrents differently
- prioritize differently
- interpret systems differently
And when AI starts accelerating organizational execution…
those differences become even more important.
Because homogeneous thinking combined with accelerated execution can become dangerous very quickly.
Especially when:
- confidence outpaces wisdom
- delivery outpaces ethics
- automation outpaces reflection
Diversity is not merely a social conversation.
It is increasingly becoming a resilience conversation.
A systems-thinking conversation.
A survival conversation.
The hidden danger nobody talks about enough
The deeper we went into orchestration, automation, and AI-assisted workflows…
the more one risk quietly kept surfacing:
outsourcing thinking itself
This may become one of the defining challenges of the next decade.
Because AI can absolutely:
- support thinking
- accelerate thinking
- organize thinking
- challenge thinking
But polished output is not the same as understanding.
And confident responses are not the same as wisdom.
Especially when less experienced people may not yet have enough depth to recognize when the AI is confidently wrong.
That tension appeared repeatedly throughout the day.
And honestly…
I think leadership conversations around AI are still far too shallow.
Most conversations focus on:
- efficiency
- replacement
- productivity
- cost savings
But I think the deeper challenge is:
"How do we preserve human depth while embracing acceleration?"
That is a much harder problem.
What impressed me most about Xebia
Not certainty.
Not hype.
Not pretending to have solved the future already.
What impressed me most was the willingness to openly wrestle with complexity.
People openly saying:
- "We are still figuring this out."
- "This changes constantly."
- "Some of this is hype."
- "Some of this is genuinely transformational."
- "Nobody fully understands where this leads yet."
That honesty matters.
Because organizations pretending certainty right now may actually be the least prepared for what is coming.
Adaptability may become far more important than confidence.
The final wall
At the end of the day, the room filled with sticky notes.
Reflections everywhere.
Things people loved.
Things people hated.
Things that excited them.
Things that worried them.
And…
that wall became the perfect metaphor for the current AI transition itself.
Messy.
Hopeful.
Overwhelming.
Brilliant.
Uncomfortable.
Human.
Some people saw liberation.
Others saw dependency.
Some saw creativity.
Others saw erosion.
Most saw both.
And maybe that is the healthiest response possible right now.
Not blind optimism.
Not blind fear.
But conscious engagement.
My biggest takeaway
Day four did not convince me that AI will replace humans.
It convinced me human depth matters more than ever.
Because when execution accelerates dramatically:
- clarity matters more
- ethics matter more
- systems thinking matters more
- emotional intelligence matters more
- diversity matters more
- wisdom matters more
- leadership matters more
Not less.
And perhaps the biggest misunderstanding of all is believing this transition is primarily technological.
I do not think it is.
I think this is fundamentally a human transformation disguised as a technical one.