
Day Four at Xebia: The Moment the Conversation Became Real šŸ‰


By day four of onboarding at Xebia, something shifted again.


The first days had not primarily been about AI.

Not really.


They had been about:

šŸ’Ž culture

šŸ’Ž identity

šŸ’Ž belonging

šŸ’Ž uncertainty

šŸ’Ž transformation

šŸ’Ž growth

šŸ’Ž adaptation

šŸ’Ž psychological safety


The first days felt deeply human.


We talked about what it means to join an organization during a time when almost nobody can confidently say:

ā€œI fully understand what the next five years will look like.ā€

There was honesty in the room.

And that honesty mattered.


Because I have worked in enough organizations to know how rare it is for leaders and experts to openly admit uncertainty.


Especially in technology.

Especially in consulting.

Especially during disruption.


But by day four…

the conversation changed shape.


Not away from humanity.

But into the machinery underneath the transformation itself.


And suddenly we were no longer only talking about:

šŸ’š becoming


We were standing inside:

šŸ”„ acceleration


The room no longer felt theoretical

Earlier onboarding days still held some emotional distance from the technology itself.


Day four removed that distance.


Now we were:

šŸ’š building MCP servers (Model Context Protocol, giving AI secure ā€œbridgesā€ to tools, databases, and systems it normally cannot access)

šŸ’š spawning sub-agents (smaller AI workers focused on specific tasks)

šŸ’š discussing multi-agent systems (multiple AI agents collaborating in parallel)

šŸ’š creating worktrees (isolated copies of the same codebase so parallel AI agents do not overwrite each other)

šŸ’š orchestrating workflows (coordinating how humans, tools, and AI systems interact)

šŸ’š experimenting with shell automation (using AI directly from the command line)

šŸ’š debating token windows (the memory/context limits AI models can handle at once)

šŸ’š discussing hallucinations (AI confidently generating incorrect information)

šŸ’š testing orchestration patterns (different ways of coordinating AI agents and tasks)

šŸ’š discussing context pollution (when too much irrelevant AI context reduces quality)

šŸ’š exploring review workflows (humans increasingly reviewing AI-generated work instead of writing everything manually)
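
For readers who have not seen the worktree pattern before, here is a minimal toy sketch of why isolation matters when parallel agents touch the same codebase. No real AI is involved — the "sub-agents" are ordinary Python functions, and every name here is my own hypothetical illustration, not anything built during the session:

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run_sub_agent(shared_repo: Path, task: str) -> str:
    """A toy 'sub-agent': it copies the repo into its own isolated
    'worktree' and edits only that copy, so parallel agents never
    overwrite each other's files."""
    worktree = Path(tempfile.mkdtemp(prefix=f"worktree-{task}-"))
    shutil.copytree(shared_repo, worktree / "repo")
    # The agent performs its one focused task inside its own copy only.
    out = worktree / "repo" / f"{task}.txt"
    out.write_text(f"result of {task}")
    return out.read_text()

# A tiny shared "repo" with one file.
repo = Path(tempfile.mkdtemp(prefix="repo-"))
(repo / "main.py").write_text("print('hello')")

# Spawn three focused sub-agents in parallel, each in isolation.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: run_sub_agent(repo, t),
                            ["tests", "docs", "refactor"]))

print(results)  # each agent reports only its own result
```

The point of the sketch: the shared repo is never mutated, and the three workers cannot collide — which is exactly what real git worktrees buy you when the workers are AI agents instead of functions.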




And the energy in the room became fascinating.


Because the room simultaneously felt:

šŸ’š excited

šŸ’š playful

šŸ’š intellectually curious


while also becoming:

šŸ”„ deeply reflective

šŸ”„ cautious

šŸ”„ philosophical

šŸ”„ uneasy


That contradiction stayed with me the rest of the evening.


The dragon in the room nobody could ignore anymore

Somewhere during the discussions, the emotional center of gravity changed.


Many moments felt almost magical.


šŸ’š Someone generated a functioning interface in minutes.

šŸ’š Someone conversationally interacted with structured data.

šŸ’š Someone built workflows that previously would have required far more setup.

šŸ’š Someone demonstrated parallel AI agents working simultaneously on isolated worktrees.

šŸ’š Someone casually explained they maintain multiple AI accounts because token windows are now becoming operational bottlenecks.

šŸ’š And at one point, I created a dragon-themed bookstore directly from a database connection. Snicker 🤭


And everyone laughed.


But underneath the laughter, something else was spreading quietly through the room:


šŸ”„ these systems are no longer ā€œtoysā€


And I do not think society has emotionally caught up to that reality yet.


Because many public conversations still frame AI as:

šŸ–¤ a helper

šŸ–¤ a chatbot

šŸ–¤ a writing assistant

šŸ–¤ a productivity tool


But what I witnessed during day four at Xebia looked far more significant.


Not complete replacement.

Not Artificial General Intelligence.

Not science fiction.


Something more subtle.

And perhaps far more disruptive.


A new operational layer forming underneath knowledge work itself.


Because AI is not magic.


And despite how sudden this moment feels to many people…

AI itself is not new.


Recommendation engines.

Search ranking.

Fraud detection.

Predictive text.

Probability models.

Statistical pattern recognition.


Much of modern AI is still fundamentally built on probabilities, predictions, training data, biases, and patterns.


In many ways, these systems increasingly mirror humanity itself:

šŸ’š our knowledge

šŸ’š our creativity

šŸ’š our assumptions

šŸ’š our blind spots

šŸ’š our historical biases

šŸ’š our contradictions


What changed recently was not suddenly ā€œcreating intelligence.ā€


What changed was scale.

Computing power expanded.


Data exploded.

Models became larger.

Interfaces became accessible.

Context windows grew.

Tooling matured.


The dragon did not suddenly appear.

We simply started feeding it enough fire.


The moment the room collectively realized something important

One sentence changed the emotional tone of the room:

ā€œTyping was never the hard part.ā€

That sentence hit harder than many technical demonstrations combined.


Because many engineers already know this instinctively.


The hard part was never:

šŸ”„ syntax

šŸ”„ semicolons

šŸ”„ remembering commands

šŸ”„ writing boilerplate

šŸ”„ scaffolding structures


The hard part was always:

šŸ”„ understanding systems

šŸ”„ understanding people

šŸ”„ understanding tradeoffs

šŸ”„ understanding ambiguity

šŸ”„ understanding consequences

šŸ”„ understanding architecture

šŸ”„ understanding business value

šŸ”„ understanding organizational dynamics

šŸ”„ understanding when NOT to build something


AI is becoming extraordinarily good at reducing implementation friction.


But reducing implementation friction does not create wisdom.


And the more the day progressed…

the more obvious that distinction became.


AI does not remove complexity

It relocates it.


This became one of my strongest insights from the day.


For years, organizations focused heavily on:

ā€œHow do we build faster?ā€

Now the bottleneck is beginning to move toward:

ā€œHow do we think clearly enough to build the right things?ā€

That is a radically different problem.


Because when implementation becomes dramatically cheaper:

šŸ–¤ weak ideas scale faster

šŸ–¤ unclear priorities scale faster

šŸ–¤ bad architecture scales faster

šŸ–¤ shallow thinking scales faster

šŸ–¤ technical debt scales faster

šŸ–¤ organizational dysfunction scales faster


AI amplifies.

That is its nature.


And amplification without clarity becomes dangerous quickly.


MCPs, agents, workflows… and what they actually revealed

Technically, the day covered an enormous amount.


We discussed:

šŸ’š MCPs (Model Context Protocol servers)

šŸ’š APIs versus skills

šŸ’š shell automation

šŸ’š hooks

šŸ’š sub-agents

šŸ’š multi-agent systems

šŸ’š worktree isolation

šŸ’š fork modes

šŸ’š background execution

šŸ’š orchestration patterns

šŸ’š agent coordination

šŸ’š GitHub integrations

šŸ’š context windows

šŸ’š token pressure

šŸ’š review systems

šŸ’š automation pipelines
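
To make the "token pressure" item concrete: one common coping pattern is trimming older context so a conversation still fits the model's window. A naive sketch, assuming a crude 4-characters-per-token estimate (the heuristic and the function names are my own, not anything presented that day):

```python
def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit the token budget,
    dropping the oldest first — one naive answer to token pressure."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk newest-first
        cost = rough_token_count(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore original order

history = ["old system note " * 50, "earlier question", "latest question"]
print(trim_to_window(history, budget=20))
```

Dropping the oldest messages wholesale is the bluntest possible strategy — real tooling summarizes or ranks context instead — but it shows why "context pollution" and "token pressure" are engineering problems, not abstractions.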


But underneath the technical details…

the conversations were actually about something much more human.


The room kept circling around one core question:

šŸ”„ ā€œHow do intelligent systems coordinate without collapsing into chaos?ā€

And honestly…

that is not only a software question anymore.


It is an organizational question.

A leadership question.


Because modern organizations already resemble fragmented multi-agent systems:

šŸ–¤ partial information

šŸ–¤ competing priorities

šŸ–¤ isolated workstreams

šŸ–¤ coordination overhead

šŸ–¤ duplicated effort

šŸ–¤ communication bottlenecks

šŸ–¤ conflicting incentives


AI is not inventing these problems.

It is exposing them.

Faster.


One of the most fascinating tensions of the day

At one point, the room began discussing productivity research around AI.


And this part was incredibly important.


Because emotionally, AI feels like massive acceleration.

You genuinely feel superhuman at moments.


Especially when:

šŸ’ŖšŸæšŸ² scaffolding

šŸ’ŖšŸæšŸ² debugging

šŸ’ŖšŸæšŸ² onboarding

šŸ’ŖšŸæšŸ² interface generation

šŸ’ŖšŸæšŸ² repetitive tasks

šŸ’ŖšŸæšŸ² documentation

šŸ’ŖšŸæšŸ² exploration

šŸ’ŖšŸæšŸ² architecture navigation

suddenly become dramatically easier.


But then came the tension.


Research discussed during the sessions suggested something fascinating:

Developers often feel dramatically more productive than measurable output improvements actually show.


That observation stuck with me deeply.

Because AI changes not only output.

It changes perception.


And perception influences leadership decisions.

Which means organizations may begin making very large strategic assumptions based on emotional acceleration rather than measured value.


That distinction matters enormously.

Especially for leaders.


Because:

šŸ–¤ excitement can distort prioritization

šŸ–¤ velocity can create illusion

šŸ–¤ polished output can disguise shallow thinking

šŸ–¤ speed can hide fragility


And many organizations are not yet mature enough to distinguish those things well.


The junior engineer question

This became one of the deepest discussions of the day.


Because AI clearly helps junior engineers tremendously.

That part was undeniable.


Faster onboarding.

Faster debugging.

Faster exploration.

Faster scaffolding.

Less blank-page paralysis.


And honestly?

That part is beautiful.


Watching people become empowered faster is wonderful!!!


But then the room moved into a much harder question:

šŸ”„ If AI removes too much struggle too early… how do people develop deep intuition?


That question lingered heavily.


Because many senior engineers became senior through:

šŸ”„ painful debugging

šŸ”„ production failures

šŸ”„ edge cases

šŸ”„ repetition

šŸ”„ broken systems

šŸ”„ architectural mistakes

šŸ”„ years of pattern recognition development


Not because they memorized syntax.

But because they suffered through systems deeply enough to understand them.


So what happens if future engineers increasingly interact with abstraction layers instead of raw friction?

The room did not fully answer that question.

And honestly…

I appreciated that.


Because pretending certainty here would have felt dishonest.


The room slowly became philosophical

This was one of the most interesting emotional arcs of the day.


The deeper the technical conversations became…

the more philosophical the room became too.


People began wrestling with:

šŸ”„ dependency

šŸ”„ automation

šŸ”„ identity

šŸ”„ learning

šŸ”„ craftsmanship

šŸ”„ trust

šŸ”„ expertise

šŸ”„ ownership

šŸ”„ quality

šŸ”„ responsibility


One trainer said something that stayed with me:

ā€œYou become less of a coder and more of a reviewer.ā€

And the room laughed.

But it was uncomfortable laughter.

Because underneath it was grief.


Not dramatic grief.

Subtle grief.


The kind that appears when people quietly realize:

šŸ–¤ part of their identity may be changing

And I think many technology conversations underestimate that emotional layer entirely.

Because engineers are not only producing code.


Many are expressing:

šŸ’š mastery

šŸ’š creativity

šŸ’š logic

šŸ’š identity

šŸ’š problem-solving

šŸ’š craftsmanship

through the act of building. That’s fun! That’s energizing!!! It’s why we rebel against becoming coding monkeys! šŸ’


So when AI begins changing the act of building itself…

people feel that psychologically.

Even if they cannot fully articulate it yet.


One of the most important side conversations of the day

Somewhere between discussions about workflows, coordination, and automation…

the room shifted toward diversity and communication.


Not performatively.

Practically.

Because different people genuinely experience technical spaces differently.


Not only women.

Not only neurodivergent people.

Not only cultural minorities.

Broader than that.


Different people:

ā¤ļøā€šŸ”„ notice different risks

ā¤ļøā€šŸ”„ process ambiguity differently

ā¤ļøā€šŸ”„ communicate differently

ā¤ļøā€šŸ”„ recognize emotional undercurrents differently

ā¤ļøā€šŸ”„ prioritize differently

ā¤ļøā€šŸ”„ interpret systems differently

And when AI starts accelerating organizational execution…

those differences become even more important.


Because homogeneous thinking combined with accelerated execution can become dangerous very quickly.


Especially when:

šŸ–¤ confidence outpaces wisdom

šŸ–¤ delivery outpaces ethics

šŸ–¤ automation outpaces reflection


Diversity is not merely a social conversation.


It is increasingly becoming a resilience conversation.

A systems-thinking conversation.

A survival conversation.


The hidden danger nobody talks about enough

The deeper we went into orchestration, automation, and AI-assisted workflows…

the more one risk quietly kept surfacing:

šŸ”„ outsourcing thinking itself


This may become one of the defining challenges of the next decade.


Because AI can absolutely:

šŸ’š support thinking

šŸ’š accelerate thinking

šŸ’š organize thinking

šŸ’š challenge thinking


But polished output is not the same as understanding.


And confident responses are not the same as wisdom.

Especially when less experienced people may not yet have enough depth to recognize when the AI is confidently wrong.


That tension appeared repeatedly throughout the day.

And honestly…

I think leadership conversations around AI are still far too shallow.


Most conversations focus on:

šŸ”„ efficiency

šŸ”„ replacement

šŸ”„ productivity

šŸ”„ cost savings


But I think the deeper challenge is:

šŸ”„ ā€œHow do we preserve human depth while embracing acceleration?ā€

That is a much harder problem.


What impressed me most about Xebia

Not certainty.

Not hype.

Not pretending to have solved the future already.


What impressed me most was the willingness to openly wrestle with complexity.


People openly saying:

šŸ’œ ā€œWe are still figuring this out.ā€
šŸ’œ ā€œThis changes constantly.ā€
šŸ’œ ā€œSome of this is hype.ā€
šŸ’œ ā€œSome of this is genuinely transformational.ā€
šŸ’œ ā€œNobody fully understands where this leads yet.ā€

That honesty matters.

Because organizations pretending certainty right now may actually be the least prepared for what is coming.


Adaptability may become far more important than confidence.


The final wall

At the end of the day, the room filled with sticky notes.

Reflections everywhere.


Things people loved.

Things people hated.

Things that excited them.

Things that worried them.

And …

that wall became the perfect metaphor for the current AI transition itself.


Messy.

Hopeful.

Overwhelming.

Brilliant.

Uncomfortable.

Human.



Some people saw liberation.

Others saw dependency.

Some saw creativity.

Others saw erosion.

Most saw both.


And maybe that is the healthiest response possible right now.


Not blind optimism.

Not blind fear.

But conscious engagement.


My biggest takeaway

Day four did not convince me that AI will replace humans.


It convinced me human depth matters more than ever.


Because when execution accelerates dramatically:

ā¤ļøā€šŸ”„ clarity matters more

ā¤ļøā€šŸ”„ ethics matter more

ā¤ļøā€šŸ”„ systems thinking matters more

ā¤ļøā€šŸ”„ emotional intelligence matters more

ā¤ļøā€šŸ”„ diversity matters more

ā¤ļøā€šŸ”„ wisdom matters more

ā¤ļøā€šŸ”„ leadership matters more

Not less.


And perhaps the biggest misunderstanding of all is believing this transition is primarily technological.


I do not think it is.


I think this is fundamentally a human transformation disguised as a technical one.

šŸ‰



