The Real Choice After AI
The defining tension of 2026 isn't technological. It's human. Do you become legible to your systems, or do you make your systems legible to you?
This is part two of my 2026 trends series. Part one explored AI becoming economically real - the shift from demos to infrastructure, from clever to reliable, from hype to hard constraints. This piece looks at what follows once that shift is no longer theoretical, and the consequences start showing up in organizations, culture, and power.
🏴‍☠️
Just like the first piece, this isn't forecasting. It's more like watching a wave that's already formed and guessing where it breaks. These are the tensions and developments I see in the market. Tell me in the comments whether you agree.
Disposable systems, permanent consequences
Over the last six weeks, I built more than ten software products. Small sites. Internal tools. Apps for my kids. Fully custom workflows. I even built and released Resonance - a vibe coding framework - as open source.
A year ago, any of this would have felt like an accomplishment. Now? It’s just what you do on a Tuesday.
This should feel liberating. In some ways it is. But when building becomes trivial, design changes in ways we’re still learning to navigate. Things get created to solve a moment, not to last. A spreadsheet turns into a tool. A workaround becomes infrastructure. Once it works, people quietly come to rely on it - even if no one feels responsible for it.
By 2026, this pattern is everywhere. In organizations, an underground of small systems emerges: automations no one audits, scripts no one owns, tools everyone assumes “just work.” They rarely fail loudly. They drift - until something breaks.
The problem isn’t bad intentions or incompetence. It’s speed. Systems now accumulate faster than understanding.
Trust under volume
As tools, content, and systems multiply, trust stops scaling. Not because people are more cynical, but because evaluation collapses under volume. When everything looks plausible, verification becomes too expensive. Familiarity turns into the default heuristic.
This works for a while. Then it doesn’t.
The most common failures in 2026 won’t look like attacks. They’ll look like ordinary people solving real problems. A password hardcoded in the source. An endpoint someone forgot to secure. A database exposed because no one realized it mattered.
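To make the most common of those failures concrete, here’s a minimal before/after sketch in Python. Everything in it is illustrative - the names come from no real codebase:

```python
import os

# The anti-pattern: a credential baked into source control.
# Anyone (or any automated scanner) with read access to the repo has it.
DB_PASSWORD = "hunter2"  # illustrative only - never ship this

# The low-effort fix: read the secret from the environment at runtime
# and fail loudly if it's missing, instead of falling back to a default.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

The fix takes five minutes. The point is that in a world of diffuse ownership, nobody’s job is to spend those five minutes.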
These mistakes are inevitable in environments where creation is easy and ownership is diffuse. What changes is that they’re now discoverable at scale. Systems continuously probe other systems for weakness - think AI-powered vulnerability scanning running constantly. Security becomes an ongoing cleanup operation, not a state you ever reach (not that we ever did; but now it’s even more obvious).
As this cat-and-mouse game goes on, we realize that true security becomes nearly impossible once attacks - including social engineering - become automated and nearly free.
Trust doesn’t disappear. It concentrates. Around brands, around institutions, around things that feel boring and stable. That concentration becomes both valuable and fragile.
In a world of infinite AI slop - fake videos, fake voices, fake code - proof itself becomes infrastructure. Mathematical verification of humanness. Blockchain-based trust layers. Technologies that once seemed speculative become necessary plumbing for verifying reality.
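What does proof-as-infrastructure look like at its smallest? One building block is a cryptographic signature over content. Here’s a minimal sketch using Python’s cryptography package - it shows the primitive, not the specific stack any of these trust layers will actually run on:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A creator signs their content once with a private key...
private_key = Ed25519PrivateKey.generate()
content = b"a video, a voice clip, a paragraph - any bytes"
signature = private_key.sign(content)

# ...and anyone holding the matching public key can check it later.
public_key = private_key.public_key()
try:
    public_key.verify(signature, content)
    print("content matches what the key holder signed")
except InvalidSignature:
    print("content was altered, or never signed by this key")
```

The math is the easy part. The hard part - and where the concentration of trust comes back in - is deciding whose keys you believe in the first place.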
The quiet hollowing of expertise
Organizations respond to pressure by removing friction. Routine work disappears first. That also means entry points narrow.
When AI handles entry-level tasks, there’s no apprenticeship pathway. Beginners don’t get reps on small problems before tackling big ones. Junior developers never debug simple functions. New analysts never build basic models. The learning ladder loses its bottom rungs.
By 2026, fewer people are trained the long way. Not because learning isn’t valued, but because there’s no obvious place to practice without risk. Experts get stretched thin, covering more ground with less time to teach.
You end up with a strange situation. More systems, fewer people who actually know how they work.
Senior professionals feel this too. Surface-level competence becomes cheap. What remains scarce is judgment - knowing when not to automate, when to slow down, when something feels wrong even if the dashboard says otherwise.
That kind of expertise is hard to measure and easy to undervalue. Until it’s gone.
Leverage as the new fault line
Organizations feel this tension long before they can name it. And when they try to respond, they reach for the wrong metric.
Increasingly, we realize that headcount tells you very little. Leverage tells you almost everything.
Some individuals and teams multiply their output dramatically. Others stagnate. Not because of laziness, but because the tools around them amplify different skills.
Administrative and repetitive work fades into the background. Humans become orchestrators. They have to constantly steer and decide. If they are directionally right, speed wins. Teams that can just move run circles around those designed for consensus.
This won’t lead to mass unemployment. It will lead to uneven pressure. Certain roles compress. Others expand. The uncomfortable part is that the dividing line is rarely formal. It’s not job titles. It’s adaptability, judgment, and comfort with ambiguity.
By 2026, a one-person unicorn becomes plausible. Not common, maybe not likely (yet), but possible. AI collapses the distance between idea and execution. Products that once took days to build get created in hours.
Headcount stops being the primary constraint on company building. A high headcount might even become a countersignal.
What matters instead: judgment, distribution, timing. Europe should be well positioned here - strong technical talent, cost advantages. The risk is that over-regulation prevents Europe from capturing this shift before it solidifies elsewhere.
Leadership slows under infinite options
We assumed AI would make leaders faster. Often it does the opposite.
With cheap simulations and infinite counterfactuals, every decision comes with a stack of plausible alternatives. Analysis multiplies. The bottleneck moves from lack of information to fear of commitment.
Leaders get stuck - not because they lack data, but because the world keeps getting more complex and unpredictable, and choosing one path feels reputationally risky when three others look equally plausible just because an overconfident LLM said so. Every choice carries the ghost of the options you didn’t take, and those ghosts are now quantified, visualized, and sitting in a shared deck somewhere.
Committing to anything starts to feel dangerous.
Invisible systems take over
By 2026, chatbots stop being exciting. Not because they vanish, but because the money moves elsewhere.
Value shifts to agentic workflows - quiet systems that route emails, reconcile invoices, update records, and keep operations moving without demanding attention. The boring work, the stuff companies actually pay for.
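If “agentic workflow” sounds abstract, the core pattern is small. Here’s a toy sketch - every name and the threshold value are hypothetical, not any real framework’s API:

```python
# A toy supervision loop for boring work: act when confident,
# escalate to a human when not. All names here are hypothetical.
CONFIDENCE_THRESHOLD = 0.9

def handle(task, classify, execute, escalate_to_human):
    """Route one unit of work - an email, an invoice, a record update."""
    action, confidence = classify(task)  # e.g. an LLM call under the hood
    if confidence >= CONFIDENCE_THRESHOLD:
        execute(action)           # the quiet, invisible path
    else:
        escalate_to_human(task)   # the path that keeps trust intact
```

The interesting design decision is that threshold. Set it too high and the system nags for attention; set it too low and it drifts quietly - the exact failure mode from earlier.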
Enterprise AI remains constrained. The constraint isn’t the maturity of the technology (although some might claim otherwise); it’s that organizations and processes are hard to change. For now, AI helps mainly in supporting processes. It speeds things up. It does not transform the core yet.
Trust in AI remains fragile. But the direction is clear. The advantage goes to those who started learning early - not tools, but patterns. How to supervise systems. How to recover when they fail. How to know when automation has gone too far.
Late adopters won’t be blocked by access. They’ll be blocked by understanding.
Reclaiming agency through creation
Something’s shifting in how people relate to the internet. The exhaustion is real.
As AI-generated content floods every feed, signal degrades. Everything feels noisy, dull and repetitive. Engagement stagnates. Not because people stopped caring, but because trust erodes under volume.
This doesn’t look like a dramatic exodus. It looks like quiet withdrawal. Fewer platforms. Less posting. More time away from screens. The shift is directional rather than dramatic, and it might not be visible at first, because plenty of people keep scrolling.
But others are doing something different. They’re making things again. 😱
Small games. Small videos. Small projects for small groups. Not for visibility. Not to build an audience. But because creation itself is an act of reclaiming what makes us human.
When your hands shape something - when you solve a problem, make a choice, leave your mark on materials or pixels or words - you’re asserting something the feed can’t give you: that you exist, that you matter, that your particular way of seeing has value. Making is how we remember we’re not just consumers or nodes in a network. We’re people with agency, with taste, with the capacity to bring something new into the world that wouldn’t exist without us.
And here's what's strange: as AI makes everything look polished, perfect, and average, that perfection becomes boring. Grainy photos. Unedited text. Lo-fi videos. Messy, human imperfections are becoming cool again. If it looks too slick, people assume it's fake. Imperfection becomes the new credibility.
This is one of the more hopeful shifts. Making becomes a way out of exhaustion. It’s not about scale or reach. It’s about ownership and the deeply human need to create order from chaos, to transform raw materials into meaning. To stretch, to seek, to pursue.
In a world increasingly mediated by algorithms and artificial generation, the act of making something yourself - however small - becomes resistance and renewal. It’s how we reclaim the part of ourselves that was dulled. The part that makes us human.
Scarcity shifts to what machines can’t do
This shift shows up everywhere, but nowhere more clearly than in education.
Public schools are struggling to adapt to what matters in the age of AI. That struggle seeds a long-term trend that emerges in 2026: parents are looking for alternatives.
They invest in small groups focused on talking, debating, and making things with their hands. Skills AI can’t replicate - creativity, judgment, collaboration, craft - are becoming the most expensive education you can buy.
Scarcity drives value in what machines can’t do.
War and the cost of legacy systems
War is changing faster than defense budgets. The billion-dollar aircraft carrier still floats, but a €2,000 drone with the right payload can mission-kill it. A swarm of fifty drones costs less than a single missile - and overwhelms systems designed to fight different threats.
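The arithmetic behind that claim, using those illustrative prices: fifty drones at €2,000 apiece is €100,000 - less than what many modern interceptor missiles are publicly reported to cost individually, before you even count the platform being defended.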
This isn’t theoretical. Ukraine proved it. Small, cheap, rapidly iterated systems beat expensive legacy platforms that take years to build and can’t adapt once deployed. The advantage goes to whoever can manufacture at scale, iterate based on battlefield feedback, and deploy updates in weeks, not years.
This is disposable systems thinking applied to hardware. When iteration speed matters more than perfection, the economics of defense flip.
2026 makes this shift visible beyond Ukraine. The implications aren’t just military - they reshape how countries think about defense strategy. Nations that never built traditional defense tech suddenly have an opening. The capital requirements are different. The supply chains favor agility over precision engineering. Success looks less like Lockheed Martin and more like a scaled-up hardware startup.
Europe, with its manufacturing base and engineering talent, is positioned to move here - if it can organize procurement and production around speed rather than legacy defense contracting. The countries that figure this out don’t just defend better; they build an export industry.
Europe’s opportunity
The internet fragments. Data becomes strategic. Countries protect their citizens and their history - Brazil, Europe, and others prevent data from leaving borders. Sovereignty now includes algorithms, not just territory.
Europe enters 2026 with fewer illusions about its place in this race. The talent is there. The constraint isn’t imagination - it’s coordination and will. Europe knows it relies too heavily on external infrastructure, and that dependency has become a strategic vulnerability it can no longer afford.
Geopolitics now shapes technology strategy directly. Europe stops asking whether it can build technology and starts asking whether it can operate it at continental scale.
Governments subsidize adoption, not just invention. Policy, capital, and buyers begin to align around concrete sectors: semiconductors, energy systems, defense, drones, robotics, batteries, and space access. These aren’t abstract bets - they’re industries where Europe has buyers, supply chains, and a reason to move fast.
European AI companies are generating real revenue - not from consumer plays, but through enterprise sales, specific modalities, and disciplined distribution. Europe won’t dominate frontier models (I expect Mistral to do decently well, though). But it might shape AI norms through reuse, evaluation, governance, and adaptation. That’s a different kind of influence - slower to see, harder to measure, but potentially more durable.
Europe historically invented well and watched the scaling happen elsewhere. That failure becomes too expensive to repeat. By 2026, the foundations will be in place for a more self-reliant continent.
The Real Choice
The defining tension of 2026 isn’t technological. It’s human.
Systems accelerate. Institutions adapt slowly. People search for footing. These gaps don’t close - they widen and create pressure in unexpected places.
Judgment matters more than raw output, but judgment is exactly what’s hardest to teach or scale. Agency becomes something you actively protect, not something you can assume you have. Trust concentrates around what feels stable, even when stability is an illusion.
In 2026, the question isn’t whether to adopt AI. Everyone will. The question is what you protect while doing it.
Systems will multiply. Some will work. Most will quietly drift. The temptation is to treat this as a management problem - to build better oversight, tighter processes, more audits. That’s not wrong, but it misses something.
The actual choice is this: Do you become legible to your systems, or do you make your systems legible to you?
The first path is easier. Let the tools shape how you work. Optimize for what’s measurable. Automate what can be automated. Many organizations will go this way. They’ll move fast. They’ll scale efficiently. They’ll also become rigid in ways they won’t notice until it’s too late - structured around what their tools can handle rather than what actually matters.
The second path is harder. It means slowing down when everything says to speed up. It means saying no to leverage when it’s available. It means investing in judgment, in apprenticeship, in the boring work of actually understanding how things function. It means treating “because the AI suggested it” as the beginning of a question, not the end.
This isn’t nostalgia. It’s not about resisting change. It’s about understanding that not all speed is progress, and not all scale is strength.
By 2026, the obvious moves become obvious to everyone simultaneously. The tools are available. The playbooks are shared. Competitive advantage doesn’t come from access anymore. It comes from the things that can’t scale.
Your judgment about when to slow down. Your ability to spot the weak signal in the noise. The relationships you build that aren’t mediated by a platform. The craft you develop that can’t be prompted into existence. The small, weird thing you make because it matters to you, not because it optimizes a metric.
These things take time. They’re inefficient. They don’t compound the way systems do. That’s exactly what makes them valuable.
The world is getting faster, cheaper, and more automated. What becomes scarce is anything that requires you to be fully present - fully human - in ways that can’t be replicated or automated.
Making something small and real. Teaching someone face-to-face. Building something that only works at the scale of attention you can actually give it. These acts aren’t just personally meaningful. They’re becoming economically strategic.
In a world of infinite leverage, the bottleneck shifts. It’s no longer what you can build. It’s what you can care about - and care about well enough that it matters.
The organizations, careers, and lives that matter in 2026 won’t be the ones that moved fastest. They’ll be the ones that knew what they were moving toward - and what they refused to leave behind.
The tools are here. The question is whether we use them, or they use us.
That’s the wave. The rest is just deciding where you want to be when it breaks.
🙏
Be kind,
Manuel