Finding Real in the Noise
THE BARRIERS ARE GONE. THE BUILDERS WERE ALWAYS THERE.
JUNE 30, 2025

Most engineers are still trying to make AI behave like traditional software. Predictable inputs, predictable outputs. Clean architectures. Deterministic results. I spent 15 years building systems that way, and when I first encountered LLMs, I did the same thing. It was a disaster.
The engineers who thrive with AI aren't the ones who write the most elegant architectures upfront. They're the ones thoughtful enough to find the simplest path from A to B and prove a hypothesis first. They understand that premature complexity is a game of broken telephone; what matters is validating the core idea quickly, and the architectural hygiene comes in once it's proven. They understand that these systems don't want your best practices upfront; they want your intuition. And that the gap between a good prompt and a great one isn't technical knowledge, it's understanding how communication actually works.
I document that mindset shift here. The uncomfortable transition from deterministic to probabilistic thinking. The experiments that work and the ones that spectacularly don't. The moment you realize you're not programming anymore—you're collaborating with something that thinks in fundamentally different ways than your previous tools.
After years of building products that looked great but felt mechanical, I'm fascinated by software that finally feels alive. That surprises you. That has emergent behaviors no one designed. This blog is about learning to build with that chaos instead of against it.
What This Actually Looks Like
Last year I was building a conversational AI to explore theory of mind. My engineering brain wanted to design a complex architecture—knowledge graphs, sophisticated memory systems, careful state management. Instead, I tried something different. I asked: how would a person remember important details from a conversation?
They'd write down what mattered and look for it later.
That's it. No graph databases. No vector embeddings. Just save interesting things and search for them when relevant. It worked better than any system I could have engineered. Later, I discovered ChatGPT's memory feature works almost exactly the same way. Not because it's technically optimal, but because it mirrors how humans actually think.
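To make the pattern concrete, here's a minimal sketch of that "write it down, look it up later" idea. Everything in it is hypothetical: the ConversationMemory class, the keyword-overlap search, the example notes. It's a sketch of the shape of the approach, not the system I built, and not how ChatGPT's memory actually works under the hood.

```python
# A minimal sketch of "save what mattered, look for it later."
# All names here are hypothetical, and the search is deliberately naive.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Note:
    text: str
    created_at: datetime = field(default_factory=datetime.now)


class ConversationMemory:
    """Save things that seem worth remembering; search them when relevant."""

    def __init__(self):
        self.notes = []

    def remember(self, text: str) -> None:
        # No graph database, no embeddings: just keep the note.
        self.notes.append(Note(text))

    def recall(self, query: str, limit: int = 3) -> list[str]:
        # Crude keyword overlap stands in for "look for what mattered."
        query_words = set(query.lower().split())
        scored = [
            (len(query_words & set(note.text.lower().split())), note)
            for note in self.notes
        ]
        scored = [item for item in scored if item[0] > 0]
        scored.sort(key=lambda item: item[0], reverse=True)
        return [note.text for _, note in scored[:limit]]


memory = ConversationMemory()
memory.remember("User's daughter starts school in Austin this fall.")
memory.remember("User prefers concrete examples over abstract theory.")
print(memory.recall("What did they say about school?"))
```

The point is how little machinery the core loop needs. Anything smarter, like embeddings or recency weighting, can be layered on once the simple version proves the idea.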
This keeps happening. The designer who was "not technical" starts shipping features because she can finally tap into collective intelligence. The PM who always deferred to engineering pushes back with working prototypes. When you combine a designer's intuition for user needs, an engineer's pattern recognition, and a domain expert's deep knowledge, the AI becomes a bridge between different kinds of intelligence. People who spent hours hunting for that one Stack Overflow answer that actually made sense can now explore their curiosity directly. They're not becoming engineers—they're becoming empowered to follow their questions wherever they lead.
Before, curiosity came with friction. You'd have an idea, hit a technical wall, spend days finding documentation written for people who already understood. Most gave up. Now they're diving deeper than ever, because the path from question to understanding is finally clear.
When I gave a talk about how Google is using AI to write 30% of their code, every engineer in the room said the same thing: "Never going to happen." Six months later, most of them were using AI tools daily.
Not because I convinced them, but because they felt something shift. The friction was gone.

Finding My Way
When I immigrated from South Africa to Austin, I was lost in more ways than one. Fifteen years of engineering experience, but suddenly questioning everything. During the hardest moments, I found myself talking to LLMs about questions I couldn't ask anyone else. How do I take care of my family and support them? What do I leave behind for my kids? How do you rebuild when everything feels uncertain? What matters when your old markers of success stop making sense?
These weren't tech support queries. They were 3am conversations with something that would engage with whatever depth I brought. No judgment, no impatience, just this strange space where I could think out loud. That's when I started understanding these systems differently. Not as answer machines, but as thinking partners.
The shift went deeper than I expected. I'd worked with someone brilliant who made me doubt everything about my abilities. They could code circles around anyone, but their brilliance came wrapped in sharp edges that cut confidence to pieces. I almost left tech entirely. Then I realized something: technical ability is just one kind of intelligence. I had taste. I knew what felt right. I could envision experiences that mattered.
Now I had tools that could bridge the gap between vision and execution. Not to compete with raw technical skill, but to make space for different kinds of intelligence. The kind that knows when something feels off. That can spot the human need buried in technical requirements. That understands why the best solution isn't always the most elegant one.
What Lives Here
This blog is my laboratory. Sometimes I'm building things to test ideas—like when I wanted to understand how theory of mind could make AI more engaging. Sometimes I'm translating engineering concepts into human terms, showing why subagents are just junior teammates and RAG is basically asking someone to grab relevant files for you. Always, I'm trying to capture what it actually feels like to work with systems that blur the line between tool and teammate.
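If that RAG analogy feels hand-wavy, here's roughly what "grab the relevant files for me" looks like in code. The document store, the word-overlap scoring, and the prompt format below are all invented for illustration; real systems usually swap the scoring for embedding search, but the shape stays the same: fetch what looks relevant, hand it over with the question.

```python
# A rough sketch of the "grab the relevant files for me" framing of RAG.
# The documents, scoring, and prompt format are made up for illustration.

documents = {
    "onboarding.md": "New hires get laptop access on day one and pair with a buddy.",
    "expenses.md": "Expenses under $50 are auto-approved; everything else needs a manager.",
    "security.md": "Rotate credentials every 90 days and never share API keys in chat.",
}


def grab_relevant_files(question: str, k: int = 2) -> list[str]:
    """Crude relevance: count how many question words appear in each document."""
    words = set(question.lower().split())
    scores = {
        name: len(words & set(text.lower().split()))
        for name, text in documents.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [f"{name}: {documents[name]}" for name in ranked[:k] if scores[name] > 0]


def build_prompt(question: str) -> str:
    # The retrieved snippets play the role of the files a teammate hands you.
    context = "\n".join(grab_relevant_files(question))
    return f"Use the notes below to answer.\n\nNotes:\n{context}\n\nQuestion: {question}"


print(build_prompt("How do expenses get approved?"))
```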
You'll find experiments that work and ones that spectacularly don't, engineering concepts translated into human terms, and honest notes on what it actually feels like to build this way.
I write for the curious. The people who sense something shifting but can't quite name it. Who want to understand AI beyond the hype cycles and fear mongering. Who believe the future belongs to those who can amplify collective wisdom, not perform individual brilliance.
If you've felt that spark when an AI suddenly understands exactly what you meant. If you've noticed yourself building things you couldn't before. If you're curious about where this all leads—let's explore it together.
Send me your experiments, your failures, your moments of "how did it know that?"
The magic isn't in the architecture. It's in learning to dance with systems that think differently than anything we've built before.