2024

From 0 to 1,000,000 ... Particles: Finding Joy in Building Circle Snakes

As 2024 drew to a close, I found myself buried under an avalanche of context switching—client projects, personal ventures, life admin—all piling up until I hit that familiar wall of burnout.

That's when I decided to do something different:

I chose to work on a project with zero financial upside

Those next 4-5 days brought me more joy than I'd experienced in months.

The Beauty of Building for Joy

In the tech world, we often measure success in metrics—user growth, revenue targets, deployment speed.

Every project becomes a calculated step toward some future payoff. Spending months in this rat race makes it easy to lose sight of why we started coding in the first place.

But there's a different kind of metric that we rarely talk about:

The simple joy of watching something you built come to life.

No stakeholders to please, no KPIs to hit—just you and your creation, evolving together.

It's in these moments that we rediscover the pure joy of creation.

Your Word is Your Bond: Building Trust in AI Consulting

"If you tell the truth, you don't have to remember anything." - Mark Twain

In every client call, I spend most of my time explaining why they shouldn't work with me.

In these conversations:

  • I deliberately highlight project complexities, expose risks, and challenge their assumptions
  • I tell them why their timelines are too aggressive and budgets need to be larger
  • I even explain there's a real chance we won't achieve their dream outcome

And here's the strangest part:

This approach has led to some of my most successful client relationships.

In the fast-paced world of AI consulting, this might sound insane.

The industry runs on hype cycles and overpromised capabilities.

Having worked with some of the most recognizable names in the space, I've watched the "fast money culture" infect the entire landscape.

But here's what I've learned: actively discouraging clients from certain approaches isn't just ethical.

It's the most powerful way to build trust and ensure project success.

Your word is everything, and building lasting success requires embracing this counterintuitive truth.

The Secret to Better LLM Outputs: Multiple Structured Reasoning Steps

Traditional chain-of-thought prompting is leaving performance on the table.

While working on a recent client project, we A/B tested different prompting approaches.

Breaking LLM reasoning into multiple structured steps was preferred 80% of the time over traditional methods.

Instead of one meandering thought stream, we can greatly boost reliability by using a tightly controlled response model with precisely defined steps:

  • Analyze the example structure
  • Analyze the example style
  • Generate the output based on the previous steps

I'll show you exactly how to implement this approach using the Instructor library, with real examples you can use today.
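As a taste of the idea, here's a minimal sketch of what such a response model might look like with Instructor, which uses Pydantic models to constrain LLM output. The model and field names are illustrative, not the exact ones from the client project:

```python
from pydantic import BaseModel, Field

# Hypothetical response model: each reasoning step gets its own
# structured field, so the LLM must complete one step before the
# next instead of emitting a single free-form answer.
class StructuredDraft(BaseModel):
    structure_analysis: str = Field(
        description="Analysis of the example's structure: sections, ordering, length."
    )
    style_analysis: str = Field(
        description="Analysis of the example's style: tone, vocabulary, formatting."
    )
    output: str = Field(
        description="Final text generated based on the two analyses above."
    )
```

With Instructor, a model like this is passed as the `response_model` argument to the patched chat-completions call, and the fields are filled in order, giving you the multi-step reasoning trace alongside the final output.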

You Don't Need to Fine-Tune to Clone YOUR Report Style

"This doesn't sound like us at all."

It's the all-too-familiar frustration when organizations try using AI to generate reports and documentation.

While AI can produce grammatically perfect content, it often fails at the crucial task of matching an organization's voice - turning what should be a productivity boost into a major bottleneck.

I'll show you how we solved this using a novel two-step approach that separates style from data.

By breaking down what seemed like an AI fine-tuning problem into a careful prompt engineering solution, we achieved something remarkable:

AI-generated reports that practitioners couldn't distinguish from their own writing.

Here's what we delivered:

  • Style matching so accurate that practitioners consistently approved the outputs
  • Complete elimination of data contamination from example reports
  • A solution that scales effortlessly from 10 to 1000 users
  • Zero need for expensive fine-tuning or ML expertise

Best of all? You can implement this approach yourself using prompt engineering alone - no complex ML infrastructure required.
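To make the "separate style from data" idea concrete, here's a minimal sketch of what the two-step prompting could look like. The function names and prompt wording are my own illustration of the pattern, not the exact prompts from the project:

```python
# Hypothetical two-step prompt builders. Step 1 extracts a reusable
# style guide from an example report while forbidding any data reuse;
# step 2 applies that style guide to fresh data only.

def build_style_extraction_prompt(example_report: str) -> str:
    """Step 1: ask the model to describe HOW the example is written,
    explicitly excluding the facts it contains."""
    return (
        "Describe the writing style of the report below: tone, sentence "
        "length, section structure, and formatting conventions. Do NOT "
        "repeat any names, numbers, or facts from it.\n\n"
        f"REPORT:\n{example_report}"
    )

def build_generation_prompt(style_guide: str, new_data: str) -> str:
    """Step 2: generate a new report from new data, constrained by the
    extracted style guide. The example's data never appears here, which
    is what eliminates contamination."""
    return (
        f"Write a report in the following style:\n{style_guide}\n\n"
        f"Use ONLY this data:\n{new_data}"
    )
```

Because each new user only needs their own example report passed through step 1 once, the approach scales without any per-user fine-tuning.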