Practical AI Insights from the Trenches

I write about building reliable AI systems that work in production—sharing real failures, unexpected wins, and concrete lessons from shipping AI features.

My focus is on practical implementation, consulting insights, and the occasional creative experiment.

For posts about LLMs, RAG systems, and AI implementation, check out the category labels in the sidebar. Here are some recent highlights:

LLM and AI Implementation

Consulting and Career Growth

Technical Deep Dives

Personal Projects and Experiments

Talks and Interviews

Coming Soon

AI Darwinism: Why RAG Will Never Die

The Predictable Death of RAG (According to Twitter)

Like clockwork, every time a new large language model (LLM) announces a bigger context window, the hot takes flood social media:

"RAG is dead! Just stuff everything into the 10 million token context!"

This take is not just wrong; it's idiotic.

While massive context windows are impressive, simply dumping data into them is like trying to find a specific sentence by reading an entire library.

It's inefficient and ignores the real challenge: feeding the LLM the right information at the right time.

Anyone building real-world LLM applications knows this.

The secret isn't just more context; it's smarter context.

This post introduces the concept of Context Optimization—the evolution of RAG in the era of large context windows.

You'll learn why strategically selecting and presenting relevant information is crucial for maximizing performance, minimizing costs, and building AI systems that actually work in production.
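As a toy illustration of what "strategically selecting" context means (this is a sketch of the general idea, not code from the post; a real system would use an embedding-based retriever rather than keyword overlap):

```python
# Toy illustration of context optimization: instead of stuffing every
# document into the prompt, score chunks for relevance to the query and
# pack only the best ones under a token budget.

def select_context(chunks: list[str], query: str, token_budget: int) -> list[str]:
    """Greedy selection by naive keyword overlap (a stand-in for a
    real retriever), subject to a crude token budget."""
    query_terms = set(query.lower().split())

    def score(chunk: str) -> int:
        return len(query_terms & set(chunk.lower().split()))

    selected, used = [], 0
    for chunk in sorted(chunks, key=score, reverse=True):
        cost = len(chunk.split())  # crude token estimate
        if score(chunk) == 0 or used + cost > token_budget:
            continue
        selected.append(chunk)
        used += cost
    return selected
```

The point is the shape of the problem: relevance scoring plus a budget, not raw context size.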

Internalize this Context Optimization mindset, and you'll understand why RAG, far from being dead, is more vital than ever.

Let's dive in.

Transactions, Commits, and Rollbacks in SQLModel: A Mental Model That Actually Makes Sense

Every developer working with databases has experienced that frustrating moment when their application grinds to a halt because of connection issues or unexplained slowdowns.

If you've ever wondered why your SQLModel operations are crawling along or connections are being dropped, you're not alone.

The problem isn't your code's logic - it's how you're managing database sessions.

I learned this the hard way after creating bottlenecks by constantly opening new connections instead of properly managing and reusing sessions.

By the end of this post, you'll understand:

  • Why creating a new session for every operation kills performance
  • When exactly you should commit() or rollback()
  • A simple mental model that makes SQLModel transactions intuitive
  • Bulletproof patterns for managing database connections efficiently

The best part? You'll never again wonder why your application is suddenly crawling along due to connection pool exhaustion.
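The mental model can be sketched with stdlib `sqlite3`, which shares the transaction semantics that SQLModel sessions wrap: reuse one connection (the "session") across operations, commit on success, and roll back on failure. This is an illustrative sketch under those assumptions, not the post's actual SQLModel code:

```python
import sqlite3

# One long-lived connection plays the role of a reused session.
# Opening a fresh connection for every operation is exactly the
# anti-pattern that exhausts the pool and slows everything down.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT UNIQUE)")

def add_user(name: str) -> bool:
    """Commit on success, roll back on failure, reuse the connection."""
    try:
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()        # make the write durable
        return True
    except sqlite3.IntegrityError:
        conn.rollback()      # undo the failed transaction; the
        return False         # connection stays healthy for reuse

add_user("ada")
add_user("ada")  # duplicate: rolled back, connection still usable
```

SQLModel's `Session` follows the same contract, just with ORM objects instead of raw SQL.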

Inside the Eye of AI's Storm: The Ghiblication Moment

Every technological revolution has its moments of collective exhale—brief pauses where wonder overtakes worry and creativity eclipses concern.

For AI in 2025, that moment came wrapped in the nostalgic, hand-painted aesthetic of Studio Ghibli.

In the relentless acceleration of AI advancement, "Ghiblification" emerged as a rare moment of calm in the center of the storm—where instead of fretting about jobs or fixating on business cases, people simply played, created, and shared joy.

But this calm center tells us something profound about both where we've been and where we're heading.

As someone who's navigated these waters professionally since before ChatGPT altered our landscape, I see something important in this brief respite that might help you prepare for what comes next.

Vibes Fade, Knowledge Lasts: The Case Against Lazy Coding Culture

Every time I scroll through my Twitter feed lately, I'm bombarded with content celebrating "vibe coding" – the idea that you don't need deep technical knowledge anymore, just the right "vibes" and an LLM to generate your code.

Something about this trend doesn't sit right with me.

It's not professional jealousy. As someone who uses AI coding tools daily, I recognize their transformative potential. Rather, it's the underlying philosophy that troubles me – the casual dismissal of accumulated knowledge and the experts who built it.

While watching the anime Orb: On the Movements of the Earth recently, I had an epiphany about what specifically bothers me about vibe coding culture.

Skip Manual Labeling: How You Can Automatically Caption Images with Spatial Awareness for Any Product

Have you ever stared at thousands of product images, dreading the manual labor of tagging each one for AI training?

Capturing every nuance by hand is a daunting (and expensive) task.

Yet structured annotations are the lifeblood of machine learning.

The rule is simple: garbage in, garbage out.

A high-quality image caption needs to capture:

  • Exact object locations in complex scenes
  • Relationships with surrounding elements
  • Environmental context and lighting conditions
  • Consistent descriptions at scale

That's exactly where our client found themselves - facing 10,000+ images of custom textured walls that needed precise labeling for fine-tuning a diffusion model.

Using a combination of Florence 2, GPT-4o Vision, and the Instructor library, you'll see how to build a reliable system that:

  • Automatically detects and localizes objects
  • Generates structured, validated descriptions
  • Handles spatial relationships systematically
  • Scales from 50 to 50,000+ images without compromising quality

Best of all? We did it without any custom models or infrastructure.

Here's the complete technical breakdown of how we turned a month-long manual process into an automated pipeline that runs in hours.
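To make "structured, validated descriptions" concrete: the post pairs GPT-4o Vision with Instructor, which validates the model's output against a typed schema. Here is a sketch of what such a schema might look like, using stdlib dataclasses for illustration (Instructor uses Pydantic models; all field names here are hypothetical, not the client's actual schema):

```python
from dataclasses import dataclass

# Hypothetical schema for a structured, spatially aware caption.

@dataclass
class DetectedObject:
    label: str  # e.g. "textured wall panel"
    bbox: tuple[float, float, float, float]  # normalized x1, y1, x2, y2

@dataclass
class StructuredCaption:
    objects: list[DetectedObject]      # exact object locations
    spatial_relations: list[str]       # relationships between elements
    lighting: str                      # environmental context
    summary: str                       # consistent description at scale

caption = StructuredCaption(
    objects=[DetectedObject("textured wall", (0.1, 0.2, 0.9, 0.8))],
    spatial_relations=["wall spans behind the sofa"],
    lighting="soft daylight from the left",
    summary="A custom textured wall behind a sofa in soft daylight.",
)
```

Because every caption must satisfy this schema, "garbage out" gets caught at validation time instead of polluting the fine-tuning set.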

How You Can Save 20,000+ Hours a Year with a Secure, GPT-Driven Meeting to Email Workflow

Your team is wasting thousands of hours manually writing follow-up emails after Zoom meetings.

Every day, they:

  • Battle with meeting recordings
  • Miss capturing action items
  • Triple-check that sensitive data hasn't been exposed

For a mid-sized organization, this adds up to tens of thousands of wasted hours annually.

What if you could transform every Zoom transcript into a perfectly structured follow-up email in under 60 seconds, while keeping your sensitive data completely secure?

This post will show you how to:

  • Build a GPT-powered system that automatically converts meetings into action-ready emails
  • Protect sensitive data by keeping everything in your control
  • Save your organization 20,000+ hours annually on email drafting
  • Ensure 100% accuracy with domain-specific terminology correction
  • Create traceable links between action items and meeting timestamps
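One way to picture the "action-ready email" output is as a typed structure where each action item carries the recording timestamp it came from. This is a hypothetical sketch (field names are illustrative, not the post's actual schema):

```python
from dataclasses import dataclass

# Hypothetical shape of the structured follow-up email the GPT workflow
# produces. The timestamp field is what makes each action item
# traceable back to the moment in the meeting recording.

@dataclass
class ActionItem:
    owner: str
    task: str
    timestamp: str  # e.g. "00:14:32" in the Zoom recording

@dataclass
class FollowUpEmail:
    subject: str
    key_points: list[str]
    action_items: list[ActionItem]

email = FollowUpEmail(
    subject="Follow-up: Q3 rollout sync",
    key_points=["Rollout moves to October"],
    action_items=[
        ActionItem("Dana", "Update the migration runbook", "00:14:32"),
    ],
)
```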

See It In Action

In this demo, you'll see:

  • A real meeting transcript being processed in under 60 seconds
  • The automated extraction of key points and action items
  • How sensitive data is handled securely
  • The final formatted email output ready to send

This automated workflow reduces a 30-minute manual process to just a few clicks while maintaining complete data security and accuracy.


The Real Cost of Manual Meeting Follow-ups

For a team of 50 people averaging just two client calls per week, manual follow-up emails waste 12,500 hours annually.

Here's what your team currently spends 30 minutes doing after every call:

The Honest Path to Leveling Up Your AI Consulting Career

Last month, I did something that would make most consultants cringe:

I offered to work for free on a $20-30 million company's AI strategy.

When Alex Hormozi said, "You're not good enough yet... and that's okay," it resonated deeply as I stared at a potential six-figure contract, knowing I wasn't quite ready for it.

We're constantly told to "charge what we're worth" and "never work for free."

But here's the uncomfortable truth about scaling an AI consulting practice.

Sometimes, the fastest way up is to admit you're not at the top yet.

The real challenge of enterprise AI consulting isn't just technical expertise – it's the catch-22 of needing enterprise experience to land enterprise clients.

You can't access the rooms where high-level decisions happen until you've already been in those rooms.

Here's what nobody tells you:

The path to doubling or tripling your consulting income isn't always about charging more – sometimes it's about making strategic "losses" that compound into massive gains.

In this post, I'll show you exactly how I'm using what I call the Strategic Loss Leader approach to:

  • Land clients 10x larger than my usual target market
  • Transform "free" work into six-figure opportunities
  • Position myself for deals I currently have no right to win

It's not about underselling yourself.

It's about being strategic and honest about where you are in your journey – and then doing something about it.

From 0 to 1,000,000 ... Particles: Finding Joy in Building Circle Snakes

As 2024 drew to a close, I found myself buried under an avalanche of context switching—client projects, personal ventures, life admin—all piling up until I hit that familiar wall of burnout.

That's when I decided to do something different:

I chose to work on a project with zero financial upside.

Those next 4-5 days brought me more joy than I'd experienced in months.

The Beauty of Building for Joy

In the tech world, we often measure success in metrics—user growth, revenue targets, deployment speed.

Every project becomes a calculated step toward some future payoff. Spending months in this rat race makes it easy to lose sight of why we started coding in the first place.

But there's a different kind of metric that we rarely talk about:

The simple joy of watching something you built come to life.

No stakeholders to please, no KPIs to hit—just you and your creation, evolving together.

It's in these moments that we rediscover the pure joy of creation.

Your Word is Your Bond: Building Trust in AI Consulting

"If you tell the truth, you don't have to remember anything." - Mark Twain

In every client call, I spend most of my time explaining why they shouldn't work with me.

In these conversations:

  • I deliberately highlight project complexities, expose risks, and challenge their assumptions
  • I tell them why their timelines are too aggressive and budgets need to be larger
  • I even explain there's a real chance we won't achieve their dream outcome

And here's the strangest part:

This approach has led to some of my most successful client relationships.

In the fast-paced world of AI consulting, this might sound insane.

The industry runs on hype cycles and overpromised capabilities.

Having worked with some of the most recognizable names in the space, I've watched the "fast money culture" infect the entire landscape.

But here's what I've learned: actively discouraging clients from certain approaches isn't just ethical.

It's the most powerful way to build trust and ensure project success.

Your word is everything, and building lasting success requires embracing this counterintuitive truth.

The Secret to Better LLM Outputs: Multiple Structured Reasoning Steps

Traditional chain-of-thought prompting is leaving performance on the table.

While working on a recent client project, we A/B tested different prompting approaches.

Breaking LLM reasoning into multiple structured steps was preferred 80% of the time over traditional methods.

Instead of one meandering thought stream, we can greatly boost reliability and precision by using a tightly controlled response model that walks through explicit steps:

  • Analyze the example structure
  • Analyze the example style
  • Generate the output based on the previous steps

I'll show you exactly how to implement this approach using the Instructor library, with real examples you can use today.
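As a sketch of what "multiple structured steps" means in practice: each reasoning step becomes its own required field, so the LLM must fill them in order rather than producing one free-form chain of thought. With Instructor this would be a Pydantic model passed as `response_model`; here it is shown with a stdlib dataclass and illustrative field names:

```python
from dataclasses import dataclass

# A tightly controlled response model: each reasoning step is a
# required field, filled in order, instead of one meandering stream.

@dataclass
class StructuredGeneration:
    structure_analysis: str  # step 1: analyze the example structure
    style_analysis: str      # step 2: analyze the example style
    output: str              # step 3: generate from the prior steps

# What a completed response might look like:
result = StructuredGeneration(
    structure_analysis="Three short paragraphs, each opening with a claim.",
    style_analysis="Direct, second-person, short sentences.",
    output="Here's the thing about structured prompts...",
)
```

Because the schema forces the analysis fields to exist before the output field, the model's final answer is conditioned on its own explicit intermediate reasoning.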