Why I’m Building a GPU-First Programming Language in 2026

The Drift and the Draft (Circa 2017)

Every slightly unhinged project has an origin story, and Miri’s starts back in 2017. I was between jobs, living mostly off savings, and doing some occasional freelancing. My previous job as a CTO had left me completely burnt out, so I was in absolutely no rush to jump back into the corporate grind.

Instead, I was drifting, floating through a sea of different ideas. At the time, I was a massive fan of Ruby, but its performance left a lot to be desired. I knew Python wasn’t going to win any land speed records either (I actively avoided it back then, though I’d end up using it a lot later).

It got me wondering: Why couldn’t there be a language as ridiculously fast as C++ but as joyfully easy to use as Ruby?

So, I did what any procrastinating developer does. I created an empty GitHub repository and started piling up ideas. If you look at the early commits or closed issues from that era, you will laugh. It was a masterclass in productive procrastination—zero actual code, just a mountain of pipe dreams. Later that year, I moved to Germany, and my focus rightfully shifted to my family’s financial stability. The repo sat there gathering digital dust. I had free time, sure, but it was easily swallowed up by family life, sports, hobbies, and my PlayStation.

Losing—and Finding—Myself (2024–2025)

Fast forward to 2024 and 2025. On paper, my career was soaring. I had been promoted to Head of Platform Engineering, managing 40 people across Germany and Canada. But behind the scenes, my personal life was fracturing. I went through a divorce, moved into an empty new home, and felt like I was navigating my entire life on autopilot.

I realized I had completely lost myself.

Looking back at my backlog of side projects, most of them felt soulless—purely business-related and money-driven. There was only one idea that still held any real emotional weight: Miri.

But the world of software had fundamentally shifted since 2017. Large Language Models (LLMs) were suddenly writing code. At first, I felt the bitter sting of a missed opportunity. What was the point of writing a new language now?

I started brainstorming with ChatGPT, fully expecting it to tell me to give up. To my surprise, it didn’t. Instead, it helped me reframe the entire vision: Don’t just build a language for humans. Build a language for a world where AI generates most of the code. Existing languages carry decades of baggage, and AI is forced to use these legacy tools. We need a language designed for agentic engineering, where humans define the intent and AI fills in the safe, verifiable, high-performance implementations. Furthermore, even if AI writes the code, humans still need to read and verify it—so it can’t look like alien hieroglyphics.

Along with the AI revolution came the absolute dominance of GPUs. It hit me that bolting GPU support onto a language as an afterthought is a mistake. Miri needed to be a GPU-first language.

Right now, if you want to write serious GPU code, you’re usually stuck wrestling with C++ and CUDA, which is notoriously unforgiving. There are modern alternatives like Triton and Mojo that build on top of Python, and while they are steps in the right direction, I believed I could do even better by baking GPU constructs natively into a statically-typed, compiled language.

That realization was the spark. I finally had my “why.” I needed to build Miri for myself—to finally close the loop on a decade-old dream, to demystify how compilers and parallel computing actually work, and to build something unapologetically cool. And with the help of modern AI, I knew I could move exponentially faster than I ever could have in 2017. No more excuses.

Why I Chose Rust (Despite My Initial Grudges)

I always assumed I’d write Miri in C++. It’s the battle-tested standard for building compilers, and I already had experience with it. So, I started dusting off my C++ skills—which was an adventure in itself, considering the language had evolved significantly in the decade since I’d last touched it.

I had, of course, heard of Rust. But if I’m being completely honest, I just didn’t like how it looked. The syntax felt noisy to me.

But as I was getting back up to speed with C++, the buzz around Rust became impossible to ignore. I kept reading about its legendary memory safety, its insanely robust tooling, and how massive players like Microsoft were actively adopting it for mission-critical systems. The more I looked into it, the more I realized that if I was going to build a modern, high-performance programming language, I needed to build it on a modern foundation. So, I pivoted. Miri is written entirely in Rust.

I jumped into a Udemy course to learn it. Was it easy? Absolutely not. Rust’s learning curve is famously steep. But here’s the beauty of the modern world: learning a notoriously strict language is exponentially faster when you have an LLM in your corner. Instead of banging my head against the wall trying to decode esoteric compiler errors, I could just ask my AI assistant to explain exactly why the borrow checker was yelling at me, and get a crystal-clear answer instantly.

Today, I think Rust is an absolute masterpiece of engineering. Its ecosystem and tooling (like Cargo) are a breath of fresh air. But did I ever grow to love its syntax? Not exactly.

And honestly, that’s the best part. It gave me the perfect excuse to push forward with Miri. I get to borrow Rust’s brilliant architectural concepts and use them to drive Miri’s standard compiler pipeline (moving from Lexer to Parser, AST, MIR, and finally Native Codegen via Cranelift), all while designing a clean, indentation-sensitive syntax that I actually enjoy looking at. It’s the best of both worlds.
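To make that pipeline concrete, here’s a heavily simplified sketch of how the stages chain together in Rust. The stage names mirror the ones above, but every type and function here is an illustrative placeholder, not Miri’s actual API:

```rust
// A toy three-stage pipeline: source text -> tokens -> AST -> MIR.
// (Real codegen via Cranelift would consume the MIR; omitted here.)

#[derive(Debug, PartialEq)]
enum Token { Ident(String), Number(i64) }

#[derive(Debug, PartialEq)]
enum Ast { Num(i64) }

#[derive(Debug, PartialEq)]
enum MirInst { LoadConst(i64) }

/// Lexer: split the source into tokens.
fn lex(src: &str) -> Vec<Token> {
    src.split_whitespace()
        .map(|w| match w.parse::<i64>() {
            Ok(n) => Token::Number(n),
            Err(_) => Token::Ident(w.to_string()),
        })
        .collect()
}

/// Parser: build an AST from the token stream.
fn parse(tokens: &[Token]) -> Option<Ast> {
    match tokens.first()? {
        Token::Number(n) => Some(Ast::Num(*n)),
        _ => None,
    }
}

/// Lowering: translate the AST into MIR instructions.
fn lower_to_mir(ast: &Ast) -> Vec<MirInst> {
    match ast {
        Ast::Num(n) => vec![MirInst::LoadConst(*n)],
    }
}

fn main() {
    let tokens = lex("42");
    let ast = parse(&tokens).expect("parse error");
    let mir = lower_to_mir(&ast);
    println!("{mir:?}"); // prints "[LoadConst(42)]"
}
```

The point of the sketch is the shape, not the contents: each stage is a pure function from one representation to the next, which keeps the stages independently testable.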

How I Build a Compiler with LLMs

My relationship with AI coding assistants has evolved right alongside Miri.

In the beginning, I used ChatGPT purely as a sounding board to refine the project structure and generate initial boilerplate. I was taking compiler development courses, and I wrote the lexer and parser mostly by hand to ensure I actually understood the fundamentals. When I tried to use early agentic coding tools, the results were messy. Because I was still leveling up my Rust skills and learning compiler theory, I spent more time debugging the AI’s hallucinations than writing features.

But as LLMs evolved, they became very useful. Today, my workflow is heavily augmented:

  • Claude Code & Antigravity (Claude + Gemini): I use these constantly for brainstorming, generating complex MIR (Mid-level Intermediate Representation) lowering passes, writing comprehensive test suites, and refactoring. I honestly cannot imagine building Miri at this pace without them. At the same time, I’m always careful to double-check their work and make sure it aligns with my vision. I have other projects where I can let an LLM run wild without looking over its shoulder, but not with Miri.
  • NotebookLM: This is my dedicated research assistant. I use it to synthesize and understand dense, complex topics like memory management, Cranelift backend implementations, and Vulkan/SPIR-V GPU programming.

Every now and then, I ask the LLMs to re-validate Miri’s architecture and feasibility. So far, they still agree it needs to exist. And honestly? So do I.