Beautiful software doesn’t just help developers write code faster. It helps developers think better.
That’s one reason I invested in Poolside early, and why I’ve stayed excited watching them build quietly for years.
I knew Jason Warner before Poolside existed, though not personally. I knew him through the products he helped ship, and through what those products said about how he thinks. At GitHub, he oversaw an era that reshaped how developers work. GitHub Actions, Codespaces, Advanced Security, and the early incubation of Copilot all happened under his watch. Before that, Heroku became one of the most beloved developer platforms of the decade because it took infrastructure pain off the table and let people focus on building. What stood out wasn’t that Jason was technical. It was that he understood something many leaders miss: engineering is not a support function. It’s often where the most meaningful product leverage gets created.
I knew Eiso through a different path. Before Poolside, he spent years building at the intersection of AI and developer tooling. At source{d}, he was already applying large language models and machine learning systems to code, years before the rest of the market became obsessed with AI coding. He later built Athenian, which helped engineering teams better understand how they build. Across both companies, there was a consistent curiosity about how software gets created, where friction emerges, and what better systems could look like.
That history matters because Poolside feels less like a reaction to the current AI cycle and more like the natural continuation of two long-running obsessions: making developers dramatically more effective, and rethinking how software gets made.
Today, Poolside is releasing Laguna M.1 and Laguna XS.2, its first publicly available foundation models built for agentic coding and long-horizon software work. Laguna XS.2 is also their first open-weight model, released under Apache 2.0.
That alone is worth paying attention to. But what stood out from today’s launch wasn’t the release of another frontier model. It was how carefully they seem to have thought about how developers actually want to work.
You can download the open model through Hugging Face, route it through OpenRouter, deploy it through Baseten, run it locally through Ollama, use it via Poolside’s API, or experiment directly in the browser through Shimmer. That ecosystem approach is unusually thoughtful at a moment when many AI companies are still trying to force users into tightly controlled interfaces and calling that a product strategy.
The best developer companies have historically won by respecting developer time and cognitive load. They remove invisible friction. They make complexity feel manageable, sometimes even elegant. That’s what beautiful software looks like: not software that’s aesthetically polished, but software that’s deeply considerate of the person using it. Software that helps people stay in flow.
That principle matters right now because most AI software still feels like a demo pretending to be a product. It can generate impressive moments, but often collapses under real workflows, messy edge cases, and sustained use.
That’s why Poolside’s focus on long-horizon software work is worth taking seriously. Most coding products today optimize for short loops: generating snippets, writing functions faster, accelerating repetitive tasks. Those improvements matter, but they don’t touch the hardest parts of software engineering.
Real engineering work means maintaining context across large codebases, debugging systems that fail in unpredictable ways, navigating ambiguity, making architectural decisions, and recovering when the first five approaches don’t work. It requires persistence, planning, and the ability to reason across longer stretches of work.
Software may also be one of the best training grounds for broader AI systems because it offers something most domains don’t: tight feedback loops. Code either works or it doesn’t. Systems either break or they don’t. Hallucinations get exposed quickly. Progress requires iteration.
That makes this a much harder problem than autocomplete, and a far more interesting one.
I’m an early investor in Poolside, so I’ve had a front-row seat to this. But what made me write that first check wasn’t just conviction in the market. I believed in the team’s ambition, and I told them early on that I wanted to help however I could, especially by connecting them with exceptional AI talent.
That offer still stands.
If you’re a researcher, engineer, or technical builder who cares about the future of software creation and wants to work on one of the hardest problems in AI, feel free to reach out.
Ambitious infrastructure companies often underestimate product. Product companies often underestimate technical depth.
Poolside is trying to do both.
Congrats to Jason, Eiso, and the team. Excited to see what developers build with this.