Like any other terminally online tech person, December was a breaking point for me. Something truly changed with the capabilities of then-new models (and improvements in harnesses).
I went all in. It wasn't intentional; I felt compelled. On occasion, I even described it as an addiction.
I thought I was going to be able to ship so fast. And I can. Sometimes. But – what’s the cost?
No restraint on what you build means you’ll have no restraint on what you build.
Feature Greed
We’ve been working on bringing AI Chats to Beeper. The repository itself started humble, built by a non-technical colleague during a team meetup. It worked, got the job done and was a great starting point to explore the space.
So I built out this thing. It started as AI Chats, and then I wanted a more OpenClaw-esque experience. I got…greedy? I built subagents, memory, an advanced permissions mechanism (on v4 right now, and this project never even released!). That wasn't enough. I wanted Beeper Desktop API integration, so I built MCP support. Users should be able to build custom agents just by talking! Brilliant.
This is a Beeper bridge. I wanted to make sure we could deploy it into our infrastructure without building anything new there, which meant everything needed to live in an SQLite DB. But file system access is so important for the agent to organize! That's what everybody else does. So naturally, I built a virtual file system.
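The core of a virtual file system over SQLite is simpler than it sounds: one table mapping virtual paths to blobs, with "directories" falling out of shared path prefixes. A minimal sketch of the idea (hypothetical schema and names, not the actual bridge code):

```python
import sqlite3
import time

# Every "file" is a row; directories are implicit from path prefixes,
# so the whole file system lives inside the bridge's existing DB.
SCHEMA = """
CREATE TABLE IF NOT EXISTS vfs (
    path  TEXT PRIMARY KEY,  -- e.g. 'notes/todo.md'
    data  BLOB NOT NULL,
    mtime REAL NOT NULL
);
"""

class VirtualFS:
    """Toy SQLite-backed virtual file system (illustrative only)."""

    def __init__(self, db_path: str = ":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.executescript(SCHEMA)

    def write(self, path: str, data: bytes) -> None:
        # Upsert so a second write overwrites, like a real file.
        self.db.execute(
            "INSERT INTO vfs (path, data, mtime) VALUES (?, ?, ?) "
            "ON CONFLICT(path) DO UPDATE SET "
            "data = excluded.data, mtime = excluded.mtime",
            (path, data, time.time()),
        )
        self.db.commit()

    def read(self, path: str) -> bytes:
        row = self.db.execute(
            "SELECT data FROM vfs WHERE path = ?", (path,)
        ).fetchone()
        if row is None:
            raise FileNotFoundError(path)
        return row[0]

    def listdir(self, prefix: str) -> list[str]:
        # 'Directories' are just shared prefixes; there is nothing to mkdir.
        rows = self.db.execute(
            "SELECT path FROM vfs WHERE path LIKE ? ORDER BY path",
            (prefix.rstrip("/") + "/%",),
        )
        return [r[0] for r in rows]
```

Because it's all rows in one table, the agent's "files" deploy anywhere the SQLite DB does, which was the whole point.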
Agentic coding makes large bites feel small, but it’s an illusion. In one 24-hour blur, I merged 27,000 lines of code. Over just three months, a 700-line chat script mutated into a 130,000-line codebase.
There is no way any human can review that much code, and there is no way today's LLMs can write that much code without mistakes. Even if you ask them to make no mistakes.
Compounding Booboos
My model of choice nowadays, GPT 5.4, is too thorough, to the point where it decreases the quality of the code and creates so many abstractions that it later trips over them.
I stopped asking why and only asked what’s next. I built a custom link preview engine. A text-to-speech tool so the AI could send voice memos. A Gravatar scraper so the AI could fetch your profile picture. I even gave the agent a tool to file its own bugs in our issue tracker.
As Mario (maker of Pi, probably the best agent library and the thing at the core of OpenClaw) put it in a recent post, this creates compounding booboos. Without a human bottleneck, tiny architectural errors mutate into a monster. I spent sad, exhausting hours hunting down fallback codepaths for things that never even shipped.
At some point I decided to turn the project into a meta framework. How cool would it be to have a fully E2EE way to interact with agents like codex and OpenCode from all your devices! I had all the primitives already, so why not?
The Reckoning
I kept building. I tested things! I glanced at the code. It worked!!!!
Code was being reviewed! By CodeRabbit. As good as CodeRabbit is, it's mainly for catching bugs and regressions. It's not for checking taste. It only has the context of my immediate changes and, worse, of an existing codebase that wasn't built with intention. An AI reviewer assumes the code you generated is the code you meant to write. It just confidently rubber-stamps your compounding booboos.
As we got closer to launching the first milestone of this project, I wasn't comfortable shipping to alpha users without looking at the code. Like, really looking at it. Of course, not ALL of it. That would be crazy (and borderline impossible).
After months of this (luckily it wasn't the only thing I was working on), I had tried to fit a months-long roadmap into weeks, and ended up with a mess of a codebase that's taking ages to untangle.
What was I building? What’s the anchor? What user need am I solving right now?
I made:
- AI Chats
- a meta framework for building agentic bridges
- a worse version of OpenClaw
- codex bridge (this one rocks, but I am the only user – how does it bring any value?)
- OpenCode bridge
- OpenClaw gateway companion
- a helper meta bridge manager that's solely for AI bridges
- a helper CLI to tie it all up
Pooped out of an LLM
Even when I came to terms with what I had done, I was afraid to remove features. Ah, it's fine! I told myself. I can just make sure users don't hit those paths, just build around them. A junior engineer mistake that I would not have made before. I would never have built without intention before.
But I need to learn to be as eager to nuke things I vibed into a codebase as I am to vibe things in. Why do I still care about keeping code that took a few hours to generate? Why have attachment to features that were pooped out of an LLM, when they can always be re-written, with much more intention and better design, albeit slower? Who cares if the thing you made has ALL the features, if they were built before the need for them ever came?
The Mortal Sin
I, as a builder, built to satisfy my own itches. That’s a mortal sin.
What my job is, and what I pride myself on, is keeping the user in focus. Release fast, get feedback, create a healthy feedback loop. Build for the user. Instead, the loop I satisfied was my own dopamine receptors.
A product is more than its features and code. Productizing is where taste and execution make the difference. Most of the features I built were good ideas. They are still good ideas; I'm sure we'll pick most of them back up and re-build them. But good ideas were never the bottleneck.
I made rookie mistakes here. I built a codebase without understanding it, and I let it grow before any user demand.
Today, like literally today, I’m staring at a branch called batuhan/sins. It touches 532 files and deletes 39,577 lines of code to achieve a net-negative of 18,000 lines. And I still need to delete more. Way more.
I lost my anchor, but deleting booboos is how I'm finding it again. Because if you go so fast that you forget who you are building for, you might be running, but where the fuck are you running to?