One of the exciting things to me about AI agents is how they push you to build processes we've always known were important but frequently deprioritized in the face of shipping the system.
You can use how uncomfortable you are with the AI doing something as a signal that you need to invest in systematic verification of that thing. For instance, in the linked post, the team could build a system for verifying and validating their data migrations. That would move a whole class of changes into the AI realm.
This is usually much easier to quantify and explain externally than nebulous talk about tech debt in that system.
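To make the migration-verification idea concrete, here's a minimal sketch of what such a check could look like: comparing row counts and an order-independent checksum between the pre- and post-migration tables. This is purely illustrative; the table and function names are hypothetical, and a real system would compare per-column data too.

```python
import hashlib
import sqlite3

def table_checksum(conn, table):
    # Order-independent fingerprint: hash each row, XOR the digests together.
    acc = 0
    for row in conn.execute(f"SELECT * FROM {table}"):
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

def verify_migration(src, dst, table):
    """True when row counts and checksums match across a migration."""
    count = lambda c: c.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    return (count(src) == count(dst)
            and table_checksum(src, table) == table_checksum(dst, table))
```

Once a check like this exists, "did the migration preserve the data?" becomes a pass/fail gate instead of a judgment call, which is exactly what makes it safe to hand the change to an agent.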
Some thoughts:
- Is there a more elegant way to organize the prompts/specifications for LLMs in a codebase? I feel like CLAUDE.md, SPEC.mds, and AIDEV comments would get messy quickly.
- What is the definition of "vibe-coding" these days? I thought it referred to the original Karpathy quote, like cowboy mode, where you accept all diffs and hardly look at the code. But now it seems "vibe-coding" is a catch-all clickbait term for any LLM workflow. (Tbf, this title, "shipping real code with Claude," is fine.)
- Do you obfuscate any code before sending it to someone's LLM?
A lot of visual noise because of the model-specific comments. Or maybe that's just the examples here.
But as a human, I do like the CLAUDE.md file. It's like documentation of developer reasoning and choices. I like that.
Is this faster than old-style codebases where developers just keep an LLM chat open as they work? It seems like this ups the learning curve, and the code here doesn't look very approachable.
I think most of this is good stuff, but I disagree with not letting Claude touch tests or migrations at all. Hand-writing tests from scratch is the part I hate the most. Having an LLM do a first pass on tests, which I then add to and adjust as I see fit, has been a big boon on the testing front.

It seems the difference between me and the author is that I believe the human still takes ownership and responsibility, whether or not the code was generated by an LLM. Not letting Claude touch tests and migrations is saying you rightfully don't trust Claude, but it also hands ownership of Claude-generated code to Claude. That, or he doesn't trust his employees not to blindly accept AI slop, and the strict rules around tests and migrations are there to prevent that slop from breaking everything or causing data loss.
I finally decided a few days ago to try this Claude Code thing on a personal project. It's depressingly efficient. And damn expensive: I used over 10 dollars in one day. But I'm afraid it's inevitable; I'll have to pay a tax to the AI overlords just to keep my job.
Pretty disingenuous to emphasize "building a culture of transparency" while simultaneously not disclosing how heavily AI was [very evidently] used in writing this post.
Coming up next on Hacker News: Field Notes from Eating Real Pizza Made with Claude:
"The glue adds a flavor and texture profile that traditionalists may not be used to on their cheese pizza. But I've had Michelin-star quality pizzas without glue in them that weren't half as delicious as this one was with. AI-mediated glue-za is the future of pizza, no doubt about it."
Author here: To be honest, I know there are a bajillion Claude Code posts out there these days.
But there are a few nuggets we figured were worth sharing, like anchor comments [1], which have really made a difference:
——
[1]: https://diwank.space/field-notes-from-shipping-real-code-wit...