Superpowers: How I'm using coding agents in October 2025

Ch00k | 211 points

I can't recommend this post strongly enough. The way Jesse is using these tools is wildly more ambitious than the way most other people are using them.

Spend some time digging around in his https://github.com/obra/Superpowers repo.

I wrote some notes on this last night: https://simonwillison.net/2025/Oct/10/superpowers/

simonw | 7 hours ago

This article left me wishing it were "How I'm using coding agents to do <x> task better"

I've been exploring AI for two years now. It has certainly graduated from toy to basic utility. However, I increasingly run into its limitations and find reverting to pre-LLM ways of working more robust, faster, and more mentally sustainable.

Does someone have concrete examples of integrating LLMs into a workflow that pushes state-of-the-art development practices & value creation further?

d_sem | 4 hours ago

> It made sense to me that the persuasion principles I learned in Robert Cialdini's Influence would work when applied to LLMs. And I was pleased that they did.

No, no. Stop.

What is this? What're we doing here?

This goes past developing with AI into something completely different.

Just because AI coding is a radical shift doesn't mean everything has changed. There needs to be some semblance of structure and design. Instead what we're getting is straight-up voodoo nonsense.

preommr | 27 minutes ago

I am only on the first page and saw this blurb and was immediately annoyed.

  @/Users/jesse/.claude/plugins/cache/Superpowers/...

The XDG spec has been out for decades now. Why are new applications still polluting my HOME? Also seems weird that real data would be put under a cache/ location, but whatever.
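
For reference, the spec's defaults already separate persistent data from cache; a rough sketch (the claude paths below are just illustrative, not the plugin's actual layout):

  import os

  # XDG base directories, falling back to the defaults the spec defines
  data_home = os.environ.get("XDG_DATA_HOME", os.path.expanduser("~/.local/share"))
  cache_home = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))

  # hypothetical layout: durable skill files belong under data, not cache
  skills_dir = os.path.join(data_home, "claude", "plugins", "superpowers")
  scratch_dir = os.path.join(cache_home, "claude")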

3eb7988a1663 | 4 hours ago

This style of prompting, where you set up a dire scenario in order to try to evoke some "emotional" response from the agent, is already dated. At some point, putting words like IMPORTANT in all uppercase had some measurable impact, but at the present time, models just follow instructions.

Save yourself the experience of having to write and maintain prompts like this.

tcdent | 6 hours ago

documents like https://github.com/obra/superpowers/blob/main/skills/testing... are very confusing to read as a human. "skills" in this project generally don't seem to follow a set format and just look like what you would get when prompting an LLM to "write a markdown doc that step by step describes how to do X" (which is what actually happened, according to the blog post).

idk, but if you already assume that the LLM knows what TDD is (it probably ingested ~100 whole books about it), why are we feeding a short (and imo confusing) version of that back to it before the actual prompt?

i feel like a lot of projects like this that are supposed to give LLMs "superpowers" or whatever by prompt engineering are operating on the wrong assumption that LLMs are self-learning and can be made 10x smarter just by adding a bit of magic text that the LLM itself produced before the actual prompt.

ofc context matters and if i have a repetitive task, i write down my constraints and requirements and paste that in before every prompt that fits this task. but that's just part of the specific context of what i'm trying to do. it's not giving the LLM superpowers, it's just providing context.

i've read a few posts like this now, but what i am always missing is actual examples of how it produces objectively better results compared to just prompting without the whole "you have skill X" thing.

hoechst | 5 hours ago

> <EXTREMELY_IMPORTANT>…*RIGHT NOW, go read…

I don’t like the looks of that. If I used this, how soon before those instructions would be in conflict with my actual priorities?

Not everything can be the first law.

jmull | 7 hours ago

I often feel these types of blog posts would be more helpful if they demonstrated someone using the tools to build something non-trivial.

Is Claude really "learning new skills" when you feed it a book, or does it present it like that because your prompting encourages that sort of response behavior? I feel like it has to demo Claude with the new skills and Claude without.

Maybe I'm a curmudgeon, but most of these types of blogs feel like marketing pieces; the important bit is that so much is left unsaid and not shown that it comes off like a kid trying to hype up their own work without the benefit of nuance or depth.

Avicebron | 8 hours ago

> some of the ones I've played with come from telling Claude "Here's my copy of programming book. Please read the book and pull out reusable skills that weren't obvious to you before you started reading

This is actually a really cool idea. I think a lot of the good scaffolding right now is things like “use TDD”, but if you link citations to the book, then it can perhaps extract more relevant wisdom and context (just like I would by reading the book), rather than using the generic averaged interpretation of TDD derived from the internet.

I do like the idea of giving your Claude a reading list and some spare tokens on the weekend when you’re not working, and having it explore new ideas and techniques to bring back to your common CLAUDE.md.

theptip | 5 hours ago

Maybe this is a naive question, but how are "skills" different from just adding a bunch of examples of good/bad behavior into the prompt? As far as I can tell, each skill file is a bunch of good/bad examples of something. Is the difference that the model chooses when to load a certain skill into context?
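
My rough mental model of that difference, as a sketch (hypothetical structure, not the actual Superpowers implementation):

  from dataclasses import dataclass

  @dataclass
  class Skill:
      name: str
      description: str  # short blurb that is always in context
      body: str         # full instructions, pulled in only when relevant

  skills = [
      Skill("tdd", "red/green test-driven development loop", "...full doc..."),
      Skill("debugging", "systematic root-cause debugging", "...full doc..."),
  ]

  def build_prompt(task: str, chosen: set[str]) -> str:
      index = "\n".join(f"- {s.name}: {s.description}" for s in skills)
      bodies = "\n\n".join(s.body for s in skills if s.name in chosen)
      return f"Available skills:\n{index}\n\n{bodies}\n\nTask: {task}"

Pasting good/bad examples into every prompt would be just the "bodies" part, always on.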

daemontus | 6 hours ago

It's not a superpower if everybody has that same power.

amelius | 8 hours ago

This is so interesting but it reads like satire. I'm sure folks who love persuading and teaching and marshalling groups are going to do very well in SWEng.

According to this, we'll all be reading the feelings journals of our LLM children and scolding them for cheating on our carefully crafted exams instead of, you know, making things. We'll read psychology books, apparently.

I like reading and tinkering directly. If this is real, the field is going to leave that behind.

jvanderbot | 8 hours ago

The "How to create skills" link is broken. This is the new location: https://github.com/obra/superpowers/blob/personal-superpower...

lcnPylGDnU4H9OF | 4 hours ago

I am not ashamed to admit this whole agentic coding movement has moved beyond me.

Not only do I have to know everything about the code, data, and domain, but now I need to understand this whole AI system, which is a meta-skill of its own.

I fear I may never be able to catch up until someone comes along and simplifies it for pleb consumption.

spprashant | 6 hours ago

Seems cute, but ultimately not very valuable without benchmarks or some kind of evaluation. For all I know, this could make Claude worse.

jackblemming | 7 hours ago

This isn't science, or engineering.

This is voodoo.

It likely works - but knowing that YAGNI is a thing means, at some level, you are invoking a cultural touchstone for a very specific group of humans.

Edit -

I dug into the superpowers and skills for a bit. Definitely learned from it.

There’s stuff that doesn’t make sense to me on a conceptual basis. For example, in the skill to preserve productive tensions, there’s a part that goes:

> The trade-off is real and won't disappear with clever engineering

There’s no dimension for “valid” or prediction for tradeoff.

I can guess that if the preceding context already outlines tradeoffs clearly, or somehow encodes that there is no clever solution that threads the needle - then this section can work.

Just imagining what dimensions must be encoding some of this suggests that it’s … it won’t work for situations where the example wasn’t already encoded in the training. (Not sure how to phrase it)

intended | 6 hours ago

> It also bakes in the brainstorm -> plan -> implement workflow I've already written about. The biggest change is that you no longer need to run a command or paste in a prompt. If Claude thinks you're trying to start a project or task, it should default into talking through a plan with you before it starts down the path of implementation.

... So, we're refactoring the process of prompting?

> As Claude and I build new skills, one of the things I ask it to do is to "test" the skills on a set of subagents to ensure that the skills were comprehensible, complete, and that the subagents would comply with them. (Claude now thinks of this as TDD for skills and uses its RED/GREEN TDD skill as part of the skill creation skill.)

> The first time we played this game, Claude told me that the subagents had gotten a perfect score. After a bit of prodding, I discovered that Claude was quizzing the subagents like they were on a gameshow. This was less than useful. I asked to switch to realistic scenarios that put pressure on the agents, to better simulate what they might actually do.

... and debugging it?

... How many other basic techniques of SWEng will be rediscovered for the English programming language?

zahlman | 5 hours ago

I'm not sure exactly what I just read...

Is this just someone who has tingly feelings about Claude reiterating stuff back to them? cuz that's what an LLM does/can do

4b11b4 | 6 hours ago

What's the cost of running with agents like this?

tobbe2064 | 7 hours ago

The post reads like someone throwing bones and reading their fortune. That part where Claude did its own journaling was so cringe it was hilarious. The tone of the journal entry was exactly like the blog author's, which suggests to me Claude is reflecting back what the author wants to hear. I feel like Jesse is consumed in a tornado of LLM sycophancy.

yoyohello13 | 4 hours ago

How are skills different from tools? Looks like another layer of abstraction. What for?

jstummbillig | 6 hours ago

Superpower: AI slop.

cynicalsecurity | 6 hours ago

Has anyone ever seen an instance in which the automated "How" removal actually improves an article title on HN rather than just making it wrong?

(There probably are some. Most likely I notice the bad ones more than the good ones. But it does seem like I notice a lot of bad ones, and never any good ones.)

[EDITED to add:] For context, the actual article title begins "Superpowers: How I'm using ..." and it has been auto-rewritten to "Superpowers: I'm using ...", which completely changes what "Superpowers" is understood as applying to. (The actual intention: superpowers for LLM coding agents. The meaning after the change: LLM coding agents as superpowers for humans.)

gjm11 | 9 hours ago

[deleted]

| 6 hours ago

[flagged]

apwell23 | 6 hours ago

take #73895 on how to fix ur prompt to make ur slop better.

lerp-io | 8 hours ago