
I tried using superpowers:brainstorming
Trigger
Anthropic's official Claude Code marketplace has a plugin called superpowers.
It all started when I installed every plugin that wasn't a SaaS integration.
The superpowers plugin includes several skills, and one of them is superpowers:brainstorming.
One day, when I requested a code fix, the agent spontaneously initiated brainstorming.
It was unexpected, but the questions were appropriately granular and I naturally continued answering them. That was the trigger.
The actual exchange looked something like this:
- Content of a task to rewrite an existing shell script in Go
- Claude's side doesn't use honorific language because I've forbidden it
Me: I'd like you to create a tool in Go that gets AWS credentials from 1Password
agent: I checked the existing shell script.
Question 1: Where do you want to place this tool?
A) Under scripts/aws/
B) Under scripts/ in a new directory
C) As an independent Go module like scripts/gtmd/
Me: C
agent: Question 2: How will the build artifact be used?
A) Added to PATH using go install
B) Built with a Makefile and placed in a specific path
C) Executed directly with go run
Me: A, command name should be op-aws-cred-helper
agent: Question 3: What keyring backend?
A) macOS Keychain
B) System default (works across all OSes)
C) Specific library (like zalando/go-keyring)
Me: C, zalando/go-keyring
Questions come one at a time. Being asked ten questions at once is stressful, but answering them in this one-by-one format is manageable. And as you answer, a specification that was initially vague takes shape through the dialogue.
The specification document produced through the conversation is a single file summarizing only the current changes. There's no need to read an entire massive specification document.
For small modifications, specification documents and implementation plans aren't output, and the agent simply makes the modifications and finishes. Since the skill determines the scale, the developer doesn't need to worry about it.
The specification review loop runs automatically
In the superpowers v5.0 series, generated specifications are automatically reviewed by a sub-agent called spec-document-reviewer. Previously I had to ask Opus for a review manually, but as of v5.0 this loop is automated.
As a developer, all I need to do is final confirmation of the specification, while the AI autonomously handles reviews and modifications.
SKILL.md actually spells this out:
- If Issues Found: fix, re-dispatch, repeat until Approved
- If loop exceeds 5 iterations, surface to human for guidance
It's designed to ask for human judgment if it exceeds 5 iterations.
After the specification is approved, it's automatically handed over to the writing-plans skill. An implementation plan is created, which is then executed by a sub-agent.
The advantage of this design is context preservation. The main session focuses on specification approval, while implementation details are processed by sub-agents in a separate context. Rather than consuming a large context window while continuously writing code, the roles are clearly separated, making it easier to manage.
In the implementation phase, the subagent-driven-development skill takes over. It spins up a new sub-agent for each task and automatically runs a two-stage review: a specification-compliance check and a code-quality check. Once the specification is approved, there are few points where a human needs to intervene before the code is written.
Since I started using subagent-driven-development, context no longer piles up, so I work with AutoCompact disabled; sessions rarely come anywhere near the 200k limit. Measuring an actual session log showed that even across 260 turns, context peaked at 142k tokens (71% of 200k).
Design philosophy discovered from reading SKILL.md
Reading SKILL.md reveals several other deliberately designed aspects.
For example, it states the following inside a <HARD-GATE> tag:
Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it. This applies to EVERY project regardless of perceived simplicity.
Whether it's a Todo app or a configuration change, everything goes through brainstorming. It's designed without exceptions like "it's simple so we can skip it."
YAGNI is also thoroughly implemented:
YAGNI ruthlessly - Remove unnecessary features from all designs
Since AI-generated code tends toward over-generalization, this constraint is effective: explicitly stating the principle of building only what's needed now prevents unnecessary implementation.
Following existing codebase patterns is also explicitly stated:
Explore the current structure before proposing changes. Follow existing patterns.
It reads the code before making proposals. This design avoids implementations that are inconsistent with existing styles.
The v5.0 series also includes a Visual Companion mode. This feature displays UI mockups or diagrams in the browser.
A local URL opens and you can compare options in the browser.
If you answer while still undecided between options, that hesitation is carried into the next proposal.
While switching between CLI and browser is a bit cumbersome, it's effective for comparing UIs that are difficult to evaluate through text-based exchanges.
Summary
Even if you're not good at putting things into words, you can solidify a specification just by answering the brainstorming questions.
For anyone who feels that spec-driven development (SDD) tools have a high barrier to entry, starting with brainstorming is a realistic option.