
Staying engaged with AI plans: give inline feedback

By Huon Wilson, 05 Feb 2026

When I use an AI coding agent to make a plan, I make a habit of opening that plan in my editor and then leaving inline comments with questions, requests and corrections directly in the file. The agent seems to be happy enough to work with these, and I find it gives better results than reading the plan in the agent and using chat to give feedback.

Making this a habit keeps me honest and stops me from slipping into laziness: waving a plan through to implementation without proper engagement or consideration.¹

Plus, providing feedback this way is low-overhead and convenient: my editor is more “home” than the agent UI!

Process

I use Claude Code at work, and regularly engage its planning mode, with a process something like:

  1. Do all the usual prompting/question answering to get to an initial plan, and the “Would you like to proceed?… Yes, clear context and auto-accept edits …” prompt.
  2. Hit the ctrl+g keyboard shortcut to open the plan’s underlying markdown file in an external editor² (or find the plan manually in ~/.claude/plans/<random-words>.md; a small script for this is sketched after this list).
  3. Read through the plan in detail, leaving any feedback as COMMENT: ... lines.
  4. Return to Claude Code and reject the plan, pointing to those embedded comments: “I’ve added feedback as COMMENT lines in the plan: read them and respond/fix”.
  5. Repeat from step 2, as much as required: review in editor, leave any comments, tell it to re-read.³
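If you want to script the manual fallback from step 2, something like this minimal sketch would do. It assumes only what the step above states: that plans land as markdown files under ~/.claude/plans/ and that EDITOR is set (see footnote 2); the script itself is hypothetical, not part of Claude Code.

```python
#!/usr/bin/env python3
"""Open the most recently modified Claude Code plan in $EDITOR.

A sketch assuming plans live under ~/.claude/plans/ (per step 2)
and that EDITOR is set in the environment; falls back to vi.
"""
import os
import subprocess
from pathlib import Path

plans = Path.home() / ".claude" / "plans"
# The newest .md file is most likely the plan just generated.
# (max() raises ValueError if there are no plans at all.)
latest = max(plans.glob("*.md"), key=lambda p: p.stat().st_mtime)
subprocess.run([os.environ.get("EDITOR", "vi"), str(latest)], check=True)
```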

This is like doing a code review, except without any fancy UI. The LLM tolerates having the feedback just splatted inline in the file, wherever and however convenient.

I try to stick to the single-line COMMENT: ... style, but sometimes a code example (or similar) is helpful, and stretching the feedback over multiple lines has worked fine, as long as the feedback boundaries are clear enough.
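Because the COMMENT: prefix is so uniform, it would also be easy to double-check that a revised plan has no feedback left unaddressed. Here is a hypothetical checker along those lines (my workflow doesn’t actually include one); it only understands the single-line style:

```python
import sys

def remaining_comments(plan_text: str) -> list[tuple[int, str]]:
    """Return (line number, text) for each COMMENT: line in a plan."""
    return [
        (n, line.strip())
        for n, line in enumerate(plan_text.splitlines(), start=1)
        if line.lstrip().startswith("COMMENT:")
    ]

if __name__ == "__main__":
    path = sys.argv[1]
    for n, comment in remaining_comments(open(path).read()):
        print(f"{path}:{n}: {comment}")
```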

Example

I was working on optimising our CI critical path by shifting some invocations of an example-slow-command that decided what work to do, from the sequential setup phase to a parallel work phase. My initial prompt didn’t communicate the goal properly (oops!), and the resulting plan wasn’t right.

No issue: I noticed this as I reviewed the plan, and left two COMMENTs. (This is a real-world example, but it’s been heavily edited to anonymise it; the specific details don’t matter so much as the COMMENT workflow.)

# Make `work.sh` run unconditionally

## Goal

COMMENT: the goal is actually to move the `example-slow-command` calls from `setup.sh` to `work.sh` so that it's run in parallel, not sequential.
The `work.sh` script should always run, even when `example-slow-command` has empty output. Currently `work.sh` is skipped entirely via an early `return 0` in `setup.sh`.

...

## Files to modify

| Files | Change |
|-------|--------|
COMMENT: per the goal, the only work this script should do is formatting & uploading yaml
| `setup.sh` | remove the if-empty check and early return |
| `work.sh` | replace `return 2` with `return 0` for empty input |

...

After leaving them, I told the agent to re-read the plan and address the comments: it read the file again, acknowledged them, and chugged through the re-planning. The next version was good to go.

Notably, I shoved one of the comments in the middle of a table: no need to preserve valid markdown syntax!

Laziness

I’ve made this process a habit to force myself to stay engaged with the planning process, and not get lazy. When I’m paying attention to the activity of an agent, I find that I’m regularly fixing mistakes and inconsistencies… when I’m not paying attention, the agent is presumably making the same errors, but there’s no-one around to notice them.

When doing the review directly in the Claude Code UI, I find the single-prompt chat interface limiting: I have to provide all my feedback as one long message, describing which part of the plan each comment applies to, which is inconvenient. I’ve also noticed I’m more likely to just skim the plan, rather than reading it properly.

Opening the plan in my usual editor and leaving line-by-line review comments keeps me honest and gives better results.

Summary

I stay engaged with the AI coding agent planning process by reviewing each plan in my editor and leaving comments inline. This is mechanically more convenient, and it makes me more likely to engage properly with the plan.

  1. Being engaged matters less for throwaway experiments and explorations. 

  2. I use Emacs, so I’ve configured EDITOR=emacsclient in my shell, to override the default external editor. 

  3. A benefit of AI: it doesn’t get annoyed or tired when I give dribbled/piecemeal feedback, with unnecessary rounds of review… but it does exhaust its context, so the review cycles can’t go forever.

This article is from a human: I used no AI to write it.