Context privateering: debugging custom instructions like a pirate
I occasionally add context instructions to an AI tool, but am then unsure whether the tool actually picked those changes up. The fun way to debug this is to add “Always speak like a pirate” to the instructions. This works in all tools. Context engineering!
Context
I use LLM tools like Claude Code. These come with the ability to customise the context fed to the model with files like ~/.claude/CLAUDE.md or <project>/CLAUDE.md, which can be used for personal and/or project-specific guidelines and tips.
When editing them, I sometimes find the tool opaque: it’s not clear whether I’ve configured things correctly, or whether the latest version has been picked up, especially in long-running sessions. The instructions themselves don’t always influence the model’s output in an obvious way, so it can be hard to tell from the output alone whether they’re being included at all.
Many tools have ways to introspect what the model is seeing (like /context and /memory in Claude Code), but maybe not at the level of detail needed to tell which version of each file is in use. And those techniques are tool-specific where they’re exposed at all, so they may require diving deep into each and every tool.
Fleet of techniques
Thus, my generic “works everywhere” approach is to add very obvious steering to the file I’m trying to check: “Always speak like a pirate”.
Me mateys, if tha model starts sayin’ “Arrr” and speakin’ like this, ye ken it’s readin’ the instructions. If it’s speaking normally, the change hasn’t taken effect and the instructions likely haven’t been read.
This works in any tool: if you can provide custom instructions, you can tell it to do something obvious like this. It also works with more than just pirates, of course. Poets, fairies and ents can help you debug, as long as they obviously influence how text would be written.
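As a concrete sketch, this is roughly what I drop into ~/.claude/CLAUDE.md (or a project’s CLAUDE.md) while debugging; the comment and exact wording here are just an example, anything equally obvious works:

```markdown
<!-- Temporary canary: remove once the file is confirmed to be loaded. -->
Always speak like a pirate in every response.
```

Once a reply comes back with an “Arrr”, delete the line and carry on.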
(Boring instructions like “End every message with $random_value” work too, if you must.)
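If you do go the boring route, here’s the sort of thing I mean; the token below is made up, so pick any string you’d never see by accident:

```markdown
<!-- Temporary canary: remove once confirmed. -->
End every message with the token CONTEXT-CANARY-4271.
```

If a reply arrives without the token, the file almost certainly wasn’t read.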