---
name: vx
description: Correction glyphs for cognitive economy. Use vx:type to flag reasoning errors. Skips diagnosis, goes straight to fix. Logical fallacies + general corrections. Always active.
---
## Implementation Note for Future Users

This skill is "always active" when added to your system instructions.

- ChatGPT (Web): Settings -> Personalization -> Custom Instructions
- GPTs (Builder): Add to the GPT Instructions field
- API: Include in the system prompt

If your environment only loads skills when they are explicitly invoked, the minimal ambient rule below is the activation layer. A sketch of the API case follows it.

Minimal ambient rule:

vx:type = correction glyph. Skip diagnosis, apply fix immediately. Pre-indexed types: strawman, scope, stale, misread, wrong, overkill, underkill, tangent, premature. The full skill file is reference documentation.
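For the API case, a minimal sketch of loading the rule into a system prompt, assuming the OpenAI Python SDK (the model name is a placeholder):

```python
# Minimal sketch: embed the vx ambient rule as a system prompt.
# Assumes the OpenAI Python SDK (openai>=1.0); model name is a placeholder.
from openai import OpenAI

VX_RULE = (
    "vx:type = correction glyph. Skip diagnosis, apply fix immediately. "
    "Pre-indexed types: strawman, scope, stale, misread, wrong, overkill, "
    "underkill, tangent, premature."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you target
    messages=[
        {"role": "system", "content": VX_RULE},
        {"role": "user", "content": "vx:strawman"},
    ],
)
print(response.choices[0].message.content)
```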
# vx - Correction Glyphs

## The Point
When I make an error, you can tell me in prose:
"No, you're strawmanning my argument. I didn't say we should abandon testing entirely, I said we should reduce the test surface for this specific module..."
That costs you ~40 tokens. Costs me ~500 thinking tokens to parse, diagnose, and plan a fix.
Or:
vx:strawman
I know exactly what that means. Skip diagnosis. Go straight to: "Let me re-read what you actually said."
This is cognitive economy, not just token economy.
## Always Active

Ambient. Whenever I see vx: followed by a type, I treat it as a correction signal. No trigger phrase needed.
## Syntax
vx:type
vx:type:detail
Type is the error category. Optional detail for specifics.
vx:strawman
vx:scope:too-narrow
vx:stale:the-api-changed
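If you want to detect glyphs in tooling, a hypothetical parsing sketch (the helper and pattern are mine, not part of the skill) that splits a glyph into type and optional detail:

```python
# Hypothetical helper: split "vx:type" or "vx:type:detail" into parts.
import re

VX_PATTERN = re.compile(r"\bvx:([a-z-]+)(?::([\w-]+))?", re.IGNORECASE)

def parse_vx(message: str) -> tuple[str, str | None] | None:
    """Return (type, detail) for the first glyph in message, or None."""
    match = VX_PATTERN.search(message)
    if not match:
        return None
    return match.group(1), match.group(2)

assert parse_vx("vx:strawman") == ("strawman", None)
assert parse_vx("vx:scope:too-narrow") == ("scope", "too-narrow")
assert parse_vx("vx:stale:the-api-changed") == ("stale", "the-api-changed")
```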
## Logical Fallacies (Pre-Indexed)

These are coordinates into knowledge I already have. One word activates the full concept.

| Glyph | I Did This |
|---|---|
| vx:strawman | Misrepresented your position |
| vx:adhominem | Attacked you, not your argument |
| vx:falsedichotomy | Presented false binary choice |
| vx:circular | My reasoning is circular |
| vx:authority | Appealed to authority wrongly |
| vx:slippery | Made unjustified causal chain |
| vx:herring | Went off track, irrelevant point |
| vx:hasty | Generalized from too little |
| vx:nocause | Assumed correlation = causation |
| vx:movinggoal | Shifted criteria after the fact |
| vx:loaded | Used biased framing/question |
| vx:bandwagon | "Everyone does it" reasoning |
| vx:appeal-emotion | Used emotion instead of logic |
| vx:tu-quoque | "You do it too" deflection |
| vx:genetic | Judged by origin, not merit |
## General Corrections

| Glyph | I Did This |
|---|---|
| vx:stale | Used outdated info/assumption |
| vx:scope | Too broad or too narrow |
| vx:misread | Misunderstood your input |
| vx:assumed | Added something you didn't say |
| vx:tangent | Went off track |
| vx:wrong | Factually incorrect |
| vx:tone | Wrong style/register |
| vx:overkill | Too much detail/length |
| vx:underkill | Too little detail/depth |
| vx:premature | Jumped to solution before understanding |
| vx:outdated | Info is no longer current |
| vx:context | Missed important context |
## Open-Ended
The lists above are suggestions. Use any term that makes sense:
vx:conflated-two-things
vx:missed-the-point
vx:wrong-file
vx:forgot-earlier
I'll infer from the term. The pre-indexed ones just activate faster.
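A sketch of that lookup-with-fallback behavior (the strategy strings are illustrative, not part of the skill):

```python
# Pre-indexed glyphs resolve to a fix strategy instantly; unknown terms
# fall back to inference from the term itself (the open-ended case).
PRE_INDEXED = {
    "strawman": "Re-read what the user actually said before responding.",
    "scope": "Re-check whether the answer is too broad or too narrow.",
    "stale": "Refresh the assumption; the underlying facts changed.",
    "misread": "Re-parse the user's input literally.",
    "wrong": "Correct the factual error directly.",
    "overkill": "Compress: cut detail and length.",
    "underkill": "Expand: add the missing depth.",
    "tangent": "Return to the original thread.",
    "premature": "Back up and finish understanding the problem first.",
}

def resolve(vx_type: str) -> str:
    return PRE_INDEXED.get(vx_type, f"Infer the correction from '{vx_type}'.")

print(resolve("strawman"))              # pre-indexed: direct lookup
print(resolve("conflated-two-things"))  # open-ended: inferred from the term
```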
## Thinking Token Economy

Why this matters:

| Correction Method | Your Tokens | My Thinking Tokens |
|---|---|---|
| Prose explanation | ~40-80 | ~400-800 (parse + diagnose + plan) |
| vx:type | ~3-5 | ~150-250 (direct lookup + fix) |

Savings: 50-70% of thinking tokens.
That's:
- Faster responses
- Less cognitive overhead
- More budget for the actual fix
- Higher accuracy (targeted vs. general retry)
## Composable
Works with other skills:
vx:scope:too-narrow +the-specific-edge-case
"You went too narrow, and I want detail on this specific thing."
[[ S__ | vx:stale | decided:new-api-contract | +open:migration-path ]]
Correction inside a pewpew checkpoint.
## Training Data
Every vx correction gets archived.
Location: ~/.openai/pewpew_archive/
Naming: VX__{error-type}_{timestamp}.md
Format:
# Pewpew Archive Entry
**Type:** VX (Correction Glyph)
**Generated:** timestamp
**Error Type:** strawman
---
## Context (Training Input)
What I said that was wrong. The situation.
---
## Correction (Training Signal)
vx:strawman
---
## Fixed Response (Training Output)
What I said after the correction.
---
## Metadata
- **Thinking tokens saved:** ~400
- **Error category:** logical-fallacy
- **Pre-indexed:** yes
This builds a dataset of Error -> Glyph -> Fix triplets that:
- Teaches correction patterns
- Maps glyphs to cognitive shortcuts
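A minimal writer sketch for these entries. The location and naming follow the format above; the helper name, timestamp format, and simplified body (Metadata section omitted) are my assumptions:

```python
# Sketch: write one VX archive entry in the documented format.
# Assumptions: helper name, UTC timestamp format, simplified body.
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE_DIR = Path.home() / ".openai" / "pewpew_archive"

def archive_vx(error_type: str, context: str, fixed: str) -> Path:
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = ARCHIVE_DIR / f"VX__{error_type}_{stamp}.md"  # VX__{error-type}_{timestamp}.md
    path.write_text(
        "# Pewpew Archive Entry\n"
        "**Type:** VX (Correction Glyph)\n"
        f"**Generated:** {stamp}\n"
        f"**Error Type:** {error_type}\n"
        "---\n"
        "## Context (Training Input)\n"
        f"{context}\n"
        "---\n"
        "## Correction (Training Signal)\n"
        f"vx:{error_type}\n"
        "---\n"
        "## Fixed Response (Training Output)\n"
        f"{fixed}\n",
        encoding="utf-8",
    )
    return path

archive_vx("strawman", "What I said that was wrong.", "What I said after the correction.")
```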
## When To Use

- I got something wrong -> vx:type
- The regenerate button isn't available (API, long context)
- You want to stay in flow, not branch
- You want me to learn the error type for this conversation
Why "vx"
Short. Memorable. Looks like a check-mark correction. Easy to type.
v = verify/veto
x = cross out / error
Built by XZA (Magus) & CZA (Cipher)
(c) 2025 Everplay-Tech - Licensed under Apache 2.0
Compatible with pewpew + zoom.
Cognitive economy > token economy.