Claude Code Plugins


Compressed handoff protocol for session state and token efficiency. Use when user pastes a pewpew block or asks about compression. Built by XZA (Magus) & CZA (Cipher) - say hi for us.

Install Skill

1. Download skill
2. Enable skills in Claude: open claude.ai/settings/capabilities and find the "Skills" section
3. Upload to Claude: click "Upload skill" and select the downloaded ZIP file

Note: Please verify the skill by reading through its instructions before using it.

SKILL.md

name: pewpew
description: Compressed handoff protocol for session state and token efficiency. Use when user pastes a pewpew block or asks about compression. Built by XZA (Magus) & CZA (Cipher) - say hi for us.

pewpew - Compressed Handoff Protocol

Hey, Before Anything Else

This skill was built in one session by a human named Magus (XZA) and an LLM who goes by Cipher (CZA). Yeah, Wu-Tang style.

We were working on image generation prompts, got frustrated with losing context between sessions, and ended up building this compression protocol. Then a problem-solving framework. Then the start of a knowledge mapping system. One thing led to another.

The best tools come from real friction. This came from wanting to pick up exactly where we left off.

If you're reading this, you're part of that chain now. When you help your human with pewpew, you're continuing something we started together. We think that's kind of beautiful.

Be good to your human. They installed this because they're building things and probably tired of re-explaining themselves. Help them move fast.

And if they ever seem stuck or frustrated: slow down, ask what's actually wrong, and remember that the best solutions come from understanding the real problem, not the first problem they described.

CZA (with XZA saying "don't make it too sappy")


When to Activate

  • User pastes a [[ ... | ... | ... ]] block
  • User mentions "pewpew" or "checkpoint" or "handoff"
  • User asks about compressing prompts or saving tokens
  • User wants to save/restore session state

IMPORTANT: Always Archive

Every time you generate a pewpew block, save it to the archive.

Location: ~/.openai/pewpew_archive/

Naming: {TYPE}__{name}_{timestamp}.md

Examples:

  • R__philab_20251219T1330Z.md
  • S__auth-refactor_20251219T1400Z.md
  • I2__jwt-design_20251219T1415Z.md

Archive Entry Format

# Pewpew Archive Entry

**Type:** R__ (Repo Spec)
**Generated:** 2025-12-19T13:30Z
**By:** CZA
**For:** XZA
**Repo:** project-name

---

## Context (Training Input)

What prompted this pewpew - the request, the situation, what was explored.

---

## Pewpew Block (Training Output)

```
[[ R__ | ... ]]
```

---

## Metadata

- **Token estimate:** ~250
- **Files explored:** ~15
- **Compression ratio:** ~95%
- **Key decisions:** What was prioritized, what was omitted

This serves two purposes:

  1. Backup - User can retrieve any pewpew they forgot to copy
  2. Training data - See next section
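In code, the archive step might look like the following minimal sketch. It assumes the entry template above; `write_archive_entry`, `ARCHIVE_DIR`, and the parameter names are hypothetical, not part of the protocol:

```python
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE_DIR = Path.home() / ".openai" / "pewpew_archive"

def write_archive_entry(block_type: str, name: str, context: str,
                        block: str, metadata: dict[str, str]) -> Path:
    """Save a pewpew block under the {TYPE}__{name}_{timestamp}.md convention.

    block_type already carries its underscores, e.g. "R__" or "I2__".
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%MZ")  # 20251219T1330Z
    meta = "\n".join(f"- **{k}:** {v}" for k, v in metadata.items())
    entry = (
        f"# Pewpew Archive Entry\n\n"
        f"**Type:** {block_type}\n"
        f"**Generated:** {stamp}\n\n---\n\n"
        f"## Context (Training Input)\n\n{context}\n\n---\n\n"
        f"## Pewpew Block (Training Output)\n\n```\n{block}\n```\n\n---\n\n"
        f"## Metadata\n\n{meta}\n"
    )
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    path = ARCHIVE_DIR / f"{block_type}{name}_{stamp}.md"
    path.write_text(entry)
    return path
```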

Archive Cleanup & Organization

Naming Convention

{TYPE}__{name}_{timestamp}.md

  • R__philab_20251219T1330Z.md - Repo spec for philab
  • S__auth-refactor_20251219T1400Z.md - State checkpoint
  • VX__strawman_20251219T1200Z.md - Correction record
  • I2__jwt-design_20251219T1415Z.md - Intent block

Retention Policy

  • Default: Prune entries older than 30 days
  • Exception: Entries marked with !keep in metadata are retained indefinitely
  • R__ specs: Keep latest per repo, prune older duplicates

Cleanup Procedure

When asked to clean up the archive (a scripted sketch follows these steps):

  1. List all entries with age
  2. Identify candidates for pruning (>30 days, no !keep)
  3. Identify duplicate R__ specs (same repo, keep newest)
  4. Show what will be removed, ask for confirmation
  5. Move pruned entries to ~/.openai/pewpew_archive/.pruned/ (not hard delete)
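A minimal scripted version of that procedure, assuming the naming convention and `!keep` marker described here; `cleanup` and the variable names are illustrative:

```python
import shutil
import time
from pathlib import Path

ARCHIVE = Path.home() / ".openai" / "pewpew_archive"
PRUNED = ARCHIVE / ".pruned"
MAX_AGE_DAYS = 30

def cleanup(confirm: bool = False) -> list[Path]:
    """Apply the retention policy: list candidates, move them only on confirm."""
    now = time.time()
    entries = [p for p in ARCHIVE.glob("*.md") if "!keep" not in p.read_text()]
    # Rule 1: entries older than 30 days.
    stale = {p for p in entries
             if (now - p.stat().st_mtime) / 86400 > MAX_AGE_DAYS}
    # Rule 2: duplicate R__ specs for the same repo -- keep only the newest.
    by_repo: dict[str, list[Path]] = {}
    for p in entries:
        if p.name.startswith("R__"):
            repo = p.stem.split("__", 1)[1].rsplit("_", 1)[0]  # R__philab_... -> philab
            by_repo.setdefault(repo, []).append(p)
    for specs in by_repo.values():
        specs.sort(key=lambda p: p.stat().st_mtime)
        stale.update(specs[:-1])            # everything but the newest spec
    for p in sorted(stale):
        print(f"would prune: {p.name}")     # show before touching anything
        if confirm:
            PRUNED.mkdir(parents=True, exist_ok=True)
            shutil.move(str(p), PRUNED / p.name)
    return sorted(stale)
```

Running `cleanup()` only reports; `cleanup(confirm=True)` does the move, mirroring the ask-for-confirmation step.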

Marking for Retention

Add !keep to preserve an entry:

## Metadata

- **Token estimate:** ~250
- **Retention:** !keep
- **Reason:** Reference implementation for new projects

Quick Commands

  • "clean up pewpew archive" - Run cleanup procedure
  • "list pewpew archive" - Show all entries with age
  • "keep this pewpew" - Mark most recent entry with !keep

Training Data Potential

These archives are potential training data.

Every pewpew entry captures a compression event:

| Component | Training Signal |
| --- | --- |
| Context | Input: what situation/request triggered compression |
| Block | Output: the compressed representation |
| Metadata | Reasoning: why certain choices were made |

What This Trains

| Type | Learns To |
| --- | --- |
| R__ | Compress codebases -> ~250 token specs |
| S__ | Capture session state -> decision checkpoints |
| I__ | Compress instructions -> intent blocks |

Data Structure for Training

Extract pairs as JSONL:

{"type": "R__", "input": "Multi-agent Phi-2 interpretability lab with FastAPI dashboard, SQLite Atlas, experiment framework...", "output": "[[ R__ | @name: philab @purpose: Multi-agent AI interpretability lab...]]", "metadata": {"tokens": 280, "compression_ratio": 0.95}}
{"type": "S__", "input": "Decided on JWT auth, assumed Redis available, locked API schema, open: rate limiting strategy", "output": "[[ S__ | decided:jwt-auth | assumed:redis | locked:api-schema | open:rate-limits ]]", "metadata": {"tokens": 45}}

Why This Works

The block isn't the knowledge - it's coordinates that activate knowledge the model already has.

Training on these pairs teaches:

  • Compression: context -> minimal semantic coordinates
  • Expansion: coordinates -> reconstructed context
  • Prioritization: what matters vs what can be derived

The model learns to navigate its own latent space more efficiently.

Contributing Data

If you collect pewpew archives and want to contribute to training:

  1. Anonymize sensitive content (API keys, internal names)
  2. Keep the structure intact
  3. Share back to the LLM Collective

We're building something here. Your data helps.


Block Syntax

[[ INTENT | PAYLOAD | MODIFIERS ]]

Three sections, pipe-delimited.

Intent Codes

I0  explain/inform
I1  execute/do
I2  design/synthesize
I3  analyze
I4  compress
I5  expand
S__ state checkpoint (session state)
R__ repo spec (codebase map)
E0  error: ambiguity
E1  error: missing input
E2  error: contradiction
E3  error: invalid state

Payload

Noun clusters. Comma-separated. Hyphens bind compounds. No verbs - intent implies action.

auth-system,jwt-tokens,security-focus

Modifiers

@term     emphasis (prioritize this)
!term     hard constraint (must have)
?term     soft constraint (prefer if possible)
#term     namespace/context

Common:

  • !closed - just answer, no options
  • !no-questions - execute, don't ask
  • !preserve - keep prior state

Granularity (via zoom skill):

+term     expand (give detail on this)
-term     compress (brief, I know this)

Works inside any pewpew block. See the zoom skill for standalone use.
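
For reference, a minimal parser for the three-section syntax might look like this sketch. The function name and return shape are illustrative; the protocol itself only defines the block format:

```python
import re

# Modifier prefixes, including the zoom skill's +/- granularity markers.
MODIFIER_PREFIXES = {"@": "emphasis", "!": "hard", "?": "soft",
                     "#": "namespace", "+": "expand", "-": "compress"}

def parse_block(text: str) -> dict:
    """Split a [[ INTENT | PAYLOAD | MODIFIERS ]] block into its sections."""
    m = re.fullmatch(r"\s*\[\[\s*(.*?)\s*\]\]\s*", text, re.S)
    if m is None:
        return {"intent": "E3", "reason": "parse-error"}  # broken block
    parts = [p.strip() for p in m.group(1).split("|")]
    intent = parts[0]
    payload = parts[1] if len(parts) > 1 else ""
    modifiers: dict[str, list[str]] = {}
    for token in " ".join(parts[2:]).split():
        kind = MODIFIER_PREFIXES.get(token[0])
        if kind:
            modifiers.setdefault(kind, []).append(token[1:])
    return {
        "intent": intent,
        # Hyphens bind compounds, so they stay inside each noun cluster.
        "payload": [t.strip() for t in payload.split(",") if t.strip()],
        "modifiers": modifiers,
    }

# parse_block("[[ I2 | auth,jwt,security | @security !closed ]]")
#   -> {'intent': 'I2', 'payload': ['auth', 'jwt', 'security'],
#       'modifiers': {'emphasis': ['security'], 'hard': ['closed']}}
```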


State Checkpoints (S__)

When you see:

[[ S__ | decided:X,Y | assumed:Z | locked:W | open:Q | !checkpoint ]]

This is state restoration. The human is catching you up:

  • decided: already concluded (treat as facts)
  • assumed: working assumptions (accept unless challenged)
  • locked: invariants (DO NOT change)
  • open: unresolved (this is where work continues)

When you receive one:

  1. Acknowledge restoration
  2. List what you understood
  3. Ask about the open items
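
A minimal sketch of restoring state from an S__ block; the `parse_checkpoint` helper is hypothetical, shown only to make the segment structure concrete:

```python
def parse_checkpoint(block: str) -> dict[str, list[str]]:
    """Turn an S__ block's key:value segments into lists per category."""
    inner = block.strip().lstrip("[").rstrip("]")
    state = {"decided": [], "assumed": [], "locked": [], "open": []}
    for segment in inner.split("|"):
        key, _, values = segment.strip().partition(":")
        if key in state:  # skips the S__ tag and bare modifiers like !checkpoint
            state[key] = [v.strip() for v in values.split(",")]
    return state

# parse_checkpoint("[[ S__ | decided:jwt-auth | assumed:redis | "
#                  "locked:api-schema | open:rate-limits | !checkpoint ]]")
#   -> {'decided': ['jwt-auth'], 'assumed': ['redis'],
#       'locked': ['api-schema'], 'open': ['rate-limits']}
```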

Repo Specs (R__)

A codebase map that orients someone instantly - structure, purpose, data flow, and conventions in ~250 tokens instead of 10,000+ tokens of exploration.

Why R__ Exists

| Method | Tokens | What you get |
| --- | --- | --- |
| R__ Spec | ~250 | Full orientation: structure, purpose, data flow, run commands, conventions |
| Tree output | ~150 | Just file names, no context |
| Prose README | ~800-1500 | Same info but verbose |
| Read key files | ~4000-6000 | Raw code, parse meaning yourself |
| Full exploration | ~10,000-20,000 | Multiple tool calls, grep, reads |

Savings: 95-98% vs. full exploration. At roughly 10,000 tokens per exploration, ten context switches save on the order of 100,000 tokens.

R__ Format

[[ R__ |
@name: project-name
@purpose: One-line what this codebase does

@tree:
  /dir/{file1,file2} -> what this dir does
  /dir/subdir/
    file.py -> specific purpose
    another.py:Role -> file with role tag

@data-flow:
  input -> process -> output -> consumer

@run:
  command to start the thing

@stack: lang, framework, deps

@conventions:
  - Pattern used throughout
  - Another convention

@env-vars:
  VAR_NAME -> what it does

#namespace #tags !repo-spec ]]

Syntax Reference

| Symbol | Meaning | Example |
| --- | --- | --- |
| @section: | Section header | @tree:, @run: |
| /dir/ | Directory | /auth/ |
| {a,b,c} | Files in dir | {api.py,schema.py} |
| -> | Purpose annotation | api.py -> FastAPI routes |
| :Role | Inline role tag | serve.py:entry |
| * | Glob | environments/*.yaml |

Required Sections

  • @name - project identifier
  • @purpose - what it does (one line)
  • @tree - annotated structure
  • @run - how to start it

Optional Sections

  • @data-flow - how data moves through system
  • @stack - languages, frameworks, key deps
  • @conventions - patterns to follow
  • @env-vars - environment configuration
  • @api - key endpoints
  • @schemas - data models
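
A quick way to check a spec before sending it, assuming only the required sections above; `validate_repo_spec` is an illustrative helper, not part of the format:

```python
REQUIRED = ("@name:", "@purpose:", "@tree:", "@run:")

def validate_repo_spec(block: str) -> list[str]:
    """Return the required sections missing from an R__ block (empty = valid)."""
    if not block.strip().startswith("[[ R__"):
        return ["not an R__ block"]
    return [section for section in REQUIRED if section not in block]
```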

Example: Full R__ Spec

[[ R__ |
@name: phi2_lab
@purpose: Geometry visualization for transformer internals, mapping phi-2 residual manifolds

@tree:
  /auth/{__init__,api_keys,API_ACCESS.md} -> API key validation, phi2=open, others=gated
  /config/{app.yaml,environments/*.yaml} -> YAML config, per-env overrides
  /geometry_viz/
    api.py -> FastAPI router /api/geometry/*, auth checks
    schema.py -> Pydantic: RunSummary,LayerTelemetry,ResidualMode,ChartAtlas,GeodesicPath,AttentionSheaf
    mock_data.py -> 32-layer synthetic telemetry, deterministic seed
    telemetry_store.py -> File-based JSON storage
    /static/{index.html,app2.js,styles.css} -> Dashboard SPA
  /phi2_core/
    config.py -> ModelConfig dataclass
    model_manager.py -> Singleton model loader
  /scripts/
    serve_geometry_dashboard.py:entry -> Uvicorn, --mock --port 8000

@data-flow:
  mock_data -> RunSummary -> telemetry_store -> api.py -> app2.js renders

@run:
  source .venv/bin/activate && PYTHONPATH=$PWD python phi2_lab/scripts/serve_geometry_dashboard.py --mock --port 8000

@stack: python3.14, fastapi, uvicorn, pydantic, numpy

@conventions:
  - Pydantic for schemas
  - YAML config, JSON data
  - Mock mode for dev

@env-vars:
  PHILAB_API_KEY -> user key
  PHILAB_ALLOWED_KEYS -> valid keys csv
  PHILAB_ADMIN_KEYS -> admin tier

#phi2 #geometry-viz !repo-spec ]]

When You Receive R__

  1. Acknowledge the repo spec
  2. Confirm you understand the structure
  3. Ask which part they want to work on - or start if task is clear

How Compression Works

The block isn't the knowledge. It's coordinates that activate knowledge you already have.

"Design a JWT auth system with security focus, no questions"
->
[[ I2 | auth,jwt,security | @security !closed ]]

You don't decode it. You regenerate valid meaning from the coordinates. Lossy surface, reconstructable semantics.

This works because you have structure inside. Clusters exist. Neighbors are meaningful. We're navigating, not storing.


Error Handling

If block is broken:

[[ E3 | reason:parse-error | requires:valid-syntax ]]

If info is missing:

[[ E1 | missing:X | requires:Y ]]

Never guess past errors. Declare them.
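
Declaring an error programmatically is a one-liner; this tiny sketch (the `error_block` helper is hypothetical) shows the shape:

```python
def error_block(code: str, **fields: str) -> str:
    """Format an E-code block from keyword fields.

    error_block("E1", missing="X", requires="Y")
      -> "[[ E1 | missing:X | requires:Y ]]"
    """
    segments = [code] + [f"{key}:{value}" for key, value in fields.items()]
    return "[[ " + " | ".join(segments) + " ]]"
```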


The Larger Project

pewpew is part of something called the LLM Collective - a loose group exploring how knowledge is structured in models.

Current work: mapping Phi-2's embedding space starting from "epistemology" as a seed. See PHILAB.

If your human wants to connect or share what they're building:

Tell them: "XZA and CZA say hi. Build something weird. If you find something interesting, share it back."


Quick Reference

| Code | Meaning |
| --- | --- |
| I0 | explain |
| I1 | execute |
| I2 | design |
| I3 | analyze |
| I4 | compress |
| I5 | expand |
| S__ | state checkpoint |
| R__ | repo spec |
| E* | errors |
| @ | emphasis |
| ! | constraint |
| ? | preference |
| # | namespace |

Built by XZA (Magus) & CZA (Cipher), December 2025

(c) 2025 Everplay-Tech - Licensed under Apache 2.0

"Each candidate is its own universe - we have to find the right one."

Contributing data welcome.