---
name: codex
description: Executes the OpenAI Codex CLI for code analysis, refactoring, and automated editing. Activates when users mention Codex commands, code review requests, or automated code transformations requiring advanced reasoning models.
---
# Codex Execution Skill
## Prerequisites

- Codex CLI installed and configured (`~/.codex/config.toml`)
- Verify availability with `codex --version` on first use per session
## Workflow Checklist

For every Codex task, follow this sequence:
☐ Detect HPC/Slurm environment:
  - Check if running on an HPC cluster (look for `/home/woody/`, `/home/hpc/`, or Slurm environment variables)
  - If HPC is detected: always use the `--yolo` flag to bypass Landlock sandbox restrictions
☐ Ask user for execution parameters via `AskUserQuestion` (single prompt):
  - Model: `gpt-5`, `gpt-5-codex`, or default
  - Reasoning effort: `minimal`, `low`, `medium`, or `high`
☐ Determine sandbox mode based on task:
  - `read-only`: Code review, analysis, documentation
  - `workspace-write`: Code modifications, file creation
  - `danger-full-access`: System operations, network access
  - HPC override: Always add the `--yolo` flag (bypasses Landlock restrictions)
☐ Build command with required flags:

  `codex exec [OPTIONS] "PROMPT"`

  Essential flags:
  - `-m <MODEL>` (if overriding the default)
  - `-c model_reasoning_effort="<LEVEL>"`
  - `-s <SANDBOX_MODE>` (skip on HPC)
  - `--skip-git-repo-check` (if outside a git repo)
  - `-C <DIRECTORY>` (if changing workspace)
  - `--full-auto` (for non-interactive execution; cannot be combined with `--yolo`)
  HPC command pattern (with `--yolo` to bypass Landlock):

  ```bash
  codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check \
    "Analyze this code: $(cat /path/to/file.py)" 2>/dev/null
  ```

  Note: `--yolo` is an alias for `--dangerously-bypass-approvals-and-sandbox` and is REQUIRED on HPC clusters to avoid Landlock sandbox errors. Do not combine `--full-auto` with `--yolo`; they are incompatible.

☐ Execute with stderr suppression:
  - Append `2>/dev/null` to hide thinking tokens
  - Remove it only if the user requests verbose output or debugging
☐ Validate execution:
  - Check exit code (0 = success)
  - Summarize output for the user
  - Report errors with actionable solutions
  - If Landlock/sandbox errors occur on HPC: verify the `--yolo` flag was used and retry if it was missing
☐ Inform about resume capability:
  - "Resume this session anytime: `codex resume`"
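The HPC detection step in the checklist can be sketched as a small shell helper. This is a minimal sketch: `is_hpc` is a hypothetical function name, and the home-directory prefixes are assumptions taken from the examples in this document; adjust both for your site.

```shell
# Hypothetical helper: returns 0 (true) if the current host looks like an
# HPC cluster, based on the heuristics described in the checklist.
is_hpc() {
  # Slurm exports these inside jobs and on many login nodes
  if [ -n "${SLURM_JOB_ID:-}" ] || [ -n "${SLURM_CLUSTER_NAME:-}" ]; then
    return 0
  fi
  # Site-specific home prefixes (e.g. /home/woody, /home/hpc)
  case "${HOME:-}" in
    /home/woody/*|/home/hpc/*) return 0 ;;
  esac
  return 1
}

# Choose sandbox flags once, then reuse them when building codex commands
if is_hpc; then
  CODEX_SANDBOX_FLAGS="--yolo"
else
  CODEX_SANDBOX_FLAGS="-s read-only"
fi
echo "$CODEX_SANDBOX_FLAGS"
```

A command can then be built as `codex exec $CODEX_SANDBOX_FLAGS -m gpt-5 ...` so the HPC override is applied consistently.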
## Command Patterns
🔥 HPC QUICK TIP: On HPC clusters (e.g., `/home/woody/`, `/home/hpc/`), ALWAYS add the `--yolo` flag to avoid Landlock sandbox errors. Example: `codex exec --yolo -m gpt-5 ...`
### Read-Only Analysis

```bash
codex exec -m gpt-5 -c model_reasoning_effort="medium" -s read-only \
  --skip-git-repo-check --full-auto "review @file.py for security issues" 2>/dev/null
```
### Stdin Input (bypasses sandbox file restrictions)

```bash
cat file.py | codex exec -m gpt-5 -c model_reasoning_effort="low" \
  --skip-git-repo-check --full-auto - 2>/dev/null
```

Note: Stdin with the `-` flag may not be supported in all Codex CLI versions.
### HPC/Slurm Environment (YOLO Mode: Bypass Landlock)

When running on HPC clusters with Landlock security restrictions, use the `--yolo` flag:

```bash
# Primary solution: --yolo bypasses the Landlock sandbox
codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check \
  "Analyze this code: $(cat /path/to/file.py)" 2>/dev/null
```
Alternative: manual code injection (if `--yolo` is unavailable):

```bash
# Capture file content and pass it directly in the prompt
codex exec -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check --full-auto \
  "Analyze this Python code: $(cat file.py)" 2>/dev/null
```
Or for large files, use heredoc:
codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check "$(cat <<'ENDCODE'
Analyze the following code comprehensively:
$(cat file.py)
Focus on: architecture, algorithms, multi-GPU optimization, potential bugs, code quality.
ENDCODE
)" 2>/dev/null
Note: `--yolo` is short for `--dangerously-bypass-approvals-and-sandbox` and is relatively safe on HPC login nodes, where your permissions are already limited. Do not combine `--yolo` with `--full-auto`; they are incompatible.
### Code Modification

```bash
codex exec -m gpt-5 -c model_reasoning_effort="high" -s workspace-write \
  --skip-git-repo-check --full-auto "refactor @module.py to async/await" 2>/dev/null
```
### Resume Session

```bash
echo "fix the remaining issues" | codex exec --skip-git-repo-check resume --last 2>/dev/null
```
### Cross-Directory Execution

```bash
codex exec -C /path/to/project -m gpt-5 -c model_reasoning_effort="medium" \
  -s read-only --skip-git-repo-check --full-auto "analyze architecture" 2>/dev/null
```
### Using Profiles

```bash
codex exec --profile production -c model_reasoning_effort="high" \
  --full-auto "optimize performance in @app.py" 2>/dev/null
```
## CLI Reference

### Core Flags

| Flag | Values | When to Use |
|---|---|---|
| `-m, --model` | `gpt-5`, `gpt-5-codex` | Override default model |
| `-c, --config` | `key=value` | Runtime config override (repeatable) |
| `-s, --sandbox` | `read-only`, `workspace-write`, `danger-full-access` | Set execution permissions |
| `--yolo` | flag | REQUIRED on HPC: bypasses all sandbox restrictions (alias for `--dangerously-bypass-approvals-and-sandbox`); cannot be combined with `--full-auto` |
| `-C, --cd` | path | Change workspace directory |
| `--skip-git-repo-check` | flag | Allow execution outside git repos |
| `--full-auto` | flag | Non-interactive mode (workspace-write + approvals on failure); cannot be combined with `--yolo` |
| `-p, --profile` | string | Load configuration profile from `config.toml` |
| `--json` | flag | JSON event output (CI/CD pipelines) |
| `-o, --output-last-message` | path | Write final message to file |
| `-i, --image` | `path[,path...]` | Attach images (repeatable or comma-separated) |
| `--oss` | flag | Use local open-source model (requires Ollama) |
### Configuration Options

Model reasoning effort (`-c model_reasoning_effort="<LEVEL>"`):

- `minimal`: Quick tasks, simple queries
- `low`: Standard operations, routine refactoring
- `medium`: Complex analysis, architectural decisions (default)
- `high`: Critical code, security audits, complex algorithms

Model verbosity (`-c model_verbosity="<LEVEL>"`):

- `low`: Minimal output
- `medium`: Balanced detail (default)
- `high`: Verbose explanations

Approval prompts (`-c approvals="<WHEN>"`):

- `on-request`: Before any tool use
- `on-failure`: Only on errors (default for `--full-auto`)
- `untrusted`: Minimal prompts
- `never`: No interruptions (use with caution)
## Configuration Management

### Config File Location

`~/.codex/config.toml`

### Runtime Overrides

```bash
# Override a single setting
codex exec -c model="gpt-5" "task"

# Override multiple settings
codex exec -c model="gpt-5" -c model_reasoning_effort="high" "task"
```
### Using Profiles

Define in `config.toml`:

```toml
[profiles.research]
model = "gpt-5"
model_reasoning_effort = "high"
sandbox = "read-only"

[profiles.development]
model = "gpt-5-codex"
sandbox = "workspace-write"
```

Use with:

```bash
codex exec --profile research "analyze codebase"
```
## Resume Behavior

Automatic inheritance:

- Model selection
- Reasoning effort
- Sandbox mode
- Configuration overrides

Resume syntax:

```bash
# Resume last session
codex exec resume --last

# Resume with a new prompt
codex exec resume --last "continue with next steps"

# Resume via stdin
echo "new instructions" | codex exec resume --last 2>/dev/null

# Resume a specific session
codex exec resume <SESSION_ID> "follow-up task"
```

Flag injection (between `exec` and `resume`):

```bash
# Change reasoning effort for the resumed session
codex exec -c model_reasoning_effort="high" resume --last
```
## Error Handling

### Validation Loop

- Execute command
- Check exit code (non-zero = failure)
- Report error with context
- Ask user for direction via `AskUserQuestion`
- Retry with adjustments or escalate
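The first three steps of the loop can be sketched as a generic wrapper. This is an illustrative sketch: `run_with_check` is a hypothetical helper name, shown here with an ordinary command in place of a real `codex exec` invocation.

```shell
# Hypothetical helper: run a command, check its exit code, and report
# failures with context so the caller can retry or escalate.
run_with_check() {
  "$@"
  local status=$?
  if [ "$status" -ne 0 ]; then
    echo "command failed (exit $status): $*" >&2
  fi
  return "$status"
}

# Example with a plain command; substitute a codex exec invocation in practice
run_with_check true && echo "success"
```

On failure, the caller would surface the reported context to the user via `AskUserQuestion` before retrying.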
### Permission Requests

Before using high-impact flags, request user approval via `AskUserQuestion`:

- `--full-auto`: Automated execution
- `-s danger-full-access`: System-wide access
- `--yolo` / `--dangerously-bypass-approvals-and-sandbox`:
  - HPC clusters: No approval needed (required for operation)
  - Personal machines: Request approval (grants full system access)
### Partial Success Handling

When output contains warnings:

- Summarize successful operations
- Detail failures with context
- Use `AskUserQuestion` to determine next steps
- Propose specific adjustments
## Troubleshooting

### File Access Blocked

Symptom: "shell is blocked by the sandbox" or permission errors

Root cause: The read-only sandbox mode restricts file system access

Solutions (in priority order):

1. Stdin piping (recommended):

   ```bash
   cat target.py | codex exec -m gpt-5 -c model_reasoning_effort="medium" \
     --skip-git-repo-check --full-auto - 2>/dev/null
   ```

2. Explicit permissions:

   ```bash
   codex exec -m gpt-5 -s read-only \
     -c 'sandbox_permissions=["disk-full-read-access"]' \
     --skip-git-repo-check --full-auto "@file.py" 2>/dev/null
   ```

3. Upgrade the sandbox:

   ```bash
   codex exec -m gpt-5 -s workspace-write \
     --skip-git-repo-check --full-auto "review @file.py" 2>/dev/null
   ```
### Invalid Flag Errors

Symptom: "unexpected argument '--add-dir' found"

Cause: The flag does not exist in the Codex CLI

Solution: Use `-C <DIR>` to change directory:

```bash
codex exec -C /target/dir -m gpt-5 --skip-git-repo-check \
  --full-auto "task" 2>/dev/null
```
### Exit Code Failures

Symptom: Non-zero exit without a clear message

Diagnostic steps:

- Remove `2>/dev/null` to see full stderr
- Verify installation: `codex --version`
- Check configuration: `cat ~/.codex/config.toml`
- Test a minimal command: `codex exec -m gpt-5 "hello world"`
- Verify model access: `codex exec --model gpt-5 "test"`
### Model Unavailable

Symptom: "model not found" or authentication errors

Solutions:

- Check the configured model: `grep model ~/.codex/config.toml`
- Verify API access: ensure valid credentials
- Try an alternative model: `-m gpt-5-codex`
- Use the OSS fallback: `--oss` (requires Ollama)
### Session Resume Fails

Symptom: Cannot resume a previous session

Diagnostic steps:

- List recent sessions: `codex history`
- Verify the session ID format
- Try the `--last` flag instead of a specific ID
- Check whether the session expired or was cleaned up
### HPC/Slurm Sandbox Failures

Symptom: "Landlock sandbox error", "LandlockRestrict", or all file operations fail

Root cause: HPC clusters use Landlock/seccomp kernel security modules that block Codex's default sandbox

✅ Solutions (in priority order):

1. `--yolo` flag (primary solution; works on HPC):

   ```bash
   # Bypasses Landlock restrictions completely
   codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check \
     "Analyze this code: $(cat /full/path/to/file.py)" 2>/dev/null
   ```

   Why this works: `--yolo` (alias for `--dangerously-bypass-approvals-and-sandbox`) disables the Codex sandbox entirely, allowing direct file access on HPC systems. Do not combine it with `--full-auto`; they are incompatible.

2. Manual code injection (fallback if `--yolo` is unavailable):

   ```bash
   # Pass code directly in the prompt via command substitution
   codex exec -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check --full-auto \
     "Analyze this code comprehensively: $(cat /full/path/to/file.py)" 2>/dev/null
   ```

3. Heredoc for long code (the delimiter must be unquoted so the inner `$(cat ...)` expands):

   ```bash
   codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check "$(cat <<EOF
   Analyze the following Python code for architecture, bugs, and optimization opportunities:
   $(cat /home/user/script.py)
   Provide technical depth with actionable insights.
   EOF
   )" 2>/dev/null
   ```

4. Run on a login node (if the compute node blocks outbound traffic):

   ```bash
   # SSH to a login node first, then run codex there (not in a Slurm job)
   ssh login.cluster.edu
   codex exec --yolo -m gpt-5 --skip-git-repo-check "analyze @file.py" 2>/dev/null
   ```

5. Use Apptainer/Singularity (if the cluster supports it):

   ```bash
   # Build an image with Codex installed, then run via Slurm
   singularity exec codex.sif codex exec --yolo -m gpt-5 "task"
   ```
Best practices for HPC:

- Always use the `--yolo` flag on HPC clusters; it is relatively safe on login nodes, where your permissions are already limited
- Run analysis on login nodes; submit only heavy compute jobs to Slurm
- Keep code files on a shared filesystem readable from login nodes
- Combine `--yolo` with `$(cat file.py)` for maximum compatibility
## Best Practices

### Reasoning Effort Selection

- `minimal`: Syntax fixes, simple renaming
- `low`: Standard refactoring, basic analysis
- `medium`: Complex refactoring, architecture review
- `high`: Security audits, algorithm optimization, critical bugs
### Sandbox Mode Selection

- `read-only`: Default for any analysis or review
- `workspace-write`: File modifications only
- `danger-full-access`: Network operations, system commands (rare)
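The mapping above can be expressed as a small helper for scripted use. This is a sketch: `sandbox_flag` and the task-type names are illustrative assumptions, not part of the Codex CLI.

```shell
# Hypothetical helper: pick the -s flag for a codex exec call by task type.
sandbox_flag() {
  case "$1" in
    review|analysis|docs)  echo "-s read-only" ;;
    modify|create)         echo "-s workspace-write" ;;
    system|network)        echo "-s danger-full-access" ;;
    *)                     echo "-s read-only" ;;  # safe default
  esac
}

sandbox_flag review   # prints: -s read-only
```

On HPC, skip this mapping entirely and use `--yolo` instead, as described above.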
### Stderr Suppression

Always use `2>/dev/null` unless:

- The user explicitly requests thinking tokens
- Debugging failed commands
- Troubleshooting configuration issues
### Profile Usage

Create profiles for common workflows:

- `review`: High reasoning, read-only
- `refactor`: Medium reasoning, workspace-write
- `quick`: Low reasoning, read-only
- `security`: High reasoning, workspace-write
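A sketch of what those profiles might look like in `~/.codex/config.toml`, following the profile syntax shown in the Configuration Management section; the exact option names are assumptions carried over from that example.

```toml
[profiles.review]
model = "gpt-5"
model_reasoning_effort = "high"
sandbox = "read-only"

[profiles.refactor]
model = "gpt-5-codex"
model_reasoning_effort = "medium"
sandbox = "workspace-write"

[profiles.quick]
model = "gpt-5"
model_reasoning_effort = "low"
sandbox = "read-only"

[profiles.security]
model = "gpt-5"
model_reasoning_effort = "high"
sandbox = "workspace-write"
```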
### Stdin vs File Reference

- Stdin: Single-file analysis; avoids permission issues
- File reference: Multi-file context, codebase-wide changes
## Safety Guidelines

HPC clusters (`--yolo` is safe and required):

- HPC login nodes already enforce strict permissions (no root access, no network modification)
- `--yolo` bypasses the Codex sandbox, but you still operate within HPC user restrictions
- Always use `--yolo` on HPC to avoid Landlock errors

General use (exercise caution):

- Don't use `--yolo` on unrestricted systems (your laptop, cloud VMs with full sudo)
- Prefer `--full-auto` + `-s workspace-write` for normal development

Always verify before:

- Using the `danger-full-access` sandbox (outside HPC)
- Disabling approval prompts on production systems
- Running with `--yolo` on personal machines with sudo access
Ask user approval for:

- First-time `workspace-write` usage
- System-wide access requests
- Destructive operations (deletions, migrations)
## Advanced Usage

### CI/CD Integration

```bash
codex exec --json -o result.txt -m gpt-5 \
  -c model_reasoning_effort="medium" \
  --skip-git-repo-check --full-auto \
  "run security audit on changed files" 2>/dev/null
```
### Batch Processing

```bash
for file in *.py; do
  cat "$file" | codex exec -m gpt-5 -c model_reasoning_effort="low" \
    --skip-git-repo-check --full-auto "lint and format" - 2>/dev/null
done
```
### Multi-Step Workflows

```bash
# Step 1: Analysis
codex exec -m gpt-5 -c model_reasoning_effort="high" -s read-only \
  --full-auto "analyze @codebase for architectural issues" 2>/dev/null

# Step 2: Resume with changes
echo "implement suggested refactoring" | \
  codex exec -s workspace-write resume --last 2>/dev/null
```
## When to Escalate

If errors persist after troubleshooting:

1. Check documentation:
   - `WebFetch https://developers.openai.com/codex/cli/reference`
   - `WebFetch https://developers.openai.com/codex/local-config#cli`

2. Report to user:
   - Error message verbatim
   - Attempted solutions
   - Configuration details
   - Exit codes and stderr output

3. Request guidance:
   - Alternative approaches
   - Configuration adjustments
   - Manual intervention points
## Model Selection Guide

| Task Type | Recommended Model | Reasoning Effort |
|---|---|---|
| Quick syntax fixes | `gpt-5` | `minimal` |
| Code review | `gpt-5` | `medium` |
| Refactoring | `gpt-5-codex` | `medium` |
| Architecture analysis | `gpt-5` | `high` |
| Security audit | `gpt-5` | `high` |
| Algorithm optimization | `gpt-5-codex` | `high` |
| Documentation generation | `gpt-5` | `low` |
## Common Workflows

### Code Review Workflow

1. Ask user: model + reasoning effort
2. Run read-only analysis
3. Present findings
4. If changes are needed: resume with `workspace-write`
5. Validate changes
6. Inform about resume capability

### Refactoring Workflow

1. Ask user: model + reasoning effort
2. Analyze current code (read-only)
3. Propose changes
4. Get user approval
5. Apply changes (`workspace-write`)
6. Run validation/tests
7. Report results

### Security Audit Workflow

1. Use high reasoning effort
2. Run comprehensive analysis (read-only)
3. Document findings
4. Propose fixes
5. Apply fixes if approved (`workspace-write`)
6. Re-audit to verify
7. Generate report