---
name: fetch-remote
description: Download files and directories from remote machines via SSH/SCP or HTTP/HTTPS. Use when user mentions downloading from remote server, scp, rsync, fetch files from remote host, or getting files from remote machine. Supports progress display, resume, and auto-extraction.
allowed-tools: Bash, Read, Write, Glob, AskUserQuestion
---
# fetch-remote - Remote File Download Skill
Intelligent remote file and directory download tool supporting SSH/SCP, rsync, and HTTP/HTTPS with automatic protocol detection.
## When to Use This Skill
- User wants to download file(s) from a remote server
- User mentions: "scp from server", "download from remote", "fetch file from host"
- User provides a remote path like `user@host:/path/to/file`
- User wants to sync files from remote machine
- User needs to download and extract archives from remote servers
## Core Capabilities

### Auto-detect Protocol

- `user@host:/path` → use SCP/rsync
- `http://...` or `https://...` → use wget/curl
- Intelligent fallback between methods
### File & Directory Support
- Single files
- Entire directories (recursive)
- Multiple files with wildcards
- Preserve permissions and timestamps
### Smart Features
- Progress display with transfer speed
- Resume interrupted downloads
- Auto-extract archives (.tar.gz, .zip, etc.)
- Verify file integrity after download
### SSH Authentication

- Use SSH keys from `~/.ssh/` (preferred)
- Support custom SSH key paths
- Use SSH config hosts
- Handle SSH agent
## Instructions

### 1. Parse Remote Source

When the user requests a download from a remote source:
**Detect source format:**
- SSH format: `user@host:/path/to/file` or `host:/path` (uses SSH config)
- HTTP format: `http://example.com/file` or `https://...`
- Auto-detect based on the pattern

**Extract components:**
- For SSH: username, hostname, remote path
- For HTTP: full URL
- Destination path (optional, defaults to the current directory)

**Check if directory or file:**
- Assume a directory if the path ends with `/`
- Use `ssh user@host "test -d /path"` to verify
- Ask the user if ambiguous
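The parsing step above can be sketched as a small shell helper; `parse_remote_source` is an illustrative name, not an existing command:

```shell
#!/usr/bin/env bash
# Sketch: classify a source string and split SSH sources into components.
parse_remote_source() {
  local src="$1"
  case "$src" in
    http://*|https://*)
      echo "proto=http url=$src" ;;
    *@*:*)
      local user="${src%%@*}" rest="${src#*@}"
      echo "proto=ssh user=$user host=${rest%%:*} path=${rest#*:}" ;;
    *:*)
      # Bare "host:/path" form: the username comes from ~/.ssh/config.
      echo "proto=ssh user= host=${src%%:*} path=${src#*:}" ;;
    *)
      echo "proto=local path=$src" ;;
  esac
}

parse_remote_source "deploy@prod:/var/log/app.log"
# → proto=ssh user=deploy host=prod path=/var/log/app.log
```

A path with no `@` and no `:` is treated as local, which doubles as a sanity check before any network call.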
### 2. Determine Download Method
**Decision tree:**

If HTTP/HTTPS URL:
- Use `wget --continue --progress=bar` (if available)
- Fall back to `curl -L -C - -#` (if wget is not available)
- Show download progress

If SSH source (`user@host:path`):
- For small files (<100 MB): use `scp -v`
- For large files/directories: use `rsync -avz --progress`
- For resume capability: prefer `rsync` with `--partial`
Check SSH connectivity first:

```bash
ssh -o ConnectTimeout=5 user@host "echo connected" 2>/dev/null
```
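The decision tree can be condensed into a helper; the 100 MB threshold mirrors the rule above, and `choose_method` is an illustrative name:

```shell
# Sketch: pick a transfer tool from the source string and an optional
# size hint (in MB). Falls back to curl when wget is absent.
choose_method() {
  local src="$1" size_mb="${2:-0}"
  case "$src" in
    http://*|https://*)
      if command -v wget >/dev/null 2>&1; then echo wget; else echo curl; fi ;;
    *)
      # SSH source: rsync for large transfers or when resume matters.
      if [ "$size_mb" -ge 100 ]; then echo rsync; else echo scp; fi ;;
  esac
}
```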
### 3. Execute Download

**For SSH/SCP downloads:**

Single file (small):

```bash
scp -v user@host:/remote/path/file.txt ./local/path/
```

Single file (large, with resume):

```bash
rsync -avz --progress --partial user@host:/remote/path/file.txt ./local/path/
```

Directory (recursive):

```bash
rsync -avz --progress user@host:/remote/path/directory/ ./local/path/directory/
```

With a custom SSH key:

```bash
scp -i ~/.ssh/custom_key user@host:/path/file ./
rsync -avz -e "ssh -i ~/.ssh/custom_key" user@host:/path/ ./local/
```

With an SSH config host:

```bash
# If ~/.ssh/config has an entry for "prod-server"
scp prod-server:/path/file ./
rsync -avz --progress prod-server:/path/ ./local/
```
**For HTTP/HTTPS downloads:**

Using wget (preferred):

```bash
wget --continue --progress=bar:force --show-progress \
  -O local_filename https://example.com/file.tar.gz
```

Using curl (fallback):

```bash
curl -L -C - -# -o local_filename https://example.com/file.tar.gz
```
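A sketch of the wget-then-curl fallback; the helper only prints the command it would run, so the flags can be inspected without touching the network (`fetch_http_cmd` is an illustrative name):

```shell
# Sketch: build the HTTP download command line, preferring wget and
# falling back to curl. Printing keeps the example side-effect free.
fetch_http_cmd() {
  local url="$1" out="$2"
  if command -v wget >/dev/null 2>&1; then
    echo "wget --continue --progress=bar:force --show-progress -O $out $url"
  else
    echo "curl -L -C - -# -o $out $url"
  fi
}

fetch_http_cmd "https://example.com/file.tar.gz" "file.tar.gz"
```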
### 4. Handle Download Progress

Show real-time progress:
- `rsync`: progress is built-in with `--progress`
- `scp`: use `-v` for verbose output
- `wget`: use `--progress=bar:force --show-progress`
- `curl`: use `-#` for a progress bar
Parse and display:
- File size
- Transfer speed
- ETA (estimated time)
- Percentage complete
### 5. Post-Download Actions

After a successful download:
**Verify the download:**
- Check the file exists locally
- Compare file size (if known)
- Verify checksum if available

**Auto-extract if archive:**
- Detect: `.tar.gz`, `.tgz`, `.tar.bz2`, `.zip`, `.tar.xz`
- Ask the user: "Extract archive? (yes/no)"
- If yes:

```bash
# .tar.gz, .tgz
tar -xzvf file.tar.gz
# .tar.bz2
tar -xjvf file.tar.bz2
# .zip
unzip file.zip
# .tar.xz
tar -xJvf file.tar.xz
```

**Preserve permissions (for rsync):**
- The `-a` flag preserves permissions, ownership, and timestamps
- Report if permissions changed
**Report a summary:**
- Files downloaded
- Total size
- Download time
- Average speed
- Local path
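The archive-detection step can be sketched as an extension-to-command map matching the list above; `extract_cmd` is an illustrative helper that returns nothing for unknown extensions so the caller can skip extraction:

```shell
# Sketch: map an archive file name to its extraction command.
extract_cmd() {
  case "$1" in
    *.tar.gz|*.tgz) echo "tar -xzvf $1" ;;
    *.tar.bz2)      echo "tar -xjvf $1" ;;
    *.tar.xz)       echo "tar -xJvf $1" ;;
    *.zip)          echo "unzip $1" ;;
    *)              echo "" ;;
  esac
}

extract_cmd backup.tar.gz   # → tar -xzvf backup.tar.gz
```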
### 6. Handle Interruptions & Resume

For interrupted downloads:

**Detect a partial file:**
- Check if a `.part` file exists (wget)
- Check if the destination file exists (rsync partial)

**Offer to resume:**
- "Found partial download. Resume? (yes/no)"
- If yes: add `--continue` (wget) or `--partial` (rsync)

**Resume commands:**

```bash
# wget resume
wget --continue URL

# rsync resume
rsync -avz --progress --partial user@host:/path ./

# curl resume
curl -C - -o file URL
```
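Partial-file detection might look like this sketch; the `.part` convention is wget's, the size check assumes the expected byte count is known, and `has_partial` is an illustrative name:

```shell
# Sketch: return 0 if a leftover partial download exists for $1, so the
# user can be offered a resume. $2 (optional) is the expected size in bytes.
has_partial() {
  local dest="$1" expected_bytes="${2:-0}"
  if [ -f "$dest.part" ]; then
    return 0    # wget leaves a .part file next to the destination
  fi
  if [ -f "$dest" ] && [ "$expected_bytes" -gt 0 ] && \
     [ "$(( $(wc -c < "$dest") ))" -lt "$expected_bytes" ]; then
    return 0    # smaller than expected: likely a truncated transfer
  fi
  return 1
}
```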
## SSH Authentication Handling

**Priority order:**

1. **SSH agent** (if running)
   - Check: `ssh-add -l`
   - Use automatically if keys are loaded
2. **Default SSH keys**
   - `~/.ssh/id_rsa`, `~/.ssh/id_ed25519`, `~/.ssh/id_ecdsa`
3. **SSH config file** (`~/.ssh/config`)
   - Check for a host alias
   - Use the configured IdentityFile
4. **Custom key path**
   - If the user specifies: `-i /path/to/key`
**SSH config detection:**

Check if the host is in the SSH config:

```bash
ssh -G hostname | grep "^hostname" | grep -v "^hostname hostname$"
```

If found, extract:
- Actual hostname
- Username
- Port
- IdentityFile
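Extracting those fields can be done by filtering `ssh -G` output, which prints lowercase `key value` pairs. The sample text below stands in for a real run against a configured host, and `ssh_config_field` is an illustrative helper:

```shell
# Sketch: pull one field out of `ssh -G`-style output read from stdin.
ssh_config_field() {
  awk -v key="$1" '$1 == key { print $2; exit }'
}

sample='hostname prod.example.com
user deploy
port 22
identityfile ~/.ssh/prod_key'

printf '%s\n' "$sample" | ssh_config_field user   # → deploy
```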
**Handle SSH key permissions:**

If there is a key permission error:

```bash
chmod 600 ~/.ssh/id_rsa
chmod 700 ~/.ssh
```

Suggest to the user:

```
SSH key permissions issue detected.
Run: chmod 600 ~/.ssh/your_key
```
## Smart Defaults
- Download location: Current directory
- Preserve structure: Yes (with rsync)
- Show progress: Always
- Resume on failure: Offer to user
- Extract archives: Ask user
- SSH timeout: 30 seconds
- Retry on failure: 3 times with exponential backoff
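The "3 times with exponential backoff" default might be sketched like this; `retry` is an illustrative wrapper and the 1-second base delay is an assumption:

```shell
# Sketch: run a command up to N times, doubling the delay between tries.
retry() {
  local attempts="$1"; shift
  local delay=1 i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    if [ "$i" -lt "$attempts" ]; then
      sleep "$delay"
      delay=$((delay * 2))
    fi
    i=$((i + 1))
  done
  return 1
}

# Example: retry 3 rsync -avz --progress --partial user@host:/path ./
```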
## Common Scenarios

### Scenario 1: Download Single File via SSH
User: "Download /var/log/app.log from prod-server"
Actions:
- Check if "prod-server" is in the SSH config
- Use: `scp prod-server:/var/log/app.log ./`
- Show progress
- Report: "Downloaded app.log (2.3 MB) to ./"
### Scenario 2: Download Directory with Resume
User: "Download /data/backups from user@192.168.1.100"
Actions:
- Verify it's a directory: `ssh user@192.168.1.100 "test -d /data/backups"`
- Use: `rsync -avz --progress --partial user@192.168.1.100:/data/backups/ ./backups/`
- If interrupted, offer to resume
- Report total files and size
### Scenario 3: Download from HTTP
User: "Download https://example.com/dataset.tar.gz"
Actions:
- Check if wget is available: `which wget`
- Use: `wget --continue --progress=bar:force https://example.com/dataset.tar.gz`
- After the download, ask: "Extract dataset.tar.gz?"
- If yes: `tar -xzvf dataset.tar.gz`
### Scenario 4: Batch Download
User: "Download all .log files from server:/var/logs/"
Actions:
- Use rsync with an include pattern:

```bash
rsync -avz --progress --include='*.log' --exclude='*' \
  user@server:/var/logs/ ./logs/
```

- Report the number of files downloaded
### Scenario 5: Download with Custom SSH Key
User: "Download file.txt from backup-server using key at ~/.ssh/backup_key"
Actions:
- Verify the key exists and has correct permissions
- Use: `scp -i ~/.ssh/backup_key backup-server:/path/file.txt ./`
- If there is a permission error on the key, suggest: `chmod 600 ~/.ssh/backup_key`
## Error Handling

### Connection Failed

```
Error: ssh: connect to host X port 22: Connection refused
```
Actions:
- Check if the host is reachable: `ping -c 3 hostname`
- Check if the SSH port is open: `nc -zv hostname 22`
- Suggest:
  - Verify the hostname/IP
  - Check if the SSH service is running
  - Try a different port: `ssh -p 2222 user@host`
### Authentication Failed

```
Error: Permission denied (publickey)
```
Actions:
- Check SSH keys: `ssh-add -l`
- Suggest adding the key: `ssh-add ~/.ssh/id_rsa`
- Or try password auth: `scp -o PreferredAuthentications=password user@host:file ./`
### File Not Found

```
Error: No such file or directory
```
Actions:
- Verify the path exists: `ssh user@host "ls -la /path/to/file"`
- Suggest:
  - Check the file path spelling
  - Verify you have read permissions
  - List the directory: `ssh user@host "ls /path/to/"`
### Disk Space Full

```
Error: No space left on device
```
Actions:
- Check local disk space: `df -h .`
- Suggest:
  - Free up space
  - Download to a different location
  - Use compression
### Network Interrupted

```
Error: Connection reset by peer (or timeout)
```
Actions:
- Offer to resume: "Download was interrupted. Resume?"
- If yes, use rsync with `--partial`
- If it fails repeatedly, suggest:
  - Check network stability
  - Try during off-peak hours
  - Use screen/tmux for long transfers
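Routing raw error text to the categories above can be sketched as a matcher; `classify_error` is an illustrative helper and the match strings are the common OpenSSH wordings:

```shell
# Sketch: map a transfer error message to a troubleshooting category.
classify_error() {
  case "$1" in
    *"Connection refused"*|*"No route to host"*) echo connection ;;
    *"Permission denied (publickey"*)            echo auth ;;
    *"No such file or directory"*)               echo missing ;;
    *"No space left on device"*)                 echo disk ;;
    *"Connection reset"*|*"timed out"*)          echo network ;;
    *)                                           echo unknown ;;
  esac
}
```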
## Advanced Features

### 1. Parallel Downloads

For multiple files:

```bash
# Download 4 files in parallel
parallel -j 4 scp user@host:/path/file{} ./ ::: 1 2 3 4
```
### 2. Bandwidth Limiting

To avoid saturating the network:

```bash
# Limit rsync to 1 MB/s (--bwlimit takes KB/s)
rsync -avz --progress --bwlimit=1024 user@host:/path ./

# scp with a limit (-l takes Kbit/s; 8192 Kbit/s = 1 MB/s)
scp -l 8192 user@host:/path ./
```
### 3. Compression for Faster Transfer

Enable compression:

```bash
# rsync: the -z flag enables compression in transit
rsync -avz user@host:/path ./

# scp with compression
scp -C user@host:/path ./
```
### 4. Exclude Patterns

Skip certain files:

```bash
# Exclude .git and node_modules
rsync -avz --progress \
  --exclude='.git' --exclude='node_modules' \
  user@host:/project/ ./project/
```
### 5. Dry Run (Preview)

See what would be downloaded:

```bash
rsync -avz --progress --dry-run user@host:/path ./
```

Then ask the user: "Preview shows 245 files (1.2 GB). Proceed with download?"
## Safety Features

**Confirm before large downloads:**
- If size > 1 GB, ask the user to confirm
- Show estimated time based on network speed

**Verify the SSH fingerprint (first connection):**
- Show the fingerprint
- Ask the user to confirm

**Check the destination exists:**
- Create the directory if needed
- Warn if files will be overwritten

**Atomic downloads (when possible):**
- Download to a temp file first
- Move to the final location on success
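The atomic-download idea can be sketched as follows; `cp` stands in for the real scp/wget call, and `atomic_fetch` is an illustrative name:

```shell
# Sketch: write to a temp file, then mv into place, so a failed transfer
# never leaves a half-written destination file behind.
atomic_fetch() {
  local fetch_src="$1" dest="$2" tmp
  tmp=$(mktemp "$dest.XXXXXX") || return 1
  if cp "$fetch_src" "$tmp"; then
    mv "$tmp" "$dest"           # mv within one filesystem is atomic
  else
    rm -f "$tmp"
    return 1
  fi
}
```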
## Output Format

Success output:

```
✓ Downloading from: user@prod-server:/data/backup.tar.gz
✓ Destination: ./backup.tar.gz
✓ Method: rsync (with resume support)

Progress: [████████████████████] 100% | 245 MB | 12.3 MB/s | ETA: 0s

✓ Download complete!
  File: backup.tar.gz
  Size: 245 MB
  Time: 20s
  Speed: 12.3 MB/s
  Location: /Users/myan/Downloads/backup.tar.gz

Archive detected. Extract now? (yes/no):
```
Error output:

```
✗ Failed to connect to remote host
✗ Error: ssh: connect to host prod-server port 22: Connection refused

Troubleshooting:
1. Check if host is reachable: ping prod-server
2. Verify SSH service: nc -zv prod-server 22
3. Check SSH config: cat ~/.ssh/config
4. Try different port: ssh -p 2222 user@host

Retry? (yes/no):
```
## Configuration

Optional config file (`~/.fetch-remote.conf`):

```bash
# Default SSH key
DEFAULT_SSH_KEY=~/.ssh/id_rsa

# Default download directory
DOWNLOAD_DIR=~/Downloads

# Auto-extract archives (yes, no, ask)
AUTO_EXTRACT=ask

# Prefer rsync over scp when possible
PREFER_RSYNC=yes

# Bandwidth limit (KB/s, 0 = unlimited)
BANDWIDTH_LIMIT=0

# Connection timeout (seconds)
TIMEOUT=30

# Number of retries
RETRY_COUNT=3
```
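Loading that file on top of the defaults might look like this sketch; the keys mirror the sample config, and a hardened version would validate the file before sourcing it:

```shell
# Sketch: set defaults, then let an optional KEY=VALUE file override them.
load_config() {
  DEFAULT_SSH_KEY=~/.ssh/id_rsa
  DOWNLOAD_DIR=~/Downloads
  AUTO_EXTRACT=ask
  PREFER_RSYNC=yes
  BANDWIDTH_LIMIT=0
  TIMEOUT=30
  RETRY_COUNT=3
  [ -f "$1" ] && . "$1"   # overrides win; a missing file keeps defaults
  return 0
}
```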
## Integration with SSH Config

Example `~/.ssh/config`:

```
Host prod
    HostName prod.example.com
    User deploy
    Port 22
    IdentityFile ~/.ssh/prod_key

Host backup-server
    HostName 192.168.1.100
    User backup
    IdentityFile ~/.ssh/backup_key
```

Usage with the config:

```bash
# Just use the host alias
fetch-remote prod:/data/app.log
fetch-remote backup-server:/backups/db.sql.gz
```
## Best Practices

- **Use rsync for large transfers** - better resume support
- **Keep SSH keys secure** - use ssh-agent and proper permissions
- **Use SSH config** - define hosts once, use everywhere
- **Enable compression** - for slow networks
- **Verify downloads** - check file sizes/checksums
- **Clean up partial downloads** - remove `.part` files on failure
- **Use bandwidth limits** - on shared networks
## Requirements

Required tools:
- `ssh` - SSH client
- `scp` - secure copy (usually bundled with ssh)

Optional but recommended:
- `rsync` - better performance and resume capability
- `wget` or `curl` - for HTTP/HTTPS downloads
- `ssh-agent` - for key management

Check availability:

```bash
which ssh scp rsync wget curl
```
## Usage with Claude Code

Natural commands that trigger this skill:
- "Download file.txt from server:/path/"
- "Get /var/log/app.log from prod-server"
- "Fetch backup from user@192.168.1.100:/backups/"
- "Download https://example.com/dataset.tar.gz"
- "Sync directory from remote server"
- "Download and extract archive from host"
Claude will automatically use this skill and handle all the complexity!