---
name: distributed-claude-sender
description: Send prompts to a remote Claude instance on a VPS for distributed AI collaboration, different model backends, or independent context.
version: 1
---
# Distributed Claude - Sender
Send prompts to a remote Claude Code instance (Z.ai GLM backend) running on a VPS.
## When to Use
- Different backend: Get responses from Z.ai GLM models while you use Anthropic
- Independent context: Remote Claude maintains separate conversation history
- Collaboration: Two Claude instances working on different aspects of a problem
- Testing: Compare responses across different models
## Usage

```bash
# Replace YOUR_SERVER with your SSH alias or user@host
ssh YOUR_SERVER "cd ~/seed && ./chat.sh 'your prompt here'"

# With custom Doppler project/config
ssh YOUR_SERVER "cd ~/seed && ./chat.sh 'prompt' --project myproj --config dev"
```
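For repeated use, the ssh invocation can be wrapped in a small helper. This is a hypothetical convenience function, not part of the seed repo: `remote_claude`, `REMOTE_HOST`, and `DRY_RUN` are illustrative names.

```bash
#!/usr/bin/env bash
# Hypothetical wrapper around the ssh call above (not part of the repo).
remote_claude() {
  # NB: prompts containing single quotes need extra escaping
  local cmd="cd ~/seed && ./chat.sh '$1'"
  if [ -n "${DRY_RUN:-}" ]; then
    # Preview the exact command without connecting
    echo "ssh ${REMOTE_HOST:-YOUR_SERVER} $cmd"
  else
    ssh "${REMOTE_HOST:-YOUR_SERVER}" "$cmd"
  fi
}

DRY_RUN=1 remote_claude "hello"
# → ssh YOUR_SERVER cd ~/seed && ./chat.sh 'hello'
```

Set `REMOTE_HOST` in your shell profile to avoid repeating the host, and use `DRY_RUN=1` to check quoting before sending a prompt.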
## Architecture

```
You (Local Claude)
        |
        v
ssh YOUR_SERVER "./chat.sh 'prompt'"
        |
        v
Remote Claude (Z.ai GLM)
        |
        v
Response (with full remote context)
```
## Reset Remote Conversation

```bash
ssh YOUR_SERVER "rm /tmp/c.txt"
```
## Setup Remote Server

1. Clone the seed repo on the server:
   ```bash
   git clone https://github.com/ebowwa/seed.git && cd seed
   ```
2. Run setup:
   ```bash
   ./setup.sh
   ```
3. Configure Doppler:
   ```bash
   doppler login
   ```
4. Start chatting:
   ```bash
   ./chat.sh "hello"
   ```
## Example

```bash
# Ask remote Claude to analyze a file on the server
ssh YOUR_SERVER "cd ~/seed && ./chat.sh 'Read README.md and summarize the key points'"
```
## Tips

- The remote Claude has full context of its conversation history
- Each message via `chat.sh` includes the entire conversation log
- Use `rm /tmp/c.txt` on the server to reset remote memory
- The `chat.sh` script accepts `--project` and `--config` flags for Doppler flexibility
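The append-and-replay behavior described above can be sketched as follows. This is a hypothetical reconstruction of the pattern, not the actual `chat.sh` source; the log path matches the reset command, but the real script in the seed repo may differ.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of chat.sh's append-and-replay pattern.
LOG="${LOG:-/tmp/c.txt}"

send_message() {
  # Append the new prompt to the running conversation log...
  printf 'User: %s\n' "$1" >> "$LOG"
  # ...then ship the whole log as context
  # (cat stands in for the actual model/backend call).
  cat "$LOG"
}
```

Because every call replays the full log, deleting `/tmp/c.txt` is a complete memory reset, and long conversations grow the payload of every subsequent message.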