| name | flow-nexus-neural |
| description | Train and deploy neural networks in distributed E2B sandboxes with Flow Nexus |
| version | 1.0.0 |
| category | ai-ml |
| tags | neural-networks, distributed-training, machine-learning, deep-learning, flow-nexus, e2b-sandboxes |
| requires_auth | true |
| mcp_server | flow-nexus |
| author | ruv |
When NOT to Use This Skill
- Local development without cloud infrastructure needs
- Simple scripts that do not require sandboxed execution
- Operations without distributed computing requirements
- Tasks that can run on single-machine environments
Success Criteria
- API response time: <200ms for sandbox creation
- Deployment success rate: >99%
- Sandbox startup time: <5s
- Network latency: <50ms between sandboxes
- Resource utilization: <80% CPU/memory per sandbox
- Uptime: >99.9% for production deployments
Edge Cases & Error Handling
- Rate Limits: Flow Nexus API has request limits; implement queuing and backoff
- Authentication Failures: Validate API tokens before operations; refresh expired tokens
- Network Issues: Retry failed requests with exponential backoff (max 5 retries); see the retry sketch after this list
- Quota Exhaustion: Monitor sandbox/compute quotas; alert before limits
- Sandbox Timeouts: Set appropriate timeout values; clean up orphaned sandboxes
- Deployment Failures: Implement rollback strategies; maintain previous working state
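A minimal retry sketch for the rate-limit and network-issue cases above, assuming a generic `withBackoff` helper that is not part of the Flow Nexus API:

```javascript
// Illustrative helper: wraps any Flow Nexus call with exponential backoff (max 5 retries).
async function withBackoff(operation, { maxRetries = 5, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === maxRetries) throw error; // out of retries: surface the error
      // Exponential backoff with jitter: ~0.5s, 1s, 2s, 4s, 8s
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: retry a status check that may hit rate limits or transient network errors
const status = await withBackoff(() =>
  mcp__flow-nexus__neural_training_status({ job_id: "job_training_xyz" })
);
```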
Guardrails & Safety
- NEVER expose API keys or authentication tokens in code or logs
- ALWAYS validate responses from Flow Nexus API before processing
- ALWAYS implement timeout limits for long-running operations
- NEVER trust user input for sandbox commands without validation
- ALWAYS monitor resource usage to prevent runaway processes
- ALWAYS clean up sandboxes and resources after task completion (see the cleanup sketch below)
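A minimal cleanup sketch for the last guardrail, assuming a cluster-based workflow; try/finally guarantees the cluster is terminated even if training throws, and the cluster name and dataset are placeholders:

```javascript
const cluster = await mcp__flow-nexus__neural_cluster_init({
  name: "temp-training-cluster",
  architecture: "transformer",
  topology: "mesh"
});

try {
  // ... deploy worker nodes here (see Deploy Worker Nodes below) ...
  await mcp__flow-nexus__neural_train_distributed({
    cluster_id: cluster.cluster_id,
    dataset: "custom_dataset",
    epochs: 50,
    batch_size: 64,
    learning_rate: 0.001,
    optimizer: "adam"
  });
} finally {
  // Guardrail: release sandbox resources whether training succeeded or not
  await mcp__flow-nexus__neural_cluster_terminate({ cluster_id: cluster.cluster_id });
}
```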
Evidence-Based Validation
- Verify platform health: Check Flow Nexus status endpoint before operations
- Validate deployments: Test sandbox connectivity and functionality
- Monitor costs: Track compute usage and spending against budgets
- Test failure scenarios: Simulate network failures, timeouts, auth errors
- Benchmark performance: Compare actual vs expected latency/throughput (see the sketch below)
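A sketch of benchmarking against expectations using the built-in benchmark tool; the threshold values below are assumptions to replace with your own targets:

```javascript
const result = await mcp__flow-nexus__neural_performance_benchmark({
  model_id: "model_abc123",
  benchmark_type: "comprehensive"
});

// Illustrative thresholds; adjust to your own SLOs
const checks = [
  { name: "inference latency", ok: result.benchmarks.inference_latency_ms < 50 },
  { name: "memory usage", ok: result.benchmarks.memory_usage_mb < 512 },
  { name: "accuracy", ok: result.benchmarks.accuracy >= 0.9 }
];

for (const check of checks) {
  if (!check.ok) console.warn(`Benchmark check failed: ${check.name}`);
}
```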
Flow Nexus Neural Networks
Deploy, train, and manage neural networks in distributed E2B sandbox environments. Train custom models with multiple architectures (feedforward, LSTM, GAN, transformer) or use pre-built templates from the marketplace.
Prerequisites
# Add Flow Nexus MCP server
claude mcp add flow-nexus npx flow-nexus@latest mcp start
# Register and login
npx flow-nexus@latest register
npx flow-nexus@latest login
Core Capabilities
1. Single-Node Neural Training
Train neural networks with custom architectures and configurations.
Available Architectures:
- `feedforward` - Standard fully-connected networks
- `lstm` - Long Short-Term Memory for sequences
- `gan` - Generative Adversarial Networks
- `autoencoder` - Dimensionality reduction
- `transformer` - Attention-based models
Training Tiers:
- `nano` - Minimal resources (fast, limited)
- `mini` - Small models
- `small` - Standard models
- `medium` - Complex models
- `large` - Large-scale training
Example: Train Custom Classifier
mcp__flow-nexus__neural_train({
config: {
architecture: {
type: "feedforward",
layers: [
{ type: "dense", units: 256, activation: "relu" },
{ type: "dropout", rate: 0.3 },
{ type: "dense", units: 128, activation: "relu" },
{ type: "dropout", rate: 0.2 },
{ type: "dense", units: 64, activation: "relu" },
{ type: "dense", units: 10, activation: "softmax" }
]
},
training: {
epochs: 100,
batch_size: 32,
learning_rate: 0.001,
optimizer: "adam"
},
divergent: {
enabled: true,
pattern: "lateral", // quantum, chaotic, associative, evolutionary
factor: 0.5
}
},
tier: "small",
user_id: "your_user_id"
})
Example: LSTM for Time Series
mcp__flow-nexus__neural_train({
config: {
architecture: {
type: "lstm",
layers: [
{ type: "lstm", units: 128, return_sequences: true },
{ type: "dropout", rate: 0.2 },
{ type: "lstm", units: 64 },
{ type: "dense", units: 1, activation: "linear" }
]
},
training: {
epochs: 150,
batch_size: 64,
learning_rate: 0.01,
optimizer: "adam"
}
},
tier: "medium"
})
Example: Transformer Architecture
mcp__flow-nexus__neural_train({
config: {
architecture: {
type: "transformer",
layers: [
{ type: "embedding", vocab_size: 10000, embedding_dim: 512 },
{ type: "transformer_encoder", num_heads: 8, ff_dim: 2048 },
{ type: "global_average_pooling" },
{ type: "dense", units: 128, activation: "relu" },
{ type: "dense", units: 2, activation: "softmax" }
]
},
training: {
epochs: 50,
batch_size: 16,
learning_rate: 0.0001,
optimizer: "adam"
}
},
tier: "large"
})
2. Model Inference
Run predictions on trained models.
mcp__flow-nexus__neural_predict({
model_id: "model_abc123",
input: [
[0.5, 0.3, 0.2, 0.1],
[0.8, 0.1, 0.05, 0.05],
[0.2, 0.6, 0.15, 0.05]
],
user_id: "your_user_id"
})
Response:
{
"predictions": [
[0.12, 0.85, 0.03],
[0.89, 0.08, 0.03],
[0.05, 0.92, 0.03]
],
"inference_time_ms": 45,
"model_version": "1.0.0"
}
3. Template Marketplace
Browse and deploy pre-trained models from the marketplace.
List Available Templates
mcp__flow-nexus__neural_list_templates({
category: "classification", // timeseries, regression, nlp, vision, anomaly, generative
tier: "free", // or "paid"
search: "sentiment",
limit: 20
})
Response:
{
"templates": [
{
"id": "sentiment-analysis-v2",
"name": "Sentiment Analysis Classifier",
"description": "Pre-trained BERT model for sentiment analysis",
"category": "nlp",
"accuracy": 0.94,
"downloads": 1523,
"tier": "free"
},
{
"id": "image-classifier-resnet",
"name": "ResNet Image Classifier",
"description": "ResNet-50 for image classification",
"category": "vision",
"accuracy": 0.96,
"downloads": 2341,
"tier": "paid"
}
]
}
Deploy Template
mcp__flow-nexus__neural_deploy_template({
template_id: "sentiment-analysis-v2",
custom_config: {
training: {
epochs: 50,
learning_rate: 0.0001
}
},
user_id: "your_user_id"
})
4. Distributed Training Clusters
Train large models across multiple E2B sandboxes with distributed computing.
Initialize Cluster
mcp__flow-nexus__neural_cluster_init({
name: "large-model-cluster",
architecture: "transformer", // transformer, cnn, rnn, gnn, hybrid
topology: "mesh", // mesh, ring, star, hierarchical
consensus: "proof-of-learning", // byzantine, raft, gossip
daaEnabled: true, // Decentralized Autonomous Agents
wasmOptimization: true
})
Response:
{
"cluster_id": "cluster_xyz789",
"name": "large-model-cluster",
"status": "initializing",
"topology": "mesh",
"max_nodes": 100,
"created_at": "2025-10-19T10:30:00Z"
}
Deploy Worker Nodes
// Deploy parameter server
mcp__flow-nexus__neural_node_deploy({
cluster_id: "cluster_xyz789",
node_type: "parameter_server",
model: "large",
template: "nodejs",
capabilities: ["parameter_management", "gradient_aggregation"],
autonomy: 0.8
})
// Deploy worker nodes
mcp__flow-nexus__neural_node_deploy({
cluster_id: "cluster_xyz789",
node_type: "worker",
model: "xl",
role: "worker",
capabilities: ["training", "inference"],
layers: [
{ type: "transformer_encoder", num_heads: 16 },
{ type: "feed_forward", units: 4096 }
],
autonomy: 0.9
})
// Deploy aggregator
mcp__flow-nexus__neural_node_deploy({
cluster_id: "cluster_xyz789",
node_type: "aggregator",
model: "large",
capabilities: ["gradient_aggregation", "model_synchronization"]
})
Connect Cluster Topology
mcp__flow-nexus__neural_cluster_connect({
cluster_id: "cluster_xyz789",
topology: "mesh" // Override default if needed
})
Start Distributed Training
mcp__flow-nexus__neural_train_distributed({
cluster_id: "cluster_xyz789",
dataset: "imagenet", // or custom dataset identifier
epochs: 100,
batch_size: 128,
learning_rate: 0.001,
optimizer: "adam", // sgd, rmsprop, adagrad
federated: true // Enable federated learning
})
Federated Learning Example:
mcp__flow-nexus__neural_train_distributed({
cluster_id: "cluster_xyz789",
dataset: "medical_images_distributed",
epochs: 200,
batch_size: 64,
learning_rate: 0.0001,
optimizer: "adam",
federated: true, // Data stays on local nodes
aggregation_rounds: 50,
min_nodes_per_round: 5
})
Monitor Cluster Status
mcp__flow-nexus__neural_cluster_status({
cluster_id: "cluster_xyz789"
})
Response:
{
"cluster_id": "cluster_xyz789",
"status": "training",
"nodes": [
{
"node_id": "node_001",
"type": "parameter_server",
"status": "active",
"cpu_usage": 0.75,
"memory_usage": 0.82
},
{
"node_id": "node_002",
"type": "worker",
"status": "active",
"training_progress": 0.45
}
],
"training_metrics": {
"current_epoch": 45,
"total_epochs": 100,
"loss": 0.234,
"accuracy": 0.891
}
}
Run Distributed Inference
mcp__flow-nexus__neural_predict_distributed({
cluster_id: "cluster_xyz789",
input_data: JSON.stringify([
[0.1, 0.2, 0.3],
[0.4, 0.5, 0.6]
]),
aggregation: "ensemble" // mean, majority, weighted, ensemble
})
Terminate Cluster
mcp__flow-nexus__neural_cluster_terminate({
cluster_id: "cluster_xyz789"
})
5. Model Management
List Your Models
mcp__flow-nexus__neural_list_models({
user_id: "your_user_id",
include_public: true
})
Response:
{
"models": [
{
"model_id": "model_abc123",
"name": "Custom Classifier v1",
"architecture": "feedforward",
"accuracy": 0.92,
"created_at": "2025-10-15T14:20:00Z",
"status": "trained"
},
{
"model_id": "model_def456",
"name": "LSTM Forecaster",
"architecture": "lstm",
"mse": 0.0045,
"created_at": "2025-10-18T09:15:00Z",
"status": "training"
}
]
}
Check Training Status
mcp__flow-nexus__neural_training_status({
job_id: "job_training_xyz"
})
Response:
{
"job_id": "job_training_xyz",
"status": "training",
"progress": 0.67,
"current_epoch": 67,
"total_epochs": 100,
"current_loss": 0.234,
"estimated_completion": "2025-10-19T12:45:00Z"
}
Performance Benchmarking
mcp__flow-nexus__neural_performance_benchmark({
model_id: "model_abc123",
benchmark_type: "comprehensive" // inference, throughput, memory, comprehensive
})
Response:
{
"model_id": "model_abc123",
"benchmarks": {
"inference_latency_ms": 12.5,
"throughput_qps": 8000,
"memory_usage_mb": 245,
"gpu_utilization": 0.78,
"accuracy": 0.92,
"f1_score": 0.89
},
"timestamp": "2025-10-19T11:00:00Z"
}
Create Validation Workflow
mcp__flow-nexus__neural_validation_workflow({
model_id: "model_abc123",
user_id: "your_user_id",
validation_type: "comprehensive" // performance, accuracy, robustness, comprehensive
})
6. Publishing and Marketplace
Publish Model as Template
mcp__flow-nexus__neural_publish_template({
model_id: "model_abc123",
name: "High-Accuracy Sentiment Classifier",
description: "Fine-tuned BERT model for sentiment analysis with 94% accuracy",
category: "nlp",
price: 0, // 0 for free, or credits amount
user_id: "your_user_id"
})
Rate a Template
mcp__flow-nexus__neural_rate_template({
template_id: "sentiment-analysis-v2",
rating: 5,
review: "Excellent model! Achieved 95% accuracy on my dataset.",
user_id: "your_user_id"
})
Common Use Cases
Image Classification with CNN
// Initialize cluster for large-scale image training
const cluster = await mcp__flow-nexus__neural_cluster_init({
name: "image-classification-cluster",
architecture: "cnn",
topology: "hierarchical",
wasmOptimization: true
})
// Deploy worker nodes
await mcp__flow-nexus__neural_node_deploy({
cluster_id: cluster.cluster_id,
node_type: "worker",
model: "large",
capabilities: ["training", "data_augmentation"]
})
// Start training
await mcp__flow-nexus__neural_train_distributed({
cluster_id: cluster.cluster_id,
dataset: "custom_images",
epochs: 100,
batch_size: 64,
learning_rate: 0.001,
optimizer: "adam"
})
NLP Sentiment Analysis
// Use pre-built template
const deployment = await mcp__flow-nexus__neural_deploy_template({
template_id: "sentiment-analysis-v2",
custom_config: {
training: {
epochs: 30,
batch_size: 16
}
}
})
// Run inference
const result = await mcp__flow-nexus__neural_predict({
model_id: deployment.model_id,
input: ["This product is amazing!", "Terrible experience."]
})
Time Series Forecasting
// Train LSTM model
const training = await mcp__flow-nexus__neural_train({
config: {
architecture: {
type: "lstm",
layers: [
{ type: "lstm", units: 128, return_sequences: true },
{ type: "dropout", rate: 0.2 },
{ type: "lstm", units: 64 },
{ type: "dense", units: 1 }
]
},
training: {
epochs: 150,
batch_size: 64,
learning_rate: 0.01,
optimizer: "adam"
}
},
tier: "medium"
})
// Monitor progress
const status = await mcp__flow-nexus__neural_training_status({
job_id: training.job_id
})
Federated Learning for Privacy
// Initialize federated cluster
const cluster = await mcp__flow-nexus__neural_cluster_init({
name: "federated-medical-cluster",
architecture: "transformer",
topology: "mesh",
consensus: "proof-of-learning",
daaEnabled: true
})
// Deploy nodes across different locations
for (let i = 0; i < 5; i++) {
await mcp__flow-nexus__neural_node_deploy({
cluster_id: cluster.cluster_id,
node_type: "worker",
model: "large",
autonomy: 0.9
})
}
// Train with federated learning (data never leaves nodes)
await mcp__flow-nexus__neural_train_distributed({
cluster_id: cluster.cluster_id,
dataset: "medical_records_distributed",
epochs: 200,
federated: true,
aggregation_rounds: 100
})
Architecture Patterns
Feedforward Networks
Best for: Classification, regression, simple pattern recognition
{
type: "feedforward",
layers: [
{ type: "dense", units: 256, activation: "relu" },
{ type: "dropout", rate: 0.3 },
{ type: "dense", units: 128, activation: "relu" },
{ type: "dense", units: 10, activation: "softmax" }
]
}
LSTM Networks
Best for: Time series, sequences, forecasting
{
type: "lstm",
layers: [
{ type: "lstm", units: 128, return_sequences: true },
{ type: "lstm", units: 64 },
{ type: "dense", units: 1 }
]
}
Transformers
Best for: NLP, attention mechanisms, large-scale text
{
type: "transformer",
layers: [
{ type: "embedding", vocab_size: 10000, embedding_dim: 512 },
{ type: "transformer_encoder", num_heads: 8, ff_dim: 2048 },
{ type: "global_average_pooling" },
{ type: "dense", units: 2, activation: "softmax" }
]
}
GANs
Best for: Generative tasks, image synthesis
{
type: "gan",
generator_layers: [...],
discriminator_layers: [...]
}
Autoencoders
Best for: Dimensionality reduction, anomaly detection
{
type: "autoencoder",
encoder_layers: [
{ type: "dense", units: 128, activation: "relu" },
{ type: "dense", units: 64, activation: "relu" }
],
decoder_layers: [
{ type: "dense", units: 128, activation: "relu" },
{ type: "dense", units: input_dim, activation: "sigmoid" }
]
}
Best Practices
- Start Small: Begin with `nano` or `mini` tiers for experimentation
- Use Templates: Leverage marketplace templates for common tasks
- Monitor Training: Check status regularly to catch issues early (see the polling sketch after this list)
- Benchmark Models: Always benchmark before production deployment
- Distributed Training: Use clusters for large models (>1B parameters)
- Federated Learning: Use for privacy-sensitive data
- Version Models: Publish successful models as templates for reuse
- Validate Thoroughly: Use validation workflows before deployment
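A minimal polling sketch for the Monitor Training practice; the 30-second interval and the assumption that a finished job leaves the "training" status are illustrative, not documented behavior:

```javascript
// Poll training status until the job is no longer in the "training" state
async function waitForTraining(jobId, pollIntervalMs = 30_000) {
  while (true) {
    const status = await mcp__flow-nexus__neural_training_status({ job_id: jobId });
    console.log(`Epoch ${status.current_epoch}/${status.total_epochs}, loss ${status.current_loss}`);
    if (status.status !== "training") return status; // e.g. "trained" or a failure state
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
  }
}

const finalStatus = await waitForTraining("job_training_xyz");
```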
Troubleshooting
Training Stalled
// Check cluster status
const status = await mcp__flow-nexus__neural_cluster_status({
cluster_id: "cluster_id"
})
// Terminate and restart if needed
await mcp__flow-nexus__neural_cluster_terminate({
cluster_id: "cluster_id"
})
Low Accuracy
- Increase epochs
- Adjust learning rate
- Add regularization (dropout)
- Try a different optimizer (a combined config sketch follows this list)
- Use data augmentation
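A sketch applying several of these remedies to the earlier feedforward classifier (more epochs, lower learning rate, heavier dropout, a different optimizer); the specific values are illustrative starting points:

```javascript
mcp__flow-nexus__neural_train({
  config: {
    architecture: {
      type: "feedforward",
      layers: [
        { type: "dense", units: 256, activation: "relu" },
        { type: "dropout", rate: 0.5 },  // heavier regularization
        { type: "dense", units: 128, activation: "relu" },
        { type: "dense", units: 10, activation: "softmax" }
      ]
    },
    training: {
      epochs: 200,             // more epochs than the original 100
      batch_size: 32,
      learning_rate: 0.0005,   // lower learning rate
      optimizer: "rmsprop"     // try a different optimizer
    }
  },
  tier: "small"
})
```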
Out of Memory
- Reduce batch size
- Use smaller model tier
- Enable gradient accumulation
- Use distributed training
Related Skills
- `flow-nexus-sandbox` - E2B sandbox management
- `flow-nexus-swarm` - AI swarm orchestration
- `flow-nexus-workflow` - Workflow automation
Resources
- Flow Nexus Docs: https://flow-nexus.ruv.io/docs
- Neural Network Guide: https://flow-nexus.ruv.io/docs/neural
- Template Marketplace: https://flow-nexus.ruv.io/templates
- API Reference: https://flow-nexus.ruv.io/api
Note: Distributed training requires authentication. Register at https://flow-nexus.ruv.io or use npx flow-nexus@latest register.
Core Principles
Flow Nexus Neural Networks operates on 3 fundamental principles:
Principle 1: Distributed Training Through E2B Sandbox Orchestration
Scale neural network training across multiple isolated sandboxes with coordinated gradient aggregation and model synchronization.
In practice:
- Initialize cluster with topology (mesh, ring, hierarchical) for node communication patterns
- Deploy worker nodes (training), parameter servers (gradient aggregation), and aggregators (model sync) as separate sandboxes
- WASM optimization accelerates training and inference by 10-100x compared to pure JavaScript implementations
Principle 2: Federated Learning Enables Privacy-Preserving Distributed Training
Train models on distributed data without centralizing sensitive information by keeping data on local nodes.
In practice:
- Worker nodes train on local data (medical records, user activity) without uploading raw data
- Aggregation rounds collect only model updates (gradients) from a configurable minimum number of nodes per round (5 in the example above)
- Proof-of-Learning consensus validates training progress without exposing training data
Principle 3: Template Marketplace Accelerates Development Through Reusable Models
Leverage pre-trained models and proven architectures from community templates instead of building from scratch.
In practice:
- Deploy sentiment analysis, image classification, or time series forecasting models with a single command
- Customize templates with training config overrides (epochs, learning rate, batch size)
- Publish successful models as templates for credits or free community contribution
Common Anti-Patterns
| Anti-Pattern | Problem | Solution |
|---|---|---|
| Deploying All Workers Before Cluster Initialization | Workers deployed without cluster context fail to connect or coordinate | Initialize cluster first with neural_cluster_init, capture cluster_id, then deploy nodes with cluster_id reference |
| Ignoring Training Tiers | Using large tier for prototyping wastes compute; nano for production underfits | Match tier to task: nano/mini for experimentation, small/medium for production, large/xl for complex models (>1B parameters) |
| Sequential Node Deployment | Deploying 10 workers sequentially takes 50s (5s per sandbox); delays training start | Deploy nodes in parallel by calling neural_node_deploy concurrently for all workers (5s total vs 50s sequential); see the sketch below |
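For the Sequential Node Deployment row, a minimal sketch that launches all workers concurrently with Promise.all; the worker count and capabilities are illustrative:

```javascript
// Deploy all worker nodes in parallel instead of one at a time
const workerCount = 10;
const deployments = Array.from({ length: workerCount }, () =>
  mcp__flow-nexus__neural_node_deploy({
    cluster_id: "cluster_xyz789",
    node_type: "worker",
    model: "large",
    capabilities: ["training", "inference"]
  })
);

// All sandboxes start together (~5s total) rather than sequentially (~50s)
const nodes = await Promise.all(deployments);
```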
Conclusion
Flow Nexus Neural Networks transforms distributed AI development by providing E2B sandbox infrastructure for scalable neural network training, federated learning for privacy-preserving model development, and a marketplace of reusable templates to accelerate common tasks. By orchestrating multiple sandboxes with coordinated gradient aggregation and model synchronization, you achieve enterprise-scale training capabilities without managing infrastructure.
Use this skill when training large models that require distributed computing (>1B parameters across multiple GPUs), implementing privacy-sensitive ML applications (healthcare or finance, with federated learning), or accelerating development with pre-trained templates (sentiment analysis, image classification, forecasting). The key insight is sandbox isolation combined with coordination: each node operates independently in a secure E2B environment while participating in collective training through gradient sharing and model updates. Start with single-node training using templates for common tasks, scale to distributed clusters when model size or training time exceeds single-machine limits, and enable federated learning only when data cannot be centralized due to privacy or compliance constraints.