Vision Platform Overview
The Rosepetal Vision Platform is a comprehensive computer vision solution built on Firebase and Docker-based AI inference. It provides a complete workflow for dataset management, model training coordination, and production inference deployment through Node-RED.
Architecture
Components
- Firebase: Cloud storage and database for datasets, models, and metadata
- Docker Containers: Isolated Python environments for ML inference
- Node-RED Nodes: Integration layer for visual programming
- gRPC Protocol: High-performance communication between services
Data Flow
Firebase Storage → Firebase Config Node → Vision Platform Nodes
                          ↓
                  Docker Containers
                          ↓
        ML Models (YOLO, PaddleOCR, etc.)
Available Nodes
Configuration
- Firebase Config - Central Firebase authentication and connection management
Dataset Management
- Dataset Upload - Batch upload images from local storage to Firebase
- From Dataset - Stream images from Firebase datasets with filtering
- List Dataset - Browse and monitor available datasets
Model Management
- List Model - Browse and monitor available AI models
Inference
- Inferencer - Run AI inference for detection, classification, segmentation
- OCR Inferencer - Specialized optical character recognition
Utilities
- Promise Reader - Resolve asynchronous promise-based operations
Common Workflows
Workflow 1: Dataset Creation
[Camera Capture] → [Image Processing] → [Dataset Upload]
Purpose: Collect and upload training data
Steps:
- Capture images from cameras or sensors
- Apply preprocessing (resize, enhance, etc.)
- Upload to Firebase dataset with proper tags
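As a sketch of step 3, a function node ahead of the Dataset Upload node might attach the metadata; the msg.dataset property layout and field names here are illustrative, not the node's confirmed API:

// Function node: attach dataset metadata before upload.
// msg.payload is assumed to carry the preprocessed image buffer.
msg.dataset = {
    name: "widget-inspection-v1",   // illustrative dataset name
    tag: "ok",                      // label for this capture
    set: "TRAIN"                    // TRAIN / VALID / TEST
};
msg.filename = `capture_${Date.now()}.jpg`;
return msg;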
Workflow 2: Model Training Coordination
[List Dataset] → [UI Selection] → [External Training] → [List Model]
Purpose: Select dataset and monitor training progress
Note: Model training happens outside Node-RED (Python scripts, cloud platforms)
Workflow 3: Production Inference
[Camera] → [Inferencer] → [Filter Results] → [Action/Alert]
Purpose: Real-time quality control or detection
Steps:
- Capture live images
- Run inference with trained model
- Process results (filter, classify, alert)
- Take action based on predictions
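A minimal sketch of the filter step, assuming the Inferencer returns an array of predictions on msg.payload with class and score fields (check the Inferencer node documentation for the actual output shape):

// Function node: keep only confident defect detections and pass an
// alert message onward when any survive the filter.
const MIN_SCORE = 0.8;                      // illustrative threshold
const defects = (msg.payload || []).filter(
    p => p.class === "defect" && p.score >= MIN_SCORE
);
if (defects.length === 0) {
    return null;                            // nothing to report
}
msg.payload = defects;
msg.topic = "alert";
return msg;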
Workflow 4: Batch Processing
[From Dataset] → [Inferencer: Promise Mode] → [Promise Reader] → [To Dataset]
Purpose: Process large datasets efficiently
Steps:
- Stream images from Firebase dataset
- Run inference asynchronously (promise mode)
- Resolve all promises in batch
- Save results back to Firebase
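If you prefer to resolve a batch manually instead of wiring in the Promise Reader node, a function node along these lines works. It assumes the Inferencer in promise mode places a promise on msg.promise and marks the last image with msg.complete; both property names are illustrative, not the node's confirmed API:

// Function node: accumulate promise-mode results, then resolve the
// whole batch at once.
const batch = context.get("batch") || [];
batch.push(msg.promise);
context.set("batch", batch);
if (!msg.complete) {
    return null;                    // keep collecting
}
Promise.all(batch).then(results => {
    context.set("batch", []);       // clear the array to avoid leaks
    node.send({ payload: results });
    node.done();
});
return null;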
Workflow 5: Dataset Curation
[From Dataset] → [Human Review UI] → [Re-tag/Filter] → [Dataset Upload]
Purpose: Review and correct dataset annotations
Workflow 6: Multi-Model Pipeline
[Image] → [Inferencer: Detection] → [Crop Regions] → [Inferencer: Classification]
Purpose: Complex analysis with multiple models
Example: Detect objects, then classify each detection
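The glue between the two Inferencers can be a function node that fans out one message per detection; the msg.payload box shape and the msg.image property are assumptions about the detection output, not confirmed API:

// Function node: emit one message per detected region so each crop
// can be classified individually. Assumes detections arrive as
// msg.payload = [{ box: { x, y, w, h }, ... }] (illustrative shape).
const source = msg.image;                 // original image buffer (assumed)
const msgs = (msg.payload || []).map((det, i) => ({
    image: source,
    crop: det.box,                        // read by the Crop Regions step
    parts: { id: msg._msgid, index: i, count: msg.payload.length }
}));
return [msgs];                            // one output, many messages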
Getting Started
Prerequisites
- Firebase Project: Create at Firebase Console
- Firebase Services: Enable Firestore, Storage, Authentication
- Docker: Install Docker for inference containers
- Node-RED: Running Node-RED instance with Rosepetal nodes installed
Quick Start
1. Configure Firebase
[Add firebase-config node]
→ Paste Firebase credentials JSON
→ Deploy
→ Verify green "connected" status
2. List Available Resources
[Inject: On Start] → [List Dataset] → [Debug]
[Inject: On Start] → [List Model] → [Debug]
Verify datasets and models are accessible.
3. First Inference
[Inject: Image] → [Inferencer: Select Model] → [Debug: Results]
Load an image and run inference with an available model.
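Since an Inject node cannot read an image file by itself, one simple wiring is a File In node (output: a single Buffer) followed by a function node that sets whatever the Inferencer expects; the msg.model property is illustrative:

// Function node between [File In] and [Inferencer].
// msg.payload already holds the image Buffer read from disk.
msg.model = "my-first-model";   // illustrative model name
msg.topic = "test-image";
return msg;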
Best Practices
Dataset Management
- Consistent naming: Use clear, descriptive dataset names
- Tag organization: Define tag naming conventions
- Set distribution: Properly split TRAIN/VALID/TEST sets (see the sketch after this list)
- Regular cleanup: Remove obsolete datasets
- Backup strategy: Export critical datasets periodically
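For the set distribution point, a function node can assign sets at upload time with a fixed ratio; the msg.dataset.set property is the same illustrative layout used in the Workflow 1 sketch:

// Function node: assign each image to a set with an 80/10/10 split.
const r = Math.random();
msg.dataset = msg.dataset || {};
msg.dataset.set = r < 0.8 ? "TRAIN" : (r < 0.9 ? "VALID" : "TEST");
return msg;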
Model Organization
- Version control: Include version in model names
- Dataset linking: Document which dataset trained each model
- Task labeling: Clearly specify model task type
- Performance tracking: Log accuracy/metrics with models
- Deprecation: Mark old models clearly
Inference Optimization
- Promise mode: Use for batch processing
- Concurrent servers: Scale based on load
- Image preprocessing: Resize before inference
- Result caching: Cache frequent predictions
- Error handling: Implement proper error recovery
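For the error-handling point, one Node-RED-native pattern is a Catch node scoped to the Inferencer feeding a retry function node with two outputs; the msg.retries counter and the retry budget are illustrative:

// Function node placed after a Catch node scoped to the Inferencer:
// retry a failed inference a few times before giving up.
const retries = msg.retries || 0;
if (retries < 3) {                 // illustrative retry budget
    msg.retries = retries + 1;
    node.warn(`inference failed, retry ${msg.retries}/3`);
    return [msg, null];            // output 1 loops back to the Inferencer
}
node.error("inference failed after 3 retries", msg);
return [null, msg];                // output 2 goes to the alert path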
Production Deployment
- Connection monitoring: Watch Firebase connection status
- Resource limits: Set memory/CPU constraints
- Logging: Enable comprehensive logging
- Alerts: Configure failure alerts
- Graceful degradation: Handle service unavailability
Security
Firebase Security Rules
Implement proper Firestore security rules:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Datasets collection
    match /datasets/{dataset} {
      allow read: if request.auth != null;
      allow write: if request.auth != null &&
                      request.auth.token.admin == true;
    }
    // Models collection
    match /models/{model} {
      allow read: if request.auth != null;
      allow write: if request.auth != null &&
                      request.auth.token.admin == true;
    }
  }
}
Storage Security Rules
Configure Firebase Storage rules:
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /datasets/{dataset}/{allPaths=**} {
      allow read: if request.auth != null;
      allow write: if request.auth != null &&
                      request.auth.token.admin == true;
    }
  }
}
API Key Protection
- Restrict keys: Configure API key restrictions in Firebase Console
- Environment variables: Store credentials securely (see the sketch after this list)
- Don't commit: Never commit Firebase config to public repos
- Rotate regularly: Update credentials periodically
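A minimal sketch of keeping credentials out of the flow itself, using the env.get() helper available in Node-RED function nodes; the FIREBASE_CONFIG variable name is illustrative:

// Function node: read Firebase credentials from an environment
// variable so they never live inside the flow JSON.
const raw = env.get("FIREBASE_CONFIG");   // set in the shell or service unit
if (!raw) {
    node.error("FIREBASE_CONFIG is not set");
    return null;
}
msg.credentials = JSON.parse(raw);
return msg;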
Performance Tuning
Inference Speed
Factors:
- Model size and complexity
- Number of concurrent servers
- Max concurrent predictions setting
- Docker container resources
- Image resolution
Optimization:
- Use smaller model variants when possible
- Increase concurrent servers for throughput
- Pre-warm models with auto-warmup
- Resize images to model input size
- Allocate sufficient Docker resources
Network Performance
Factors:
- Firebase region vs server location
- Image file sizes
- Bandwidth limitations
- Concurrent uploads/downloads
Optimization:
- Choose Firebase region near servers
- Compress images before upload (see the sketch after this list)
- Use batch operations
- Implement connection pooling
- Cache frequently accessed data
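As an example of pre-upload compression, the function node below uses the sharp image library, assuming it has been added on the node's Setup tab (which requires functionExternalModules to be enabled in settings.js); the target width and JPEG quality are illustrative:

// Function node with "sharp" imported on the Setup tab.
// Downscale and re-encode the image before it leaves the machine.
sharp(msg.payload)                  // msg.payload: raw image buffer
    .resize({ width: 1280, withoutEnlargement: true })
    .jpeg({ quality: 80 })
    .toBuffer()
    .then(resized => {
        node.log(`compressed ${msg.payload.length} -> ${resized.length} bytes`);
        msg.payload = resized;
        node.send(msg);
        node.done();
    })
    .catch(err => node.error(err, msg));
return null;                        // message is sent asynchronously above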
Memory Management
Considerations:
- Each Docker container: 2-8GB depending on model
- Image buffers during processing
- Promise accumulation in batch mode
- Firebase SDK caching
Optimization:
- Limit number of concurrent containers
- Process in reasonable batch sizes
- Clear buffers after processing (see the sketch after this list)
- Monitor Node-RED memory usage
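For buffer clearing, the cheapest fix is often to strip large properties from the message as soon as the small results are extracted; the property names below are illustrative:

// Function node: drop large intermediate buffers once results are
// extracted, so messages queued downstream stay small.
msg.results = (msg.payload || {}).predictions;  // keep the small part (assumed shape)
delete msg.image;                               // release the raw image buffer
delete msg.payload;                             // release decoded data
return msg;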
Monitoring and Debugging
Status Indicators
All Vision Platform nodes show connection status:
- Green dot: Connected and operational
- Yellow dot: Connecting or waiting
- Red ring: Error or disconnected
Performance Tracking
Enable performance tracking to monitor:
- Inference time per image
- Total pipeline time
- Queue wait times
- Network latency
Access via: msg.performance
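For example, a function node after the Inferencer can surface these timings in the Node-RED log; the field names inside msg.performance are assumptions to verify against the node's actual output:

// Function node: log per-image timing for trend monitoring.
const p = msg.performance || {};
node.log(`inference=${p.inferenceMs}ms total=${p.totalMs}ms queue=${p.queueMs}ms`);
return msg;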
Debugging Tools
- Debug nodes: Monitor message flow
- Docker logs: Check container output
- Firebase console: View data directly
- Node-RED logs: Check for errors
- Network inspector: Monitor Firebase calls
Common Issues
Slow inference
Solutions:
- Check Docker container resources
- Verify image sizes are appropriate
- Ensure models are warmed up
- Monitor network bandwidth
Connection failures
Solutions:
- Verify Firebase credentials
- Check network connectivity
- Review security rules
- Monitor Firebase quotas
Memory leaks
Solutions:
- Clear promise arrays after resolution
- Dispose of large image buffers
- Restart containers periodically
- Monitor memory usage trends
Troubleshooting
See the individual node documentation for node-specific troubleshooting.
Resources
- Firebase Documentation
- Docker Documentation
- Node-RED Documentation
- YOLO Documentation
- PaddleOCR Documentation
Support
For issues specific to Vision Platform nodes:
- Check individual node documentation
- Review Node-RED debug output
- Verify Firebase console data
- Check Docker container logs
- Consult Rosepetal documentation
Version Compatibility
- Node-RED: >=3.0.0
- Docker: >=20.10.0
- Firebase SDK: 11.x
- Python: 3.9+ (for inference containers)