Vision Platform Overview

The Rosepetal Vision Platform is a comprehensive computer vision solution built on Firebase and Docker-based AI inference. It provides a complete workflow for dataset management, model training coordination, and production inference deployment through Node-RED.

Architecture

Components

  1. Firebase: Cloud storage and database for datasets, models, and metadata
  2. Docker Containers: Isolated Python environments for ML inference
  3. Node-RED Nodes: Integration layer for visual programming
  4. gRPC Protocol: High-performance communication between services

Data Flow

Firebase Storage → Firebase Config Node → Vision Platform Nodes
                                                  ↓
                                          Docker Containers
                                                  ↓
                               ML Models (YOLO, PaddleOCR, etc.)

Available Nodes

Configuration

  • Firebase Config - Central Firebase authentication and connection management

Dataset Management

  • List Dataset - Browse available datasets
  • From Dataset - Stream images from a Firebase dataset
  • Dataset Upload - Upload images and annotations to a dataset

Model Management

  • List Model - Browse and monitor available AI models

Inference

  • Inferencer - Run AI inference for detection, classification, and segmentation
  • OCR Inferencer - Specialized optical character recognition

Utilities

  • Promise Reader - Resolve promises produced by promise-mode inference

Common Workflows

Workflow 1: Dataset Creation

[Camera Capture] → [Image Processing] → [Dataset Upload]

Purpose: Collect and upload training data

Steps:

  1. Capture images from cameras or sensors
  2. Apply preprocessing (resize, enhance, etc.)
  3. Upload to Firebase dataset with proper tags
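The tagging step above can be sketched as a Node-RED function node that annotates each message before it reaches the Dataset Upload node. The property names used here (`msg.dataset`, `msg.tag`, `msg.set`) are illustrative assumptions, not the nodes' documented API; match them to the actual upload node configuration.

```javascript
// Hypothetical function-node body: annotate a captured image
// message before upload. Property names are assumptions.
function prepareForUpload(msg, datasetName) {
    msg.dataset = datasetName;            // target Firebase dataset
    msg.tag = msg.tag || "unlabeled";     // default tag if none was set upstream
    // Split roughly 80/10/10 across TRAIN/VALID/TEST
    const r = Math.random();
    msg.set = r < 0.8 ? "TRAIN" : (r < 0.9 ? "VALID" : "TEST");
    msg.timestamp = Date.now();
    return msg;
}
```

In an actual function node, the body would end with `return prepareForUpload(msg, "my-dataset");`.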

Workflow 2: Model Training Coordination

[List Dataset] → [UI Selection] → [External Training] → [List Model]

Purpose: Select dataset and monitor training progress

Note: Model training happens outside Node-RED (Python scripts, cloud platforms)

Workflow 3: Production Inference

[Camera] → [Inferencer] → [Filter Results] → [Action/Alert]

Purpose: Real-time quality control or detection

Steps:

  1. Capture live images
  2. Run inference with trained model
  3. Process results (filter, classify, alert)
  4. Take action based on predictions
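Step 3 (filtering results) is typically a small function node. The sketch below assumes the Inferencer returns a `predictions` array of `{ label, score }` objects; check the Inferencer node documentation for the real output shape and adapt accordingly.

```javascript
// Keep only predictions above a confidence threshold, optionally
// restricted to a set of labels of interest.
// The predictions shape ({ label, score }) is an assumption.
function filterPredictions(predictions, minScore, labels) {
    return predictions.filter(p =>
        p.score >= minScore &&
        (!labels || labels.includes(p.label))
    );
}
```

Downstream nodes can then branch on whether the filtered array is empty (no alert) or not (trigger action).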

Workflow 4: Batch Processing

[From Dataset] → [Inferencer: Promise Mode] → [Promise Reader] → [To Dataset]

Purpose: Process large datasets efficiently

Steps:

  1. Stream images from Firebase dataset
  2. Run inference asynchronously (promise mode)
  3. Resolve all promises in batch
  4. Save results back to Firebase
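The batching pattern above can be sketched in plain JavaScript. This is a minimal illustration of the idea, not the internals of the Inferencer or Promise Reader nodes: `infer` stands in for whatever asynchronous call produces one result per image, and each batch is resolved before the next starts so pending promises don't accumulate without bound.

```javascript
// Process images in bounded batches: collect promises for one
// slice, await them all, then move to the next slice.
async function runBatch(images, infer, batchSize) {
    const results = [];
    for (let i = 0; i < images.length; i += batchSize) {
        const slice = images.slice(i, i + batchSize);
        // Resolving per-slice bounds memory (see Memory Management below)
        results.push(...await Promise.all(slice.map(infer)));
    }
    return results;
}
```

The batch size trades throughput against memory: larger batches keep more inference requests in flight but hold more pending results at once.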

Workflow 5: Dataset Curation

[From Dataset] → [Human Review UI] → [Re-tag/Filter] → [Dataset Upload]

Purpose: Review and correct dataset annotations

Workflow 6: Multi-Model Pipeline

[Image] → [Inferencer: Detection] → [Crop Regions] → [Inferencer: Classification]

Purpose: Complex analysis with multiple models

Example: Detect objects, then classify each detection
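The "Crop Regions" step fans one detection message out into one message per detected box, so the classification Inferencer can process each crop independently. A sketch, assuming the detector's output lives in `msg.payload.predictions` (the field names are illustrative, not the node's documented format):

```javascript
// Split one detection result into one message per region,
// carrying msg.parts so the results can be re-joined later
// (the standard Node-RED split/join convention).
function splitDetections(msg) {
    const boxes = msg.payload.predictions || [];
    return boxes.map((box, index) => ({
        payload: { image: msg.payload.image, region: box },
        parts: { id: msg._msgid, index: index, count: boxes.length }
    }));
}
```

In a function node, `return [splitDetections(msg)];` emits the array as separate messages on one output.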

Getting Started

Prerequisites

  1. Firebase Project: Create at Firebase Console
  2. Firebase Services: Enable Firestore, Storage, Authentication
  3. Docker: Install Docker for inference containers
  4. Node-RED: Running Node-RED instance with Rosepetal nodes installed

Quick Start

1. Configure Firebase

[Add firebase-config node]
  → Paste Firebase credentials JSON
  → Deploy
  → Verify green "connected" status

2. List Available Resources

[Inject: On Start] → [List Dataset] → [Debug]
[Inject: On Start] → [List Model] → [Debug]

Verify datasets and models are accessible.

3. First Inference

[Inject: Image] → [Inferencer: Select Model] → [Debug: Results]

Load an image and run inference with an available model.

Best Practices

Dataset Management

  1. Consistent naming: Use clear, descriptive dataset names
  2. Tag organization: Define tag naming conventions
  3. Set distribution: Properly split TRAIN/VALID/TEST sets
  4. Regular cleanup: Remove obsolete datasets
  5. Backup strategy: Export critical datasets periodically

Model Organization

  1. Version control: Include version in model names
  2. Dataset linking: Document which dataset trained each model
  3. Task labeling: Clearly specify model task type
  4. Performance tracking: Log accuracy/metrics with models
  5. Deprecation: Mark old models clearly

Inference Optimization

  1. Promise mode: Use for batch processing
  2. Concurrent servers: Scale based on load
  3. Image preprocessing: Resize before inference
  4. Result caching: Cache frequent predictions
  5. Error handling: Implement proper error recovery

Production Deployment

  1. Connection monitoring: Watch Firebase connection status
  2. Resource limits: Set memory/CPU constraints
  3. Logging: Enable comprehensive logging
  4. Alerts: Configure failure alerts
  5. Graceful degradation: Handle service unavailability
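Graceful degradation (item 5) usually means wrapping the inference call so a service outage produces a safe default instead of a crashed flow. A hedged sketch, where `infer` stands in for whatever asynchronous call the flow makes and the `degraded` flag is an illustrative convention:

```javascript
// Try the primary inference call; on failure, return a marked
// fallback result so downstream nodes can branch on `degraded`.
async function inferWithFallback(infer, image, fallback) {
    try {
        return await infer(image);
    } catch (err) {
        // Degrade instead of throwing: the flow keeps running
        return { degraded: true, error: String(err), ...fallback };
    }
}
```

A downstream switch node can then route `degraded === true` messages to an alert branch while normal results continue through the pipeline.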

Security

Firebase Security Rules

Implement proper Firestore security rules:

javascript
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Datasets collection
    match /datasets/{dataset} {
      allow read: if request.auth != null;
      allow write: if request.auth != null &&
                      request.auth.token.admin == true;
    }

    // Models collection
    match /models/{model} {
      allow read: if request.auth != null;
      allow write: if request.auth != null &&
                      request.auth.token.admin == true;
    }
  }
}

Storage Security Rules

Configure Firebase Storage rules:

javascript
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /datasets/{dataset}/{allPaths=**} {
      allow read: if request.auth != null;
      allow write: if request.auth != null &&
                      request.auth.token.admin == true;
    }
  }
}

API Key Protection

  1. Restrict keys: Configure API key restrictions in Firebase Console
  2. Environment variables: Store credentials securely
  3. Don't commit: Never commit Firebase config to public repos
  4. Rotate regularly: Update credentials periodically
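One common way to apply items 2 and 3 is to keep the service-account JSON in an environment variable and parse it at startup (for example in `settings.js` or a function node), so nothing sensitive lives in the flow file. `FIREBASE_CONFIG` is an assumed variable name, not a platform requirement:

```javascript
// Load Firebase credentials from an environment variable instead
// of hard-coding them. Fails loudly if the variable is missing.
function loadFirebaseConfig(env) {
    const raw = env.FIREBASE_CONFIG;
    if (!raw) throw new Error("FIREBASE_CONFIG is not set");
    return JSON.parse(raw);
}
```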

Performance Tuning

Inference Speed

Factors:

  • Model size and complexity
  • Number of concurrent servers
  • Max concurrent predictions setting
  • Docker container resources
  • Image resolution

Optimization:

  • Use smaller model variants when possible
  • Increase concurrent servers for throughput
  • Pre-warm models with auto-warmup
  • Resize images to model input size
  • Allocate sufficient Docker resources

Network Performance

Factors:

  • Firebase region vs server location
  • Image file sizes
  • Bandwidth limitations
  • Concurrent uploads/downloads

Optimization:

  • Choose Firebase region near servers
  • Compress images before upload
  • Use batch operations
  • Implement connection pooling
  • Cache frequently accessed data

Memory Management

Considerations:

  • Each Docker container: 2-8GB depending on model
  • Image buffers during processing
  • Promise accumulation in batch mode
  • Firebase SDK caching

Optimization:

  • Limit number of concurrent containers
  • Process in reasonable batch sizes
  • Clear buffers after processing
  • Monitor Node-RED memory usage

Monitoring and Debugging

Status Indicators

All Vision Platform nodes show connection status:

  • Green dot: Connected and operational
  • Yellow dot: Connecting or waiting
  • Red ring: Error or disconnected

Performance Tracking

Enable performance tracking to monitor:

  • Inference time per image
  • Total pipeline time
  • Queue wait times
  • Network latency

Access via: msg.performance
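A small function node can fold these per-message timings into running statistics in flow context. The `inferenceMs` field name below is illustrative; check the node documentation for the actual keys exposed on `msg.performance`.

```javascript
// Accumulate per-message inference timings into a running average.
// In a function node: stats would live in flow context, e.g.
//   const stats = flow.get("stats") || { count: 0, totalMs: 0, avgMs: 0 };
function updateStats(stats, inferenceMs) {
    stats.count += 1;
    stats.totalMs += inferenceMs;
    stats.avgMs = stats.totalMs / stats.count;
    return stats;
}
```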

Debugging Tools

  1. Debug nodes: Monitor message flow
  2. Docker logs: Check container output
  3. Firebase console: View data directly
  4. Node-RED logs: Check for errors
  5. Network inspector: Monitor Firebase calls

Common Issues

Slow inference

Solutions:

  • Check Docker container resources
  • Verify image sizes are appropriate
  • Ensure models are warmed up
  • Monitor network bandwidth

Connection failures

Solutions:

  • Verify Firebase credentials
  • Check network connectivity
  • Review security rules
  • Monitor Firebase quotas

Memory leaks

Solutions:

  • Clear promise arrays after resolution
  • Dispose of large image buffers
  • Restart containers periodically
  • Monitor memory usage trends

Troubleshooting

See individual node documentation for specific troubleshooting:

Resources

Support

For issues specific to Vision Platform nodes:

  1. Check individual node documentation
  2. Review Node-RED debug output
  3. Verify Firebase console data
  4. Check Docker container logs
  5. Consult Rosepetal documentation

Version Compatibility

  • Node-RED: >=3.0.0
  • Docker: >=20.10.0
  • Firebase SDK: 11.x
  • Python: 3.9+ (for inference containers)