Code Executor Node (gRPC)

Overview

The Code Executor node enables execution of Python code through a gRPC connection to a Docker-based Python server. It provides a complete Python development environment within Node-RED, including a code editor, AI-powered coding assistant, parameter management, and execution logging.

Key Features

  • Remote Python execution: Execute Python code via gRPC in an isolated Docker container
  • Multiple code sources: Load code from files, message properties, or the built-in editor
  • Parameter passing: Send data from Node-RED to Python with automatic serialization
  • AI coding assistant: Integrated GPT-4o-mini chat for code generation and help
  • Live code editor: Built-in Ace editor with Python syntax highlighting
  • Execution testing: Run code directly from the editor without deploying the flow
  • Conversation management: Multiple chat sessions with history
  • Performance tracking: Automatic timing of Python execution
  • Flexible output: Store results anywhere in the message object

Architecture

Components

  1. Node-RED Node: Client-side interface and configuration
  2. gRPC Client: Communicates with Python server
  3. Docker Container: Isolated Python execution environment
  4. Python Server: Handles code execution via gRPC
  5. OpenAI Integration: Optional AI assistant for code help

Data Flow

Message → Node Config → gRPC Client → Docker Python Server → Execute Code → gRPC Response → Message Output
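
Conceptually, the server unpacks the MessagePack-encoded parameters into a namespace, executes the submitted code in that namespace, and returns whatever the code assigned to result along with the measured execution time. The sketch below only illustrates that idea; it is not the implementation bundled in the Docker image, and the function and field names are assumptions:

python
# Illustrative sketch of the server-side execution step (not the actual
# server implementation shipped in the Docker image)
import time
import msgpack

def run_user_code(code: str, packed_params: bytes) -> dict:
    # Expose the deserialized parameters as variables visible to the user code
    namespace = dict(msgpack.unpackb(packed_params, raw=False))
    start = time.perf_counter()
    exec(code, namespace)            # the user code must assign 'result'
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"result": namespace.get("result"), "pythonTime": elapsed_ms}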

Configuration

Properties

Name

  • Type: String
  • Optional: Yes
  • Description: Custom name for the node instance

Port

  • Type: Number
  • Default: 50051
  • Description: Port where the gRPC client connects. Must match the Docker container's host port.

Python Code Source

  • Type: Select
  • Options:
    • File Path: Load code from a file on the filesystem
    • Message: Retrieve code from a message property
    • Code: Write code directly in the built-in editor

Code File Path

  • Type: String
  • Visible: When source is "File Path"
  • Description: Absolute path to Python script file
  • Example: /home/user/scripts/process_image.py

Code Message Path

  • Type: String
  • Default: msg.code
  • Visible: When source is "Message"
  • Description: Message property containing Python code as a string

Output Message Path

  • Type: String
  • Default: msg.result
  • Description: Where to store the execution result in the output message

Parameters

Parameters allow you to pass data from Node-RED to your Python code.

Parameter Configuration

Each parameter has:

  • Key: Variable name accessible in Python code
  • Value: Data to pass (literal value or message property reference)

Value Types

Literal values:

javascript
Key: threshold
Value: 90
// Python receives: threshold = "90" (as string)

Message property references:

javascript
Key: imageData
Value: msg.payload
// Python receives: imageData = <contents of msg.payload>

Important Notes:

  • Parameters are automatically serialized using MessagePack
  • Simple values (strings, numbers) are received as strings
  • Complex objects (Buffers, arrays, objects) preserve their structure
  • Convert string parameters to appropriate types in your Python code, as shown in the sketch below
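
For example, a few lines at the top of your script can normalize parameters before use. This is only a sketch; the parameter names (threshold, quality, enabled, imageData) are illustrative:

python
# Coerce string parameters up front (parameter names are illustrative)
threshold = int(threshold)                   # "90"   -> 90
quality = float(quality)                     # "0.8"  -> 0.8
enabled = str(enabled).lower() == 'true'     # "true" -> True
# A Buffer referenced from msg.payload keeps its structure and arrives as
# bytes, so imageData can be used directly, e.g. io.BytesIO(imageData)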

Python Code Editor

Features

  • Syntax highlighting: Python syntax coloring
  • Line numbers: Easy code navigation
  • Auto-indentation: Smart indentation for Python
  • Test execution: Run code without deploying the flow
  • Full-screen support: Resizable editor

Code Requirements

Your Python code must follow these rules (a combined sketch appears after the list):

  1. Use parameters as pre-existing variables

    python
    # If parameter "threshold" is defined, just use it:
    if int(threshold) > 50:  # Convert from string
        print("High threshold")
  2. Store output in result variable

    python
    # The final result must be assigned to 'result'
    result = processed_data
    # Do NOT use return statement
  3. Handle restricted environment

    • Not all Python libraries are available
    • Limited built-in functions
    • Add missing libraries to Docker container and restart
  4. Use helper functions

    python
    def process_image(data):
        # Helper function logic
        return processed
    
    # Main code
    result = process_image(imageData)
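
Putting these rules together, a minimal script could look like the following sketch (the parameter name threshold and the helper name scale are illustrative):

python
# Minimal template following the rules above (names are illustrative)
def scale(value, factor):
    # Helper function containing the actual logic
    return value * factor

threshold = int(threshold)       # parameters already exist; coerce the type
result = scale(threshold, 2)     # assign the final output to 'result', no return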

Running Code

Click the "Run Code" button to execute code immediately:

  • Tests code without deploying flow
  • Shows execution output and logs
  • Only works with literal parameter values (not message references)
  • Useful for debugging and validation

AI Coding Assistant

Overview

Integrated GPT-4o-mini chat interface provides:

  • Code generation based on your requirements
  • Code refactoring and optimization
  • Debugging help
  • Documentation and explanations

Features

Conversation Management

  • Multiple chats: Create separate conversations for different tasks
  • Persistent history: Conversations saved in node context
  • Editable titles: Rename conversations for organization
  • Delete chats: Remove unwanted conversations

Automatic Code Modification

  • Enable: Check "Allow automatic code modification"
  • Behavior: Assistant's code suggestions automatically update the editor
  • Disable: Review suggestions manually before applying

Using the Assistant

  1. New Conversation: Click "+ New Chat"
  2. Select Conversation: Click on conversation in sidebar
  3. Type Message: Describe what you need
  4. Send: Press Enter or click "Send"
  5. Review: Check the assistant's response
  6. Apply Code: If enabled, code updates automatically

Best Practices

Effective prompts:

Good: "Create a function to resize an image using PIL,
       parameters are imageData (bytes) and size (int)"

Bad:  "Help me"

Context awareness:

  • Knows your configured parameters
  • Can see the current editor code
  • Uses the last 3 messages of the conversation as context

Input/Output

Input Message

The node accepts any message object. Parameters are extracted based on configuration.

Example input:

javascript
msg.payload = imageBuffer;
msg.config = { threshold: 85, quality: "high" };

With parameters:

Key: imageData, Value: msg.payload
Key: threshold, Value: msg.config.threshold
Key: quality, Value: msg.config.quality

Output Message

The original message is passed through with the execution result added at the configured path.

Example output:

javascript
msg.result = {
  processedImage: Buffer,
  metadata: { width: 1920, height: 1080 }
};

msg.performance = {
  "code_executor": {
    pythonTime: 145.2,  // Python execution time (ms)
    totalTime: {
      start: Date,
      end: Date,
      milliseconds: 168  // Total round-trip time (ms)
    }
  }
};

Performance Tracking

Automatic performance metrics:

  • pythonTime: Time spent executing Python code (measured by server)
  • totalTime: Total round-trip including network overhead
  • Node status: Shows execution time under the node

Usage Examples

Example 1: Image Processing

Configuration:

  • Source: Code
  • Parameter 1: Key imageData, Value msg.payload
  • Parameter 2: Key width, Value 800
  • Output: msg.result

Python Code:

python
from PIL import Image
import io

# imageData is already available as parameter
img = Image.open(io.BytesIO(imageData))

# Resize to specified width
target_width = int(width)
aspect_ratio = img.height / img.width
target_height = int(target_width * aspect_ratio)

resized = img.resize((target_width, target_height))

# Save to bytes
output = io.BytesIO()
resized.save(output, format='JPEG')

result = {
    'image': output.getvalue(),
    'width': target_width,
    'height': target_height
}

Example 2: Data Analysis

Configuration:

  • Source: Code
  • Parameter: Key dataset, Value msg.data
  • Output: msg.analysis

Python Code:

python
import json
import statistics

# Parse dataset (comes as JSON string)
data = json.loads(dataset) if isinstance(dataset, str) else dataset
values = [item['value'] for item in data]

result = {
    'count': len(values),
    'mean': statistics.mean(values),
    'median': statistics.median(values),
    'stdev': statistics.stdev(values) if len(values) > 1 else 0,
    'min': min(values),
    'max': max(values)
}

Example 3: Using Code from Message

Function node before Code Executor:

javascript
msg.code = `
import math

angle_deg = float(angle)
angle_rad = math.radians(angle_deg)

result = {
    'sin': math.sin(angle_rad),
    'cos': math.cos(angle_rad),
    'tan': math.tan(angle_rad)
}
`;

msg.angle = 45;
return msg;

Code Executor Configuration:

  • Source: Message
  • Code Path: msg.code
  • Parameter: Key angle, Value msg.angle
  • Output: msg.trigValues

Example 4: File-based Code

Python file: /opt/scripts/barcode_decoder.py

python
# barcode_decoder.py
import cv2
import numpy as np
from pyzbar import pyzbar

# imageBytes parameter contains image data
nparr = np.frombuffer(imageBytes, np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

# Decode barcodes
barcodes = pyzbar.decode(image)

result = [{
    'type': barcode.type,
    'data': barcode.data.decode('utf-8'),
    'rect': barcode.rect._asdict()
} for barcode in barcodes]

Code Executor Configuration:

  • Source: File Path
  • File Path: /opt/scripts/barcode_decoder.py
  • Parameter: Key imageBytes, Value msg.payload
  • Output: msg.barcodes

Docker Setup

Requirements

The node requires a Docker container running the Python gRPC server.

Docker Instructions

Reference file: rosepetal-additional-nodes/rosepetal-codeExecutor-node/docker_instructions.txt

Basic setup:

bash
# Build the Docker image
docker build -t code-executor-server .

# Run the container
docker run -d \
  --name code-executor-server \
  -p 50051:50051 \
  code-executor-server

Port mapping:

  • Container internal port: 50051
  • Host port: 50051 (configurable)
  • Node configuration must match host port

Adding Python Libraries

To add libraries to the execution environment:

  1. Edit the requirements.txt used to build the Docker image
  2. Rebuild the Docker image
  3. Restart the container

Example:

txt
# requirements.txt
numpy==1.24.0
opencv-python==4.8.0
Pillow==10.0.0
pyzbar==0.1.9

Checking Logs

View Python execution logs:

bash
docker logs code-executor-server

API Key Configuration

The AI assistant requires an OpenAI API key.

Setup

  1. Create file: rosepetal-additional-nodes/rosepetal-codeExecutor-node/api_key.txt
  2. Add your OpenAI API key (single line)
  3. Restart Node-RED

Example:

sk-proj-abcdefghijklmnopqrstuvwxyz1234567890

Security note: Keep this file secure and never commit it to version control.

Error Handling

Common Errors

"Node not found"

  • Cause: Node ID mismatch in HTTP requests
  • Solution: Redeploy the flow

"gRPC Execution Error"

  • Cause: Docker container not running or port mismatch
  • Solution: Check Docker status and port configuration

"Error reading API key file"

  • Cause: Missing or invalid api_key.txt
  • Solution: Create file with valid OpenAI API key

"Error from server: ModuleNotFoundError"

  • Cause: Python library not installed in Docker container
  • Solution: Add library to requirements.txt and rebuild image

"Could not resolve: msg.value"

  • Cause: Referenced message property doesn't exist
  • Solution: Verify message structure and parameter configuration

Debugging Tips

  1. Test execution: Use "Run Code" button to test without message flow
  2. Check logs: View Docker logs for Python-side errors
  3. Verify parameters: Ensure parameter values exist in the message (see the diagnostic snippet below)
  4. Simplify code: Test with minimal code first, then build up
  5. Type conversion: Remember that parameters arrive as strings
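
When a parameter seems to be missing, a quick diagnostic you can run with the "Run Code" button is to return the injected variables and their types (no particular parameters are assumed here):

python
# Report every injected variable and its type as the execution result
result = {name: type(value).__name__
          for name, value in list(locals().items())
          if not name.startswith('__')}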

Best Practices

Code Organization

  1. Separate concerns: Use file-based code for complex logic
  2. Helper functions: Break down complex operations
  3. Error handling: Use try-except blocks in Python (see the sketch below)
  4. Type checking: Validate parameter types before using them
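
A minimal sketch of points 3 and 4, assuming a single parameter named threshold (the name and the valid range are illustrative):

python
# Validate and convert a parameter, surfacing failures in the result
try:
    threshold = int(threshold)          # parameters arrive as strings
    if not 0 <= threshold <= 100:
        raise ValueError("threshold must be between 0 and 100")
    result = {"ok": True, "threshold": threshold}
except (ValueError, TypeError) as exc:
    # Return the error instead of letting the execution fail
    result = {"ok": False, "error": str(exc)}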

Performance

  1. Minimize data transfer: Only send necessary data
  2. Batch operations: Process multiple items in one call
  3. Cache results: Store expensive computations
  4. Monitor timing: Watch performance metrics

Security

  1. Validate input: Don't trust incoming data
  2. Restrict imports: Only import necessary modules
  3. Sandbox execution: Keep Docker container isolated
  4. Secure API key: Protect OpenAI API key file

Development Workflow

  1. Start with assistant: Use AI to generate initial code
  2. Test in editor: Run code with "Run Code" button
  3. Deploy and test: Test with real message flow
  4. Iterate: Refine based on results
  5. Move to file: Once stable, migrate to file-based code

Limitations

Python Environment

  • Restricted built-in functions
  • Limited standard library access
  • Must install third-party libraries explicitly
  • No access to Node-RED filesystem

Code Execution

  • Synchronous execution only
  • Single result per execution
  • No streaming output
  • Limited error context

AI Assistant

  • Requires OpenAI API key
  • Uses GPT-4o-mini model (not configurable)
  • Limited conversation context (last 3 messages)
  • API costs apply per message

Troubleshooting

Docker Container Issues

Container not starting:

bash
# Check Docker logs
docker logs code-executor-server

# Restart container
docker restart code-executor-server

# Rebuild if needed
docker build -t code-executor-server .
docker rm code-executor-server
docker run -d --name code-executor-server -p 50051:50051 code-executor-server

Port already in use:

bash
# Find process using port
lsof -i :50051

# Change port in node configuration and Docker run command
docker run -d --name code-executor-server -p 50052:50051 code-executor-server

Code Execution Issues

Parameter not found:

python
# Check parameter name matches configuration
print("Available locals:", list(locals().keys()))

Type errors:

python
# Convert string parameters
threshold = int(threshold)
quality = float(quality)
enabled = str(enabled).lower() == 'true'  # boolean-like parameters arrive as strings

Import errors:

python
# Check if module is available
try:
    import module_name
except ImportError:
    result = {"error": "module_name not installed"}

See Also