Code Executor Node (gRPC)
Overview
The Code Executor node enables execution of Python code through a gRPC connection to a Docker-based Python server. It provides a complete Python development environment within Node-RED, including a code editor, AI-powered coding assistant, parameter management, and execution logging.
Key Features
- Remote Python execution: Execute Python code via gRPC in an isolated Docker container
- Multiple code sources: Load code from files, message properties, or the built-in editor
- Parameter passing: Send data from Node-RED to Python with automatic serialization
- AI coding assistant: Integrated GPT-4o-mini chat for code generation and help
- Live code editor: Built-in Ace editor with Python syntax highlighting
- Execution testing: Run code directly from the editor without deploying the flow
- Conversation management: Multiple chat sessions with history
- Performance tracking: Automatic timing of Python execution
- Flexible output: Store results anywhere in the message object
Architecture
Components
- Node-RED Node: Client-side interface and configuration
- gRPC Client: Communicates with Python server
- Docker Container: Isolated Python execution environment
- Python Server: Handles code execution via gRPC
- OpenAI Integration: Optional AI assistant for code help
Data Flow
Message → Node Config → gRPC Client → Docker Python Server → Execute Code → gRPC Response → Message Output
Configuration
Properties
Name
- Type: String
- Optional: Yes
- Description: Custom name for the node instance
Port
- Type: Number
- Default: 50051
- Description: Port where the gRPC client connects. Must match the Docker container's host port.
Python Code Source
- Type: Select
- Options:
- File Path: Load code from a file on the filesystem
- Message: Retrieve code from a message property
- Code: Write code directly in the built-in editor
Code File Path
- Type: String
- Visible: When source is "File Path"
- Description: Absolute path to Python script file
- Example: /home/user/scripts/process_image.py
Code Message Path
- Type: String
- Default: msg.code
- Visible: When source is "Message"
- Description: Message property containing Python code as a string
Output Message Path
- Type: String
- Default: msg.result
- Description: Where to store the execution result in the output message
Parameters
Parameters allow you to pass data from Node-RED to your Python code.
Parameter Configuration
Each parameter has:
- Key: Variable name accessible in Python code
- Value: Data to pass (literal value or message property reference)
Value Types
Literal values:
Key: threshold
Value: 90
// Python receives: threshold = "90" (as string)
Message property references:
Key: imageData
Value: msg.payload
// Python receives: imageData = <contents of msg.payload>
Important Notes:
- Parameters are automatically serialized using MessagePack
- Simple values (strings, numbers) are received as strings
- Complex objects (Buffers, arrays, objects) preserve their structure
- Convert string parameters to appropriate types in your Python code (see the sketch below)
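For example, a minimal sketch of the Python side, assuming parameters named threshold, quality, and imageData were configured on the node (the names are purely illustrative):
# Simple values arrive as strings and must be converted explicitly
threshold = int(threshold)                  # e.g. "90" -> 90
high_quality = quality.lower() == "high"
# Complex values (Buffers, arrays, objects) keep their structure;
# binary data typically arrives as bytes
image_size = len(imageData) if isinstance(imageData, (bytes, bytearray)) else None
result = {"threshold": threshold, "highQuality": high_quality, "imageSize": image_size}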
Python Code Editor
Features
- Syntax highlighting: Python syntax coloring
- Line numbers: Easy code navigation
- Auto-indentation: Smart indentation for Python
- Test execution: Run code without deploying the flow
- Full-screen support: Resizable editor
Code Requirements
Your Python code must follow these rules:
Use parameters as pre-existing variables
# If parameter "threshold" is defined, just use it:
if int(threshold) > 50:  # Convert from string
    print("High threshold")
Store output in the result variable
# The final result must be assigned to 'result'
result = processed_data  # Do NOT use a return statement
Handle the restricted environment
- Not all Python libraries are available
- Limited built-in functions
- Add missing libraries to the Docker container and restart
Use helper functions
def process_image(data):
    # Helper function logic
    return processed

# Main code
result = process_image(imageData)
Running Code
Click the "Run Code" button to execute code immediately:
- Tests code without deploying flow
- Shows execution output and logs
- Only works with literal parameter values (not message references)
- Useful for debugging and validation
AI Coding Assistant
Overview
Integrated GPT-4o-mini chat interface provides:
- Code generation based on your requirements
- Code refactoring and optimization
- Debugging help
- Documentation and explanations
Features
Conversation Management
- Multiple chats: Create separate conversations for different tasks
- Persistent history: Conversations saved in node context
- Editable titles: Rename conversations for organization
- Delete chats: Remove unwanted conversations
Automatic Code Modification
- Enable: Check "Allow automatic code modification"
- Behavior: Assistant's code suggestions automatically update the editor
- Disable: Review suggestions manually before applying
Using the Assistant
- New Conversation: Click "+ New Chat"
- Select Conversation: Click on conversation in sidebar
- Type Message: Describe what you need
- Send: Press Enter or click "Send"
- Review: Check the assistant's response
- Apply Code: If enabled, code updates automatically
Best Practices
Effective prompts:
Good: "Create a function to resize an image using PIL,
parameters are imageData (bytes) and size (int)"
Bad: "Help me"Context awareness:
- Assistant knows your parameters
- Can see current code
- Understands last 3 messages of conversation
Input/Output
Input Message
The node accepts any message object. Parameters are extracted based on configuration.
Example input:
msg.payload = imageBuffer;
msg.config = { threshold: 85, quality: "high" };
With parameters:
Key: imageData, Value: msg.payload
Key: threshold, Value: msg.config.threshold
Key: quality, Value: msg.config.quality
Output Message
Original message with result added at configured path.
Example output:
msg.result = {
    processedImage: Buffer,
    metadata: { width: 1920, height: 1080 }
};
msg.performance = {
    "code_executor": {
        pythonTime: 145.2,      // Python execution time (ms)
        totalTime: {
            start: Date,
            end: Date,
            milliseconds: 168   // Total round-trip time (ms)
        }
    }
};
Performance Tracking
Automatic performance metrics:
- pythonTime: Time spent executing Python code (measured by server)
- totalTime: Total round-trip including network overhead
- Node status: Shows execution time under the node
Usage Examples
Example 1: Image Processing
Configuration:
- Source: Code
- Parameter 1: imageData → msg.payload
- Parameter 2: width → 800
- Output: msg.result
Python Code:
from PIL import Image
import io
# imageData is already available as parameter
img = Image.open(io.BytesIO(imageData))
# Resize to specified width
target_width = int(width)
aspect_ratio = img.height / img.width
target_height = int(target_width * aspect_ratio)
resized = img.resize((target_width, target_height))
# Save to bytes
output = io.BytesIO()
resized.save(output, format='JPEG')
result = {
    'image': output.getvalue(),
    'width': target_width,
    'height': target_height
}
Example 2: Data Analysis
Configuration:
- Source: Code
- Parameter: dataset → msg.data
- Output: msg.analysis
Python Code:
import json
import statistics
# Parse dataset (comes as JSON string)
data = json.loads(dataset) if isinstance(dataset, str) else dataset
values = [item['value'] for item in data]
result = {
    'count': len(values),
    'mean': statistics.mean(values),
    'median': statistics.median(values),
    'stdev': statistics.stdev(values) if len(values) > 1 else 0,
    'min': min(values),
    'max': max(values)
}
Example 3: Using Code from Message
Function node before Code Executor:
msg.code = `
import math
angle_deg = float(angle)
angle_rad = math.radians(angle_deg)
result = {
    'sin': math.sin(angle_rad),
    'cos': math.cos(angle_rad),
    'tan': math.tan(angle_rad)
}
`;
msg.angle = 45;
return msg;
Code Executor Configuration:
- Source: Message
- Code Path: msg.code
- Parameter: angle → msg.angle
- Output: msg.trigValues
Example 4: File-based Code
Python file: /opt/scripts/barcode_decoder.py
# barcode_decoder.py
import cv2
import numpy as np
from pyzbar import pyzbar
# imageBytes parameter contains image data
nparr = np.frombuffer(imageBytes, np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
# Decode barcodes
barcodes = pyzbar.decode(image)
result = [{
    'type': barcode.type,
    'data': barcode.data.decode('utf-8'),
    'rect': barcode.rect._asdict()
} for barcode in barcodes]
Code Executor Configuration:
- Source: File Path
- File Path: /opt/scripts/barcode_decoder.py
- Parameter: imageBytes → msg.payload
- Output: msg.barcodes
Docker Setup
Requirements
The node requires a Docker container running the Python gRPC server.
Docker Instructions
Reference file: rosepetal-additional-nodes/rosepetal-codeExecutor-node/docker_instructions.txt
Basic setup:
# Build the Docker image
docker build -t code-executor-server .
# Run the container
docker run -d \
--name code-executor-server \
-p 50051:50051 \
code-executor-server
Port mapping:
- Container internal port: 50051
- Host port: 50051 (configurable)
- Node configuration must match the host port
Adding Python Libraries
To add libraries to the execution environment:
- Edit requirements.txt in the Docker image
- Rebuild the Docker image
- Restart the container
Example:
# requirements.txt
numpy==1.24.0
opencv-python==4.8.0
Pillow==10.0.0
pyzbar==0.1.9
Checking Logs
View Python execution logs:
docker logs code-executor-server
API Key Configuration
The AI assistant requires an OpenAI API key.
Setup
- Create the file rosepetal-additional-nodes/rosepetal-codeExecutor-node/api_key.txt
- Add your OpenAI API key (single line)
- Restart Node-RED
Example:
sk-proj-abcdefghijklmnopqrstuvwxyz1234567890
Security note: Keep this file secure and never commit it to version control.
Error Handling
Common Errors
"Node not found"
- Cause: Node ID mismatch in HTTP requests
- Solution: Redeploy the flow
"gRPC Execution Error"
- Cause: Docker container not running or port mismatch
- Solution: Check Docker status and port configuration
"Error reading API key file"
- Cause: Missing or invalid api_key.txt
- Solution: Create the file with a valid OpenAI API key
"Error from server: ModuleNotFoundError"
- Cause: Python library not installed in Docker container
- Solution: Add the library to requirements.txt and rebuild the image
"Could not resolve: msg.value"
- Cause: Referenced message property doesn't exist
- Solution: Verify message structure and parameter configuration
Debugging Tips
- Test execution: Use "Run Code" button to test without message flow
- Check logs: View Docker logs for Python-side errors
- Verify parameters: Ensure parameter values exist in the message (see the sketch after this list)
- Simplify code: Test with minimal code first, then build up
- Type conversion: Remember parameters come as strings
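When a parameter seems to be missing or has an unexpected type, a small throwaway snippet you can run in place of your real code to see exactly what the executor received:
# Report every injected variable and its Python type
result = {name: type(value).__name__
          for name, value in list(locals().items())
          if not name.startswith("_")}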
Best Practices
Code Organization
- Separate concerns: Use file-based code for complex logic
- Helper functions: Break down complex operations
- Error handling: Use try-except blocks in Python (see the sketch below)
- Type checking: Validate parameter types
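As an illustration of the error-handling point above, a minimal sketch that returns a failure as data instead of aborting the execution (the dataset parameter and result fields are illustrative):
import json
try:
    data = json.loads(dataset) if isinstance(dataset, str) else dataset
    values = [item["value"] for item in data]
    result = {"ok": True, "count": len(values), "total": sum(values)}
except (ValueError, KeyError, TypeError) as exc:
    # Report the problem to the flow rather than raising
    result = {"ok": False, "error": str(exc)}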
Performance
- Minimize data transfer: Only send necessary data
- Batch operations: Process multiple items in one call (see the sketch below)
- Cache results: Store expensive computations
- Monitor timing: Watch performance metrics
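To illustrate batching, a sketch that handles an entire list in a single execution rather than one gRPC round trip per item (the items parameter and field names are illustrative):
import json
# items is assumed to be a JSON array passed as a parameter
batch = json.loads(items) if isinstance(items, str) else items
# Process every element in one call
result = [{"id": entry.get("id"), "score": entry.get("value", 0) * 2}
          for entry in batch]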
Security
- Validate input: Don't trust incoming data
- Restrict imports: Only import necessary modules
- Sandbox execution: Keep Docker container isolated
- Secure API key: Protect OpenAI API key file
Development Workflow
- Start with assistant: Use AI to generate initial code
- Test in editor: Run code with "Run Code" button
- Deploy and test: Test with real message flow
- Iterate: Refine based on results
- Move to file: Once stable, migrate to file-based code
Limitations
Python Environment
- Restricted built-in functions
- Limited standard library access
- Must install third-party libraries explicitly
- No access to Node-RED filesystem
Code Execution
- Synchronous execution only
- Single result per execution
- No streaming output
- Limited error context
AI Assistant
- Requires OpenAI API key
- Uses GPT-4o-mini model (not configurable)
- Limited conversation context (last 3 messages)
- API costs apply per message
Troubleshooting
Docker Container Issues
Container not starting:
# Check Docker logs
docker logs code-executor-server
# Restart container
docker restart code-executor-server
# Rebuild if needed
docker build -t code-executor-server .
docker rm code-executor-server
docker run -d --name code-executor-server -p 50051:50051 code-executor-server
Port already in use:
# Find process using port
lsof -i :50051
# Change port in node configuration and Docker run command
docker run -d --name code-executor-server -p 50052:50051 code-executor-server
Code Execution Issues
Parameter not found:
# Check parameter name matches configuration
print("Available locals:", list(locals().keys()))Type errors:
# Convert string parameters
threshold = int(threshold)
quality = float(quality)
enabled = enabled.lower() == 'true'
Import errors:
# Check if module is available
try:
    import module_name
except ImportError:
    result = {"error": "module_name not installed"}