17 commits
c067df3 feat(export): add workflow export as standalone Python/FastAPI service (aadamsx, Dec 27, 2025)
911acb0 fix(export): address code review feedback - add proper TypeScript types (aadamsx, Dec 27, 2025)
6c6082e fix(export): replace alert() with notification system (aadamsx, Dec 27, 2025)
3ac556b fix(export): address critical security vulnerabilities (aadamsx, Dec 27, 2025)
a204694 fix(export): use ref for isExporting to avoid stale closure (aadamsx, Dec 27, 2025)
a36504e refactor(export): extract templates and modularize export service (aadamsx, Dec 27, 2025)
3325f2b feat(export): add configurable local file tools with WORKSPACE_DIR (aadamsx, Dec 27, 2025)
3a3e0ce docs: add comprehensive export service documentation to README (aadamsx, Dec 27, 2025)
ce76ee4 feat(export): add comprehensive file operations with binary support, … (aadamsx, Dec 27, 2025)
7817c52 feat(export): add support for all 13 LLM providers (aadamsx, Dec 27, 2025)
953b708 fix: correct templates directory path for Next.js dev mode (aadamsx, Dec 28, 2025)
d56a697 docs: add comprehensive workflow creation and export guide (aadamsx, Dec 28, 2025)
7088404 docs: expand workflow guide with tools, variable refs, and extension … (aadamsx, Dec 29, 2025)
35b0440 fix(export-service): deduplicate tools to prevent API errors (aadamsx, Dec 29, 2025)
c025aac Merge origin/staging into feature/workflow-to-process (aadamsx, Dec 29, 2025)
bf07e13 docs: add working example and key learnings to workflow guide (aadamsx, Dec 31, 2025)
a01c12c chore: remove workflow guide from PR (moved to local reference) (aadamsx, Dec 31, 2025)
README.md: 239 additions, 0 deletions

@@ -262,6 +262,245 @@ If ports 3000, 3002, or 5432 are in use, configure alternatives:
NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d
```

## Export Workflows as Standalone Services

Export any workflow as a self-contained Python/FastAPI service that can be deployed independently via Docker, Railway, or any container platform.

### Quick Start

1. Right-click a workflow in the sidebar
2. Select **"Export as Service"**
3. Extract the ZIP file
4. Configure `.env` with your API keys
5. Run the service:

```bash
# With Docker (recommended)
docker compose up -d

# Or run directly
pip install -r requirements.txt
uvicorn main:app --port 8080

# Execute the workflow
curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{"your": "input"}'
```

### Exported Files

| File | Description |
|------|-------------|
| `workflow.json` | Workflow definition (blocks, connections, configuration) |
| `.env` | Environment variables with your decrypted API keys |
| `.env.example` | Template without sensitive values (safe to commit) |
| `main.py` | FastAPI server with /execute, /health, /ready endpoints |
| `executor.py` | DAG execution engine |
| `handlers/` | Block type handlers (agent, function, condition, etc.) |
| `tools.py` | Native file operation tools |
| `resolver.py` | Variable and input resolution |
| `Dockerfile` | Container configuration |
| `docker-compose.yml` | Docker Compose setup with volume mounts |
| `requirements.txt` | Python dependencies |
| `README.md` | Usage instructions |

### Multi-Provider LLM Support

The exported service automatically detects and routes requests to the correct provider based on the model name:

| Provider | Models | Environment Variable |
|----------|--------|---------------------|
| **Anthropic** | Claude 4 (Opus, Sonnet), Claude 3.5, Claude 3 | `ANTHROPIC_API_KEY` |
| **OpenAI** | GPT-4, GPT-4o, o1, o3 | `OPENAI_API_KEY` |
| **Google** | Gemini Pro, Gemini Flash | `GOOGLE_API_KEY` |

For example, `claude-sonnet-4-20250514` routes to Anthropic and `gpt-4o` routes to OpenAI.
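
As a rough illustration, this kind of detection can be as simple as a prefix table. The function and prefix map below are illustrative assumptions, not the exported service's actual code:

```python
# Illustrative prefix-based routing; the shipped detection logic may differ.
PROVIDER_PREFIXES = {
    "claude": "anthropic",
    "gpt": "openai",
    "o1": "openai",
    "o3": "openai",
    "gemini": "google",
}

def detect_provider(model: str) -> str:
    """Map a model name to a provider, e.g. 'gpt-4o' -> 'openai'."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.lower().startswith(prefix):
            return provider
    raise ValueError(f"Unsupported model: {model}")
```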

### Supported Block Types

| Block Type | Description |
|------------|-------------|
| **Start/Trigger** | Entry point for workflow execution |
| **Agent** | LLM calls with tool support (MCP and native) |
| **Function** | Custom code (JavaScript auto-transpiled to Python) |
| **Condition** | Branching logic with safe expression evaluation |
| **Router** | Multi-path routing based on conditions |
| **API** | HTTP requests to external services |
| **Loop** | Iteration (for, forEach, while, doWhile) |
| **Variables** | State management across blocks |
| **Response** | Final output formatting |

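Internally, `executor.py` walks the workflow DAG and routes each block to a handler in `handlers/`. A minimal sketch of what that dispatch could look like (handler names and signatures are assumptions):

```python
# Hypothetical dispatch from executor.py into handlers/; names are illustrative.
from typing import Any, Callable

Handler = Callable[[dict, dict], Any]  # (block, execution context) -> output

def handle_agent(block: dict, ctx: dict) -> Any: ...
def handle_function(block: dict, ctx: dict) -> Any: ...
def handle_condition(block: dict, ctx: dict) -> Any: ...

HANDLERS: dict[str, Handler] = {
    "agent": handle_agent,
    "function": handle_function,
    "condition": handle_condition,
}

def run_block(block: dict, ctx: dict) -> Any:
    handler = HANDLERS.get(block["type"])
    if handler is None:
        raise ValueError(f"Unsupported block type: {block['type']}")
    return handler(block, ctx)
```
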
### File Operations

Agents can perform file operations in two ways:

#### Option 1: Local File Tools (WORKSPACE_DIR)

Set `WORKSPACE_DIR` in `.env` to enable sandboxed local file operations:

```bash
# In .env
WORKSPACE_DIR=./workspace
```

When enabled, agents automatically get access to these tools:

| Tool | Description |
|------|-------------|
| `local_write_file` | Write text content to a file |
| `local_write_bytes` | Write binary data (images, PDFs) as base64 |
| `local_append_file` | Append text to a file (creates it if missing) |
| `local_read_file` | Read text content from a file |
| `local_read_bytes` | Read binary data as base64 |
| `local_delete_file` | Delete a file |
| `local_list_directory` | List files with metadata (size, modified time) |

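For a sense of how the sandbox can be enforced, here is a sketch of a `local_write_file`-style tool; this is an assumption about the approach, not the shipped `tools.py`:

```python
# Hypothetical sandboxed write tool; the real implementation may differ.
import os
from pathlib import Path

WORKSPACE_DIR = Path(os.environ["WORKSPACE_DIR"]).resolve()

def local_write_file(relative_path: str, content: str) -> str:
    """Write text inside WORKSPACE_DIR, rejecting .. and symlink escapes."""
    target = (WORKSPACE_DIR / relative_path).resolve()  # resolves symlinks too
    if not target.is_relative_to(WORKSPACE_DIR):        # Python 3.9+
        raise PermissionError(f"Path escapes workspace: {relative_path}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
    return str(target)
```
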
**Enable Command Execution** (opt-in for security):

```bash
# In .env
WORKSPACE_DIR=./workspace
ENABLE_COMMAND_EXECUTION=true
```

When enabled, agents also get:

| Tool | Description |
|------|-------------|
| `local_execute_command` | Run commands like `python script.py` or `node process.js` |

Shell operators (`|`, `>`, `&&`, etc.) are blocked for security.

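A sketch of how `local_execute_command` might vet and run a command, assuming `shlex` tokenization and no shell (the actual validation in the service may differ):

```python
# Hypothetical command tool; illustrative only.
import shlex
import subprocess

FORBIDDEN = ("|", ">", "<", "&&", "||", ";", "&", "`", "$(")

def local_execute_command(command: str, cwd: str) -> str:
    """Run a command without a shell, rejecting shell operators up front."""
    if any(op in command for op in FORBIDDEN):
        raise PermissionError("Shell operators are not allowed")
    args = shlex.split(command)       # tokenize; no shell ever sees the string
    result = subprocess.run(args, cwd=cwd, capture_output=True,
                            text=True, timeout=60)   # shell=False by default
    return result.stdout + result.stderr
```
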
**File Size Limits:**

```bash
# Default: 100MB. Set custom limit in bytes:
MAX_FILE_SIZE=52428800 # 50MB
```

**Security:** All paths are sandboxed to `WORKSPACE_DIR`. Path traversal attacks (`../`) and symlink escapes are blocked. Agents cannot access files outside the workspace directory.

**With Docker:** The `docker-compose.yml` mounts `./output` on your host to `/app/workspace` in the container:

```bash
docker compose up -d
# Files written by agents appear in ./output/ on your host machine
```

#### Option 2: MCP Filesystem Tools

If your workflow uses MCP filesystem servers, those tools work as configured. MCP servers handle file operations on their own systems; paths and permissions are determined by each server's configuration.

#### Using Both Together

You can enable both options simultaneously. If `WORKSPACE_DIR` is set, agents will have access to:
- Local file tools (`local_write_file`, etc.) for the sandboxed workspace
- MCP tools for external filesystem servers

The LLM chooses the appropriate tool based on the tool descriptions and context.

#### Health Check with Workspace Status

The `/health` endpoint returns workspace configuration status:

```json
{
  "status": "healthy",
  "workspace": {
    "enabled": true,
    "workspace_dir": "/app/workspace",
    "command_execution_enabled": false,
    "max_file_size": 104857600
  }
}
```
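
A minimal FastAPI sketch that would produce a response of this shape; the environment-variable handling here is an assumption, not the generated `main.py`:

```python
# Sketch of a health endpoint reporting workspace configuration status.
import os
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health() -> dict:
    workspace_dir = os.environ.get("WORKSPACE_DIR")
    return {
        "status": "healthy",
        "workspace": {
            "enabled": workspace_dir is not None,
            "workspace_dir": workspace_dir,
            "command_execution_enabled":
                os.environ.get("ENABLE_COMMAND_EXECUTION", "false") == "true",
            "max_file_size": int(os.environ.get("MAX_FILE_SIZE", 104857600)),
        },
    }
```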

### API Endpoints

The exported service provides these endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/execute` | POST | Execute the workflow with input data |
| `/health` | GET | Health check (returns `{"status": "healthy"}`) |
| `/ready` | GET | Readiness check |

**Example execution:**

```bash
curl -X POST http://localhost:8080/execute \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Analyze this data",
    "data": {"key": "value"}
  }'
```
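
The same call from Python, using the `requests` library:

```python
# Execute the workflow from a Python client.
import requests

resp = requests.post(
    "http://localhost:8080/execute",
    json={"message": "Analyze this data", "data": {"key": "value"}},
    timeout=120,  # LLM-backed workflows can take a while to finish
)
resp.raise_for_status()
print(resp.json())
```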

### Docker Deployment

```bash
# Build and run with Docker Compose
docker compose up -d

# View logs
docker compose logs -f

# Stop
docker compose down
```

**Manual Docker build:**

```bash
docker build -t my-workflow .
docker run -p 8080:8080 --env-file .env my-workflow
```

### Production Configuration

| Environment Variable | Default | Description |
|---------------------|---------|-------------|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8080` | Server port |
| `WORKSPACE_DIR` | (disabled) | Enable local file tools with sandbox path |
| `ENABLE_COMMAND_EXECUTION` | `false` | Allow agents to execute commands |
| `MAX_FILE_SIZE` | `104857600` (100MB) | Maximum file size in bytes |
| `WORKFLOW_PATH` | `workflow.json` | Path to workflow definition |
| `RATE_LIMIT_REQUESTS` | `60` | Max requests per rate limit window |
| `RATE_LIMIT_WINDOW` | `60` | Rate limit window in seconds |
| `MAX_REQUEST_SIZE` | `10485760` (10MB) | Maximum HTTP request body size |
| `LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |

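As an illustration, `RATE_LIMIT_REQUESTS` and `RATE_LIMIT_WINDOW` could be enforced with a simple sliding-window check like the one below; the exported service's actual middleware may differ:

```python
# Illustrative sliding-window rate limiter.
import os
import time
from collections import deque

RATE_LIMIT_REQUESTS = int(os.environ.get("RATE_LIMIT_REQUESTS", 60))
RATE_LIMIT_WINDOW = int(os.environ.get("RATE_LIMIT_WINDOW", 60))
_timestamps: deque[float] = deque()

def allow_request() -> bool:
    """Return True if another request fits in the current window."""
    now = time.monotonic()
    while _timestamps and now - _timestamps[0] > RATE_LIMIT_WINDOW:
        _timestamps.popleft()          # drop requests outside the window
    if len(_timestamps) >= RATE_LIMIT_REQUESTS:
        return False
    _timestamps.append(now)
    return True
```
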
### Security

The exported service implements multiple security measures:

- **No `eval()`**: All condition evaluation uses safe AST-based parsing (see the sketch after this list)
- **No `shell=True`**: Commands executed without shell to prevent injection
- **Sandboxed file operations**: All paths restricted to `WORKSPACE_DIR`
- **Shell operator rejection**: Pipes, redirects, and command chaining blocked
- **Path traversal protection**: `..` and symlink escapes blocked
- **File size limits**: Configurable max file size (default 100MB)
- **Input validation**: Request size limits (default 10MB)
- **Rate limiting**: Configurable request rate limits (default 60/min)

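To make the first point concrete: a condition such as `x > 3 and x < 10` can be evaluated by walking a restricted AST instead of calling `eval()`. The sketch below shows one way to do it and is not the exported service's exact implementation:

```python
# A restricted AST walker for workflow conditions; illustrative only.
import ast
import operator

_BIN_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}
_CMP_OPS = {ast.Eq: operator.eq, ast.NotEq: operator.ne,
            ast.Lt: operator.lt, ast.LtE: operator.le,
            ast.Gt: operator.gt, ast.GtE: operator.ge}

def safe_eval(expr: str, variables: dict) -> object:
    """Evaluate a comparison/arithmetic expression without eval()."""
    def walk(node: ast.AST) -> object:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]           # unknown names raise KeyError
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](walk(node.left), walk(node.right))
        if (isinstance(node, ast.Compare) and len(node.ops) == 1
                and type(node.ops[0]) in _CMP_OPS):
            return _CMP_OPS[type(node.ops[0])](walk(node.left),
                                               walk(node.comparators[0]))
        if isinstance(node, ast.BoolOp):        # note: no short-circuiting
            vals = [walk(v) for v in node.values]
            return all(vals) if isinstance(node.op, ast.And) else any(vals)
        raise ValueError(f"Unsupported expression node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

# e.g. safe_eval("x > 3 and x < 10", {"x": 5}) -> True
```
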
### MCP Tool Support

The exported service supports MCP (Model Context Protocol) tools via the official Python SDK. MCP servers must be running and accessible at their configured URLs.

MCP tools configured in your workflow are automatically available to agent blocks. The service connects to MCP servers via Streamable HTTP transport.

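For reference, connecting to a Streamable HTTP MCP server with the official `mcp` Python SDK looks roughly like this (the server URL is a placeholder):

```python
# Listing a server's tools with the official mcp SDK over Streamable HTTP.
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def list_mcp_tools(url: str) -> None:
    async with streamablehttp_client(url) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_mcp_tools("http://localhost:3001/mcp"))  # placeholder URL
```
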
### Export Validation

Before export, your workflow is validated for compatibility:

- **Unsupported block types**: Shows which blocks cannot be exported
- **Unsupported providers**: Shows which LLM providers are not yet supported
- **Clear error messages**: Shown via the notification system with actionable feedback

If validation fails, you'll see a notification explaining what needs to be changed.

## Tech Stack

- **Framework**: [Next.js](https://nextjs.org/) (App Router)