OpenFlow Protocol

LLM Node

Large Language Model (LLM) nodes handle text generation, chat completion, and vision tasks.

Schema

{
  id: "alphanumeric_underscore, 1-50 chars (required, unique node identifier)",
  type: "'LLM' (required, literal node type)",
  name: "string, 1-100 chars (required, display name)",
  config: {
    provider: "'openai' | 'anthropic' | 'grok' (required, LLM provider)",
    model: "provider_model_id, e.g., 'gpt-4', 'claude-3-sonnet', 'grok-2' (required, specific model)",
    max_tokens: "integer, 1-32000, default: 1000 (optional, response length limit)",
    temperature: "float, 0.0-2.0, default: 0.7 (optional, randomness control)",
    mcp_servers: "mcp_server_config_array, [{name: 'server1', endpoint: 'http://localhost:3000'}] (optional)",
    tools: "mcp_tools_config, {enabled: true, tools: ['search', 'calculator']} (optional)"
  },
  prompt_mapping: "variable_mapping_object, {user_input: '{{input_text}}', context: '{{doc.content}}'} (optional)",
  messages: "message_object_array, min 1 item (required, conversation messages)",
  output: "output_schema_object, defines expected response structure (required)",
}
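
The optional prompt_mapping, mcp_servers, and tools fields are not exercised in the example at the end of this page. The sketch below combines them; the server name, endpoint, tool names, and variable names are illustrative placeholders, and it assumes that each prompt_mapping key becomes a variable usable with the {{variable}} syntax in message text:

{
  id: "research_assistant",
  type: "LLM",
  name: "Research Assistant",
  config: {
    provider: "openai",
    model: "gpt-4",
    max_tokens: 1500,
    temperature: 0.3,
    // Illustrative MCP server entry; name and endpoint are placeholders
    mcp_servers: [{ name: "docs_server", endpoint: "http://localhost:3000" }],
    // Illustrative tool selection; tool names are placeholders
    tools: { enabled: true, tools: ["search", "calculator"] }
  },
  // Assumed behavior: each key becomes a variable available in message text
  prompt_mapping: {
    user_input: "{{input_text}}",
    context: "{{doc.content}}"
  },
  messages: [
    {
      type: "text",
      role: "user",
      text: "Using the context, answer the question.\n\nContext: {{context}}\n\nQuestion: {{user_input}}"
    }
  ],
  output: {
    answer: {
      type: "string",
      description: "Answer grounded in the provided context"
    }
  }
}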

Message Types

Text Message:

{
  type: "'text' (required, message type)",
  role: "'system' | 'user' | 'assistant' (required, message sender)",
  text: "interpolated_string, supports {{variable}} syntax, max 100000 chars (required, message content)",
}

Image Message:

{
  type: "'image' (required, message type)",
  role: "'user' | 'assistant' (required, message sender)",
  image_url: "https_url, e.g., 'https://example.com/image.jpg' (optional, remote image)",
  image_path: "file_system_path, e.g., '/path/to/image.png' (optional, local image)",
  image_data: "base64_string, format: 'data:image/jpeg;base64,/9j/4AAQ...' (optional, inline image)",
}
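
As a sketch, a vision request pairs a text instruction with an image message. The fragment below references a remote image; the URL is a placeholder, and it assumes a single image message supplies only one of image_url, image_path, or image_data:

messages: [
  {
    type: "text",
    role: "user",
    text: "Describe the chart shown in the image."
  },
  {
    // Placeholder URL; one image source per message is assumed here
    type: "image",
    role: "user",
    image_url: "https://example.com/charts/q3-revenue.png"
  }
]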

Output Schema Requirements

  • MUST define at least one output property
  • Each property MUST specify type and description
  • Array types MUST define item structure
  • Object types MUST define property schemas (see the sketch below)
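
For instance, an output schema that returns a list of structured entities might look like the sketch below. The items and properties keys follow common JSON-Schema-style conventions and are an assumption here; the excerpt above does not show the exact nesting the protocol expects:

{
  entities: {
    type: "array",
    description: "Entities extracted from the document",
    // Assumed JSON-Schema-style key for the array's item structure
    items: {
      type: "object",
      description: "A single extracted entity",
      // Assumed JSON-Schema-style key for the object's property schemas
      properties: {
        name: { type: "string", description: "Entity name" },
        category: { type: "string", description: "Entity category, e.g., person or organization" }
      }
    }
  }
}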

Example

{
  id: "analyze_document",
  type: "LLM",
  name: "Document Analysis",
  config: {
    provider: "grok",
    model: "grok-3-latest",
    max_tokens: 2000,
    temperature: 0.1
  },
  messages: [
    {
      type: "text",
      role: "system",
      text: "You are a document analysis expert. Extract key information accurately."
    },
    {
      type: "text",
      role: "user",
      text: "Analyze this document and extract: {{analysis_requirements}}"
    }
  ],
  output: {
    summary: {
      type: "string, DataType enum",
      description: "Comprehensive document summary with key insights"
    },
    confidence_score: {
      type: "number, DataType enum",
      description: "Analysis confidence percentage (higher = more confident)"
    }
  }
}
