List Model Endpoints

Lists available provider endpoints for a specific AI model, including pricing and capabilities information for each provider.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits the given number of seconds before executing the node.
  • Delay After (sec) - Waits the given number of seconds after executing the node.
  • Continue On Error - If true, the automation continues even if the node fails. The default value is false.
info

If the Continue On Error property is true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Connection Id - The connection identifier from Connect node (optional if API Key is provided directly).
  • Author - Model author/provider name (e.g., "openai", "google", "anthropic"). Required and cannot be empty.
  • Slug - Model slug/identifier (e.g., "gpt-4o", "gemini-2.5-flash", "claude-sonnet-4"). Required and cannot be empty.

Options

Authentication

  • API Key - OpenRouter API key credential (optional if using Connection Id).
  • Use Robomotion AI Credits - Use Robomotion AI credits instead of your own API key.

Output

  • Endpoints - Model endpoint information object containing:
    • id - Model ID (e.g., "openai/gpt-4o")
    • name - Model display name
    • created - Creation timestamp
    • description - Model description
    • architecture - Architecture details:
      • input_modalities - Supported input types (e.g., ["text", "image"])
      • output_modalities - Supported output types (e.g., ["text"])
      • tokenizer - Tokenizer used
      • instruct_type - Instruction format
    • endpoints - Array of provider endpoints, each containing:
      • name - Provider name
      • context_length - Context window size in tokens
      • pricing - Pricing information:
        • request - Per-request cost
        • image - Per-image cost
        • prompt - Per-prompt-token cost
        • completion - Per-completion-token cost
      • provider_name - Provider display name
      • supported_parameters - Supported API parameters
      • quantization - Quantization level if applicable
      • max_completion_tokens - Maximum completion tokens
      • max_prompt_tokens - Maximum prompt tokens
      • status - Provider endpoint status
      • uptime_last_30m - Recent uptime percentage
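
The nested Endpoints structure above can be walked like any plain object. As an illustration (the sample object and the `summarizeProviders` helper below are hypothetical, shaped after the fields listed above):

```javascript
// Hypothetical sample shaped like the Endpoints output described above.
const sample = {
  id: "openai/gpt-4o",
  architecture: { input_modalities: ["text", "image"], output_modalities: ["text"] },
  endpoints: [
    {
      provider_name: "OpenAI",
      status: "available",
      context_length: 128000,
      pricing: { prompt: "0.0000025", completion: "0.00001" }
    }
  ]
};

// Walk the nested endpoints array and pull out a flat per-provider summary.
function summarizeProviders(model) {
  return model.endpoints.map(e => ({
    provider: e.provider_name,
    context: e.context_length,
    promptCost: parseFloat(e.pricing.prompt)
  }));
}
```

Note that pricing values arrive as strings, so they must be parsed before arithmetic, as the examples below also do.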

How It Works

When executed, the node:

  1. Validates the connection or creates a temporary client
  2. Validates that Author and Slug are not empty
  3. Builds the API request URL: /models/{author}/{slug}/endpoints
  4. Makes GET request to fetch endpoint information
  5. Parses and returns the complete endpoint data
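
Steps 2-3 can be sketched as a small helper. This is an illustration of the URL construction, not the node's actual internals; the use of `encodeURIComponent` is a defensive assumption:

```javascript
// Sketch of steps 2-3: validate inputs, then build the endpoints URL.
function buildEndpointsUrl(author, slug) {
  if (!author || !slug) {
    throw new Error("Author and Slug are required and cannot be empty");
  }
  return `/models/${encodeURIComponent(author)}/${encodeURIComponent(slug)}/endpoints`;
}
```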

Requirements

  • Either a valid Connection Id from Connect node OR direct API Key credentials OR Robomotion AI Credits
  • Valid model author and slug

Error Handling

The node will return specific errors in the following cases:

  • Empty or missing Author
  • Empty or missing Slug
  • Invalid model (404)
  • API authentication errors (401)
  • API service errors (500, 502, 503, 504)
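
If you need to branch on these failures in a downstream script, the HTTP status codes above can be mapped to readable labels. The helper and its messages below are illustrative; the node's actual error text may differ:

```javascript
// Hypothetical mapping of the HTTP statuses listed above to readable labels.
function describeEndpointError(status) {
  if (status === 401) return "API authentication error (401)";
  if (status === 404) return "Invalid model (404)";
  if ([500, 502, 503, 504].includes(status)) return `API service error (${status})`;
  return `Unexpected HTTP status (${status})`;
}
```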

Model Identifier Format

Models are identified by author/slug:

  • Author: Provider name (lowercase, e.g., "openai", "google", "anthropic")
  • Slug: Model identifier (e.g., "gpt-4o", "gemini-2.5-flash")

Common examples:

  • openai/gpt-4o
  • google/gemini-2.5-flash
  • anthropic/claude-sonnet-4
  • x-ai/grok-4
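
If you hold a combined identifier like the examples above, it can be split into the node's Author and Slug inputs. The `parseModelId` helper below is illustrative:

```javascript
// Split a combined model identifier like "openai/gpt-4o" into the
// Author and Slug inputs the node expects (illustrative helper).
function parseModelId(modelId) {
  const idx = modelId.indexOf("/");
  if (idx <= 0 || idx === modelId.length - 1) {
    throw new Error(`Expected "author/slug", got "${modelId}"`);
  }
  return { author: modelId.slice(0, idx), slug: modelId.slice(idx + 1) };
}
```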

Examples

Example 1: Get GPT-4o Endpoints

Inputs:

  • Connection Id: msg.connection
  • Author: "openai"
  • Slug: "gpt-4o"

Output:

{
  id: "openai/gpt-4o",
  name: "GPT-4o",
  created: 1715126400,
  description: "GPT-4 Omni...",
  architecture: {
    input_modalities: ["text", "image"],
    output_modalities: ["text"],
    tokenizer: "GPT",
    instruct_type: "chat"
  },
  endpoints: [
    {
      name: "OpenAI",
      context_length: 128000,
      pricing: {
        request: "0",
        image: "0.007225",
        prompt: "0.0000025",
        completion: "0.00001"
      },
      provider_name: "OpenAI",
      supported_parameters: ["temperature", "top_p", "max_tokens"],
      max_completion_tokens: 16384,
      max_prompt_tokens: 111616,
      status: "available",
      uptime_last_30m: 1.0
    }
    // Additional provider endpoints...
  ]
}

Example 2: Compare Provider Pricing

Inputs:

  • Connection Id: msg.connection
  • Author: "anthropic"
  • Slug: "claude-sonnet-4"

Output Processing:

// Compare pricing across providers
console.log("Provider pricing for Claude Sonnet 4:");
for (let endpoint of msg.endpoints.endpoints) {
  const prompt_cost = parseFloat(endpoint.pricing.prompt);
  const completion_cost = parseFloat(endpoint.pricing.completion);

  console.log(`${endpoint.provider_name}:`);
  console.log(`  Prompt: $${prompt_cost.toFixed(6)}/token`);
  console.log(`  Completion: $${completion_cost.toFixed(6)}/token`);
  console.log(`  Context: ${endpoint.context_length} tokens`);
  console.log(`  Status: ${endpoint.status}`);
  console.log(`  Uptime: ${(endpoint.uptime_last_30m * 100).toFixed(1)}%`);
}

Example 3: Find Best Value Provider

Inputs:

  • Connection Id: msg.connection
  • Author: "google"
  • Slug: "gemini-2.5-flash"

Output Processing:

// Find the cheapest available provider
let best_provider = null;
let lowest_cost = Infinity;

for (let endpoint of msg.endpoints.endpoints) {
  const prompt_cost = parseFloat(endpoint.pricing.prompt);
  const completion_cost = parseFloat(endpoint.pricing.completion);
  const avg_cost = (prompt_cost + completion_cost) / 2;

  if (avg_cost < lowest_cost && endpoint.status === "available") {
    lowest_cost = avg_cost;
    best_provider = endpoint;
  }
}

// Guard against an empty result so we don't dereference null
if (best_provider) {
  console.log(`Best value: ${best_provider.provider_name}`);
  console.log(`Average cost: $${lowest_cost.toFixed(6)}/token`);
} else {
  console.log("No available providers found");
}

Example 4: Check Model Capabilities

Inputs:

  • Connection Id: msg.connection
  • Author: "openai"
  • Slug: "gpt-4o"

Output Processing:

// Check what the model supports
console.log("Model capabilities:");
console.log(`Input modalities: ${msg.endpoints.architecture.input_modalities.join(", ")}`);
console.log(`Output modalities: ${msg.endpoints.architecture.output_modalities.join(", ")}`);
console.log(`Tokenizer: ${msg.endpoints.architecture.tokenizer}`);

// Check supported parameters
const sample_endpoint = msg.endpoints.endpoints[0];
console.log(`Supported parameters: ${sample_endpoint.supported_parameters.join(", ")}`);

Example 5: Provider Availability Check

Inputs:

  • Connection Id: msg.connection
  • Author: "anthropic"
  • Slug: "claude-opus-4.5"

Output Processing:

// Check which providers are available
const available_providers = msg.endpoints.endpoints.filter(e =>
  e.status === "available" && e.uptime_last_30m > 0.95
);

if (available_providers.length === 0) {
  console.log("No highly available providers found");
} else {
  console.log("Highly available providers:");
  available_providers.forEach(p => {
    console.log(`  - ${p.provider_name} (${(p.uptime_last_30m * 100).toFixed(1)}% uptime)`);
  });
}

Example 6: Context Window Analysis

Inputs:

  • Connection Id: msg.connection
  • Author: "google"
  • Slug: "gemini-2.5-pro"

Output Processing:

// Find the largest context window
let max_context = 0;
let best_for_long_context = null;

for (let endpoint of msg.endpoints.endpoints) {
  if (endpoint.context_length > max_context) {
    max_context = endpoint.context_length;
    best_for_long_context = endpoint;
  }
}

// Guard against an empty endpoints array
if (best_for_long_context) {
  console.log(`Largest context window: ${max_context} tokens`);
  console.log(`Provider: ${best_for_long_context.provider_name}`);
}

Example 7: Cost Estimation

Inputs:

  • Connection Id: msg.connection
  • Author: "openai"
  • Slug: "gpt-4.1"

Output Processing:

// Estimate cost for a specific usage
const prompt_tokens = 1000;
const completion_tokens = 500;

console.log("Cost estimates:");
for (let endpoint of msg.endpoints.endpoints) {
  const prompt_cost = parseFloat(endpoint.pricing.prompt) * prompt_tokens;
  const completion_cost = parseFloat(endpoint.pricing.completion) * completion_tokens;
  const total_cost = prompt_cost + completion_cost;

  console.log(`${endpoint.provider_name}: $${total_cost.toFixed(4)}`);
}

Best Practices

  1. Provider Selection:

    • Compare pricing across providers
    • Check uptime and availability
    • Consider context window sizes
    • Verify supported parameters
  2. Cost Optimization:

    • Use endpoint pricing data for cost estimation
    • Route to cheapest available provider
    • Factor in reliability (uptime) when choosing
  3. Capability Planning:

    • Check supported modalities before use
    • Verify parameter support
    • Consider token limits for your use case
  4. Reliability:

    • Monitor uptime percentages
    • Have fallback providers
    • Check status before making requests
  5. Performance:

    • Cache endpoint data (doesn't change frequently)
    • Pre-calculate cost estimates
    • Store frequently used model information
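
Practice 5 (caching) can be sketched as a small in-memory cache. The names and the 10-minute TTL below are illustrative assumptions, not part of the node:

```javascript
// Minimal TTL cache for endpoint data; fetchFn stands in for a call
// to this node, and the 10-minute TTL is an arbitrary choice.
const TTL_MS = 10 * 60 * 1000;
const cache = new Map();

function getCachedEndpoints(modelId, fetchFn, now = Date.now()) {
  const hit = cache.get(modelId);
  if (hit && now - hit.at < TTL_MS) return hit.data; // fresh: reuse
  const data = fetchFn(modelId);                     // stale or missing: refetch
  cache.set(modelId, { data, at: now });
  return data;
}
```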

Use Cases

  1. Cost Analysis: Compare provider pricing for budget planning
  2. Provider Selection: Choose best provider based on price and availability
  3. Capability Discovery: Understand what a model supports
  4. Cost Estimation: Calculate expected costs before execution
  5. Fallback Planning: Identify backup providers
  6. Context Planning: Find models with sufficient context windows
  7. Parameter Validation: Check supported parameters before use
  8. Uptime Monitoring: Track provider reliability

Related Nodes

  • List Models - Discover available models
  • Generate Text - Use models for text generation
  • Chat Completion - Use models for conversations
  • Get Generation - Get actual costs after execution