List Model Endpoints
Lists available provider endpoints for a specific AI model, including pricing and capabilities information for each provider.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits in seconds before executing the node.
- Delay After (sec) - Waits in seconds after executing the node.
- Continue On Error - Automation will continue regardless of any error. The default value is false.
info
If the Continue On Error property is set to true, errors are not caught when the project is executed, even if a Catch node is used.
Inputs
- Connection Id - The connection identifier from Connect node (optional if API Key is provided directly).
- Author - Model author/provider name (e.g., "openai", "google", "anthropic"). Required and cannot be empty.
- Slug - Model slug/identifier (e.g., "gpt-4o", "gemini-2.5-flash", "claude-sonnet-4"). Required and cannot be empty.
Options
Authentication
- API Key - OpenRouter API key credential (optional if using Connection Id).
- Use Robomotion AI Credits - Use Robomotion AI credits instead of your own API key.
Output
- Endpoints - Model endpoint information object containing:
- id - Model ID (e.g., "openai/gpt-4o")
- name - Model display name
- created - Creation timestamp
- description - Model description
- architecture - Architecture details:
  - input_modalities - Supported input types (e.g., ["text", "image"])
  - output_modalities - Supported output types (e.g., ["text"])
  - tokenizer - Tokenizer used
  - instruct_type - Instruction format
- endpoints - Array of provider endpoints, each containing:
  - name - Provider name
  - context_length - Context window size in tokens
  - pricing - Pricing information:
    - request - Per-request cost
    - image - Per-image cost
    - prompt - Per-prompt-token cost
    - completion - Per-completion-token cost
  - provider_name - Provider display name
  - supported_parameters - Supported API parameters
  - quantization - Quantization level, if applicable
  - max_completion_tokens - Maximum completion tokens
  - max_prompt_tokens - Maximum prompt tokens
  - status - Provider endpoint status
  - uptime_last_30m - Uptime percentage over the last 30 minutes
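As a quick orientation to this structure, the sketch below flattens the per-provider entries of the Endpoints output into a simple summary array. The function name and the hand-built sample object are illustrative assumptions; only the field names come from the schema above.

```javascript
// Flattens the per-provider pricing in the Endpoints output into summary rows.
function summarizeEndpoints(model) {
  return model.endpoints.map(e => ({
    provider: e.provider_name,
    context_length: e.context_length,
    prompt_cost: parseFloat(e.pricing.prompt),
    completion_cost: parseFloat(e.pricing.completion),
  }));
}

// Hand-built object matching the shape described above (illustrative only):
const sample = {
  id: "openai/gpt-4o",
  endpoints: [
    {
      provider_name: "OpenAI",
      context_length: 128000,
      pricing: { prompt: "0.0000025", completion: "0.00001" },
    },
  ],
};
console.log(summarizeEndpoints(sample));
```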
How It Works
When executed, the node:
- Validates the connection or creates a temporary client
- Validates that Author and Slug are not empty
- Builds the API request URL: /models/{author}/{slug}/endpoints
- Makes a GET request to fetch endpoint information
- Parses and returns the complete endpoint data
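The validation and URL-building steps above can be sketched as follows. The base URL and function name are assumptions for illustration, not the node's internals:

```javascript
// Assumed OpenRouter API base URL; the node may resolve this differently.
const BASE_URL = "https://openrouter.ai/api/v1";

// Validates Author/Slug and builds the endpoints URL as described above.
function buildEndpointsUrl(author, slug) {
  if (!author || !slug) {
    throw new Error("Author and Slug are required and cannot be empty");
  }
  return `${BASE_URL}/models/${encodeURIComponent(author)}/${encodeURIComponent(slug)}/endpoints`;
}

// The node then issues a GET request against this URL, e.g.:
// const res = await fetch(buildEndpointsUrl("openai", "gpt-4o"),
//   { headers: { Authorization: `Bearer ${apiKey}` } });
```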
Requirements
- Either a valid Connection Id from Connect node OR direct API Key credentials OR Robomotion AI Credits
- Valid model author and slug
Error Handling
The node will return specific errors in the following cases:
- Empty or missing Author
- Empty or missing Slug
- Invalid model (404)
- API authentication errors (401)
- API service errors (500, 502, 503, 504)
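A hypothetical helper like the one below shows how a flow might branch on the HTTP statuses listed above; the function name and message wording are illustrative only.

```javascript
// Maps the HTTP status codes documented above to human-readable messages.
function describeEndpointError(status) {
  if (status === 404) return "Invalid model: check Author and Slug";
  if (status === 401) return "Authentication error: check API Key or Connection Id";
  if ([500, 502, 503, 504].includes(status)) return "API service error: retry later";
  return `Unexpected HTTP status ${status}`;
}
```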
Model Identifier Format
Models are identified by author/slug:
- Author: Provider name (lowercase, e.g., "openai", "google", "anthropic")
- Slug: Model identifier (e.g., "gpt-4o", "gemini-2.5-flash")
Common examples:
- openai/gpt-4o
- google/gemini-2.5-flash
- anthropic/claude-sonnet-4
- x-ai/grok-4
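A minimal sketch, assuming the author/slug convention described above, that splits a full model ID into the node's Author and Slug inputs (the function name is hypothetical):

```javascript
// Splits an "author/slug" model ID into its two parts for the node inputs.
function parseModelId(id) {
  const idx = id.indexOf("/");
  if (idx < 1 || idx === id.length - 1) {
    throw new Error(`Model ID must be in "author/slug" form: ${id}`);
  }
  return { author: id.slice(0, idx), slug: id.slice(idx + 1) };
}
```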
Examples
Example 1: Get GPT-4 Endpoints
Inputs:
- Connection Id: msg.connection
- Author: "openai"
- Slug: "gpt-4o"
Output:
{
  id: "openai/gpt-4o",
  name: "GPT-4o",
  created: 1715126400,
  description: "GPT-4 Omni...",
  architecture: {
    input_modalities: ["text", "image"],
    output_modalities: ["text"],
    tokenizer: "GPT",
    instruct_type: "chat"
  },
  endpoints: [
    {
      name: "OpenAI",
      context_length: 128000,
      pricing: {
        request: "0",
        image: "0.007225",
        prompt: "0.0000025",
        completion: "0.00001"
      },
      provider_name: "OpenAI",
      supported_parameters: ["temperature", "top_p", "max_tokens"],
      max_completion_tokens: 16384,
      max_prompt_tokens: 111616,
      status: "available",
      uptime_last_30m: 1.0
    },
    // Additional provider endpoints...
  ]
}
Example 2: Compare Provider Pricing
Inputs:
- Connection Id: msg.connection
- Author: "anthropic"
- Slug: "claude-sonnet-4"
Output Processing:
// Compare pricing across providers
console.log("Provider pricing for Claude Sonnet 4:");
for (let endpoint of msg.endpoints.endpoints) {
  const prompt_cost = parseFloat(endpoint.pricing.prompt);
  const completion_cost = parseFloat(endpoint.pricing.completion);
  console.log(`${endpoint.provider_name}:`);
  console.log(`  Prompt: $${prompt_cost.toFixed(6)}/token`);
  console.log(`  Completion: $${completion_cost.toFixed(6)}/token`);
  console.log(`  Context: ${endpoint.context_length} tokens`);
  console.log(`  Status: ${endpoint.status}`);
  console.log(`  Uptime: ${(endpoint.uptime_last_30m * 100).toFixed(1)}%`);
}
Example 3: Find Best Value Provider
Inputs:
- Connection Id: msg.connection
- Author: "google"
- Slug: "gemini-2.5-flash"
Output Processing:
// Find the cheapest available provider
let best_provider = null;
let lowest_cost = Infinity;
for (let endpoint of msg.endpoints.endpoints) {
  const prompt_cost = parseFloat(endpoint.pricing.prompt);
  const completion_cost = parseFloat(endpoint.pricing.completion);
  const avg_cost = (prompt_cost + completion_cost) / 2;
  if (avg_cost < lowest_cost && endpoint.status === "available") {
    lowest_cost = avg_cost;
    best_provider = endpoint;
  }
}
// Guard against the case where no provider is available
if (best_provider) {
  console.log(`Best value: ${best_provider.provider_name}`);
  console.log(`Average cost: $${lowest_cost.toFixed(6)}/token`);
} else {
  console.log("No available providers found");
}
Example 4: Check Model Capabilities
Inputs:
- Connection Id: msg.connection
- Author: "openai"
- Slug: "gpt-4o"
Output Processing:
// Check what the model supports
console.log("Model capabilities:");
console.log(`Input modalities: ${msg.endpoints.architecture.input_modalities.join(", ")}`);
console.log(`Output modalities: ${msg.endpoints.architecture.output_modalities.join(", ")}`);
console.log(`Tokenizer: ${msg.endpoints.architecture.tokenizer}`);
// Check supported parameters
const sample_endpoint = msg.endpoints.endpoints[0];
console.log(`Supported parameters: ${sample_endpoint.supported_parameters.join(", ")}`);
Example 5: Provider Availability Check
Inputs:
- Connection Id: msg.connection
- Author: "anthropic"
- Slug: "claude-opus-4.5"
Output Processing:
// Check which providers are available
const available_providers = msg.endpoints.endpoints.filter(e =>
  e.status === "available" && e.uptime_last_30m > 0.95
);
if (available_providers.length === 0) {
  console.log("No highly available providers found");
} else {
  console.log("Highly available providers:");
  available_providers.forEach(p => {
    console.log(`  - ${p.provider_name} (${(p.uptime_last_30m * 100).toFixed(1)}% uptime)`);
  });
}
Example 6: Context Window Analysis
Inputs:
- Connection Id: msg.connection
- Author: "google"
- Slug: "gemini-2.5-pro"
Output Processing:
// Find the largest context window
let max_context = 0;
let best_for_long_context = null;
for (let endpoint of msg.endpoints.endpoints) {
  if (endpoint.context_length > max_context) {
    max_context = endpoint.context_length;
    best_for_long_context = endpoint;
  }
}
console.log(`Largest context window: ${max_context} tokens`);
console.log(`Provider: ${best_for_long_context.provider_name}`);
Example 7: Cost Estimation
Inputs:
- Connection Id: msg.connection
- Author: "openai"
- Slug: "gpt-4.1"
Output Processing:
// Estimate cost for a specific usage
const prompt_tokens = 1000;
const completion_tokens = 500;
console.log("Cost estimates:");
for (let endpoint of msg.endpoints.endpoints) {
  const prompt_cost = parseFloat(endpoint.pricing.prompt) * prompt_tokens;
  const completion_cost = parseFloat(endpoint.pricing.completion) * completion_tokens;
  const total_cost = prompt_cost + completion_cost;
  console.log(`${endpoint.provider_name}: $${total_cost.toFixed(4)}`);
}