Show Model Info

Retrieves detailed information about a specific AI model on your local Ollama server. This node provides comprehensive metadata including model parameters, architecture details, system prompt templates, and licensing information.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits in seconds before executing the node.
  • Delay After (sec) - Waits in seconds after executing the node.
  • Continue On Error - Automation will continue regardless of any error. The default value is false.
Info

If the Continue On Error property is true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Client ID - The Client ID from the Connect node. Optional if Host URL is provided.
  • Model - The name of the model to get information about (e.g., llama3, mistral:latest, codellama:7b).

Output

  • Model Info - A detailed object containing:
    • modelfile - The Modelfile used to create the model
    • parameters - Model parameters and configuration
    • template - The prompt template used by the model
    • details - Technical details (format, family, parameter size, quantization)
    • model_info - Additional metadata about the model

Options

  • Host URL - Ollama server URL (optional). Use this instead of Client ID for direct connection. Example: http://localhost:11434

How It Works

The Show Model Info node:

  1. Connects to the Ollama server (via Client ID or Host URL)
  2. Requests detailed information for the specified model
  3. Retrieves the model's metadata from the local Ollama instance
  4. Returns all available information as a structured object (see the API sketch below)
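
For reference, the node's behavior roughly corresponds to Ollama's /api/show REST endpoint. A minimal standalone sketch, assuming a default local install at http://localhost:11434 and the llama3 model (newer Ollama versions accept "model" in the request body, older ones expect "name"):

// Rough equivalent of the Show Model Info node against the raw REST API.
async function showModelInfo(modelName) {
  const response = await fetch("http://localhost:11434/api/show", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: modelName })
  });
  if (!response.ok) {
    throw new Error(`Failed to retrieve model information (HTTP ${response.status})`);
  }
  return response.json(); // contains modelfile, parameters, template, details, model_info
}

showModelInfo("llama3").then(info => console.log(info.details));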

Usage Examples

Example 1: Get Basic Model Information

Inputs:
- Model: "llama3"

Output:
- Model Info: {
    "modelfile": "FROM llama3\nPARAMETER temperature 0.8\n...",
    "parameters": "temperature=0.8\ntop_p=0.9\n...",
    "template": "{{ .System }}\n\nUser: {{ .Prompt }}\n\nAssistant:",
    "details": {
      "format": "gguf",
      "family": "llama",
      "parameter_size": "8B",
      "quantization_level": "Q4_0"
    }
  }

Example 2: Extract Model Template for Custom Prompts

// After Show Model Info node
const modelInfo = msg.model_info;
const template = modelInfo.template;

// Use the template to understand how to format prompts
console.log("Model expects prompts in this format:");
console.log(template);

// Adapt your prompts accordingly
if (template.includes("### Instruction:")) {
  msg.formattedPrompt = `### Instruction:\n${msg.userPrompt}\n\n### Response:`;
} else {
  msg.formattedPrompt = msg.userPrompt;
}

Example 3: Check Model Parameters

// After Show Model Info node
const modelInfo = msg.model_info;
const params = modelInfo.parameters;

// Parse parameters to understand defaults
const defaultTemp = parseFloat(params.match(/temperature=([\d.]+)/)?.[1]);
const defaultTopP = parseFloat(params.match(/top_p=([\d.]+)/)?.[1]);

console.log(`Default temperature: ${defaultTemp}`);
console.log(`Default top_p: ${defaultTopP}`);

// Decide if you need to override defaults
msg.useCustomParams = defaultTemp > 0.9; // true if default is too high

Example 4: Verify Model Capabilities

// After Show Model Info node
const modelInfo = msg.model_info;
const family = modelInfo.details?.family;
const paramSize = modelInfo.details?.parameter_size;

// Select appropriate use case based on model
switch (family) {
  case 'llama':
    msg.modelType = 'general-purpose';
    msg.bestFor = 'chat, completion, analysis';
    break;
  case 'codellama':
    msg.modelType = 'code-focused';
    msg.bestFor = 'code generation, code review';
    break;
  case 'mistral':
    msg.modelType = 'instruction-following';
    msg.bestFor = 'structured outputs, precise tasks';
    break;
  default:
    msg.modelType = 'unknown';
    msg.bestFor = 'general tasks';
    break;
}

console.log(`This ${paramSize} model is best for: ${msg.bestFor}`);

Example 5: Model Compatibility Check

// After Show Model Info node
const modelInfo = msg.model_info;
const format = modelInfo.details?.format;
const quantization = modelInfo.details?.quantization_level;

// Check if model meets requirements
const isCompatible = format === 'gguf';
const isEfficient = quantization?.includes('Q4') || quantization?.includes('Q5');

if (!isCompatible) {
  throw new Error('Model format not supported');
}

if (isEfficient) {
  msg.performanceLevel = 'optimized';
} else {
  msg.performanceLevel = 'high-quality';
}

Requirements

  • Ollama service must be running
  • The specified model must exist locally
  • Valid Client ID from Connect node OR Host URL provided

Common Use Cases

  • Understanding model prompt format requirements
  • Extracting default parameters for optimization
  • Validating model configuration before use
  • Documenting model specifications in automation logs
  • Selecting models based on technical capabilities
  • Troubleshooting model behavior
  • Building adaptive prompts based on model templates
  • Generating model comparison reports

Tips

Understanding Model Details

  • format - Usually "gguf" (GPT-Generated Unified Format)
  • family - Model architecture (llama, mistral, gemma, phi, etc.)
  • parameter_size - Model size: 7B, 8B, 13B, 70B, etc. (B = billion parameters)
  • quantization_level - Compression level:
    • Q2_K - Smallest, fastest, lowest quality
    • Q4_0/Q4_K - Good balance of size and quality
    • Q5_0/Q5_K - Better quality, larger size
    • Q8_0 - High quality, much larger

Using the Template Field

The template field shows how to structure prompts. Some models have specific formatting requirements that improve their performance.
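
As a rough illustration, assuming the Go-style template shown in Example 1 (real templates use Go template syntax, so this naive substitution only covers the simplest case):

// Naive placeholder substitution; not a full Go template engine.
const template = msg.model_info.template;

msg.rawPrompt = template
  .replace("{{ .System }}", "You are a helpful assistant.")
  .replace("{{ .Prompt }}", msg.userPrompt);

console.log(msg.rawPrompt);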

Performance Characteristics

Parameter size affects performance (a rough tier-selection sketch follows this list):

  • < 3B - Fast responses, basic tasks
  • 7-8B - Good balance for most tasks
  • 13-15B - Better quality, slower
  • 70B+ - Best quality, requires powerful hardware
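
A small sketch that parses the reported parameter_size (e.g. "8B") and maps it onto these rough tiers; the tier labels are illustrative:

// Map parameter_size strings such as "8B" or "70B" onto rough performance tiers.
const sizeText = msg.model_info.details?.parameter_size ?? "";
const billions = parseFloat(sizeText); // "8B" -> 8, "70B" -> 70

if (!Number.isFinite(billions)) {
  msg.sizeTier = "unknown";
} else if (billions < 3) {
  msg.sizeTier = "small: fast responses, basic tasks";
} else if (billions <= 8) {
  msg.sizeTier = "medium: good balance for most tasks";
} else if (billions <= 15) {
  msg.sizeTier = "large: better quality, slower";
} else {
  msg.sizeTier = "very large: best quality, requires powerful hardware";
}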

Error Handling

Common errors you might encounter:

  • "Failed to retrieve model information" - Model doesn't exist or name is incorrect
  • Model not found - Pull the model first using Pull Model (a pre-check sketch follows this list)
  • Connection refused - Verify Ollama service is running
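
To fail fast on the "model not found" case, a minimal pre-check sketch, assuming a default local install and using Ollama's /api/tags endpoint (which lists locally available models):

// Check whether a model is available locally before calling Show Model Info.
async function modelExists(modelName) {
  const response = await fetch("http://localhost:11434/api/tags");
  if (!response.ok) {
    throw new Error("Ollama service is not reachable - verify it is running");
  }
  const { models } = await response.json();
  // Local names are usually tagged (e.g. "llama3:latest"), so match the base name too.
  return models.some(m => m.name === modelName || m.name.startsWith(modelName + ":"));
}

modelExists("llama3").then(exists => {
  if (!exists) {
    console.log("Model not found - pull it first with the Pull Model node");
  }
});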

Best Practices

  1. Cache model info if you query it frequently (see the caching sketch after this list)
  2. Validate model existence before using in workflows
  3. Use model family to select appropriate tasks
  4. Check quantization for performance optimization
  5. Review templates for better prompt engineering
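
For practice 1, a minimal in-memory caching sketch; the cache, TTL, and fetchInfo callback are illustrative assumptions rather than part of the node:

// Reuse a recent Show Model Info result instead of querying the server every time.
const modelInfoCache = new Map();
const TTL_MS = 10 * 60 * 1000; // refresh after 10 minutes

function getCachedModelInfo(modelName, fetchInfo) {
  const cached = modelInfoCache.get(modelName);
  if (cached && Date.now() - cached.time < TTL_MS) {
    return cached.info;
  }
  const info = fetchInfo(modelName); // hypothetical callback that runs the node or API call
  modelInfoCache.set(modelName, { info, time: Date.now() });
  return info;
}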