List Models
Retrieves a list of all AI models currently available on your local Ollama server. This node is useful for discovering which models are installed and ready to use.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits in seconds before executing the node.
- Delay After (sec) - Waits in seconds after executing the node.
- Continue On Error - Automation will continue regardless of any error. The default value is false.
Info: If the Continue On Error property is set to true, no error is caught when the project is executed, even if a Catch node is used.
Inputs
- Client ID - The Client ID from the Connect node. Optional if Host URL is provided.
Output
- Models - An array of model information objects. Each object contains:
  - name - The model name and tag (e.g., "llama3:latest")
  - model - The model identifier
  - size - Model size in bytes
  - digest - Unique hash of the model
  - modified_at - Timestamp of the last modification
  - details - Additional model metadata (format, family, parameter size, quantization level)
Options
- Host URL - Ollama server URL (optional). Use this instead of Client ID for direct connection. Example:
http://localhost:11434
How It Works
The List Models node:
- Connects to the Ollama server (via Client ID or Host URL)
- Queries the server for all locally available models
- Retrieves detailed information about each model
- Returns the complete list as an array
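Outside the node, the same data is available from Ollama's REST endpoint for listing local models, GET /api/tags. A minimal sketch, assuming a default local install at http://localhost:11434 and a runtime with fetch and top-level await (e.g., Node.js 18+ ESM):
// Query Ollama's list endpoint directly (GET /api/tags).
const response = await fetch('http://localhost:11434/api/tags');
if (!response.ok) {
  throw new Error(`Ollama returned HTTP ${response.status}`);
}
const { models } = await response.json();
console.log(`${models.length} model(s) installed`);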
Usage Examples
Example 1: List All Local Models
Inputs:
- Client ID: "xyz789..." (or Host URL: "http://localhost:11434")
Output:
- Models: [
{
"name": "llama3:latest",
"model": "llama3",
"size": 4661211648,
"digest": "365c0bd3c0005e26ff8ce6231f42e80a65a83654",
"modified_at": "2024-12-15T10:30:00Z",
"details": {
"format": "gguf",
"family": "llama",
"parameter_size": "8B",
"quantization_level": "Q4_0"
}
},
{
"name": "mistral:latest",
"size": 4109869056,
...
}
]
Example 2: Check if Specific Model Exists
// After List Models node
const models = msg.models;
const modelExists = models.some(m => m.name.includes('llama3'));
if (!modelExists) {
// Trigger Pull Model node
msg.modelToPull = 'llama3';
}
Example 3: Find Smallest Available Model
// After List Models node
const models = msg.models;
// Note: reduce() without an initial value throws on an empty array,
// so make sure the model list is non-empty first
const smallestModel = models.reduce((smallest, current) => {
  return current.size < smallest.size ? current : smallest;
});
msg.selectedModel = smallestModel.name;
// Use this model for faster generation
Example 4: Display Model Information
// After List Models node
const models = msg.models;
const modelList = models.map(m => {
const sizeGB = (m.size / (1024 ** 3)).toFixed(2);
return `${m.name} (${sizeGB} GB)`;
});
msg.modelSummary = modelList.join('\n');
// Output:
// llama3:latest (4.34 GB)
// mistral:latest (3.83 GB)
// codellama:latest (3.64 GB)
Example 5: Filter Models by Type
// After List Models node
const models = msg.models;
// Filter code-focused models
const codeModels = models.filter(m =>
m.name.includes('code') ||
m.name.includes('codellama')
);
// Filter chat models
const chatModels = models.filter(m =>
m.name.includes('llama') ||
m.name.includes('mistral') ||
m.name.includes('gemma')
);
msg.codeModels = codeModels;
msg.chatModels = chatModels;
Requirements
- Ollama service must be running
- A valid Client ID from the Connect node, or a Host URL
Common Use Cases
- Verifying model availability before generation
- Building dynamic model selection interfaces
- Monitoring installed models for automation dashboards
- Automated model inventory management
- Selecting optimal model based on size or capabilities
- Validating environment setup before running workflows
- Creating model usage reports
Tips
Understanding Model Sizes
- < 2 GB - Lightweight models (phi, gemma:2b) - Fast but less capable
- 2-5 GB - Standard models (mistral, llama3:8b) - Good balance
- 5-15 GB - Large models (llama3:70b) - Higher quality, slower
- > 15 GB - Extra large models - Best quality, resource intensive
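A small helper to bucket models into these tiers programmatically, using the thresholds above (the tier labels are just illustrative):
// Classify a model into a rough size tier (thresholds from the list above).
function sizeTier(model) {
  const gb = model.size / (1024 ** 3);
  if (gb < 2) return 'lightweight';
  if (gb < 5) return 'standard';
  if (gb < 15) return 'large';
  return 'extra-large';
}
msg.tiers = msg.models.map(m => ({ name: m.name, tier: sizeTier(m) }));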
Model Naming Convention
Models follow the pattern: name:tag
- llama3:latest - Latest version of llama3
- llama3:8b - Llama3 with 8 billion parameters
- mistral:7b-instruct - Mistral 7B instruction-tuned variant
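Splitting an entry into its name and tag parts is a one-liner, for example:
// Split "name:tag" into its parts; entries without an explicit tag
// default to "latest".
function parseModelName(fullName) {
  const [name, tag = 'latest'] = fullName.split(':');
  return { name, tag };
}
parseModelName('mistral:7b-instruct'); // { name: 'mistral', tag: '7b-instruct' }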
Working with Model Lists
// Extract just model names
const modelNames = msg.models.map(m => m.name);
// Sort by size (smallest first); copy with [...] so msg.models isn't mutated
const sortedBySize = [...msg.models].sort((a, b) => a.size - b.size);
// Sort by modification date (newest first)
const sortedByDate = [...msg.models].sort((a, b) =>
  new Date(b.modified_at) - new Date(a.modified_at)
);
// Group by family
const byFamily = msg.models.reduce((acc, model) => {
const family = model.details?.family || 'unknown';
if (!acc[family]) acc[family] = [];
acc[family].push(model);
return acc;
}, {});
Performance Considerations
- List Models is a lightweight operation
- Safe to call frequently without performance impact
- Results can be cached if the model inventory doesn't change often (see the sketch after this list)
- Useful as a health check for Ollama service
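If your flow queries the list repeatedly, a simple time-based cache is enough. A sketch, where listModels() stands in for however your flow retrieves the list (a hypothetical helper) and the 60-second TTL is arbitrary:
// Time-based cache around the model list (sketch; TTL is arbitrary).
const TTL_MS = 60 * 1000;
let cache = null;
async function getModels(listModels) {
  if (!cache || Date.now() - cache.at > TTL_MS) {
    cache = { models: await listModels(), at: Date.now() };
  }
  return cache.models;
}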
Error Handling
Common errors you might encounter:
- "Either Host URL or Client ID must be provided" - Provide one connection method
- Connection refused - Verify Ollama service is running
- Empty model list - No models are installed, use Pull Model
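For the empty-list case, one way to hand off to a Pull Model step (msg.modelToPull is an illustrative property name, not a fixed API):
// After List Models node: queue a download when nothing is installed.
if (!Array.isArray(msg.models) || msg.models.length === 0) {
  msg.modelToPull = 'llama3'; // illustrative default; route msg to Pull Model
}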
Output Format Details
Each model object in the array contains:
{
"name": "llama3:latest",
"model": "llama3",
"size": 4661211648,
"digest": "365c0bd3c0005e26ff8ce6231f42e80a65a83654",
"modified_at": "2024-12-15T10:30:00.123456Z",
"details": {
"format": "gguf",
"family": "llama",
"parameter_size": "8B",
"quantization_level": "Q4_0"
}
}
Best Practices
- Validate model availability before using it in generation nodes
- Cache the model list if your workflow uses it multiple times
- Handle empty results gracefully by pulling needed models
- Use model size to make intelligent model selection decisions
- Check modification dates to ensure models are up to date (see the sketch after this list)
- Filter by family when you need specific model capabilities
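A sketch of that modification-date check; the 90-day threshold is arbitrary:
// Flag models not updated in the last 90 days (threshold is arbitrary).
const STALE_MS = 90 * 24 * 60 * 60 * 1000;
msg.staleModels = msg.models.filter(
  m => Date.now() - new Date(m.modified_at).getTime() > STALE_MS
);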
Related Nodes
- Pull Model - Download new models
- Show Model Info - Get detailed info about a specific model
- Delete Model - Remove unused models
- Generate Completion - Use models for text generation