
Ollama

The Ollama package enables you to integrate local Large Language Models (LLMs) into your RPA workflows. With Ollama, you run AI models on your own hardware instead of calling an external API service, which keeps your data on your own infrastructure and avoids per-request API costs.

Features

  • Run local AI models for text generation and chat completions
  • Manage model lifecycle: pull, delete, and get model information
  • Generate embeddings for semantic search and text analysis
  • Full control over model parameters (temperature, top_p, etc.; see the sketch after this list)
  • Support for various open-source models (Llama, Mistral, CodeLlama, and more)
  • No external API dependencies - completely local operation
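
As an illustration of parameter control, here is a minimal sketch using the official ollama Python client (installed with pip install ollama). The model name llama3 and the prompt are assumptions; any model you have pulled will work:

```python
import ollama

# Sampling parameters such as temperature and top_p are passed via the
# `options` dictionary; a lower temperature gives more deterministic output.
response = ollama.chat(
    model="llama3",  # assumption: any locally pulled model works here
    messages=[{"role": "user", "content": "Summarize this status report in two sentences."}],
    options={"temperature": 0.2, "top_p": 0.9},
)
print(response["message"]["content"])
```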

Prerequisites

Before using the Ollama package, you need to:

  1. Install Ollama on your system from ollama.ai
  2. Start the Ollama service (default URL: http://localhost:11434)
  3. Pull at least one model using ollama pull <model-name> (e.g., ollama pull llama3)
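
Once these steps are done, a quick way to confirm the service is reachable is to query Ollama's REST endpoint that lists local models (the URL assumes the default port):

```python
import requests

# GET /api/tags lists locally available models; a 200 response confirms
# the Ollama service is up at the default address.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is running; local models:", models)
```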

Common Use Cases

  • Automated content generation for reports and documentation
  • Intelligent data extraction and classification
  • Chatbot integration in customer service workflows
  • Code generation and analysis
  • Semantic search and document similarity (an embedding sketch follows this list)
  • Text summarization and translation
  • Sentiment analysis and text categorization
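
To make the semantic-search use case concrete, here is a sketch that embeds two texts and compares them with cosine similarity. It assumes the official ollama Python client and an embedding model such as nomic-embed-text (pull it first with ollama pull nomic-embed-text):

```python
import math
import ollama

def embed(text: str) -> list[float]:
    # assumption: nomic-embed-text has been pulled; any embedding-capable
    # model name can be substituted here
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = embed("overdue invoice reminder")
doc = embed("Payment for invoice #1042 is past due.")
print(f"similarity: {cosine(query, doc):.3f}")  # higher means more similar
```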

Getting Started

The typical workflow involves the following steps (sketched in code after the list):

  1. Connect - Establish connection to your local Ollama server
  2. Pull Model (if needed) - Download the AI model you want to use
  3. Generate Completion/Chat - Use the model for text generation
  4. Disconnect - Close the connection when done
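
The four steps above map onto the following sketch, again assuming the official ollama Python client. Note that Connect and Disconnect are nodes of this package; the underlying protocol is stateless HTTP, so the client itself holds no open session:

```python
from ollama import Client

MODEL = "llama3"  # assumption: substitute any model you want to use

# 1. Connect: the client simply records the server URL
client = Client(host="http://localhost:11434")

# 2. Pull the model (returns quickly if it is already present locally)
client.pull(MODEL)

# 3. Generate a completion
result = client.generate(model=MODEL, prompt="Write a one-line summary of a completed RPA run.")
print(result["response"])

# 4. Disconnect: nothing to close at the HTTP level; the package's
#    Disconnect node handles any cleanup on its side
```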

Alternatively, you can provide the host URL directly to each node without using Connect/Disconnect.
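
At the REST level, that direct-host style looks roughly like this: every request carries the full server URL, so no prior Connect step is required (the port and model name are assumptions):

```python
import requests

# One self-contained request to the generate endpoint; stream=False asks
# for a single JSON response instead of a token stream.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```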