Execute Non Query

Executes a SQL command (INSERT, UPDATE, DELETE, CREATE, ALTER, etc.) that does not return results.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits in seconds before executing the node.
  • Delay After (sec) - Waits in seconds after executing the node.
  • Continue On Error - Automation will continue regardless of any error. The default value is false.
Info: If Continue On Error is set to true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Connection Id - The connection ID returned from the Connect node (optional if credentials are provided).
  • Batch Id - Optional batch ID for executing the command within a transaction context.
  • SQL Query - The SQL command to execute (INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, etc.).

Options

  • Credentials - Database credentials (optional if using Connection ID). This allows executing commands without a Connect node.

How It Works

The Execute Non Query node runs SQL commands that modify data or database structure without returning a result set. When executed, the node:

  1. Validates inputs and determines connection method (connection ID or credentials)
  2. Processes template variables in the SQL query using the message context (see the example after this list)
  3. If a Batch Id is provided, adds the command to that transaction
  4. Otherwise, executes the command directly on the connection
  5. Commits the changes (if not in a batch)
  6. Continues to the next node without returning data
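
For example, step 2 resolves the template variables before the command reaches the database. Using the userId and userName values that appear in the examples later on this page, the substitution works roughly as follows (an illustrative sketch, not the node's exact internals):

-- As written in the SQL Query input
INSERT INTO users (user_id, name) VALUES ({{userId}}, '{{userName}}')

-- As executed after template substitution, with userId = 1001 and userName = "John Doe"
INSERT INTO users (user_id, name) VALUES (1001, 'John Doe')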

Requirements

  • Either a valid Connection Id from the Connect node OR database credentials
  • A valid SQL command (INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, etc.)
  • If using Batch Id, a valid batch transaction must exist

Error Handling

The node will return specific errors in the following cases:

  • InvalidParameter - A template variable referenced in the query is not found in the message context
  • InvalidBatch - A Batch Id was provided, but the batch was not found
  • ErrInvalidArg - Neither a Connection Id nor credentials were provided
  • ErrNotFound - The Connection Id was not found
  • SQL execution errors (syntax errors, constraint violations, permission denied)

Template Variables

The SQL Query supports dynamic variables using double curly braces:

Syntax: {{variableName}}

Template variables are replaced with values from the message context before execution.

Example:

INSERT INTO users (user_id, name, email) VALUES ({{userId}}, '{{userName}}', '{{userEmail}}')

Usage Notes

  • The SQL editor provides syntax highlighting for SQL statements
  • This node does not return query results - use Execute Query for SELECT statements
  • Commands are executed immediately unless within a batch transaction
  • For bulk inserts, consider using batch operations for better performance
  • The query timeout is set to 60 seconds (configurable in connection settings)
  • ClickHouse is optimized for INSERT operations - bulk inserts are highly efficient

Example 1: Insert Single Record

Scenario: Insert a new user record into the users table

Inputs:

  • Connection Id: (from Connect node)
  • SQL Query:
INSERT INTO users (user_id, name, email, status, created_at)
VALUES ({{userId}}, '{{userName}}', '{{userEmail}}', 'active', now())

Message Context:

{
  "userId": 1001,
  "userName": "John Doe",
  "userEmail": "john@example.com"
}

Result: User record is inserted into the database.

Example 2: Bulk Insert

Scenario: Insert multiple event records at once

Inputs:

  • Connection Id: (from Connect node)
  • SQL Query:
INSERT INTO analytics_events (timestamp, event_type, user_id, properties)
VALUES
(now(), 'page_view', 1001, '{"page": "/home"}'),
(now(), 'click', 1001, '{"element": "button"}'),
(now(), 'page_view', 1002, '{"page": "/products"}')

Result: Three event records are inserted efficiently.

Example 3: Update Records

Scenario: Update user status based on activity

Inputs:

  • Connection Id: (from Connect node)
  • SQL Query:
ALTER TABLE users
UPDATE status = 'inactive'
WHERE last_login < now() - INTERVAL 90 DAY

Result: All users who haven't logged in for 90 days are marked as inactive.

Example 4: Delete Old Records

Scenario: Clean up old analytics data

Inputs:

  • Connection Id: (from Connect node)
  • SQL Query:
ALTER TABLE analytics_events
DELETE WHERE timestamp < '{{cutoffDate}}'

Message Context:

{
  "cutoffDate": "2023-01-01"
}

Result: All events before the cutoff date are deleted.

Example 5: Create Table

Scenario: Create a new table for storing user sessions

Inputs:

  • Connection Id: (from Connect node)
  • SQL Query:
CREATE TABLE IF NOT EXISTS user_sessions (
    session_id UUID,
    user_id UInt64,
    start_time DateTime,
    end_time DateTime,
    pages_viewed UInt32,
    events Array(String)
) ENGINE = MergeTree()
ORDER BY (user_id, start_time)
PARTITION BY toYYYYMM(start_time)

Result: New table is created with optimized structure for analytics.

Example 6: Using Credentials (No Connection)

Scenario: Execute a one-off insert without maintaining a connection

Inputs:

  • Connection Id: (empty)
  • SQL Query:
INSERT INTO audit_log (timestamp, action, user_id)
VALUES (now(), 'automation_run', {{userId}})

Options:

  • Credentials: (select ClickHouse Database credential)

Message Context:

{
  "userId": 1001
}

Result: Audit log entry is created.

Best Practices

  • Bulk Inserts: Insert multiple rows in a single statement for better performance
  • Batch Operations: Use batch transactions for atomic operations
  • Partitioning: Design tables with appropriate partitioning for efficient data management
  • Data Types: Use appropriate ClickHouse data types (UInt64, DateTime, Array, etc.)
  • Mutations: Use ALTER TABLE for updates/deletes (ClickHouse handles them asynchronously)
  • Template Variables: Sanitize user input to prevent SQL injection
  • Error Handling: Implement proper error handling for constraint violations
  • Table Engines: Choose the right engine for the workload (MergeTree, ReplacingMergeTree, SummingMergeTree); see the example after this list
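
As an illustration of the engine and partitioning recommendations above, a deduplicating table could be declared with ReplacingMergeTree. The table and column names here are illustrative, not something the node requires:

-- Keeps only the latest row per user_id (by updated_at) after background merges
CREATE TABLE IF NOT EXISTS user_profiles (
    user_id UInt64,
    email String,
    status String,
    updated_at DateTime
) ENGINE = ReplacingMergeTree(updated_at)
ORDER BY user_id
PARTITION BY toYYYYMM(updated_at)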

ClickHouse-Specific Operations

INSERT Operations

-- Simple insert
INSERT INTO table_name VALUES (...)

-- Insert from SELECT
INSERT INTO table_name SELECT * FROM source_table WHERE condition

-- Insert with column specification
INSERT INTO table_name (col1, col2) VALUES (val1, val2)

UPDATE Operations (Mutations)

-- Updates are asynchronous in ClickHouse
ALTER TABLE table_name UPDATE column = value WHERE condition

-- Update multiple columns
ALTER TABLE users
UPDATE status = 'inactive', updated_at = now()
WHERE last_login < now() - INTERVAL 90 DAY

DELETE Operations (Mutations)

-- Deletes are asynchronous in ClickHouse
ALTER TABLE table_name DELETE WHERE condition

-- Lightweight delete (faster, available in newer versions)
DELETE FROM table_name WHERE condition
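
Because both kinds of mutation run asynchronously, the node returns as soon as the mutation is accepted, not when it has finished rewriting the data. Progress can be checked with a follow-up query (for example via Execute Query) against the system.mutations table; a minimal check looks like this:

-- Mutations that are still being applied
SELECT database, table, mutation_id, command, create_time
FROM system.mutations
WHERE is_done = 0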

CREATE TABLE

CREATE TABLE IF NOT EXISTS events (
    timestamp DateTime,
    event_type String,
    user_id UInt64,
    properties String
) ENGINE = MergeTree()
ORDER BY (timestamp, user_id)
PARTITION BY toYYYYMM(timestamp)
TTL timestamp + INTERVAL 90 DAY
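
DROP Operations

The DROP commands listed at the top of this page also run through this node. For partitioned tables, dropping a whole partition is usually far cheaper than a row-level DELETE mutation; the statements below are illustrative and assume the toYYYYMM partitioning scheme used above:

-- Drop an entire month of data in a single, near-instant metadata operation
ALTER TABLE analytics_events DROP PARTITION 202301

-- Remove an entire table (the table name is hypothetical)
DROP TABLE IF EXISTS temp_import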

Common Errors and Solutions

Error: Code: 62. DB::Exception: Syntax error

  • Solution: Check SQL syntax, verify table and column names exist

Error: Code: 60. DB::Exception: Table doesn't exist

  • Solution: Ensure table is created before inserting data, check database name

Error: Code: 252. DB::Exception: Too many parts

  • Solution: Reduce insert frequency, increase merge speed, or adjust partitioning (a diagnostic query is shown at the end of this section)

Error: Code: 241. DB::Exception: Memory limit exceeded

  • Solution: Reduce batch size, optimize query, or increase memory limits

Error: Connection timeout

  • Solution: Check network connectivity, verify server is running, increase timeout
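
For the Code: 252 (Too many parts) error, a quick diagnostic is to check how many active parts each table has accumulated. A query such as the following (run via Execute Query, since it returns results) highlights the worst offenders:

-- Tables with the most active data parts
SELECT database, table, count() AS active_parts
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY active_parts DESC
LIMIT 10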

Performance Tips

  • Batch writes: Group multiple inserts into single statements
  • Async inserts: Use the async_insert setting for high-throughput scenarios (see the examples after this list)
  • Partition pruning: Design partitions to minimize data scanned
  • Compression: ClickHouse handles compression automatically
  • Distributed tables: Use for multi-node clusters
  • Materialized views: Pre-compute aggregations for faster queries
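
Two of the tips above translate directly into SQL. The statements below are illustrative sketches; async_insert and wait_for_async_insert are standard ClickHouse settings, while the table and view names are taken from the examples on this page or invented for illustration:

-- Let the server batch many small inserts together (fire-and-forget async insert)
INSERT INTO analytics_events (timestamp, event_type, user_id, properties)
SETTINGS async_insert = 1, wait_for_async_insert = 0
VALUES (now(), 'page_view', 1001, '{"page": "/home"}')

-- Materialized view that pre-aggregates daily event counts as new rows arrive
CREATE MATERIALIZED VIEW IF NOT EXISTS daily_event_counts
ENGINE = SummingMergeTree()
ORDER BY (event_date, event_type)
AS SELECT
    toDate(timestamp) AS event_date,
    event_type,
    count() AS events
FROM analytics_events
GROUP BY event_date, event_type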