Send Batch

Commits a batch transaction and executes all queued SQL commands.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits in seconds before executing the node.
  • Delay After (sec) - Waits in seconds after executing the node.
  • Continue On Error - Automation will continue regardless of any error. The default value is false.
info

If the ContinueOnError property is true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Batch Id - The batch ID returned from the Create Batch node.

How It Works

The Send Batch node commits a transaction and executes all SQL commands that were queued within that batch. When executed, the node:

  1. Validates that a batch ID is provided
  2. Retrieves the transaction from the internal batch pool
  3. Commits the transaction to the database
  4. Executes all queued operations atomically
  5. Removes the batch from the internal pool
  6. Releases the transaction resources
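
Conceptually, this sequence wraps the queued statements in an explicit database transaction. A minimal SQL sketch of the idea (illustrative only: the events table is hypothetical, the node manages the transaction for you, and exact transaction syntax depends on the target database):

-- Create Batch: opens the transaction
BEGIN TRANSACTION;

-- Execute Non Query nodes: statements are queued inside the transaction
INSERT INTO events (id, payload) VALUES (1, 'created');
INSERT INTO events (id, payload) VALUES (2, 'updated');

-- Send Batch: commits everything atomically (or rolls back on failure)
COMMIT;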

Requirements

  • A valid batch ID from a previous Create Batch node
  • The batch must still be active (not already committed or rolled back)

Error Handling

The node returns an error in the following cases:

  • InvalidBatch - Batch ID not found (invalid ID or batch already committed/rolled back)
  • SQL execution errors within the batch (constraint violations, syntax errors)
  • Database commit errors (connection lost, server issues)
warning

If the commit fails, the entire transaction is automatically rolled back. None of the operations in the batch will be applied to the database.

Usage Notes

  • Send Batch must be called to commit the transaction - otherwise changes are lost
  • After calling Send Batch, the batch ID becomes invalid and cannot be reused
  • All operations in the batch are executed atomically (all succeed or all fail)
  • If any operation in the batch fails, the entire transaction is rolled back
  • Once committed, changes become visible to other database connections
  • Always call Send Batch after completing all batch operations

Commit Behavior

On Success:

  • All queued operations are applied to the database
  • Changes become permanent and visible to other connections
  • Transaction resources are released
  • Batch ID is removed from the pool

On Failure:

  • Entire transaction is rolled back
  • No changes are applied to the database
  • An error is returned with details
  • Batch ID is removed from the pool

Example 1: Simple Batch Commit

Scenario: Commit a batch of insert operations

Flow:

  1. Create Batch → Get batch_id
  2. Execute Non Query (with batch_id) → Insert data
  3. Execute Non Query (with batch_id) → Insert more data
  4. Send Batch → Commit all inserts

Inputs:

  • Batch Id: "7c9e6679-7425-40de-944b-e07fc1f90ae7" (from Create Batch)

Result: All insert operations are committed to the database atomically.
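
For illustration, the two Execute Non Query nodes in this flow might queue statements like the following (the customers table and its columns are hypothetical):

INSERT INTO customers (id, name, created_at)
VALUES (1001, 'Alice', now())

INSERT INTO customers (id, name, created_at)
VALUES (1002, 'Bob', now())

Neither row is visible to other connections until Send Batch commits the batch.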

Example 2: Multi-Step Data Processing

Scenario: Process analytics data with multiple steps

Create Batch:

Output: batch_id = "abc123..."

Execute Non Query 1 (batch_id):

INSERT INTO user_activity_summary
SELECT
    user_id,
    count() AS actions,
    max(timestamp) AS last_activity
FROM raw_activity
WHERE toDate(timestamp) = today()
GROUP BY user_id

Execute Non Query 2 (batch_id):

ALTER TABLE raw_activity
DELETE WHERE toDate(timestamp) = today()

Execute Non Query 3 (batch_id):

INSERT INTO processing_log (timestamp, table_name, records_processed)
SELECT now(), 'user_activity_summary', count()
FROM user_activity_summary
WHERE date = today()

Send Batch (batch_id):

  • Batch Id: "abc123..."

Result:

  • Summary is created
  • Raw data is cleaned up
  • Processing is logged
  • All in a single atomic transaction

Example 3: Error Handling with Try-Catch

Scenario: Safely commit a batch with error handling

Flow:

Try:
  1. Create Batch
  2. Execute operations with batch_id
  3. Send Batch ← Commits on success
Catch:
  - Log error
  - Batch automatically rolls back
Finally:
  - Disconnect

Message Flow:

{
  "batch_id": "xyz789...",
  "operation": "batch_processing"
}

If Send Batch fails (e.g., constraint violation), the Catch block handles the error and the transaction is automatically rolled back.

Best Practices

  • Always commit batches: Ensure Send Batch is called for every Create Batch
  • Error handling: Use Try-Catch to handle commit failures gracefully
  • Validation: Validate data before committing to avoid rollbacks (see the sketch after this list)
  • Timely commits: Don't leave transactions open for extended periods
  • Batch scope: Keep transactions focused and reasonably sized
  • Resource cleanup: Use Finally blocks to ensure proper cleanup
  • Monitor failures: Log batch commit failures for debugging
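
One way to apply the validation advice above is to run a read-only check on the source data before deciding whether to call Send Batch. A minimal sketch, assuming a hypothetical staging_orders table:

SELECT count() AS invalid_rows
FROM staging_orders
WHERE amount <= 0 OR customer_id IS NULL

If invalid_rows is 0, continue to Send Batch; otherwise skip it and let the batch roll back (see Pattern 2 under Common Patterns).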

Transaction Lifecycle

┌─────────────────────┐
│    Create Batch     │  ← Transaction starts
└──────────┬──────────┘
           │
           │  Transaction is open
           │  Changes are staged
           │
┌──────────▼──────────┐
│ Execute operations  │  ← Operations queued
│    with batch_id    │    (not yet committed)
└──────────┬──────────┘
           │
┌──────────▼──────────┐
│     Send Batch      │  ← Transaction commits
└──────────┬──────────┘
           │
           │  Changes are permanent
           │  Transaction ends
           │
┌──────────▼──────────┐
│  Batch ID invalid   │
│ Resources released  │
└─────────────────────┘

Rollback Scenarios

The transaction is automatically rolled back (all changes discarded) in these cases:

  1. Send Batch fails - Any error during commit
  2. Flow error before Send Batch - Execution stops before commit
  3. Connection lost - Database connection drops
  4. Constraint violation - Data violates database constraints (see the example after this list)
  5. Syntax error in batch - Invalid SQL in queued operations
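
For example, if the target table defines a CHECK constraint, one queued statement that violates it is enough to fail the commit and discard the whole batch. A hypothetical ClickHouse table and the offending insert:

CREATE TABLE payments
(
    id UInt64,
    amount Decimal(18, 2),
    CONSTRAINT positive_amount CHECK amount > 0
)
ENGINE = MergeTree
ORDER BY id

-- Queued in the batch: violates positive_amount, so Send Batch fails
-- and every other queued statement is rolled back as well
INSERT INTO payments (id, amount) VALUES (42, -10.00)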

Common Patterns

Pattern 1: Simple Batch

Create Batch → Execute operations → Send Batch

Pattern 2: Conditional Commit

Create Batch
→ Execute operations
→ If validation passes: Send Batch
→ Else: Skip (auto-rollback)

Pattern 3: Error-Safe Batch

Try:
  Create Batch
  → Execute operations
  → Send Batch
Catch:
  Log error (auto-rollback)
Finally:
  Cleanup resources

Pattern 4: Multiple Batches

Create Batch 1
→ Execute operations for batch 1
→ Send Batch 1 ✓

Create Batch 2
→ Execute operations for batch 2
→ Send Batch 2 ✓

Performance Considerations

  • Commit overhead: Each commit has overhead - batch multiple operations
  • Lock duration: Long transactions hold locks - commit promptly
  • Memory usage: Large batches use more memory during commit
  • Concurrent access: Commits release locks for other connections
  • Merge optimization: ClickHouse optimizes batch inserts during commit (see the example below)
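
When a batch consists mostly of inserts, it also helps to combine rows into fewer, larger INSERT statements instead of queueing one statement per row; ClickHouse handles a few large inserts far more efficiently than many small ones. A hypothetical example (the events table is illustrative):

INSERT INTO events (id, ts, payload) VALUES
    (1, now(), 'created'),
    (2, now(), 'updated'),
    (3, now(), 'deleted')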

Troubleshooting

Error: Batch not found

  • Cause: Invalid batch ID or batch already committed
  • Solution: Ensure batch ID is correct and Send Batch is only called once

Error: Constraint violation

  • Cause: Data violates database constraints (unique, foreign key, etc.)
  • Solution: Validate data before committing, check constraint definitions

Error: Connection lost

  • Cause: Network issues or server restart during commit
  • Solution: Implement retry logic, check network stability

Error: Deadlock detected

  • Cause: Multiple transactions waiting for each other
  • Solution: Reduce transaction scope, commit more frequently

Error: Timeout exceeded

  • Cause: Batch operations taking too long
  • Solution: Reduce batch size, optimize queries, increase timeout

When to Use Batches

Use batches when:

  • Multiple operations must succeed or fail together
  • You need to ensure data consistency across operations
  • Performing ETL or data migration tasks
  • Updating multiple related tables
  • You want to rollback on error

Don't use batches when:

  • Performing a single operation
  • Operations are independent
  • You need maximum concurrency
  • Each operation should commit immediately
  • Working with read-only queries