Insert Table

Inserts table data into a MySQL database table. This node is optimized for bulk inserts of structured data.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits in seconds before executing the node.
  • Delay After (sec) - Waits in seconds after executing the node.
  • Continue On Error - Automation will continue regardless of any error. The default value is false.
info

If the ContinueOnError property is true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Connection Id - The unique identifier of the database connection to use (optional if credentials provided directly).
  • Transaction Id - The unique identifier of a transaction (optional). Use when the insert is part of a database transaction.
  • Table Name - The name of the database table to insert data into.
  • Table Data - The table data to insert, with columns and rows in table structure format.

Options

  • Replace - When enabled, updates existing rows on duplicate key instead of failing. If a row with the same unique key exists, it will be updated with the new values.
  • Credentials - MySQL database credentials (optional). Use this if you want to insert without a Connect node.
  • Parameters - Additional connection string parameters (optional). Only used when credentials are provided.

Output

  • None - The data is inserted into the database, but no results are returned.

How It Works

The Insert Table node inserts structured table data into a MySQL table. When executed, the node:

  1. Retrieves the database connection (either from connection ID or creates new connection from credentials)
  2. Optionally retrieves the transaction if a transaction ID is provided
  3. Validates the table name and table data
  4. For each row in the table data:
    • Constructs an INSERT statement based on the columns
    • Adds column values as parameters to prevent SQL injection
    • Executes the INSERT (or INSERT ON DUPLICATE KEY UPDATE if Replace is enabled)
  5. Applies all changes to the database
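
For illustration, with columns Name, Email, and City the node generates a parameterized statement roughly like the one below (a sketch using a hypothetical customers table; the exact SQL and placeholder style are internal to the node):

INSERT INTO customers (Name, Email, City) VALUES (?, ?, ?)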

Requirements

  • Either: A valid connection ID from Connect node OR valid database credentials
  • Valid table name that exists in the database
  • Table data with at least one row
  • Table columns must match database table columns
  • Appropriate INSERT permission on the table

Error Handling

The node will return specific errors in the following cases:

  • ErrInvalidArg - Table name is empty or table data is empty
  • ErrNotFound - Connection ID not found
  • ErrRuntime - Insert operation failed (table doesn't exist, column mismatch, constraint violation, etc.)

Table Data Format

The table data must be in the following structure:

{
  "columns": ["Name", "Email", "City", "Age"],
  "rows": [
    {"Name": "John Doe", "Email": "john@example.com", "City": "New York", "Age": 30},
    {"Name": "Jane Smith", "Email": "jane@example.com", "City": "Los Angeles", "Age": 28},
    {"Name": "Bob Johnson", "Email": "bob@example.com", "City": "Chicago", "Age": 35}
  ]
}
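
Before handing this structure to the node, a quick check in a JavaScript node can catch malformed input early (a sketch; adapt to your pipeline):

// Verify the table structure before passing it to Insert Table
const t = $item.table;
if (!Array.isArray(t.columns) || !Array.isArray(t.rows) || t.rows.length === 0) {
  throw new Error("Table data must contain non-empty 'columns' and 'rows' arrays");
}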

Usage Examples

Insert from CSV

Read CSV → Insert Table

The CSV node outputs data in table format, which can be directly used:

// CSV output is automatically in table format
$item.table = {
  columns: ["ProductName", "Category", "Price", "Stock"],
  rows: [...]
}

Insert from Excel

Excel Read Range → Insert Table

Excel nodes also output table format:

// Excel range data
$item.range = {
  columns: ["CustomerName", "Email", "Phone", "City"],
  rows: [...]
}

Insert from Web Scraping

Browser Scrape Table → Transform Data → Insert Table

// Transform scraped data to table format
const table = {
  columns: Object.keys($item.scrapedData[0]),
  rows: $item.scrapedData
};

Insert from API Response

HTTP Request → Transform to Table → Insert Table

// Convert API JSON array to table format
const apiData = $item.response.data;
const table = {
  columns: ["OrderID", "CustomerID", "Total", "Status"],
  rows: apiData.map(order => ({
    OrderID: order.id,
    CustomerID: order.customer_id,
    Total: order.total,
    Status: order.status
  }))
};

Common Use Cases

  1. Bulk Data Import - Import data from CSV, Excel, or external APIs
  2. Web Scraping Storage - Store scraped data in MySQL
  3. Data Migration - Move data between systems
  4. ETL Operations - Extract, transform, and load data into MySQL
  5. Report Data Storage - Save generated report data
  6. Batch Processing - Insert multiple records efficiently
  7. Sync Operations - Synchronize data from external sources

Replace Mode (ON DUPLICATE KEY UPDATE)

When the Replace option is enabled, the node generates INSERT ... ON DUPLICATE KEY UPDATE statements:

INSERT INTO tablename (Name, Email, City)
VALUES ('John', 'john@example.com', 'NYC')
ON DUPLICATE KEY UPDATE
  Name = VALUES(Name),
  Email = VALUES(Email),
  City = VALUES(City)

This is useful for:

  • Upserting data (insert if not exists, update if exists)
  • Syncing data from external sources
  • Updating records based on unique keys
  • Avoiding duplicate entry errors

Requirements for Replace Mode:

  • Table must have a PRIMARY KEY or UNIQUE constraint
  • The constraint determines which records are considered duplicates
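
For example, a target table suited to Replace mode might enforce uniqueness on email (an illustrative schema; your table definition will differ):

CREATE TABLE customers (
  id INT AUTO_INCREMENT PRIMARY KEY,
  Email VARCHAR(255) NOT NULL UNIQUE,
  Name VARCHAR(255),
  City VARCHAR(100)
);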

Tips for Effective Use

  • Table Format - Ensure data is in proper table format with columns and rows
  • Column Matching - Table data columns must match database table columns (case-sensitive)
  • Data Types - Ensure data types match (numbers as numbers, dates as proper format)
  • Batch Processing - For very large datasets, split into batches to avoid timeouts
  • Replace Mode - Use Replace option to handle duplicates gracefully
  • Transactions - Use transactions for atomic bulk inserts
  • Direct Credentials - Use optional Credentials field to insert without Connect node
  • Validation - Validate data before inserting to avoid runtime errors

Working with Different Data Sources

CSV to MySQL

Read CSV (file: data.csv)

Insert Table
- Table Name: imported_data
- Table Data: $item.table

Excel to MySQL

Excel Open (file: data.xlsx)

Excel Read Range (range: A1:D100)

Insert Table
- Table Name: excel_import
- Table Data: $item.range

JSON Array to MySQL

// In JavaScript node: transform JSON to table format
const jsonData = $item.apiResponse;

const table = {
  columns: Object.keys(jsonData[0]),
  rows: jsonData
};

return { table };

JavaScript → Insert Table (Table Data: $item.table)

Query to Insert (Copy Data)

Query (SELECT * FROM source_table)

Insert Table
- Table Name: destination_table
- Table Data: $item.result

Best Practices

Validate Data First

// JavaScript node before Insert
const rows = $item.table.rows;

// Validate required fields
const validRows = rows.filter(row => {
  return row.Email && row.Name && row.Email.includes('@');
});

// Remove duplicates
const uniqueRows = Array.from(
  new Map(validRows.map(row => [row.Email, row])).values()
);

return {
  table: {
    columns: $item.table.columns,
    rows: uniqueRows
  }
};

Handle Large Datasets

For large datasets (10,000+ rows), process in batches:

// Split into batches of 1000 rows
const batchSize = 1000;
const allRows = $item.table.rows;
const batches = [];

for (let i = 0; i < allRows.length; i += batchSize) {
  batches.push({
    columns: $item.table.columns,
    rows: allRows.slice(i, i + batchSize)
  });
}

// Pass each batch to an Insert Table node (e.g. via a Loop node)
return { batches };

Use Transactions for Consistency

Connect

Start Transaction

Insert Table (batch 1)

Insert Table (batch 2)

Insert Table (batch 3)

Commit (all or nothing)

Disconnect

Map Column Names

If source columns don't match database columns:

// Map column names
const table = {
  columns: ["CustomerName", "EmailAddress", "PhoneNumber"],
  rows: $item.sourceData.rows.map(row => ({
    CustomerName: row.Name,
    EmailAddress: row.Email,
    PhoneNumber: row.Phone
  }))
};

Performance Optimization

Disable Indexes During Bulk Insert

For very large imports (note that DISABLE KEYS affects only non-unique indexes on MyISAM tables and is a no-op on InnoDB):

Non Query: ALTER TABLE tablename DISABLE KEYS

Insert Table (bulk data)

Non Query: ALTER TABLE tablename ENABLE KEYS

Use Batch Inserts

The Insert Table node automatically uses parameterized inserts for each row, which is efficient. For maximum performance:

  • Insert in batches of 1000-5000 rows
  • Use transactions to group batches
  • Consider LOAD DATA INFILE for extremely large datasets (use the Non Query node; see the sketch below)
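
Run through a Non Query node, such a statement might look like this (a sketch; the file path, delimiters, and target table are placeholders):

LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE imported_data
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;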

Connection Reuse

Connect (once)

Insert Table (multiple calls in loop)

Disconnect

This is more efficient than opening a new connection for each insert.

Common Errors and Solutions

Insert Operation Failed

Error: ErrRuntime: Insert operation failed

Common Causes:

  • Table doesn't exist
  • Column name mismatch
  • Data type mismatch
  • Constraint violation (NOT NULL, FOREIGN KEY, UNIQUE)
  • Permission denied

Solutions:

  • Verify table exists: SHOW TABLES LIKE 'tablename'
  • Check column names: DESCRIBE tablename
  • Validate data types match column definitions
  • Ensure required (NOT NULL) columns have values
  • Check foreign key references exist
  • Verify user has INSERT permission

Table Name Cannot Be Empty

Error: ErrInvalidArg: Table name cannot be empty

Solutions:

  • Provide the table name in the Table Name field
  • Ensure the variable containing the table name has a value

Table Data Cannot Be Empty

Error: ErrInvalidArg: Table data cannot be empty

Solutions:

  • Ensure the table data has at least one row
  • Validate the data source (CSV, Excel, etc.) has data
  • Check that data transformation didn't remove all rows

Duplicate Entry Error

Error: Duplicate entry 'value' for key 'PRIMARY' or 'unique_constraint'

Solutions:

  • Enable the Replace option to update on duplicate
  • Filter out duplicates before inserting
  • Check if records already exist in the database
  • Use a different unique value generator

Column Count Doesn't Match

Error: Column count doesn't match value count

Solutions:

  • Ensure all rows have values for all columns
  • Check that column names in table data match database columns exactly
  • Verify case sensitivity (column matching is case-sensitive, and MySQL table names are case-sensitive on Linux by default)

Advanced Examples

ETL Pipeline

HTTP Request (fetch data from API)

JavaScript (transform data)

Query (check for existing records)

JavaScript (filter new records)

Insert Table (insert new records)

Incremental Load

// Check last import timestamp
// Query: SELECT MAX(ImportedAt) FROM target_table

const lastImport = $item.lastImportDate;

// Filter source data for new records
const newRecords = $item.sourceData.rows.filter(row => {
  return new Date(row.CreatedAt) > new Date(lastImport);
});

// Add import timestamp in MySQL DATETIME format
const table = {
  columns: [...$item.sourceData.columns, 'ImportedAt'],
  rows: newRecords.map(row => ({
    ...row,
    ImportedAt: new Date().toISOString().slice(0, 19).replace('T', ' ')
  }))
};

Upsert Pattern with Replace

Query (SELECT existing IDs)

JavaScript (merge new and updated records)

Insert Table (with Replace enabled)
- Inserts new records
- Updates existing records based on PRIMARY KEY

Multi-Table Insert with Transaction

Connect

Start Transaction

Insert Table (customers)

Query (get new customer IDs)

JavaScript (add customer IDs to orders data)

Insert Table (orders)

Commit

Disconnect
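
The middle JavaScript step might look like this (a sketch with hypothetical field names):

// Look up the newly created customer IDs returned by the Query node
const idByEmail = new Map(
  $item.result.rows.map(c => [c.Email, c.CustomerID])
);

// Attach customer IDs to the orders table data
const orders = {
  columns: ["CustomerID", "Total", "Status"],
  rows: $item.orders.map(o => ({
    CustomerID: idByEmail.get(o.Email),
    Total: o.Total,
    Status: o.Status
  }))
};

return { orders };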

Data Type Mapping

Ensure data types match MySQL column types:

JavaScript Type   MySQL Type                    Example
String            VARCHAR, TEXT, CHAR           'John Doe'
Number            INT, BIGINT, DECIMAL, FLOAT   123, 45.67
Boolean           TINYINT(1), BOOLEAN           true → 1, false → 0
Date              DATE, DATETIME, TIMESTAMP     '2024-01-15', '2024-01-15 10:30:00'
null              NULL                          null
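
Where source values arrive as strings or booleans, a JavaScript node can coerce them before the insert (a sketch; the field names are placeholders):

// Coerce values to MySQL-friendly types
const rows = $item.table.rows.map(row => ({
  ...row,
  Age: Number(row.Age),            // INT column
  IsActive: row.IsActive ? 1 : 0   // TINYINT(1) column
}));

return { table: { columns: $item.table.columns, rows } };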

Date Format

// Format dates for MySQL
const table = {
  columns: ["OrderDate", "ShipDate"],
  rows: $item.orders.map(order => ({
    OrderDate: new Date(order.orderDate).toISOString().slice(0, 10), // '2024-01-15'
    ShipDate: new Date(order.shipDate).toISOString().slice(0, 19).replace('T', ' ') // '2024-01-15 10:30:00'
  }))
};
