Read CSV

Reads data from a CSV file and outputs a data table

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits in seconds before executing the node.
  • Delay After (sec) - Waits in seconds after executing the node.
  • Continue On Error - The automation continues regardless of any error. The default value is false.
info

If the ContinueOnError property is true, no error is caught when the project is executed, even if a Catch node is used.

Input

  • File Path - Path of the CSV file to read

Output

  • Table - Data read from the CSV file in data table format

Options

  • Separator - The delimiter used in the CSV file

    • Comma (,)
    • Semicolon (;)
    • Tab (TAB)
    • Space
  • Skip Rows - The number of rows to be skipped at the beginning of the CSV file

  • Open with Encoding - The encoding type of the CSV file

    • UTF-8
    • UTF-16
    • Windows 1250
    • Windows 1251
    • Windows 1252
    • Windows 1253
    • Windows 1254
    • Windows 1255
    • Windows 1256
    • Windows 1257
    • Windows 1258
    • ISO 8859-1
    • ISO 8859-2
    • ISO 8859-3
    • ISO 8859-4
    • ISO 8859-5
    • ISO 8859-6
    • ISO 8859-7
    • ISO 8859-8
    • ISO 8859-9
    • ISO 8859-10
    • ISO 8859-13
    • ISO 8859-14
    • ISO 8859-15
    • ISO 8859-16
    • Macintosh
    • KOI8-R
  • Headers - Indicates whether the CSV file has headers or not

  • Jsonify - Transforms the header names into a JSON-friendly format by converting them to lowercase and replacing spaces with underscores.
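
The Jsonify transformation described above can be sketched in a few lines of Python (the function name `jsonify_header` is illustrative, not part of the node's API):

```python
def jsonify_header(header: str) -> str:
    """Convert a CSV header into a JSON-friendly key:
    lowercase, with spaces replaced by underscores."""
    return header.strip().lower().replace(" ", "_")

# Headers from a typical export
headers = ["Product Name", "Unit Price", "Stock Level"]
print([jsonify_header(h) for h in headers])
# → ['product_name', 'unit_price', 'stock_level']
```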

How It Works

The Read CSV node performs the following steps:

  1. Locates File - Validates that the CSV file exists at the specified path
  2. Opens File - Opens the file using the specified encoding to ensure proper character interpretation
  3. Skips Rows - If configured, skips the specified number of initial rows (useful for files with metadata)
  4. Parses Headers - If Headers option is enabled, reads the first row as column names
  5. Reads Data - Parses each row using the specified separator character
  6. Builds Data Table - Constructs a data table structure with rows and columns
  7. Applies Jsonify - If enabled, transforms header names to JSON-friendly format (lowercase, underscore-separated)
  8. Returns Table - Outputs the complete data table for use in subsequent nodes
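
The steps above can be sketched with Python's standard `csv` module (parameter names mirror the node's options for illustration; the node's internals may differ, and the table is represented here as a list of dicts):

```python
import csv
import os

def read_csv(file_path, separator=",", skip_rows=0,
             encoding="utf-8", headers=True, jsonify=False):
    # 1. Locate the file
    if not os.path.isfile(file_path):
        raise FileNotFoundError(file_path)
    # 2. Open with the requested encoding
    with open(file_path, newline="", encoding=encoding) as f:
        reader = csv.reader(f, delimiter=separator)
        # 3. Skip leading metadata rows, if configured
        for _ in range(skip_rows):
            next(reader, None)
        # 4. Read the first remaining row as column names
        columns = None
        if headers:
            columns = next(reader)
            # 7. Optionally transform headers to JSON-friendly keys
            if jsonify:
                columns = [c.strip().lower().replace(" ", "_")
                           for c in columns]
        # 5-6. Parse the remaining rows into the table
        rows = list(reader)
    # 8. Return the table (plain rows if there are no headers)
    if columns is None:
        return rows
    return [dict(zip(columns, row)) for row in rows]

# Hypothetical usage (path is illustrative):
# table = read_csv("C:\\Data\\customers.csv", headers=True)
```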

Requirements

  • File System Access - Read permissions to the CSV file location
  • Valid CSV File - The file must exist and contain valid CSV-formatted data
  • Correct Encoding - The encoding setting must match the file's actual encoding
  • Memory Availability - Sufficient memory to load the entire CSV file (for very large files, consider Split CSV first)
  • Proper Separator - The separator setting must match the actual delimiter used in the file

Error Handling

| Error Code | Description | Solution |
| --- | --- | --- |
| FILE_NOT_FOUND | The specified CSV file does not exist | Verify the file path is correct and the file exists |
| PERMISSION_DENIED | Insufficient permissions to read the file | Check file permissions and ensure read access is granted |
| ENCODING_ERROR | The file encoding doesn't match the specified encoding | Try different encoding options or check the file's actual encoding |
| INVALID_SEPARATOR | The separator doesn't match the file structure | Verify the correct delimiter (comma, semicolon, tab, space) |
| MALFORMED_CSV | The CSV file has inconsistent row lengths or corrupted data | Validate the CSV file structure and fix any formatting issues |
| MEMORY_ERROR | The file is too large to load into memory | Use Split CSV to divide large files into smaller chunks |
| FILE_IN_USE | The file is locked by another process | Close the file in other applications before reading |
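
In code that wraps a CSV read, the most common failures map naturally onto exception handling. A Python sketch (the mapping from Python exceptions to these error codes is an assumption for illustration, not the node's documented behavior):

```python
def safe_read(file_path, encoding="utf-8"):
    """Attempt to read a CSV file, translating common
    failures into the error codes listed above."""
    try:
        with open(file_path, encoding=encoding) as f:
            return f.read()
    except FileNotFoundError:
        raise RuntimeError("FILE_NOT_FOUND: verify the file path")
    except PermissionError:
        raise RuntimeError("PERMISSION_DENIED: check read access")
    except UnicodeDecodeError:
        raise RuntimeError("ENCODING_ERROR: try another encoding")
```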

Usage Examples

Example 1: Read Customer Data

Read a customer list from a CSV file with headers:

Input:
- File Path: "C:\Data\customers.csv"
- Separator: Comma (,)
- Skip Rows: 0
- Open with Encoding: UTF-8
- Headers: true
- Jsonify: false

Output:
- Table: Data table with customer records, using first row as column names

Example 2: Read Excel Export with Metadata

Read a CSV exported from Excel that has 3 metadata rows before the actual data:

Input:
- File Path: "D:\Reports\sales_report.csv"
- Separator: Comma (,)
- Skip Rows: 3
- Open with Encoding: UTF-8
- Headers: true
- Jsonify: false

Output:
- Table: Data table starting from row 4, skipping metadata
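
The Skip Rows behavior in this example can be reproduced with the standard `csv` module; the metadata lines below are made up for illustration:

```python
import csv
import io

# An Excel-style export: 3 metadata lines before the real table
raw = (
    "Report generated: 2024-01-01\n"
    "Region: EMEA\n"
    "\n"
    "Product,Revenue\n"
    "Widget,1200\n"
)

reader = csv.reader(io.StringIO(raw))
for _ in range(3):            # Skip Rows: 3
    next(reader)
columns = next(reader)        # Headers: true → first remaining row
rows = [dict(zip(columns, r)) for r in reader]
print(rows)  # → [{'Product': 'Widget', 'Revenue': '1200'}]
```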

Example 3: Read Data for API Processing

Read CSV data and convert headers to API-friendly format:

Input:
- File Path: "/home/user/data/Product List.csv"
- Separator: Comma (,)
- Skip Rows: 0
- Open with Encoding: UTF-8
- Headers: true
- Jsonify: true

CSV Headers: "Product Name", "Unit Price", "Stock Level"
Transformed Headers: "product_name", "unit_price", "stock_level"

Output:
- Table: Data table with JSON-friendly column names for API integration

Usage Notes

  • Encoding Detection - If unsure about encoding, UTF-8 is the most common. For legacy systems, try Windows-1252.
  • Separator Auto-Detection - While the node doesn't auto-detect, comma and semicolon are most common. Tab-separated values are common in data exports.
  • Memory Considerations - Reading very large files (over 100MB) can consume significant memory. Monitor performance and consider splitting large files.
  • Header Row - When Headers is true, the first row (after skipped rows) becomes column names, not data.
  • Jsonify Usage - Enable Jsonify when preparing data for JSON APIs or when you want consistent, code-friendly column names.
  • Empty Cells - Empty cells in the CSV are preserved as empty strings in the data table.
  • Data Type Handling - All data is initially read as strings. Use type conversion nodes if numeric or date operations are needed.
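
Because every value arrives as a string, numeric work on the table needs explicit conversion first. A small sketch (the row values are illustrative):

```python
# A row as it comes out of Read CSV: every value is a string
row = {"product_name": "Widget", "unit_price": "19.99", "stock_level": "42"}

# Convert string fields before doing arithmetic on them
unit_price = float(row["unit_price"])
stock_level = int(row["stock_level"])
inventory_value = unit_price * stock_level
print(round(inventory_value, 2))  # → 839.58
```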

Tips

  • Test with Sample First - When working with large files, test your configuration on a small sample first.
  • Use Skip Rows Wisely - Skip Rows is perfect for CSV files with comments or metadata at the top.
  • Encoding Troubleshooting - If you see strange characters, try different encoding options. UTF-8, Windows-1252, and ISO 8859-1 cover most cases.
  • Inspect Your Data - Use a Log node immediately after Read CSV to inspect the data table structure before further processing.
  • Handle Missing Files - Use Try-Catch blocks to handle file not found errors gracefully.
  • Separator Issues - If data appears in a single column, you likely have the wrong separator selected.
  • Performance Optimization - For repeated reads of the same file, consider reading once and storing in a variable.
Related Nodes

  • Write CSV - Write a data table to a CSV file
  • Append CSV - Add rows to an existing CSV file
  • Split CSV - Split large CSV files into smaller chunks
  • Data Table - Learn about data table operations and manipulation, including iterating through rows