Read All

Retrieves all documents from a MongoDB collection without any filtering.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits the specified number of seconds before executing the node.
  • Delay After (sec) - Waits the specified number of seconds after executing the node.
  • Continue On Error - The automation will continue regardless of any error. The default value is false.
info

If the ContinueOnError property is true, no error is caught when the project is executed, even if a Catch node is used.

warning

This node retrieves ALL documents in the collection. Use it with caution on large collections, as loading every document can consume significant memory and time.

Inputs

  • MongoDB Client Id - The client ID returned from the Connect node (optional if credentials are provided).
  • Database Name - The name of the database containing the collection.
  • Collection Name - The name of the collection to read from.

Options

  • Credentials - Database credentials (Category 5) - optional if using a Client ID from the Connect node. This lets you perform the operation without a separate Connect node.

Output

  • Documents - An array containing all documents in the collection. Each document is returned as an object with all its fields.

How It Works

The Read All node retrieves every document from a MongoDB collection. When executed, the node:

  1. Validates that database name and collection name are not empty
  2. Obtains a MongoDB client (either from client ID or by creating one from credentials)
  3. Accesses the specified collection
  4. Executes a Find operation with an empty filter {}
  5. Retrieves all documents from the collection
  6. Returns the complete array of documents
  7. Stores the result in the specified output variable
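Conceptually, the node performs the equivalent of the following Node.js driver calls. This is a minimal sketch, not the node's actual implementation; the connection URI and the readAll helper are illustrative assumptions:

const { MongoClient } = require("mongodb");

// Sketch of the steps above; "readAll" and the URI argument are
// illustrative, not part of the node's public API.
async function readAll(uri, dbName, collectionName) {
  // Step 1: validate that the names are not empty
  if (!dbName || !collectionName) {
    throw new Error("Database name and collection name must not be empty");
  }
  // Step 2: obtain a MongoDB client (here created directly from a URI)
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // Steps 3-5: access the collection and run Find with an empty filter {}
    const docs = await client.db(dbName).collection(collectionName).find({}).toArray();
    // Steps 6-7: return the complete array to the caller
    return docs;
  } finally {
    await client.close();
  }
}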

Requirements

  • Either a valid client ID from the Connect node OR database credentials
  • Valid database name (non-empty)
  • Valid collection name (non-empty)
  • Appropriate permissions to read documents from the collection
  • Sufficient memory to load all documents

Error Handling

The node will return specific errors in the following cases:

  • ErrInvalidArg - Database name or collection name is empty, or client/credentials are invalid
  • ErrConnection - Cannot connect to MongoDB (when using credentials)
  • Permission errors if the user doesn't have read rights
  • Collection not found errors if the collection doesn't exist
  • Memory errors if the collection is too large to load
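As an illustration only, branching on these cases might look like the sketch below. The err.code property name is an assumption about how errors are surfaced, and readAll is the helper sketched under How It Works:

// Illustration: handling the error cases listed above.
(async () => {
  try {
    const docs = await readAll("mongodb://localhost:27017", "myapp", "users");
    console.log("Read " + docs.length + " documents");
  } catch (err) {
    if (err.code === "ErrInvalidArg") {
      // Empty database/collection name, or invalid client/credentials
      console.error("Check the inputs: " + err.message);
    } else if (err.code === "ErrConnection") {
      // Could not reach MongoDB when connecting from credentials
      console.error("Connection failed: " + err.message);
    } else {
      throw err; // permission, missing collection, memory, timeout, ...
    }
  }
})();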

Usage Notes

  • Retrieves ALL documents without any filtering
  • Returns an empty array if the collection is empty
  • All document fields, including _id, are returned
  • The entire result set is loaded into memory
  • Use the Read Document node instead if you need filtering
  • You can use either the client ID approach or the direct credentials approach
  • Not recommended for collections with millions of documents
  • Consider using pagination or Read Document with filters for large datasets

Output Format

The output is an array of document objects:

[
  {
    "_id": "507f1f77bcf86cd799439011",
    "name": "John Doe",
    "email": "john@example.com",
    "age": 30
  },
  {
    "_id": "507f1f77bcf86cd799439012",
    "name": "Jane Smith",
    "email": "jane@example.com",
    "age": 28
  },
  {
    "_id": "507f1f77bcf86cd799439013",
    "name": "Bob Johnson",
    "email": "bob@example.com",
    "age": 35
  }
]

Example Usage

Scenario 1: Read all users

  1. Connect node → Client Id
  2. Read All:
    • MongoDB Client Id: (from Connect)
    • Database Name: "myapp"
    • Collection Name: "users"
    • Output: all_users
  3. Log: "Total users: " + all_users.length

Scenario 2: Read all with direct credentials

Read All:
- Database Name: "inventory"
- Collection Name: "products"
- Credentials: (select credential)
- Output: all_products

For Each product in all_products:
  Log: product.name + " - Stock: " + product.stock

Scenario 3: Export all data to CSV

Connect → Client Id

Read All:
- Database Name: "sales"
- Collection Name: "orders"
- Output: all_orders

Set Variable:
- csv_data = all_orders.map(order => ({
    order_id: order.order_id,
    customer: order.customer,
    total: order.total,
    date: order.date
  }))

Write CSV:
- Data: {csv_data}
- File: "orders_export.csv"

Disconnect

Scenario 4: Process all documents

Read All:
- Database Name: "analytics"
- Collection Name: "events"
- Output: all_events

Set Variable:
- total_count = all_events.length
- unique_users = new Set(all_events.map(e => e.user_id)).size

Log: "Total events: " + total_count
Log: "Unique users: " + unique_users

Scenario 5: Backup collection data

Read All:
- Database Name: "production"
- Collection Name: "important_data"
- Output: backup_data

Insert Document:
- Database Name: "backup"
- Collection Name: "important_data_backup"
- MongoDB Query: {backup_data}

Log: "Backed up " + backup_data.length + " documents"

Scenario 6: Calculate statistics

Read All:
- Database Name: "ecommerce"
- Collection Name: "products"
- Output: products

Set Variables:
- total_products = products.length
- total_value = products.reduce((sum, p) => sum + (p.price * p.stock), 0)
- avg_price = products.reduce((sum, p) => sum + p.price, 0) / total_products
- out_of_stock = products.filter(p => p.stock === 0).length

Log: "Total products: " + total_products
Log: "Total inventory value: $" + total_value
Log: "Average price: $" + avg_price
Log: "Out of stock items: " + out_of_stock

Scenario 7: Iterate and update all documents

Connect → Client Id

Read All:
- Database Name: "legacy"
- Collection Name: "records"
- Output: all_records

For Each record in all_records:
  Update Document:
  - Database Name: "legacy"
  - Collection Name: "records"
  - MongoDB Query:
    {
      "Filter": {
        "_id": {"$oid": "{record._id}"}
      },
      "Update": {
        "$set": {
          "migrated": true,
          "migration_date": "{{current_date}}"
        }
      }
    }

Disconnect

Common Use Cases

  • Data Export: Export all collection data to files or other systems
  • Backup: Create backups of entire collections
  • Migration: Read all data for migration to another database
  • Analytics: Calculate statistics across all documents
  • Reporting: Generate reports using all collection data
  • Small Collections: Process complete datasets from small collections
  • Testing: Retrieve all test data for validation
  • Sync: Synchronize data with external systems

Best Practices

  • Only use on collections with manageable data sizes
  • Monitor memory usage when reading large collections
  • Consider using Read Document with pagination for large datasets
  • Implement error handling for memory issues
  • Use filters (Read Document) when you don't need all data
  • Move filtering and sorting into the query (Read Document) so indexes can be used; indexes don't help once results are already in memory
  • Consider batch processing for very large collections
  • Test with small collections first before using on production data

Performance Considerations

  • Memory usage increases with collection size
  • Processing time increases linearly with document count
  • Network bandwidth consumed for data transfer
  • Consider these alternatives for large collections:
    • Use Read Document with pagination
    • Process in batches using skip and limit
    • Use aggregation pipeline for calculations
    • Export data using MongoDB tools instead

Alternatives for Large Collections

Pagination approach:

Read Document with skip and limit:
{
  "skip": 0,
  "limit": 100
}

Aggregation approach: Use a MongoDB aggregation pipeline to do the processing server-side, without loading all data into memory.
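For example, the statistics from Scenario 6 could be computed server-side so that no documents are loaded by the automation. A hedged sketch with the Node.js driver, assuming a products collection with price and stock fields:

// Illustration: Scenario 6's statistics computed in the database.
// "collection" is a driver Collection handle, e.g.
// client.db("ecommerce").collection("products"); runs inside an async function.
const [stats] = await collection.aggregate([
  {
    $group: {
      _id: null,
      total_products: { $sum: 1 },
      total_value: { $sum: { $multiply: ["$price", "$stock"] } },
      avg_price: { $avg: "$price" },
      out_of_stock: { $sum: { $cond: [{ $eq: ["$stock", 0] }, 1, 0] } }
    }
  }
]).toArray();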

Streaming approach: Process documents in smaller batches using loops with Read Document.
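A sketch of that batching pattern with the driver, assuming a stable sort on _id and an illustrative batch size (the helper name and signature are assumptions, not part of the tool):

// Processes the collection one page of `batchSize` documents at a time,
// so only a single page is held in memory. Sorting by _id keeps page
// order stable across iterations (for read-only handlers).
async function processInBatches(collection, batchSize, handler) {
  let skip = 0;
  for (;;) {
    const batch = await collection
      .find({})
      .sort({ _id: 1 })
      .skip(skip)
      .limit(batchSize)
      .toArray();
    if (batch.length === 0) break; // no more pages
    for (const doc of batch) {
      await handler(doc);
    }
    skip += batch.length;
  }
}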

Common Errors

Empty Database Name:

  • Cause: Database name input is empty
  • Solution: Provide a valid database name

Empty Collection Name:

  • Cause: Collection name input is empty
  • Solution: Provide a valid collection name

Collection Not Found:

  • Cause: Specified collection doesn't exist
  • Solution: Verify collection name or create the collection first

Permission Denied:

  • Cause: User doesn't have read permission
  • Solution: Ensure the user has the read or readWrite role

Memory Error:

  • Cause: Collection too large to load into memory
  • Solution: Use Read Document with filters or implement pagination

Timeout:

  • Cause: The collection is large enough that the operation exceeds the configured timeout
  • Solution: Increase the timeout or use the alternatives above for large collections
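If you manage the connection yourself with the Node.js driver, timeouts can be raised via client options; the values below are illustrative, and the node's own timeout setting (if any) may be configured differently:

const { MongoClient } = require("mongodb");

// socketTimeoutMS bounds how long a single operation may sit idle on the
// socket; serverSelectionTimeoutMS bounds how long to wait for a server.
const client = new MongoClient("mongodb://localhost:27017", {
  socketTimeoutMS: 120000,
  serverSelectionTimeoutMS: 30000
});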

Working with Results

Count documents:

const count = documents.length;

Filter results:

const active = documents.filter(doc => doc.status === 'active');

Sort results:

const sorted = documents.sort((a, b) => a.name.localeCompare(b.name));

Map to specific fields:

const names = documents.map(doc => doc.name);

Find specific document:

const user = documents.find(doc => doc.email === 'john@example.com');

Group documents:

const grouped = documents.reduce((acc, doc) => {
  (acc[doc.category] = acc[doc.category] || []).push(doc);
  return acc;
}, {});

Related Nodes

  • Read Document - Query documents with filtering (recommended for large collections)
  • Show Collections - List all collections in a database
  • Insert Document - Add new documents
  • Update Document - Modify documents
  • Delete Document - Remove documents