List Objects
Retrieves a list of all objects in an S3 bucket, including nested objects in subdirectories.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits in seconds before executing the node.
- Delay After (sec) - Waits in seconds after executing the node.
- Continue On Error - The automation will continue regardless of any error. The default value is false.
If the Continue On Error property is true, no error is caught when the project is executed, even if a Catch node is used.
Inputs
- Client Id - The client connection ID from the Connect node. Optional if using credentials directly.
- Bucket Name - The name of the S3 bucket to list objects from.
Options
- End Point - S3 endpoint URL. Required only if using credentials directly instead of Client ID.
- Access Key Id - AWS Access Key ID credential. Optional - use this instead of Client ID for direct authentication.
- Secret Key Access - AWS Secret Access Key credential. Optional - use this instead of Client ID for direct authentication.
Output
- result - An array of object information. Each object contains the name, size, last modified date, storage class, owner, version ID, and expiration details.
How It Works
The List Objects node retrieves all objects in a bucket recursively, including objects in nested folders. When executed, the node:
- Retrieves the S3 client via the Client ID, or creates a new client from the provided credentials
- Validates that the bucket name is provided and not empty
- Sends a request to S3 to list all objects recursively
- Processes the object stream and collects all objects
- Filters and formats the object information
- Returns an array of object metadata
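The final filter-and-format step can be illustrated with a short sketch. The raw entry field names below are assumptions for illustration, not the node's actual internals:

```javascript
// Minimal sketch of the "filter and format" step, using mocked raw
// listing entries (field names on the raw entries are assumed).
function formatObjects(rawEntries) {
  return rawEntries
    .filter(entry => entry.Key) // drop entries without a key
    .map(entry => ({
      name: entry.Key,
      lastModified: entry.LastModified,
      Expiration: entry.Expiration || "0001-01-01T00:00:00Z",
      Owner: entry.Owner || { DisplayName: "", ID: "" },
      VersionID: entry.VersionID || "",
      storageClass: entry.StorageClass || "STANDARD"
    }));
}

// Example with two mocked raw entries
const formatted = formatObjects([
  { Key: "documents/report-2024.pdf", LastModified: "2024-03-15T10:30:00Z", StorageClass: "STANDARD" },
  { Key: "images/logo.png", LastModified: "2024-02-20T14:15:00Z", StorageClass: "STANDARD" }
]);
console.log(formatted[0].name); // "documents/report-2024.pdf"
```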
Requirements
- Either a valid Client ID from a Connect node, or Access Key ID and Secret Access Key credentials
- A valid S3 bucket
- Appropriate S3 permissions to list objects (s3:ListBucket)
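For reference, the s3:ListBucket permission can be granted with an IAM policy statement like the following (the bucket ARN is a placeholder; substitute your own bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```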
Error Handling
The node will return specific errors in the following cases:
- Empty or invalid bucket name
- Invalid Client ID or credentials
- Bucket does not exist
- Insufficient permissions to list objects
- Network or connection errors
- Errors reading individual objects in the stream
Output Structure
The result is an array of object information:
[
{
"name": "documents/report-2024.pdf",
"lastModified": "2024-03-15T10:30:00Z",
"Expiration": "0001-01-01T00:00:00Z",
"Owner": {
"DisplayName": "",
"ID": ""
},
"VersionID": "",
"storageClass": "STANDARD"
},
{
"name": "images/logo.png",
"lastModified": "2024-02-20T14:15:00Z",
"Expiration": "0001-01-01T00:00:00Z",
"Owner": {
"DisplayName": "",
"ID": ""
},
"VersionID": "",
"storageClass": "STANDARD"
}
]
Object Fields
- name - The object key/name, including the full path
- Size - The object size in bytes
- lastModified - Timestamp when the object was last modified
- Expiration - Expiration date if a lifecycle policy is set (a zero timestamp otherwise)
- Owner - Object owner information (DisplayName and ID)
- VersionID - Version ID if bucket versioning is enabled
- storageClass - Storage class (STANDARD, GLACIER, etc.)
Usage Notes
- The node lists objects recursively, including all subdirectories
- Objects are returned in no particular order
- The operation may take time for buckets with many objects
- All objects are loaded into memory - be cautious with very large buckets
- The list includes objects at all levels of the folder hierarchy
- Empty "folders" (common prefixes with no objects) are not included
Best Practices
- Filter results in your flow logic if you need specific objects
- For very large buckets, consider filtering by prefix or using pagination
- Cache the results if you need to reference the list multiple times
- Use the object list to build indexes or catalogs
- Process the list in batches if performing operations on many objects
- Monitor memory usage when listing buckets with millions of objects
Example
To list all objects in a documents bucket:
Inputs:
- Client Id: (from Connect node)
- Bucket Name:
company-documents
Output:
[
{
"name": "2024/invoices/invoice-001.pdf",
"lastModified": "2024-03-01T09:00:00Z",
"storageClass": "STANDARD"
},
{
"name": "2024/reports/monthly-report.xlsx",
"lastModified": "2024-03-15T14:30:00Z",
"storageClass": "STANDARD"
},
{
"name": "archive/old-data.zip",
"lastModified": "2023-12-01T10:00:00Z",
"storageClass": "GLACIER"
}
]
Common Use Cases
File Inventory
Create a complete inventory of all files in a bucket:
- List Objects - Get all objects
- Process Results - Format and store inventory
- Export - Save to CSV or database
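The Export step might look like the following in flow logic, a minimal sketch that turns the object list into CSV text (the column choice and sample data are illustrative):

```javascript
// Sample object list (in a flow, use the List Objects result instead)
const objects = [
  { name: '2024/invoices/invoice-001.pdf', lastModified: '2024-03-01T09:00:00Z', storageClass: 'STANDARD' },
  { name: 'archive/old-data.zip', lastModified: '2023-12-01T10:00:00Z', storageClass: 'GLACIER' }
];

// Build CSV rows: a header plus one line per object.
// Quotes around the name guard against commas in object keys.
const header = 'name,lastModified,storageClass';
const rows = objects.map(obj =>
  `"${obj.name}",${obj.lastModified},${obj.storageClass}`
);
const csv = [header, ...rows].join('\n');
console.log(csv);
```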
Batch Processing
Process all files in a bucket:
- List Objects - Get all objects
- Loop - For each object:
- Download Object - Download file
- Process - Perform operations
- Upload Results - Save processed output
Storage Analysis
Analyze storage usage and costs:
const objects = result; // From List Objects
// Calculate total size
const totalSize = objects.reduce((sum, obj) => sum + obj.Size, 0);
const totalSizeGB = totalSize / (1024 * 1024 * 1024);
// Count by storage class
const standardCount = objects.filter(obj =>
obj.storageClass === 'STANDARD'
).length;
Sync Operations
Synchronize an S3 bucket with local storage:
- List Objects - Get S3 objects
- List Local Files - Get local files
- Compare - Find differences
- Sync - Download new/updated files
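The Compare step can be sketched as a name-and-date diff; the sample arrays below stand in for the List Objects result and the local file listing:

```javascript
// Sample inputs (in a flow these come from List Objects and a local listing)
const s3Objects = [
  { name: 'reports/q1.pdf', lastModified: '2024-03-01T09:00:00Z' },
  { name: 'reports/q2.pdf', lastModified: '2024-04-01T09:00:00Z' }
];
const localFiles = [
  { name: 'reports/q1.pdf', lastModified: '2024-02-28T09:00:00Z' }
];

// Index local files by name for quick lookup
const localByName = new Map(localFiles.map(f => [f.name, f]));

// Files to download: missing locally, or newer in S3
const toDownload = s3Objects.filter(obj => {
  const local = localByName.get(obj.name);
  return !local || new Date(obj.lastModified) > new Date(local.lastModified);
});
// toDownload holds q1.pdf (newer in S3) and q2.pdf (missing locally)
```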
File Search
Find specific files by name pattern:
const objects = result; // From List Objects
// Find all PDF files
const pdfFiles = objects.filter(obj =>
obj.name.endsWith('.pdf')
);
// Find files in specific folder
const reports = objects.filter(obj =>
obj.name.startsWith('reports/2024/')
);
// Find recently modified files
const recentDate = new Date('2024-03-01');
const recentFiles = objects.filter(obj =>
new Date(obj.lastModified) > recentDate
);
Processing Large Results
For buckets with many objects, process results in batches:
const objects = result; // From List Objects
const batchSize = 100;
for (let i = 0; i < objects.length; i += batchSize) {
const batch = objects.slice(i, i + batchSize);
// Process batch
}
Filtering by File Type
const objects = result; // From List Objects
// Group by file extension
const filesByType = {};
objects.forEach(obj => {
// Use the base name so dots in folder names do not affect the extension
const base = obj.name.split('/').pop();
const ext = base.includes('.') ? base.split('.').pop().toLowerCase() : 'none';
if (!filesByType[ext]) filesByType[ext] = [];
filesByType[ext].push(obj);
});
console.log(`PDF files: ${filesByType['pdf']?.length || 0}`);
console.log(`Images: ${(filesByType['jpg']?.length || 0) + (filesByType['png']?.length || 0)}`);
Date-Based Filtering
Find objects modified within a date range:
const objects = result; // From List Objects
const startDate = new Date('2024-01-01');
const endDate = new Date('2024-03-31');
const objectsInRange = objects.filter(obj => {
const modified = new Date(obj.lastModified);
return modified >= startDate && modified <= endDate;
});
Direct Credentials Example
Inputs:
- Bucket Name:
my-data-bucket
Options:
- End Point:
s3.us-west-2.amazonaws.com
- Access Key Id: (your AWS Access Key ID credential)
- Secret Key Access: (your AWS Secret Access Key credential)
Building a Download Queue
Create a queue of files to download:
- List Objects - Get all objects
- Filter - Select specific files
- Loop - For each file:
- Download Object - Download to local storage
const objects = result; // From List Objects
// Create download queue for recent PDFs
const downloadQueue = objects.filter(obj =>
obj.name.endsWith('.pdf') &&
new Date(obj.lastModified) > new Date('2024-03-01')
);
Storage Class Distribution
Analyze how objects are distributed across storage classes:
const objects = result; // From List Objects
const distribution = objects.reduce((acc, obj) => {
const cls = obj.storageClass || 'UNKNOWN';
acc[cls] = (acc[cls] || 0) + 1;
return acc;
}, {});
console.log('Storage class distribution:', distribution);
// { STANDARD: 150, GLACIER: 25, INTELLIGENT_TIERING: 10 }
Common Errors
Error: "NoSuchBucket: The specified bucket does not exist"
- Solution: Verify the bucket name is correct
Error: "Access Denied"
- Solution: Ensure your credentials have the s3:ListBucket permission
Error: "Failed to list object"
- Solution: Check for specific object-level errors or bucket configuration issues
Error: "Invalid Client ID"
- Solution: Verify the Client ID from the Connect node is being passed correctly
Performance Considerations
- Listing large buckets (millions of objects) can take significant time
- All objects are loaded into memory - monitor RAM usage
- Consider using prefix filters if you only need objects in specific folders
- For very large buckets, implement pagination or streaming
- The recursive listing may be slower than prefix-based queries
- Network bandwidth affects listing speed for buckets in different regions
Recursive Listing
The node automatically lists all objects recursively:
Bucket structure:
documents/
├── 2024/
│ ├── Q1/report.pdf
│ └── Q2/report.pdf
└── archive/
└── old.zip
Result includes all files:
- documents/2024/Q1/report.pdf
- documents/2024/Q2/report.pdf
- documents/archive/old.zip
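Because the result is a flat list of keys, a folder tree can be rebuilt from it when needed; a minimal sketch using the structure above:

```javascript
// Sample flat listing (mirrors the bucket structure above)
const objects = [
  { name: 'documents/2024/Q1/report.pdf' },
  { name: 'documents/2024/Q2/report.pdf' },
  { name: 'documents/archive/old.zip' }
];

// Build a nested tree: folders become objects, files become null leaves
const tree = {};
for (const obj of objects) {
  const parts = obj.name.split('/');
  let node = tree;
  for (const part of parts.slice(0, -1)) {
    node[part] = node[part] || {};
    node = node[part];
  }
  node[parts[parts.length - 1]] = null; // file leaf
}
// tree.documents['2024'].Q1 is { 'report.pdf': null }
```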