Upload Object
Uploads a local file to an S3 bucket with optional metadata and content type configuration.
Common Properties
- Name - The custom name of the node.
- Color - The custom color of the node.
- Delay Before (sec) - Waits in seconds before executing the node.
- Delay After (sec) - Waits in seconds after executing the node.
- Continue On Error - Automation will continue regardless of any error. The default value is false.
If the ContinueOnError property is true, no error is caught when the project is executed, even if a Catch node is used.
Inputs
- Client Id - The client connection ID from the Connect node. Optional if using credentials directly.
- File Path - The local file system path to the file you want to upload.
- Bucket Name - The name of the S3 bucket where the file will be uploaded.
- Object Name - The key/name for the object in S3. This is the full path including any prefixes (folders).
Options
- Content Type - (Optional) The MIME type of the file (e.g., application/pdf, image/png, text/csv, video/mp4). If not specified, S3 will attempt to determine it automatically.
- User Metadata - (Optional) Custom key-value metadata to attach to the object. Metadata must be provided as an object with string values.
- End Point - S3 endpoint URL. Required only if using credentials directly instead of Client ID.
- Access Key Id - AWS Access Key ID credential. Optional - use this instead of Client ID for direct authentication.
- Secret Key Access - AWS Secret Access Key credential. Optional - use this instead of Client ID for direct authentication.
How It Works
The Upload Object node transfers a file from your local file system to an S3 bucket. When executed, the node:
- Retrieves the S3 client using either the Client ID or creates a new client from credentials
- Validates that the bucket name and object name are provided and not empty
- Validates that the file path exists and is accessible
- Retrieves optional content type and user metadata
- Uploads the file to S3 with the specified options
- Completes successfully once the upload is finished
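The steps above can be sketched in Python against a boto3-style S3 client. This is a minimal illustration, not the node's actual implementation: `upload_object` and its parameters are hypothetical names, and the only external assumption is a client exposing boto3's `upload_file(Filename, Bucket, Key, ExtraArgs=...)` method.

```python
import os


def upload_object(client, file_path, bucket_name, object_name,
                  content_type=None, user_metadata=None):
    """Validate inputs, then upload a local file to S3.

    `client` is assumed to be a boto3-style S3 client exposing
    upload_file(Filename, Bucket, Key, ExtraArgs=...).
    """
    # Validate that bucket and object names are provided and not empty
    if not bucket_name or not bucket_name.strip():
        raise ValueError("Empty or invalid bucket name")
    if not object_name or not object_name.strip():
        raise ValueError("Empty or invalid object name")

    # Validate that the file path exists and is readable
    if not os.path.isfile(file_path):
        raise FileNotFoundError(f"File not found: {file_path}")
    if not os.access(file_path, os.R_OK):
        raise PermissionError(f"File is not readable: {file_path}")

    # Assemble optional content type and user metadata
    extra_args = {}
    if content_type:
        extra_args["ContentType"] = content_type
    if user_metadata:
        if not all(isinstance(v, str) for v in user_metadata.values()):
            raise TypeError("All metadata values must be strings")
        extra_args["Metadata"] = user_metadata

    # Perform the upload (boto3 uses multipart uploads for large files)
    client.upload_file(file_path, bucket_name, object_name,
                       ExtraArgs=extra_args)
```

The validation order mirrors the node: cheap name checks first, file-system checks next, and the network call only once all inputs are known to be good.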
Requirements
- Either a valid Client ID from a Connect node, or Access Key ID and Secret Access Key credentials
- A valid, existing S3 bucket
- A local file that exists and is readable
- Appropriate S3 permissions to upload objects (s3:PutObject)
- Sufficient disk space for the file being uploaded
Error Handling
The node will return specific errors in the following cases:
- Empty or invalid bucket name
- Empty or invalid object name
- File does not exist at the specified path
- File is not readable (permissions issue)
- Invalid Client ID or credentials
- Bucket does not exist
- Insufficient permissions to upload objects
- Network or connection errors
- File size exceeds available memory or upload limits
Usage Notes
- The file is uploaded from the local file system where the robot is running
- Object names can include forward slashes (/) to simulate folder structures
- If an object with the same name already exists, it will be overwritten
- The upload is atomic - the object appears in S3 only after the upload completes
- Large files are automatically handled with multipart uploads
- File paths should use the correct separator for your operating system
- User metadata keys are automatically prefixed with x-amz-meta- by S3
- All metadata values must be strings
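The metadata-prefix behavior is easy to see in a small sketch. S3 transmits user metadata as HTTP headers, lowercasing each key and prefixing it with x-amz-meta-; the helper name below is illustrative, not part of the node:

```python
def as_s3_headers(user_metadata):
    # S3 sends user metadata as HTTP headers: keys are lowercased
    # and prefixed with "x-amz-meta-"; values must already be strings.
    return {f"x-amz-meta-{key.lower()}": value
            for key, value in user_metadata.items()}


as_s3_headers({"Document-Type": "invoice"})
# → {"x-amz-meta-document-type": "invoice"}
```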
Object Naming Best Practices
- Use forward slashes (/) to organize objects in logical folders
- Include file extensions to make object types clear
- Use descriptive names that indicate content
- Consider including timestamps for versioning: reports/sales-2024-03-15.pdf
- Avoid special characters and spaces in object names
- Use lowercase for consistency
- Keep names under 1024 characters (S3 limit)
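A small key-builder can enforce several of these conventions at once (prefix folders, timestamp, lowercase, file extension). The function and its parameters are hypothetical, shown only to make the naming pattern concrete:

```python
from datetime import date


def build_report_key(prefix, name, ext, when=None):
    """Build a lowercase, timestamped object key such as
    reports/sales-2024-03-15.pdf."""
    when = when or date.today()
    return f"{prefix}/{name}-{when.isoformat()}.{ext}".lower()


build_report_key("reports", "sales", "pdf", date(2024, 3, 15))
# → "reports/sales-2024-03-15.pdf"
```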
Content Type Examples
Common MIME types to use:
- Images: image/jpeg, image/png, image/gif, image/webp
- Documents: application/pdf, application/msword, application/vnd.openxmlformats-officedocument.wordprocessingml.document
- Spreadsheets: application/vnd.ms-excel, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
- Text: text/plain, text/csv, text/html, text/xml
- Video: video/mp4, video/mpeg, video/webm
- Audio: audio/mpeg, audio/wav, audio/ogg
- Archives: application/zip, application/x-tar, application/gzip
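When you want to derive the Content Type from the file name before uploading, Python's standard-library `mimetypes` module covers the common cases above; the fallback value is the conventional generic binary type:

```python
import mimetypes


def detect_content_type(file_path, default="application/octet-stream"):
    # Guess the MIME type from the file extension; fall back to a
    # generic binary type when the extension is unknown.
    guessed, _encoding = mimetypes.guess_type(file_path)
    return guessed or default


detect_content_type("invoice.pdf")   # → "application/pdf"
detect_content_type("photo.png")     # → "image/png"
```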
User Metadata Structure
Metadata must be provided as an object with string key-value pairs:
{
"uploaded-by": "automation-bot",
"document-type": "invoice",
"department": "finance",
"year": "2024"
}
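If your metadata values may arrive as numbers or booleans, a small helper (hypothetical, not part of the node) can coerce flat values to the strings S3 requires while rejecting nested structures:

```python
def coerce_metadata(metadata):
    """Coerce every flat value to a string; reject nested structures,
    since S3 user metadata must be a flat string-to-string map."""
    out = {}
    for key, value in metadata.items():
        if isinstance(value, (dict, list)):
            raise TypeError("All metadata values must be strings")
        out[key] = value if isinstance(value, str) else str(value)
    return out


coerce_metadata({"department": "finance", "year": 2024})
# → {"department": "finance", "year": "2024"}
```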
Best Practices
- Always specify the Content Type for better file handling and browser downloads
- Use meaningful object names that indicate the file's purpose
- Add metadata for searchability and file management
- Consider object naming conventions for your organization
- Handle large files appropriately - consider file size limits
- Use versioning on buckets to prevent accidental overwrites
- Implement retry logic for large file uploads
- Validate file paths before attempting upload
- Use encryption for sensitive data (server-side encryption)
Example
To upload a PDF invoice to S3:
Inputs:
- Client Id: (from Connect node)
- File Path: /home/user/invoices/invoice-2024-001.pdf
- Bucket Name: company-invoices
- Object Name: 2024/march/invoice-2024-001.pdf
Options:
- Content Type: application/pdf
- User Metadata:
{
"invoice-number": "2024-001",
"customer": "acme-corp",
"amount": "1500.00"
}
Result:
The file will be uploaded to s3://company-invoices/2024/march/invoice-2024-001.pdf with the specified content type and metadata.
Upload with Folder Structure
To organize files in virtual folders:
Object Name: documents/2024/Q1/reports/sales-report.pdf
This creates a structure that appears as:
documents/
└── 2024/
└── Q1/
└── reports/
└── sales-report.pdf
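Because S3 has no real folders, this "structure" is nothing more than slashes in the key. Joining path segments with POSIX separators, as sketched below, produces a key whose prefixes render as a folder tree in most S3 browsers:

```python
import posixpath


def object_key(*parts):
    # S3 keys always use forward slashes, regardless of the local OS,
    # so join with posixpath rather than os.path.
    return posixpath.join(*parts)


object_key("documents", "2024", "Q1", "reports", "sales-report.pdf")
# → "documents/2024/Q1/reports/sales-report.pdf"
```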
Batch Upload Example
To upload multiple files, use a Loop node:
- Get Files - List files in a local directory
- Loop - For each file:
- Upload Object
- File Path: (file path from loop)
- Object Name: (construct from file name)
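The same loop can be sketched in Python. Here `upload` stands in for the Upload Object node (any callable taking a file path, bucket, and object name); the function name and prefix parameter are illustrative:

```python
from pathlib import Path


def batch_upload(upload, local_dir, bucket, prefix):
    """Upload every file in local_dir under the given key prefix.

    `upload` stands in for the Upload Object node: a callable taking
    (file_path, bucket, object_name).
    """
    uploaded = []
    for path in sorted(Path(local_dir).iterdir()):
        if path.is_file():
            # Construct the object name from the local file name
            key = f"{prefix}/{path.name}"
            upload(str(path), bucket, key)
            uploaded.append(key)
    return uploaded
```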
Direct Credentials Example
Inputs:
- File Path: /data/backup.zip
- Bucket Name: my-backups
- Object Name: daily-backup-2024-03-15.zip
Options:
- Content Type: application/zip
- End Point: s3.us-east-1.amazonaws.com
- Access Key Id: (your AWS Access Key ID credential)
- Secret Key Access: (your AWS Secret Access Key credential)
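For reference, the same direct-credentials upload can be expressed with boto3. This is a configuration sketch only: the endpoint, bucket, and key come from the example above, and the credential placeholders must be replaced with real values before it will run.

```python
import boto3

# Sketch: a client built from credentials directly, no Connect node.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.amazonaws.com",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",          # placeholder
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",  # placeholder
)

s3.upload_file(
    "/data/backup.zip",
    "my-backups",
    "daily-backup-2024-03-15.zip",
    ExtraArgs={"ContentType": "application/zip"},
)
```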
Common Use Cases
- Automated Backups: Upload database backups or file backups to S3 for disaster recovery.
- Document Management: Upload processed documents, invoices, or contracts to S3 for long-term storage.
- Media Processing: Upload images, videos, or audio files for a media library or CDN.
- Data Export: Export data to CSV or Excel files and upload to S3 for analysis.
- Log Archival: Upload application logs to S3 for compliance and debugging.
Common Errors
Error: "NoSuchBucket: The specified bucket does not exist"
- Solution: Verify the bucket name is correct and the bucket exists
Error: "Access Denied"
- Solution: Ensure your credentials have the s3:PutObject permission for the bucket
Error: "File not found"
- Solution: Verify the file path is correct and the file exists
Error: "User Metadata must be a map[string]interface"
- Solution: Ensure metadata is provided as a valid JSON object with string values
Error: "All metadata values must be strings"
- Solution: Convert all metadata values to strings before uploading