
Check Image Safety

Analyzes images for potentially unsafe content using the Google Vision API's SafeSearch detection feature.

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits the specified number of seconds before executing the node.
  • Delay After (sec) - Waits the specified number of seconds after executing the node.
  • Continue On Error - The automation continues regardless of any error. The default value is false.
Info: If the Continue On Error property is set to true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Vision Client Id - The unique identifier of the Vision API connection, typically obtained from the Connect node.
  • Image Path - The file path to the image to analyze for safety.

Options

  • Credentials - Google Cloud service account credentials (optional; use instead of the Connect node). If provided, the node creates its own client connection without requiring a Vision Client Id.

Output

  • Safety Result - An object containing safety assessment results with the following properties:
    • Adult - Likelihood of adult content (sexually explicit)
    • Medical - Likelihood of medical content
    • Racy - Likelihood of racy content (suggestive but not explicit)
    • Spoof - Likelihood of spoof or fake content
    • Violence - Likelihood of violent content

Each property returns one of the following likelihood levels (a sketch after this list shows how to compare them against a threshold):

  • UNKNOWN - Unknown likelihood
  • VERY_UNLIKELY - Very unlikely to contain this content
  • UNLIKELY - Unlikely to contain this content
  • POSSIBLE - Possibly contains this content
  • LIKELY - Likely contains this content
  • VERY_LIKELY - Very likely contains this content
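
Because these values are ordered strings rather than numbers, threshold comparisons are easiest with a small helper. The following is a minimal sketch in plain JavaScript (the same language used in the Programming > Evaluate examples below); the meetsThreshold helper and the message.safety_result name are illustrative, not part of the node itself:

// Order mirrors the likelihood levels above; UNKNOWN is treated as the lowest
// level here - adjust if you prefer to route unknown results to manual review.
const LIKELIHOOD_ORDER = [
  "UNKNOWN",
  "VERY_UNLIKELY",
  "UNLIKELY",
  "POSSIBLE",
  "LIKELY",
  "VERY_LIKELY"
];

// True when level is at or above threshold,
// e.g. meetsThreshold("LIKELY", "POSSIBLE") === true
function meetsThreshold(level, threshold) {
  return LIKELIHOOD_ORDER.indexOf(level) >= LIKELIHOOD_ORDER.indexOf(threshold);
}

// Example: block anything rated LIKELY or higher for Adult or Violence
const result = message.safety_result;
message.block =
  meetsThreshold(result.Adult, "LIKELY") ||
  meetsThreshold(result.Violence, "LIKELY");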

How It Works

The Check Image Safety node analyzes an image to detect potentially unsafe content using the Google Vision API's SafeSearch detection. When executed, the node performs the following steps (a sketch of the equivalent standalone API call follows the list):

  1. Retrieves the Vision API client using the provided client ID
  2. Validates that the image path is not empty
  3. Opens and reads the image file from the specified path
  4. Creates a Vision API image object from the file
  5. Calls the DetectSafeSearch method to analyze the image for unsafe content
  6. Processes the results and returns the safety assessment
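
For reference, a roughly equivalent standalone call with Google's official Node.js client library (@google-cloud/vision) is sketched below. This is only an illustration of the steps above; the node itself manages the client and file handling for you and may use a different client library than the one shown here:

// Minimal sketch, assuming: npm install @google-cloud/vision
// and credentials available via GOOGLE_APPLICATION_CREDENTIALS.
const vision = require("@google-cloud/vision");

async function checkImageSafety(imagePath) {
  const client = new vision.ImageAnnotatorClient();

  // safeSearchDetection accepts a local file path (or a Cloud Storage URI)
  const [result] = await client.safeSearchDetection(imagePath);
  const annotation = result.safeSearchAnnotation;

  // Each field is a likelihood string such as "VERY_UNLIKELY" or "LIKELY"
  return {
    Adult: annotation.adult,
    Medical: annotation.medical,
    Racy: annotation.racy,
    Spoof: annotation.spoof,
    Violence: annotation.violence
  };
}

checkImageSafety("/uploads/user_photo_12345.jpg").then(console.log);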

Requirements

  • A valid connection to Vision API established with the Connect node
  • Valid Google Cloud credentials with appropriate permissions
  • An image file accessible from the specified path
  • The Vision API enabled in your Google Cloud project

Error Handling

The node will return specific errors in the following cases:

  • Empty or invalid Vision Client ID
  • Empty image path
  • Invalid image file path
  • File read errors
  • Invalid image format
  • Network connectivity issues
  • Vision API service errors
  • Authentication failures

Usage Notes

  • The Vision Client ID must be obtained from a successful Connect node execution (or provide credentials directly)
  • The image file must be accessible from the specified path
  • Supported image formats include JPEG, PNG, GIF, BMP, TIFF, and WebP
  • Each safety category is assessed with a likelihood level (UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY)
  • This node is useful for content moderation and filtering applications
  • Results can be used to automatically flag or filter images based on safety criteria
  • The safety detection is not 100% accurate and should be used as part of a broader content moderation strategy
  • Safe Search is designed for broad categories; for specific content policies, consider additional validation

Example Use Cases

Moderate User-Uploaded Images

Input:
Vision Client Id: (from Connect node)
Image Path: /uploads/user_photo_12345.jpg

Output:
Safety Result: {
  Adult: "VERY_UNLIKELY",
  Medical: "UNLIKELY",
  Racy: "UNLIKELY",
  Spoof: "VERY_UNLIKELY",
  Violence: "VERY_UNLIKELY"
}

Action: Image passes safety check, publish to platform

Filter Inappropriate Content Automatically

1. Check Image Safety
Image Path: {{uploaded_image}}
Output: safety_result

2. Programming > Evaluate
Code:
const result = message.safety_result;
const unsafe = (
  result.Adult === "LIKELY" || result.Adult === "VERY_LIKELY" ||
  result.Violence === "LIKELY" || result.Violence === "VERY_LIKELY" ||
  result.Racy === "VERY_LIKELY"
);

if (unsafe) {
  message.status = "rejected";
  message.reason = "Inappropriate content detected";
} else {
  message.status = "approved";
}

3. Flow > Router
Route based on status:
- If rejected: Delete file and notify user
- If approved: Process normally

Flag Images for Manual Review

1. Check Image Safety
Image Path: /uploads/{{filename}}
Output: safety

2. Programming > Evaluate
Determine review priority:
const s = message.safety;
if (s.Adult === "LIKELY" || s.Violence === "LIKELY") {
  message.priority = "high";
} else if (s.Adult === "POSSIBLE" || s.Racy === "LIKELY") {
  message.priority = "medium";
} else {
  message.priority = "none";
}

3. Database > Insert
Table: review_queue
Data: {filename, safety, priority, timestamp}

4. If priority is high or medium:
- Send notification to moderator team
- Hold image from publication

Content Safety Dashboard

1. File System > List Files
Directory: /content
Output: files

2. Loop through files:
- Check Image Safety
Image Path: {{file}}
Output: safety_result

- Programming > Evaluate
Calculate safety score:
const levels = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"];
const getScore = (level) => levels.indexOf(level);

message.safety_score = Math.max(
  getScore(message.safety_result.Adult),
  getScore(message.safety_result.Violence),
  getScore(message.safety_result.Racy)
);

- Database > Insert
Store results with safety_score for dashboard visualization

E-commerce Product Image Validation

1. Check Image Safety
Image Path: /products/new/{{product_id}}.jpg
Output: result

2. Programming > Evaluate
Validate product images:
const r = message.result;

// Reject adult or racy content; allow medical content for health products
// and allow all other content unless it is likely to be violent

if (r.Adult !== "VERY_UNLIKELY" && r.Adult !== "UNLIKELY") {
  message.valid = false;
  message.reason = "Adult content not allowed for products";
} else if (r.Racy === "LIKELY" || r.Racy === "VERY_LIKELY") {
  message.valid = false;
  message.reason = "Racy content not allowed for products";
} else if (r.Violence === "LIKELY" || r.Violence === "VERY_LIKELY") {
  message.valid = false;
  message.reason = "Violent content not allowed";
} else {
  message.valid = true;
}

3. Update product status based on validation

Social Media Platform Moderation

1. Check Image Safety
Image Path: {{post_image}}
Output: safety

2. Programming > Evaluate
Apply platform-specific rules:
const s = message.safety;

// Conservative approach for public platform
message.action = "approve";

if (s.Adult === "POSSIBLE" || s.Adult === "LIKELY" || s.Adult === "VERY_LIKELY") {
  message.action = "block";
  message.flag = "adult_content";
} else if (s.Violence === "LIKELY" || s.Violence === "VERY_LIKELY") {
  message.action = "review";
  message.flag = "violent_content";
} else if (s.Racy === "VERY_LIKELY") {
  message.action = "age_restrict";
  message.flag = "racy_content";
}

3. Execute action:
- block: Prevent posting, notify user
- review: Queue for human review
- age_restrict: Allow but add age gate
- approve: Publish normally

Tips

  • Set Clear Thresholds: Define which likelihood levels trigger action (e.g., block at "LIKELY" or higher); see the configuration sketch after this list
  • Category-Specific Rules: Different categories may need different thresholds
  • Combine Checks: Use multiple safety checks if you need both image and text moderation
  • Human Review: Always include human review option for edge cases
  • Context Matters: Medical content might be appropriate in some contexts but not others
  • User Communication: Clearly explain to users why content was flagged
  • False Positives: Vision API may flag legitimate content; allow appeal process
  • Continuous Monitoring: Regularly review flagged content to adjust thresholds
  • Logging: Keep detailed logs of safety decisions for compliance and improvement
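
As a concrete way to apply the first two tips, thresholds can be kept in a single configuration object so each category has its own trigger level. This is a hedged sketch for a Programming > Evaluate step; the threshold values are illustrative placeholders, and safety_result is assumed to be the output name of the Check Image Safety node:

// Illustrative per-category thresholds - tune these to your own policy
const THRESHOLDS = {
  Adult: "LIKELY",
  Violence: "LIKELY",
  Racy: "VERY_LIKELY",
  Medical: "VERY_LIKELY",
  Spoof: "VERY_LIKELY"
};

const LIKELIHOOD_ORDER = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"];
const meetsThreshold = (level, threshold) =>
  LIKELIHOOD_ORDER.indexOf(level) >= LIKELIHOOD_ORDER.indexOf(threshold);

const result = message.safety_result;

// Collect every category that is at or above its configured threshold
message.violations = Object.keys(THRESHOLDS).filter((category) =>
  meetsThreshold(result[category], THRESHOLDS[category])
);
message.action = message.violations.length > 0 ? "review" : "approve";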

Understanding Likelihood Levels

VERY_UNLIKELY

  • Highest confidence that content does NOT contain this category
  • Safe to automatically approve

UNLIKELY

  • Low probability of containing this content
  • Generally safe for most use cases

POSSIBLE

  • Uncertain - could contain this content
  • Consider manual review or secondary checks
  • Good threshold for flagging borderline content

LIKELY

  • High probability of containing this content
  • Recommended threshold for automatic blocking
  • Review recommended before publication

VERY_LIKELY

  • Highest confidence that content DOES contain this category
  • Automatic blocking recommended
  • Strong indicator of policy violation

Common Errors and Solutions

Error: "Image Path cannot be empty"

Solution: Ensure the Image Path input is populated with a valid file path

Error: "No such file or directory"

Solution: Verify the file path is correct and the file exists

Error: "Invalid Client"

Solution: Ensure Connect node ran successfully and Vision Client ID is properly passed, or provide credentials directly

All Categories Return "UNKNOWN"

Solution:

  • Image may be too small or low quality
  • Try with a higher resolution image
  • Verify image is not corrupted
  • Some image types may not be analyzable

Safety Categories Explained

Adult Content

Sexually explicit content including nudity and sexual acts. Most platforms block at "LIKELY" or higher.

Medical Content

Medical imagery including injuries, surgeries, or medical conditions. May be appropriate depending on context (e.g., health platforms).

Racy Content

Suggestive or provocative content that is not explicitly sexual. Less severe than Adult content. Often age-restricted rather than blocked.

Spoof Content

Fake, doctored, or misleading content including deep fakes. Important for maintaining content authenticity.

Violence

Violent acts, weapons, blood, or graphic injury. Critical for user safety and platform guidelines.

Compliance Considerations

  • GDPR: Log and document content moderation decisions
  • Age Restrictions: Consider age-gating instead of blocking certain content
  • User Rights: Provide appeal process for flagged content
  • Transparency: Explain moderation criteria to users
  • Consistency: Apply rules uniformly across all content
  • Audit Trail: Maintain records of safety checks for compliance reporting