Generate Code

Generates code snippets, functions, and complete programs using Google Vertex AI's PaLM 2 code generation model (Code Bison).

Common Properties

  • Name - The custom name of the node.
  • Color - The custom color of the node.
  • Delay Before (sec) - Waits in seconds before executing the node.
  • Delay After (sec) - Waits in seconds after executing the node.
  • Continue On Error - The automation will continue regardless of any error. The default value is false.
info

If the Continue On Error property is true, no error is caught when the project is executed, even if a Catch node is used.

Inputs

  • Connection Id - Vertex AI client session identifier from the Connect node (optional if credentials are provided directly).
  • Credentials - Google Cloud service account credentials (optional if using Connection ID).
  • Project Id - Google Cloud Project ID (required if using direct credentials).
  • Prompt - Code generation prompt describing what code to generate (required).

Options

Generation Parameters

  • Temperature - Controls randomness. Lower values produce more deterministic code. Optional parameter.
  • Max Output Tokens - Maximum tokens to generate. Default is 256.
    • ~4 characters per token
    • 100 tokens ≈ 60-80 words
  • Candidate Count - Number of code candidates to generate. Default is 1.
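
The ~4 characters-per-token estimate above can be turned into a quick sizing check before setting Max Output Tokens. This is a rough sketch only; actual tokenization varies by model.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token rule of thumb."""
    return max(1, len(text) // 4)

# A 400-character snippet works out to roughly 100 tokens.
print(estimate_tokens("x" * 400))  # → 100
```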

Model Configuration

  • Model - Vertex AI code generation model to use:
    • code-bison - Code generation model (default)
    • Custom Model - Specify your own model name
  • Custom Model - Custom model name when "Custom Model" is selected.

Endpoint Configuration

  • Locations - Google Cloud region for the Vertex AI endpoint. Default is "us-central1".
  • Publishers - Model publisher. Default is "google".

Output

  • Response - Full API response object containing generated code.

Response structure:

{
  "predictions": [
    {
      "content": "def hello_world():\n print('Hello, World!')",
      "safetyAttributes": {
        "categories": [],
        "blocked": false,
        "scores": []
      },
      "citationMetadata": {
        "citations": []
      }
    }
  ]
}
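
The generated code lives in the first prediction's `content` field. A minimal sketch of pulling it out of the Response object (field names mirror the example structure above; `extract_code` is an illustrative helper, not part of the node):

```python
# Example response shaped like the structure documented above.
response = {
    "predictions": [
        {
            "content": "def hello_world():\n print('Hello, World!')",
            "safetyAttributes": {"categories": [], "blocked": False, "scores": []},
            "citationMetadata": {"citations": []},
        }
    ]
}

def extract_code(response: dict) -> str:
    """Return the first prediction's code, or '' if missing or safety-blocked."""
    predictions = response.get("predictions", [])
    if not predictions:
        return ""
    first = predictions[0]
    if first.get("safetyAttributes", {}).get("blocked"):
        return ""
    return first.get("content", "")

print(extract_code(response))
```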

How It Works

The Generate Code node creates code using Code Bison, a model trained specifically on source code. When executed:

  1. Validates connection (either via Connection ID or direct credentials)
  2. Retrieves authentication token and project ID
  3. Validates that prompt is not empty
  4. Parses optional generation parameters (temperature, tokens, candidate count)
  5. Constructs request payload with prompt (as "prefix") and parameters
  6. Sends POST request to Vertex AI code predict endpoint
  7. Processes response and extracts generated code
  8. Returns complete response object with code and metadata
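
Steps 5 and 6 above can be sketched as follows. The URL pattern follows the public Vertex AI REST format, and the project and location values are placeholders; verify both against your own Google Cloud project before use.

```python
project_id = "my-project"   # hypothetical project ID
location = "us-central1"
model = "code-bison"
prompt = "Write a Python function that reverses a string."

# Step 5: the prompt goes in as "prefix", tuning values under "parameters".
payload = {
    "instances": [{"prefix": prompt}],
    "parameters": {
        "temperature": 0.2,
        "maxOutputTokens": 256,
        "candidateCount": 1,
    },
}

# Step 6: POST to the code predict endpoint (not executed here).
url = (
    f"https://{location}-aiplatform.googleapis.com/v1/"
    f"projects/{project_id}/locations/{location}/"
    f"publishers/google/models/{model}:predict"
)
# e.g. requests.post(url, json=payload,
#                    headers={"Authorization": f"Bearer {token}"})
print(url)
```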

Code Bison is optimized for code generation, completion, documentation, and explanation across multiple programming languages.

Requirements

  • Either:
    • Connection ID from Connect node, OR
    • Direct credentials + Project ID
  • Prompt (non-empty string describing code to generate)
  • Max Output Tokens (required)
  • Vertex AI API enabled in Google Cloud project
  • IAM permissions: aiplatform.endpoints.predict

Error Handling

Common errors and solutions:

| Error | Cause | Solution |
| --- | --- | --- |
| ErrInvalidArg | Empty prompt | Provide a valid code generation prompt |
| ErrInvalidArg | Max Output Tokens empty | Set a Max Output Tokens value |
| ErrInvalidArg | Connection ID or credentials missing | Use a Connect node or provide credentials |
| ErrInvalidArg | Invalid model selection | Select code-bison or provide a custom model name |
| ErrNotFound | Connection not found | Verify the Connection ID from the Connect node |
| ErrStatus | API error (quota, safety) | Check Google Cloud Console for API status |
| Parse error | Invalid parameter value | Verify temperature and token values are valid numbers |
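
The ErrInvalidArg rows above correspond to input checks made before the request is sent. A hedged sketch of equivalent validation (`validate_inputs` is illustrative, not the node's actual code):

```python
def validate_inputs(prompt, max_output_tokens, connection_id=None, credentials=None):
    """Collect ErrInvalidArg-style messages for missing required inputs."""
    errors = []
    if not prompt or not prompt.strip():
        errors.append("ErrInvalidArg: empty prompt")
    if max_output_tokens is None:
        errors.append("ErrInvalidArg: Max Output Tokens empty")
    if connection_id is None and credentials is None:
        errors.append("ErrInvalidArg: Connection ID or credentials missing")
    return errors

print(validate_inputs("", None))  # all three checks fail
```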

Example Use Cases

Function Generation

Prompt: "Write a Python function that calculates the factorial of a number using recursion. Include docstring and error handling."
Temperature: 0.2
Max Output Tokens: 300

Code Completion

Prompt: "Complete this JavaScript function:\nfunction validateEmail(email) {\n  // TODO: validate email format"
Temperature: 0.1
Max Output Tokens: 150

API Integration Code

Prompt: "Generate Node.js code to make a POST request to a REST API with authentication headers and error handling. Use axios library."
Temperature: 0.3
Max Output Tokens: 400
Candidate Count: 2

Unit Test Generation

Prompt: "Write Jest unit tests for a function that adds two numbers. Include edge cases like negative numbers, zero, and large numbers."
Temperature: 0.2
Max Output Tokens: 500

SQL Query Generation

Prompt: "Write a SQL query to find the top 10 customers by total purchase amount in the last 30 days, joining customers and orders tables."
Temperature: 0.1
Max Output Tokens: 200

Code Documentation

Prompt: "Add comprehensive docstring comments to this Python function:\ndef process_data(data, filters):\n    result = []\n    for item in data:\n        if all(f(item) for f in filters):\n            result.append(item)\n    return result"
Temperature: 0.2
Max Output Tokens: 300

Algorithm Implementation

Prompt: "Implement the quicksort algorithm in Java with comments explaining each step."
Temperature: 0.2
Max Output Tokens: 600

Code Translation

Prompt: "Convert this Python code to JavaScript:\ndef fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)"
Temperature: 0.1
Max Output Tokens: 200

Tips

  • Temperature Settings:

    • 0.0-0.2: Deterministic, production code, bug fixes
    • 0.3-0.5: Balanced, general code generation
    • 0.6-1.0: Creative, exploratory, multiple approaches
  • Prompt Engineering for Code:

    • Specify programming language explicitly
    • Include context (libraries, frameworks, versions)
    • Request specific features (error handling, documentation)
    • Provide input/output examples
    • Specify code style or conventions
  • Token Management:

    • Estimate code length before setting Max Output Tokens
    • Simple functions: 100-300 tokens
    • Complex functions: 300-600 tokens
    • Classes with methods: 600-1000 tokens
  • Multi-Language Support:

    • Python, JavaScript, Java, C++, C#, Go, Ruby, PHP
    • SQL, HTML, CSS
    • Shell scripts, Dockerfile, YAML configurations
  • Quality Control:

    • Always review and test generated code
    • Use Candidate Count > 1 for critical functions
    • Lower temperature for production code
    • Verify security implications

Common Patterns

Code Generation Workflow

Flow:
1. Connect to Vertex AI
2. Define prompt with requirements
3. Generate Code with appropriate parameters
4. Extract code from response
5. Validate and test code
6. Integrate into project

Iterative Refinement

Process:
1. Generate initial code (broader prompt)
2. Review output
3. Generate refinement (specific improvement prompt)
4. Compare versions
5. Select best implementation

Multi-Candidate Selection

Configuration:
- Candidate Count: 3
- Temperature: 0.4
Workflow:
1. Generate 3 code variations
2. Test each implementation
3. Benchmark performance
4. Select optimal solution
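
The multi-candidate workflow above can be sketched with a syntax check standing in for "test each implementation"; a real pipeline would also run unit tests and benchmarks, and the candidate strings here are hypothetical.

```python
import ast

# Hypothetical candidates, as returned with Candidate Count = 3.
candidates = [
    "def square(x): return x * x",
    "def square(x) return x * x",   # syntax error: missing colon
    "square = lambda x: x ** 2",
]

def valid_python(code: str) -> bool:
    """Keep only candidates that at least parse as Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

usable = [c for c in candidates if valid_python(c)]
print(len(usable))  # → 2
```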

Best Practices

  • Code Review: Always review generated code before use
  • Testing: Test all generated code thoroughly
  • Security: Validate for security vulnerabilities
  • Dependencies: Verify library versions and availability
  • Documentation: Request comments and docstrings
  • Error Handling: Include error handling in prompts
  • Style: Specify coding conventions in prompt
  • Context: Provide sufficient context for accurate generation
  • Validation: Implement code validation and linting
  • Version Control: Track generated code in version control

Prompt Templates

Function Template

"Write a [LANGUAGE] function named [NAME] that [DESCRIPTION].
Input: [INPUT_PARAMS]
Output: [OUTPUT_TYPE]
Include: error handling, type hints, docstring"

Class Template

"Create a [LANGUAGE] class named [NAME] that [PURPOSE].
Methods: [METHOD_LIST]
Properties: [PROPERTY_LIST]
Include: constructor, getters/setters, documentation"

API Integration Template

"Generate [LANGUAGE] code to [ACTION] using [API_NAME] API.
Endpoint: [URL]
Method: [HTTP_METHOD]
Authentication: [AUTH_TYPE]
Include: error handling, response parsing"

Performance Optimization

  • Connection Reuse: Use same connection for batch code generation
  • Prompt Clarity: Clear prompts reduce regeneration needs
  • Token Efficiency: Set appropriate Max Output Tokens
  • Caching: Cache common code patterns
  • Batch Processing: Generate multiple functions in sequence

Safety Considerations

  • Code Injection: Sanitize any user input in prompts
  • Credentials: Never include sensitive data in prompts
  • Execution: Sandbox execution of generated code
  • Validation: Validate before production deployment
  • Licensing: Be aware of code licensing implications
  • Attribution: Consider attribution for AI-generated code

Code Quality Checklist

Before using generated code:

  • Syntax is valid
  • Logic is correct
  • Error handling is adequate
  • Security vulnerabilities checked
  • Dependencies are available
  • Performance is acceptable
  • Code follows project conventions
  • Tests pass
  • Documentation is clear
  • Licenses are compatible