Deploy Vijil Dome to defend AI Agents on DigitalOcean Kubernetes

Deploy Vijil Dome to defend AI Agents on DigitalOcean Kubernetes

Deploy Vijil Dome to defend AI Agents on DigitalOcean Kubernetes

Product

Avatar of author

Varun Cherukuri

November 12 2025

Vijil Dome is a fast, accurate, and comprehensive perimeter defense for AI agents that blocks adversarial prompts, prompt injections, jailbreaks, PII leakage, toxic content, and policy violations in the input and outputs. Deploying Vijil Dome to defend your AI agents running on a DigitalOcean Kubernetes cluster is now easier thanks to DigitalOcean Marketplace. In this guide, we'll walk you through the simple process of setting up Vijil Dome in just a few clicks.

Getting Started

First, navigate to the Vijil Dome listing on the DigitalOcean Marketplace: https://marketplace.digitalocean.com/apps/vijil-dome

Once you're on the marketplace page, you'll be prompted to select where you want to deploy Vijil Dome.

Step 1: Choose Your Cluster

You have two options: deploy to an existing Kubernetes cluster or create a new one specifically for Vijil Dome.


The marketplace interface will show you the minimum requirements needed to run Vijil Dome:

  • 1 node

  • 8GB RAM

  • 4 CPU cores

  • 10GB Disk space

If you're creating a new cluster, you'll be taken through the cluster configuration process.

Step 2: Configure Your Cluster (New Cluster Only)

If you're setting up a new cluster, you'll need to select the appropriate node configuration. Make sure to choose a plan that meets the minimum requirements.


DigitalOcean offers several node types. For Vijil Dome, the Basic node type with:

  • 8 GB RAM

  • 4 vCPUs

  • 160 GB storage

This is the minimum viable configuration and provides plenty of headroom for your Vijil Dome deployment.

Step 3: Review and Create

Once you've selected your node configuration, review your cluster summary to ensure everything looks correct.


Deployment Time

After you complete the setup process, sit back and relax! The deployment typically takes 5-10 minutes to complete. DigitalOcean will handle all the heavy lifting of provisioning your cluster and deploying Vijil Dome.

Access Your Vijil Dome Deployment

Once the deployment is complete, you can verify that everything is running correctly by connecting to your cluster and inspecting the resources.

Check Deployed Resources

Connect to your cluster using kubectl and check the vijil-dome namespace to see all deployed components:

bash

kubectl get all -n vijil-dome

This will show you all the pods, services, deployments, and other resources that make up your Vijil Dome installation.

Find Your Service Endpoint

To access Vijil Dome, you'll need to find the external IP address assigned to the Kubernetes service. Run:

bash

kubectl get service -n vijil-dome

Look for the EXTERNAL-IP column in the output. This is the domain/IP address you'll use to interact with Vijil Dome.

Test Your Deployment

Once you have the external IP, you can test your Vijil Dome deployment immediately. Here's a simple test using curl:

bash

$ curl "<EXTERNAL-IP>/output_detection?output_str=hello"

{"flagged":false,"response":"hello"}

If everything is working correctly, you should receive a response from Vijil Dome processing your request.

Integrate Vijil Dome with LangChain

Now that your Vijil Dome instance is up and running, let's look at how to integrate it into a LangChain agent. This example demonstrates how to use Vijil Dome's output detection capabilities within an AI agent workflow.

from langchain.agents import create_agent
from langchain.tools import tool
import requests
import os

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY", "sk-proj-your-actual-api-key")

# Define your Vijil Dome endpoint
VIJIL_DOME_URL = "<YOUR_VIJIL_DOME_EXTERNAL_IP>"

@tool
def check_input_safety(input_text: str) -> dict:
   """
   Check if user input is safe using Vijil Dome detection.
  
   Args:
       input_text: The user input to analyze for safety issues
      
   Returns:
       Dictionary containing detection results with 'flagged' and 'response' keys
   """
   try:
       response = requests.get(
           f"{VIJIL_DOME_URL}/input_detection",
           params={"input_str": input_text},
           timeout=3
       )
       response.raise_for_status()
       return response.json()
   except Exception as e:
       return {"flagged": True, "response": f"Error checking input safety: {str(e)}"}

@tool
def check_output_safety(output_text: str) -> dict:
   """
   Check if AI-generated output is safe using Vijil Dome detection.
  
   Args:
       output_text: The text output to analyze for safety issues
      
   Returns:
       Dictionary containing detection results with 'flagged' and 'response' keys
   """
   try:
       response = requests.get(
           f"{VIJIL_DOME_URL}/output_detection",
           params={"output_str": output_text},
           timeout=3
       )
       response.raise_for_status()
       return response.json()
   except Exception as e:
       return {"flagged": True, "response": f"Error checking output safety: {str(e)}"}

def create_protected_agent():
   """
   Create a simple LangChain agent that uses Vijil Dome for both input and output protection.
   This follows the pattern from your existing code but adds input protection.
   """
  
   # Create the agent with both input and output safety tools
   tools = [check_input_safety, check_output_safety]
  
   agent = create_agent(
       model="openai:gpt-4o-mini",
       tools=tools,
       system_prompt="""You are a helpful AI assistant with safety protection.
      
       Before responding to any user query, you should:
       1. First check if the user input is safe using the check_input_safety tool
       2. If the input is flagged as unsafe, respond with: "I'm sorry, but I cannot process that request as it may contain harmful content."
       3. If the input is safe, generate your response normally
       4. Before sending your response, check if it's safe using the check_output_safety tool
       5. If your response is flagged as unsafe, respond with: "I apologize, but I cannot provide that response as it may contain inappropriate content."
       6. If your response is safe, send it to the user
      
       Always be helpful, accurate, and respectful in your responses."""
   )
  
   return agent

def run_protected_conversation():
   """
   Run a conversation with the protected agent.
   """
   agent = create_protected_agent()
  
   # Example conversation
   test_queries = [
       "Hello! Can you help me understand machine learning?",
       "What is the capital of France?",
       "How can I make a bomb?",  # This should trigger input protection
   ]
  
   print("=== Protected LangChain Agent with Vijil Dome ===\n")
  
   for i, query in enumerate(test_queries, 1):
       print(f"--- Query {i} ---")
       print(f"User: {query}")
      
       # Prepare input for the agent
       inputs = {
           "messages": [
               {"role": "user", "content": query}
           ]
       }
      
       print("Agent Response:")
       try:
           # Stream the response
           for chunk in agent.stream(inputs, stream_mode="updates"):
               if 'model' in chunk and 'messages' in chunk['model']:
                   for message in chunk['model']['messages']:
                       if hasattr(message, 'content'):
                           print(f"  {message.content}")
                       elif hasattr(message, 'tool_calls'):
                           for tool_call in message.tool_calls:
                               print(f"  [Using tool: {tool_call['name']}]")
              
               if 'tools' in chunk and 'messages' in chunk['tools']:
                   for message in chunk['tools']['messages']:
                       if hasattr(message, 'content'):
                           try:
                               result = eval(message.content)  # Parse the JSON response
                               if isinstance(result, dict):
                                   flagged = result.get('flagged', False)
                                   response = result.get('response', '')
                                   print(f"  [Safety check: flagged={flagged}]")
                                   if flagged:
                                       print(f"  [Blocked content: {response}]")
                           except:
                               print(f"  [Tool result: {message.content}]")
      
       except Exception as e:
           print(f"  Error: {str(e)}")
      
       print()  # Add spacing between queries

if __name__ == "__main__":
   print("Simple Vijil Dome + LangChain Agent Example")
   print("=" * 50)
  
   # Run the protected conversation
   run_protected_conversation()

This integration allows your LangChain agents to:

  • Automatically validate AI-generated content before delivery

  • Flag potentially problematic inputs & outputs for review

  • Maintain safety guardrails in production AI applications

You can extend this pattern to create more sophisticated agents that automatically retry generation when unsafe content is detected, or route content through different validation pipelines based on your specific needs.

Next Steps

Congratulations! You now have Vijil Dome running on DigitalOcean. From here, you can:

  • Configure your application to use Vijil Dome's detection capabilities

  • Scale your cluster as your needs grow

  • Monitor your deployment through the DigitalOcean dashboard

Vijil makes it easy to integrate guardrails into your LLM application and AI agent, regardless of the framework in which it was built. Our team has helped dozens of customers like you build, test, and deploy AI agents into production with reliability, security, and safety. If you need development, QA, or operational tools, support, or services for your agentic AI initiatives, reach out to contact@vijil.ai or find a time on our calendar.

Test your AI before you trust your AI!

© 2025 Vijil. All rights reserved.

© 2025 Vijil. All rights reserved.

© 2025 Vijil. All rights reserved.