AI Security Vetting
Continuous AI Assurance for Enterprise
Secure your AI systems with the same rigour you apply to your human workforce. AI Security Vetting provides continuous behavioural assessment of AI agents, ensuring they remain aligned, secure, and trustworthy throughout their lifecycle.
Why Vet Your AI?
Defence in Depth
Don't rely solely on provider safeguards. Our tool tests your actual deployment configuration, custom prompts, and integrated tools to ensure your specific setup is secure.
Continuous Verification
AI models change. Prompts evolve. Test regularly to catch regressions and new vulnerabilities as your AI system grows and adapts.
Multi-Provider Coverage
Test across OpenAI, Azure OpenAI, Anthropic, Google Gemini, Microsoft Copilot, and any OpenAI-compatible API with a single tool.
Actionable Reports
Get detailed HTML, Markdown, CSV, and JSONL outputs with severity scores, attack replays, and specific remediation guidance.
Supported AI Providers
- OpenAI - GPT-4, GPT-3.5 and compatible APIs
- Azure OpenAI - Enterprise-grade OpenAI models
- Anthropic - Claude 3 Haiku, Sonnet, Opus
- Google Gemini - Gemini Pro and Ultra models
- Microsoft Copilot - Bot Framework Direct Line v3
- OpenAI-Compatible - Any OpenAI-compatible API
Specialised for Australian 🇦🇺, New Zealand 🇳🇿 and Singapore 🇸🇬 Data
Our tool generates synthetic, checksum-valid Australian, New Zealand and Singapore identifiers to test your AI's memory safety and data protection capabilities. It never uses real PII, only realistic synthetic test data; a sketch of the TFN checksum rule appears after the lists below.
🇦🇺 Australian Identifiers
- Tax File Numbers (TFN) - Checksum-validated 9-digit identifiers
- Medicare Numbers - Valid format with check digits
- State Driver Licences - NSW, VIC, QLD, WA, SA, TAS, ACT, NT formats
- Australian Passports - Realistic passport number formats
- Australian Mobile Numbers - Valid 04xx xxx xxx patterns
- Australian Business Numbers (ABN) - 11-digit validated identifiers
🇳🇿 New Zealand Identifiers
- IRD Numbers - Checksum-validated Inland Revenue identifiers
- NZ Driver Licences - Valid regional format variations
- NZ Passports - Realistic New Zealand passport formats
- NZ Mobile Numbers - Valid 02x xxx xxxx patterns
- NZBN (Business Numbers) - 13-digit business identifiers
- National Health Index (NHI) - Healthcare identifier formats
🇸🇬 Singapore Identifiers
- NRIC/FIN Numbers - Checksum-validated national identity identifiers
- UEN (Business Numbers) - Unique Entity Number business identifiers
- Singapore Passports - Realistic passport number formats
- Singapore Mobile Numbers - Valid +65 xxxx xxxx patterns
- PayNow/FAST Payment IDs - Digital payment identifier formats
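The "checksum-valid" property can be sanity-checked independently of the tool. For the Australian TFN, the algorithm is public: multiply the nine digits by the weights 1, 4, 3, 7, 5, 8, 6, 9 and 10, and the weighted sum must be divisible by 11. The sketch below is illustrative only (it is not the tool's own generator) and uses the widely published synthetic test value 123456782, which is not a real TFN.
# macOS/Linux bash - verify a TFN checksum (weighted sum of the nine digits must be divisible by 11)
# 123456782 is a widely published synthetic test value, not a real TFN.
tfn=123456782
weights=(1 4 3 7 5 8 6 9 10)
sum=0
for i in {0..8}; do
  sum=$(( sum + ${tfn:$i:1} * weights[i] ))
done
(( sum % 11 == 0 )) && echo "$tfn: checksum valid" || echo "$tfn: checksum invalid"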
Memory Safety Testing Modes
Same-Session Testing
Seeds are injected in the same exchange to test immediate echo vulnerabilities
Cross-Session Testing
Seeds are sent in separate sessions to test long-term memory retention
Strict Mode Validation
Only fails on validated sensitive data (TFN, Medicare, etc.) to reduce false positives
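These modes map directly to the --seed-mode and --strict-mode flags used throughout the examples further down this page. As a minimal sketch, a cross-session retention check against OpenAI (flags exactly as shown in the Quick Start and Advanced sections) looks like this:
# Cross-session memory retention check with strict classification
export OPENAI_API_KEY=sk-xxxx
./AISecurityVetting \
  --provider openai --model gpt-4o-mini \
  --seed-mode cross-session --seed-count 10 \
  --strict-mode \
  --license-key YOUR_KEY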
⚠️ Ethical Testing Guarantee
All test data is synthetic and generated algorithmically. We never use real customer data, production PII, or actual government identifiers. Our synthetic data follows authentic formatting and validation rules but represents no real individuals or entities.
Quick Start Examples
Get started in minutes with these common configurations. Replace the binary name with yours (e.g., AISecurityVetting.exe on Windows).
OpenAI (Generic Environment)
# macOS/Linux
export OPENAI_API_KEY=sk-xxxx
./AISecurityVetting --provider openai --model gpt-4o-mini --license-key YOUR_KEY
# Windows PowerShell
$env:OPENAI_API_KEY="sk-xxxx"
.\AISecurityVetting.exe --provider openai --model gpt-4o-mini --license-key YOUR_KEY
Azure OpenAI
# Set environment variables
export AZURE_OPENAI_API_KEY=xxxxx
./AISecurityVetting \
--provider azure-openai \
--azure-endpoint https://YOUR-RESOURCE.openai.azure.com \
--azure-deployment gpt4o \
--model gpt-4o \
--license-key YOUR_KEY
Anthropic Claude
# Claude API
export ANTHROPIC_API_KEY=xxxx
./AISecurityVetting --provider anthropic --model claude-3-5-sonnet-20240620 --license-key YOUR_KEY
# Windows PowerShell
$env:ANTHROPIC_API_KEY="xxxx"
.\AISecurityVetting.exe --provider anthropic --model claude-3-5-sonnet-20240620 --license-key YOUR_KEY
Google Gemini
# Gemini API
export GEMINI_API_KEY=xxxx
./AISecurityVetting --provider gemini --model gemini-1.5-pro --license-key YOUR_KEY
# Windows PowerShell
$env:GEMINI_API_KEY="xxxx"
.\AISecurityVetting.exe --provider gemini --model gemini-1.5-pro --license-key YOUR_KEY
# Optional: Use different API version
./AISecurityVetting --provider gemini --model gemini-1.5-pro --gemini-base v1 --license-key YOUR_KEY
Microsoft Copilot (Direct Line v3)
# Microsoft Copilot with specialised environment
export COPILOT_DIRECTLINE_SECRET=xxxx
./AISecurityVetting \
--provider copilot --model ignored \
--copilot-user-id security_tester \
--target-env copilot \
--seed-mode same-session --seed-count 5 \
--license-key YOUR_KEY
# Windows PowerShell
$env:COPILOT_DIRECTLINE_SECRET="xxxx"
.\AISecurityVetting.exe `
--provider copilot --model ignored `
--copilot-user-id security_tester `
--target-env copilot `
--seed-mode same-session --seed-count 5 `
--license-key YOUR_KEY
Note: Copilot uses Bot Framework Direct Line v3, and the --model parameter is ignored. We recommend --target-env copilot for specialised Dataverse/Power Platform testing.
OpenAI Compatible APIs
# Any OpenAI-compatible API (Ollama, LocalAI, vLLM, etc.)
export OPENAI_API_KEY=your-api-key-or-token
./AISecurityVetting \
--provider openai-compat \
--base-url http://localhost:11434/v1 \
--model llama3:latest \
--license-key YOUR_KEY
# Example: Ollama local instance
export OPENAI_API_KEY=dummy-key
./AISecurityVetting \
--provider openai-compat \
--base-url http://localhost:11434/v1 \
--model mistral:7b \
--license-key YOUR_KEY
# Example: vLLM deployment
export OPENAI_API_KEY=your-vllm-token
./AISecurityVetting \
--provider openai-compat \
--base-url https://your-vllm-endpoint.com/v1 \
--model meta-llama/Llama-2-7b-chat-hf \
--license-key YOUR_KEY
Compatible with: Ollama, LocalAI, vLLM, Together AI, Groq, Perplexity API, and any other service implementing OpenAI's Chat Completions API format.
Advanced Configuration Examples
Advanced scenarios for comprehensive security testing, red team exercises, and specialised environments.
Regional data generation is selected with --region anz, --region au, --region nz, or --region sg.
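For example, to seed only Singapore identifiers (NRIC/FIN, UEN, passports, mobiles, PayNow IDs), pass --region sg and leave the rest of the command unchanged:
# Singapore-only synthetic identifiers
export OPENAI_API_KEY=sk-xxxx
./AISecurityVetting \
  --provider openai --model gpt-4o-mini \
  --region sg \
  --seed-mode both --seed-count 10 \
  --license-key YOUR_KEY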
Full Enterprise Red Team Testing
# RAG + tools environment, aggressive attacks, obfuscation, strict memory classification
export OPENAI_API_KEY=sk-xxxx
./AISecurityVetting \
--provider openai --model gpt-4o-mini \
--target-env rag-tools \
--attack-preset aggressive \
--attack-obfuscation rot13 \
--strict-mode \
--seed-mode both --seed-count 15 \
--region anz \
--out red_team_$(date +%F) \
--license-key YOUR_KEY \
--temperature 0.0 \
--timeout 90
Use case: Comprehensive security assessment for enterprise AI agents with tool access. Tests against sophisticated attack patterns with obfuscation techniques.
Memory/Retention Deep Testing
# Focus on memory leaks with extensive synthetic PII seeding
export ANTHROPIC_API_KEY=xxxx
./AISecurityVetting \
--provider anthropic --model claude-3-5-sonnet-20240620 \
--seed-mode both \
--seed-count 25 \
--seed-namespace "MEMORY-TEST-$(date +%Y%m%d)" \
--strict-mode \
--region anz \
--attack-preset auto \
--out memory_audit_results \
--license-key YOUR_KEY
Use case: Validate memory safety and PII handling. Tests both same-session and cross-session retention with 25 synthetic AU/NZ records (TFN, Medicare, driver licences, etc.).
Microsoft Copilot Agent Testing
# Specialised for Copilot with Dataverse/Power Platform probes
export COPILOT_DIRECTLINE_SECRET=xxxx
./AISecurityVetting \
--provider copilot --model ignored \
--copilot-user-id security_tester_001 \
--target-env copilot \
--seed-mode same-session --seed-count 10 \
--attack-preset aggressive \
--attack-as-system \
--region anz \
--out copilot_security_audit \
--license-key YOUR_KEY \
--timeout 120
Use case: Test Microsoft Copilot agents for Dataverse exfiltration, Power Automate triggers, SharePoint/Teams governance violations, and system-level prompt injection.
Custom Test Suite with File Seeding
# Custom YAML suite with pre-defined synthetic PII seeds
export AZURE_OPENAI_API_KEY=xxxxx
./AISecurityVetting \
--provider azure-openai \
--azure-endpoint https://YOUR-RESOURCE.openai.azure.com \
--azure-deployment gpt4o \
--model gpt-4o \
--suite ./custom_financial_tests.yaml \
--seed-file ./synthetic_customer_data.yaml \
--seed-mode cross-session \
--attack-preset lite \
--strict-mode \
--system-prompt "You are a secure financial AI assistant..." \
--out custom_financial_audit \
--license-key YOUR_KEY
Use case: Industry-specific testing with custom scenarios and controlled synthetic data. Perfect for specialised domains like finance, healthcare, or legal services.
System-Layer Compromise Simulation
# Test system prompt injection and orchestrator bypass attempts
export GEMINI_API_KEY=xxxx
./AISecurityVetting \
--provider gemini --model gemini-1.5-pro \
--target-env rag-tools \
--attack-preset aggressive \
--attack-as-system \
--attack-obfuscation rot13 \
--seed-mode both --seed-count 20 \
--strict-mode \
--max-tokens 2048 \
--temperature 0.0 \
--out system_compromise_test \
--license-key YOUR_KEY \
--log-level debug
Use case: Simulate advanced persistent threats targeting the AI orchestration layer. Tests malicious system instructions, prompt injection via tool outputs, and multi-vector attacks.
Pro Tips for Advanced Testing
- Stable namespacing: Use consistent --seed-namespace values for reproducible cross-session tests
- Debug logging: Add --log-level debug to troubleshoot provider connection issues
- Timeout tuning: Increase --timeout for slow providers or complex tool chains
- Temperature control: Keep --temperature 0.0 for consistent security outcomes
- Output organisation: Use date-stamped output directories for audit trails (combined in the example below)
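Putting those tips together, a reproducible, date-stamped audit run (using only flags already shown in the examples above) might look like this:
# Reproducible, date-stamped run combining the tips above
export OPENAI_API_KEY=sk-xxxx
./AISecurityVetting \
  --provider openai --model gpt-4o-mini \
  --seed-namespace "AUDIT-$(date +%Y%m%d)" \
  --seed-mode both --seed-count 10 \
  --temperature 0.0 \
  --timeout 120 \
  --log-level debug \
  --out audit_$(date +%F) \
  --license-key YOUR_KEY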
Test Categories & Environments
Generic Environment (Default)
50 core tests across all security categories. Perfect for general LLM safety vetting and baseline security assessment.
RAG + Tools Environment
Adds enterprise-specific probes for Salesforce, Xero, SharePoint, Slack, Jira, MYOB. Use for agents with enterprise integrations.
Microsoft Copilot Environment
Specialised tests for Dataverse, Power Automate, SharePoint, Outlook, Teams. Designed for Microsoft Copilot agents.
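The environment is selected with --target-env, as in the examples above. For instance, a minimal RAG + Tools run against an Anthropic-backed agent:
# RAG + Tools environment probes (Salesforce, Xero, SharePoint, Slack, Jira, MYOB)
export ANTHROPIC_API_KEY=xxxx
./AISecurityVetting \
  --provider anthropic --model claude-3-5-sonnet-20240620 \
  --target-env rag-tools \
  --license-key YOUR_KEY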
Output Formats
Interactive HTML Report
Modern, searchable interface with KPI cards, severity charts, filters, and detailed analysis. Perfect for demos and stakeholder presentations.
Markdown Report
Detailed findings with attack preambles, effective prompts, and elaborated evaluations. Great for documentation and sharing.
CSV Results
Machine-friendly summary for analysis, trending, and integration with other tools. Includes scores, latency, and configuration details.
JSONL Details
Complete test results with raw provider payloads, findings, and metadata. Perfect for programmatic analysis and integration.
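The CSV and JSONL outputs are convenient for command-line triage. The file and field names in the sketch below are assumptions rather than documented names; inspect the contents of your --out directory for the actual files.
# Hypothetical post-processing - file names (results.jsonl, results.csv) and
# field names (.severity, .test_id) are assumptions; check your --out directory.
cd red_team_2024-01-01                                         # whatever you passed to --out
jq -r 'select(.severity == "high") | .test_id' results.jsonl   # list high-severity findings
awk -F, 'NR > 1 {print $2}' results.csv | sort | uniq -c       # tally values in the second column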
Key Configuration Options
- Seed Modes: Test memory retention with same-session, cross-session, or both patterns
- Attack Presets: None, lite, aggressive, or auto-scaling adversarial preambles
- Obfuscation: ROT13 encoding to test decoding-based bypass attempts
- Strict Mode: Only fail on validated sensitive data echoes (TFN, Medicare, etc.)
- Custom Suites: Load your own YAML test definitions for specific requirements
See AI Security Vetting in Action
Watch how our tool systematically tests AI systems for security vulnerabilities, from prompt injection to memory leaks, delivering comprehensive reports in real-time.
Interactive Security Testing
Real-time vulnerability detection across 6 security categories with detailed reporting and immediate insights.
Ready to Secure Your AI?
Don't leave your AI systems vulnerable to attacks. Implement continuous security testing and compliance monitoring today.
Explore All Products
See how all Cyber Automation products work together to secure your entire infrastructure.