RED_CORE

AI Security Consulting

Human-in-the-Loop Red Teaming

Most AI security tools rely on automated testing. We use human expertise.

Our team combines QA engineering, professional security research, and social engineering backgrounds to find vulnerabilities that automated tools miss—context-aware attacks, nuanced prompt injection, and novel exploitation techniques that require human creativity.

We use AI assistance, but humans drive the testing.

Services

Security Assessment

Comprehensive testing for prompt injection, data exfiltration, and authorization bypass vulnerabilities. Includes executive summary, technical findings, and remediation guidance.

2-week engagement · $6,000-$8,000

Compliance Support

Security documentation and testing to support SOC 2, ISO 27001, and enterprise vendor assessments. Includes reusable materials for customer security reviews.

Custom engagement

Ongoing Testing

Quarterly security assessments and continuous validation as your AI system evolves. Includes priority support for security questions.

Retainer basis

How We Test

Our methodology combines structured testing with adversarial creativity:

  • Threat Modeling — Map attack surfaces specific to your AI system's architecture and data flows
  • Manual Testing — Human-driven prompt injection, social engineering, and context manipulation attacks
  • Validation — Document exploitable vulnerabilities with proof-of-concept demonstrations
  • Remediation — Provide specific guidance on mitigations and controls

Track Record

We placed first in the NSF-funded Hack-a-Prompt 2.0 competition, identifying 162 vulnerabilities across 10+ frontier LLMs out of 40,000+ participants—using human-driven techniques that required understanding context, trust boundaries, and social engineering, not just running automated scans.

View technical writeup →

Contact

contact@red_core.zip