We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products.
We are looking for an adversarial machine learning specialist who thinks like an attacker.
This role focuses on identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers.
This is a hands-on technical role at the core of AI security.
You will help ensure AI systems are resilient before they are deployed at scale.
You don't just run test cases; you design new ones.