We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products.
We are looking for an adversarial machine learning specialist who thinks like an attacker.
This role focuses on identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers.
This is a hands-on technical role at the core of AI security.
You will help ensure AI systems are resilient before they are deployed at scale.
You don't just run test cases; you design new ones.