Offensive AI Security Engineer - Red Team
Job Title: Offensive AI Security Engineer – Red Team
We are seeking an Offensive AI Security Engineer to join our AI Red Team within the Security Engineering team. This role focuses on adversarial machine learning (ML), AI-driven offensive security, and red teaming AI systems to uncover vulnerabilities in AI-powered automotive security models and vehicle platforms.
As part of Lucid's Offensive AI Security team, you will attack, manipulate, and exploit AI/ML models to identify real-world threats and weaknesses in AI-driven security solutions. You will develop AI-enhanced security automation tools, perform LLM-based penetration testing, and integrate AI/ML attack techniques into offensive security operations.
Key Responsibilities:
AI Red Teaming & Adversarial Attack Development
- Design and execute adversarial attacks on AI-powered security systems.
- Conduct LLM-based penetration testing to uncover AI security flaws in vehicle cybersecurity applications.
- Identify attack surfaces in AI-driven perception systems (LiDAR, radar, camera) and develop exploits against automotive AI models.
AI-Driven Offensive Security Automation
- Build tools and systems to analyze emerging AI threats in vehicle security and enterprise AI models.
- Develop AI-assisted security automation tools for:
  - Reconnaissance – Automate vulnerability discovery using LLMs and RAG-based intelligence gathering.
  - Exploitation – Use AI to generate attack payloads and automate offensive security operations.
  - Fuzzing – Enhance automated fuzz testing with AI-driven input mutation strategies.
  - Reverse Engineering – Apply LLM-assisted binary analysis for rapid security assessments.
- Build offensive and defensive AI security tools, integrating ML-driven automation into security assessments and exploit development.
ML-Driven Security Research & Exploitation
- Use ML to characterize a program, identifying security-critical functions and behavioral anomalies.
- Work with LLVM Intermediate Representation (LLVM IR) to analyze compiled AI/ML software for security weaknesses.
- Develop AI-driven techniques for:
  - Threat Detection – Use ML to automate malware detection and anomaly recognition.
  - Cryptographic Algorithm Classification – Identify cryptographic weaknesses in compiled binaries.
  - Function Recognition – Use AI models to automate binary function analysis and decompilation.
  - Vulnerability Discovery – Automate zero-day discovery using ML-based exploit prediction models.
- Evaluate ML security models for robustness, performance, and adversarial resilience.
Offensive AI Research & Red Teaming Strategy
- Research novel AI attack techniques and evaluate their impact on vehicle cybersecurity and enterprise AI security models.
- Collaborate with internal red teams, SOC analysts, and AI security researchers to refine AI-driven offensive security approaches.
- Stay ahead of emerging AI threats, tracking advancements in AI security, adversarial ML, and autonomous vehicle AI exploitation.
Required Skills & Qualifications:
AI Red Teaming & Offensive Security Expertise
✔ Hands-on experience with AI/ML exploitation, adversarial ML, and AI-driven pentesting.
✔ Strong background in attacking AI models, including LLMs, deep learning systems, and computer vision AI.
✔ Experience with LLM-based vulnerability analysis, prompt engineering attacks, and model evasion techniques.
✔ Proficiency in penetration testing, AI fuzzing, and red teaming AI-driven security applications.
Cybersecurity & AI Security Research
✔ Experience in exploiting AI-powered vehicle security mechanisms.
✔ Ability to analyze, test, and exploit AI models within embedded systems and cloud environments.
Programming & Security Tooling
✔ Strong Python, C, and C++ skills for AI security research and offensive security automation.
✔ Proficiency with AI/ML frameworks and tooling (TensorFlow, PyTorch, Hugging Face, OpenAI APIs, LangChain).
✔ Hands-on experience with reverse engineering tools (Ghidra, IDA Pro, Binary Ninja, Qiling, Angr).
✔ Familiarity with fuzzing tools (AFL++, Honggfuzz, Syzkaller) and adversarial ML frameworks (CleverHans, Foolbox, ART).
Preferred Qualifications:
- Bachelor's degree in Cybersecurity, AI/ML, Computer Science, or a related technical field is required; a Master's degree or higher is preferred.
- Experience in offensive AI security, red teaming AI models, or AI-powered security automation.
- Prior work in LLM-based security analysis, AI-driven pentesting, or adversarial ML research.
By submitting your application, you understand and agree that your personal data will be processed in accordance with our Candidate Privacy Notice. If you are a California resident, please refer to our California Candidate Privacy Notice.