CompTIA Authorised · Intermediate · New 2026 · First of its kind · Highest-Demand AI-Sec Cert
CompTIA SecAI+ AI Security Practitioner
The first vendor-neutral certification for securing AI systems and using AI in cybersecurity operations — prompt-injection defence, model risk, AI-assisted SOC and AI governance.
⏱ Duration: 4 days / 32 hrs
💻 Format: Instructor-Led + AI Lab Range
🌐 Delivery: On-site · Virtual · Hybrid
✅ Pass rate: 90%
📅 Next intake: 26 May 2026
🤖
Secure AI engineering
Threat-model GenAI, RAG and agent-based systems before they ship
🛡️
Defend AI systems
Prompt-injection, jailbreaks, model theft, training-data poisoning
🔍
AI in the SOC
Use GenAI for triage, hunting and report generation — responsibly
📜
AI governance
NIST AI RMF, ISO 42001, EU AI Act — controls that hold up at audit
What this course is
Where AI security stops being wishful thinking.
SecAI+ is CompTIA's first AI-focused security credential and the first vendor-neutral cert that explicitly covers both sides of the AI-security coin: how to secure AI systems and how to use AI safely in cybersecurity operations.
At Nexperts, SecAI+ is taught on a real AI lab range. We jailbreak a deployed LLM application, extract training data, poison a fine-tune, and on the defensive side build an AI-assisted SOC triage workflow. By day 4 you have hands-on experience attacking and defending production-style GenAI systems.
Every Malaysian enterprise is rushing to deploy GenAI. Most have no idea how to secure it. SecAI+ is the first credential that proves you do.
The 2026 SecAI+ objectives map to OWASP LLM Top 10, MITRE ATLAS, NIST AI RMF and the EU AI Act — the four frameworks that will define AI-security audits for the next decade. We cover all four with hands-on labs.
Who should take this course
🛡️
Cybersecurity professionals
Already in security and now responsible for protecting AI systems being deployed at your company.
👨‍💻
AI / ML engineers
Building GenAI applications and need a security mental model before something embarrassing happens in production.
🔐
Security architects
Designing controls for AI-enabled workloads. SecAI+ gives the formal vocabulary and the threat-modelling toolkit.
💼
CISOs & security managers
Your board is asking about AI risk. SecAI+ gives you a credible technical position to lead from.
📚
Compliance officers
NIST AI RMF, ISO 42001 and EU AI Act — SecAI+ is the bridge between policy and the technical surface.
🌟
Security+ / CySA+ holders
Natural progression for security practitioners moving into AI-augmented work.
Prerequisites
✓ CompTIA Security+ or equivalent intermediate security knowledge
✓ Working understanding of how an LLM application is built (RAG, prompts, APIs)
✓ Familiarity with at least one programming language (Python preferred)
✓ Basic awareness of cloud (AWS / Azure / GCP) at a conceptual level
→ Don't have Security+? Ask about our Security+ → SecAI+ pathway bundle.
Course Curriculum
Five domains. One AI-security toolkit.
SecAI+ covers AI Foundations & Risk, Securing AI Systems, AI Threats & Adversarial Techniques, AI in Cybersecurity Operations, and AI Governance & Compliance. We deliver in attack-then-defend order.
Hands-On AI Lab Range
9 builds. Real attack and defence.
Every learner gets access to the Nexperts AI lab range — a curated stack of intentionally vulnerable LLM apps, RAG pipelines and agent systems for safe attack-and-defend exercises.
01
Threat-Modelling Workshop
Take a real RAG chatbot architecture. Threat-model it under MITRE ATLAS in 90 minutes.
Threat Model
02
Prompt Injection Sprint
Jailbreak 5 progressively hardened LLM apps. Document every successful injection.
Offence
03
Indirect Injection via Documents
Plant payloads in a RAG corpus. Watch them execute. Then defend against the same attack.
Offence
04
Model Extraction Attempt
Extract approximate behaviour from a black-box model via crafted queries.
Offence
05
Guardrails Hardening
Take a vulnerable LLM app. Apply input filters, output checks and policy engines. Re-test.
Defence
06
Training-Data Poisoning
Poison a fine-tune dataset. Observe the backdoor activation. Detect via data-lineage audit.
Offence
07
AI-Assisted SOC Triage
Build a GenAI-assisted alert triage workflow. Validate against false-positive rate.
Defence
08
Deepfake Detection Drill
Run a deepfake voice and video detection drill on simulated CFO-fraud calls.
Detection
09
SecAI Audit Review
Run a NIST AI RMF assessment against a real internal use-case. Produce the audit report.
Governance
+ 12 micro-tasks across the AI lab range. All target apps are yours to keep.
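To give a flavour of the defensive labs, here is a minimal sketch of the kind of input/output guardrail built and re-tested in the Guardrails Hardening lab. This is illustrative code, not the actual lab material: the pattern list and function names are our own assumptions, and a real deployment would layer classifiers and policy engines on top of simple pattern checks like these.

```python
import re

# Hypothetical, deliberately simple deny-list of known injection phrasings.
# Real guardrails combine patterns with ML classifiers and policy engines.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .{0,30}(rules|guardrails|policy)",
]

def screen_input(user_text: str) -> tuple[bool, str]:
    """Input filter: return (allowed, reason), blocking known injection phrasings."""
    lowered = user_text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return False, f"blocked: matched pattern {pattern!r}"
    return True, "ok"

def screen_output(model_text: str, secrets: list[str]) -> bool:
    """Output check: refuse any response that leaks a known secret string."""
    return not any(secret in model_text for secret in secrets)
```

The lab follows the same attack-then-defend loop as the sketch suggests: run the injection sprint against the unprotected app, add filters like these, then re-run the sprint to see which payloads still get through.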
Exam Information
One scenario-heavy exam. First of its kind.
SecAI+ is delivered as a 90-minute exam with 60–75 multiple-choice and performance-based items. Because the cert is brand-new, global first-attempt pass rates sit well below typical CompTIA exams; that is exactly why we drill the practical scenarios so hard.
CompTIA SecAI+ Exam
Questions: 60–75 (MCQ + performance-based)
Duration: 90 minutes
Passing score: 750 / 900
Format: Pearson VUE / Online proctored
Validity: 3 years (CE renewal)
Industry avg pass rate: ~62% first attempt (new cert)
Nexperts pass rate: 90% first attempt
Performance-Based Question Coaching
Item count: 6–8 PBQs per exam
Item types: Drag-and-drop, scenario, prompt review
Common gotcha: Mapping ATLAS techniques to controls
Coaching: 4-hour PBQ workshop
Strategy: Identify the threat first, then the control
Outcome: PBQ score uplift averages +20%
Walkthrough: Anonymised PBQ archive
Our 3-Mock Programme
01
Diagnostic Mock
End of day 1. Maps weak knowledge areas. Average score: 56%.
02
Adversarial-Heavy Mock
Mid-course. 50% PBQs on adversarial scenarios. Average score: 70%.
03
Final Clearance
Full timed simulation. We only book your exam once you score 80%+. Average score: 83%.
90%
Pass Rate
90% of our SecAI+ candidates pass on first attempt.
As a brand-new cert, the global first-attempt rate for SecAI+ sits around 62%. We hit 90% by being one of the first authorised SecAI+ providers in Southeast Asia, with curriculum reviewed by a working AI red-teamer.
Real AI lab range · Adversarial-sprint workshop · 90% first attempt · Free retake voucher · First-cohort instructor
Why our pass rate is 90%
Industry average: ~62%
SecAI+ is brand-new. Most candidates revise terminology but never run a real prompt-injection or adversarial sprint. The exam tests both, and they fail on the practical scenarios.
Nexperts: 90%
Our instructor was on the SecAI+ technical-review committee. We have a real AI lab range. And we drill the adversarial scenarios under timer until candidates can do them in their sleep.
Your AI-Security Path
SecAI+ is the entry to AI-security leadership.
SecAI+ is the foundation. From here, the natural progressions are EC-Council C|OASP (Offensive AI Security), CompTIA SecurityX (architectural depth), and the upcoming CSA-AI track for SOC-side AI work.
Before this
Security+ or CySA+
Provides the security base. SecAI+ assumes you understand baseline security concepts.
Expected salary range after SecAI+: RM 12,000 – RM 22,000/month for AI-security and ML-security engineering roles in MNC environments.
Student Reviews
What our SecAI+ graduates say.
4.9
★★★★★
46 reviews
5★: 92%
4★: 7%
3★: 1%
★★★★★
"Best new cert I've taken in years. The adversarial sprint genuinely changed how I evaluate every GenAI proposal at work. Cleared SecAI+ on the first attempt."
FH
Faridah Hassan
Security Architect · Maybank
✓ Passed first attempt · 814/900
★★★★★
"I was the first ML engineer at my company tasked with 'security'. Walked into class scared. Walked out with a real defensive playbook. The lab range is incredible."
SK
Sundaram Kumaresan
ML Engineer · RHB
✓ Passed first attempt
★★★★★
"NIST AI RMF and EU AI Act mapping was the highlight for me. Took the templates back to my company's compliance team and we're now using them as the baseline."
ZL
Zalina Latif
Compliance Officer · PETRONAS
✓ Passed first attempt · 798/900
★★★★★
"Fellow CISOs were sceptical of 'another cert'. SecAI+ is genuinely different. The level of technical content makes you credible to your AI engineering team."
LK
Lim Kok Wai
CISO · IHH Healthcare
✓ Passed first attempt · 856/900