Cybersecurity in AI Systems Training Course
Protecting AI systems poses challenges that conventional cybersecurity practice does not fully address. AI solutions are susceptible to adversarial attacks, data poisoning, and model theft, any of which can severely disrupt business continuity and compromise data integrity. This course covers essential cybersecurity practices for AI environments, including adversarial machine learning, safeguarding data within machine learning workflows, and meeting compliance standards for robust AI deployment.
This instructor-led, live training (available online or onsite) is designed for intermediate-level AI and cybersecurity professionals who aim to identify and mitigate security weaknesses specific to AI models and systems, with a particular focus on highly regulated areas such as finance, data governance, and consulting.
Upon completing this training, participants will be capable of:
- Identifying various adversarial attacks targeting AI systems and employing effective defensive strategies.
- Applying model hardening techniques to enhance the security of machine learning pipelines.
- Maintaining data security and integrity within machine learning models.
- Navigating regulatory compliance obligations concerning AI security.
Format of the Course
- Interactive lectures and discussions.
- Extensive exercises and practical application.
- Hands-on implementation within a live-lab environment.
Course Customization Options
- To request a customized training version of this course, please contact us to make arrangements.
Course Outline
Introduction to AI Security Challenges
- Understanding security risks unique to AI systems
- Comparing traditional cybersecurity vs. AI cybersecurity
- Overview of attack surfaces in AI models
Adversarial Machine Learning
- Types of adversarial attacks: evasion, poisoning, and extraction (a minimal evasion example is sketched below)
- Implementing adversarial defenses and countermeasures
- Case studies on adversarial attacks in different industries
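As a taste of the hands-on material, the sketch below shows an FGSM-style evasion attack against a plain logistic-regression classifier using only NumPy and scikit-learn. The dataset, model, and epsilon value are illustrative choices, not prescribed course content.

```python
# Minimal evasion-attack sketch: perturb inputs along the sign of the loss
# gradient (FGSM-style) and observe the accuracy drop. Illustrative only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

# Gradient of the cross-entropy loss w.r.t. the inputs for a linear softmax
# model: (p - onehot(y)) @ W, where W is the coefficient matrix.
probs = clf.predict_proba(X_te)                        # shape (n, 10)
onehot = np.eye(10)[y_te]
grad = (probs - onehot) @ clf.coef_                    # shape (n, 64)

eps = 0.2
X_adv = np.clip(X_te + eps * np.sign(grad), 0.0, 1.0)  # bounded perturbation

print("clean accuracy:      ", clf.score(X_te, y_te))
print("adversarial accuracy:", clf.score(X_adv, y_te))
```

Running it typically shows a sharp drop from clean to adversarial accuracy, which is exactly the effect the defensive techniques in the next module are meant to limit.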
Model Hardening Techniques
- Introduction to model robustness and hardening
- Techniques for reducing model vulnerability to attacks
- Hands-on with defensive distillation and other hardening methods (see the distillation sketch below)
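To make the defensive distillation item above concrete, here is a minimal single-step sketch in PyTorch. The `teacher` and `student` models, the temperature value, and the function name are assumptions made for illustration rather than the exact lab code used in the course.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, x, optimizer, T=20.0):
    """One defensive-distillation step: fit the student to the teacher's
    temperature-softened class probabilities, which smooths the decision
    surface and can reduce sensitivity to small input perturbations."""
    teacher.eval()
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)   # softened labels
    student_log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * (T * T)      # rescale by T^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline this step runs over every mini-batch, and the hardened student replaces the original model at inference time.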
Data Security in Machine Learning
- Securing data pipelines for training and inference
- Preventing data leakage and model inversion attacks
- Best practices for managing sensitive data in AI systems
AI Security Compliance and Regulatory Requirements
- Understanding regulations around AI and data security
- Compliance with GDPR, CCPA, and other data protection laws
- Developing secure and compliant AI models
Monitoring and Maintaining AI System Security
- Implementing continuous monitoring for AI systems
- Logging and auditing for security in machine learning
- Responding to AI security incidents and breaches
Future Trends in AI Cybersecurity
- Emerging techniques in securing AI and machine learning
- Opportunities for innovation in AI cybersecurity
- Preparing for future AI security challenges
Summary and Next Steps
Requirements
- Foundational understanding of machine learning and AI concepts
- Familiarity with cybersecurity principles and best practices
Audience
- AI and machine learning engineers seeking to enhance security in AI systems
- Cybersecurity specialists focused on protecting AI models
- Compliance and risk management professionals in data governance and security
Open Training Courses require 5+ participants.
Testimonials (1)
The professional knowledge and the way he presented it to us
Miroslav Nachev - PUBLIC COURSE
Course - Cybersecurity in AI Systems
Related Courses
ISACA Advanced in AI Security Management (AAISM)
21 Hours
The AAISM serves as an advanced framework for evaluating, overseeing, and managing security risks associated with artificial intelligence systems.
This instructor-led, live training session, available both online and onsite, targets advanced professionals seeking to establish robust security controls and governance practices for enterprise-level AI environments.
Upon completing this program, participants will be equipped to:
- Assess AI security risks using recognized industry methodologies.
- Establish governance models that support the responsible deployment of AI.
- Harmonize AI security policies with organizational objectives and regulatory requirements.
- Strengthen resilience and accountability within AI-operated processes.
Course Format
- Guided lectures supplemented by expert insights.
- Hands-on workshops and activity-based assessments.
- Practical exercises grounded in real-world AI governance scenarios.
Customization Options
- To tailor the training to your organization’s specific AI strategy, please reach out to us for customization.
AI Governance, Compliance, and Security for Enterprise Leaders
14 Hours
This instructor-led, live training in Greece (online or onsite) is aimed at intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks of using AI across departments.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Establish security, auditing, and oversight policies for AI deployment in the enterprise.
- Develop procurement and usage guidelines for third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 Hours
Artificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Introduction to AI Trust, Risk, and Security Management (AI TRiSM)
21 Hours
This instructor-led, live training session in Greece (available online or onsite) is tailored for IT professionals at beginner to intermediate levels who wish to understand and implement AI TRiSM within their organizations.
Upon completion of this training, participants will be able to:
- Comprehend the fundamental concepts and significance of AI trust, risk, and security management.
- Identify and mitigate risks inherent to AI systems.
- Apply security best practices specific to AI technologies.
- Understand regulatory compliance requirements and ethical implications for AI.
- Formulate strategies for effective AI governance and management.
Building Secure and Responsible LLM Applications
14 Hours
This instructor-led, live training in Greece (online or onsite) is designed for intermediate to advanced AI developers, architects, and product managers aiming to identify and mitigate risks inherent in LLM-powered applications, such as prompt injection, data leakage, and unfiltered outputs, while implementing security controls like input validation, human-in-the-loop oversight, and output guardrails.
Upon completion of this training, participants will be able to:
- Grasp the core vulnerabilities of LLM-based systems.
- Implement secure design principles within LLM application architecture.
- Utilize tools such as Guardrails AI and LangChain for validation, filtering, and safety assurance (a simplified, library-free sketch of the same idea follows this list).
- Integrate techniques including sandboxing, red teaming, and human-in-the-loop reviews into production-grade pipelines.
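The course works with maintained tooling such as Guardrails AI and LangChain; because their APIs change between releases, the sketch below illustrates the same guardrail idea in plain Python instead. The pattern list and function names are hypothetical and intentionally simplified.

```python
import re

# Hand-rolled input/output guardrails for an LLM-backed application.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"you are now in developer mode",
]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def validate_input(user_prompt: str) -> str:
    """Reject prompts that match known prompt-injection phrasing."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_prompt, re.IGNORECASE):
            raise ValueError("Prompt rejected: possible injection attempt")
    return user_prompt

def filter_output(model_response: str) -> str:
    """Redact apparent credentials before text reaches the end user."""
    return SECRET_PATTERN.sub("[REDACTED]", model_response)

# Usage: wrap every model call with both checks.
safe_prompt = validate_input("Summarize our refund policy")
print(filter_output("Sure. Note that api_key: sk-123456 should stay private."))
```

In production, such checks sit alongside, not instead of, human-in-the-loop review and model-level guardrails.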
EXO Security and Governance: Offline Model Management
14 Hours
This instructor-led, live training in Greece (online or onsite) is designed for security engineers and compliance officers who aim to strengthen EXO deployments, control model access, and govern AI workloads operating entirely on-premise.
Introduction to AI Security and Risk Management
14 Hours
This instructor-led, live training in Greece (online or onsite) is designed for beginner-level IT security, risk, and compliance professionals seeking to grasp foundational AI security concepts, threat vectors, and global frameworks such as the NIST AI RMF and ISO/IEC 42001.
Upon completion of this training, participants will be able to:
- Comprehend the unique security risks inherent in AI systems.
- Identify threat vectors including adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models, such as the NIST AI Risk Management Framework.
- Align AI usage with emerging standards, compliance guidelines, and ethical principles.
OWASP GenAI Security
14 Hours
Based on the latest OWASP GenAI Security Project guidance, participants will learn to identify, assess, and mitigate AI-specific threats through hands-on exercises and real-world scenarios.
Privacy-Preserving Machine Learning
14 Hours
This guided, live training in Greece (online or at your location) targets advanced professionals aiming to deploy and assess methods like federated learning, secure multiparty computation, homomorphic encryption, and differential privacy within practical machine learning workflows.
Upon completing this training, participants will be able to:
- Comprehend and evaluate essential privacy-preserving techniques in machine learning.
- Deploy federated learning systems utilizing open-source frameworks.
- Utilize differential privacy to ensure secure data sharing and model training (see the Laplace-mechanism sketch below).
- Employ encryption and secure computation strategies to shield model inputs and outputs.
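As a small illustration of one of the techniques above, the sketch below releases a differentially private mean with the Laplace mechanism; the function name, clipping bounds, and epsilon value are illustrative assumptions rather than course materials.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon):
    """Release a differentially private mean via the Laplace mechanism.
    After clipping each record to [lower, upper], the mean of n records has
    sensitivity (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon yields an epsilon-DP estimate."""
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Example: spend a privacy budget of epsilon = 1.0 on an average-salary query.
print(dp_mean([52_000, 61_000, 58_500, 73_000], lower=0, upper=200_000, epsilon=1.0))
```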
Red Teaming AI Systems: Offensive Security for ML Models
14 Hours
This instructor-led, live training in Greece (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 Hours
This instructor-led, live training in Greece (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments.
- Apply tamper resistance and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies specific to embedded and constrained systems.
Securing AI Models: Threats, Attacks, and Defenses
14 Hours
This instructor-led, live training in Greece (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defences like robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Use tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models (see the sketch after this list).
- Apply practical defences including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies in production environments.
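For readers curious what the ART exercise mentioned above might look like, here is a minimal sketch that wraps a scikit-learn classifier and crafts fast-gradient evasion examples. It mirrors the hand-crafted gradient example shown earlier in the main course outline but delegates the attack to the library; class and parameter names may differ between ART releases, and the dataset and epsilon are illustrative.

```python
# pip install adversarial-robustness-toolbox scikit-learn
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_digits(return_X_y=True)
X = (X / 16.0).astype(np.float32)          # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft evasion examples and compare accuracy before and after the attack.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X_te)

print("clean accuracy:      ", model.score(X_te, y_te))
print("adversarial accuracy:", model.score(X_adv, y_te))
```

The same workflow extends to the defences listed above: ART also ships adversarial-training and preprocessing defences that can be evaluated against the generated examples.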
Security and Privacy in TinyML Applications
21 Hours
TinyML refers to the deployment of machine learning models on low-power, resource-constrained devices operating at the network edge.
This instructor-led live training, available online or onsite, is designed for advanced professionals looking to secure TinyML pipelines and implement privacy-preserving techniques in edge AI applications.
Upon completing this course, participants will be equipped to:
- Identify security risks specific to on-device TinyML inference.
- Implement privacy-preserving mechanisms for edge AI deployments.
- Harden TinyML models and embedded systems against adversarial threats.
- Apply best practices for secure data handling in constrained environments.
Course Format
- Engaging lectures accompanied by expert-led discussions.
- Practical exercises focusing on real-world threat scenarios.
- Hands-on implementation using embedded security and TinyML tooling.
Course Customization Options
- Organizations may request a tailored version of this training to align with their specific security and compliance requirements.
Safe & Secure Agentic AI: Governance, Identity, and Red-Teaming
21 Hours
This course explores governance, identity management, and adversarial testing for agentic AI systems, with a focus on enterprise-safe deployment patterns and practical red-teaming techniques.
Delivered as instructor-led live training (available online or onsite), this programme is designed for advanced-level practitioners aiming to design, secure, and evaluate agent-based AI systems within production environments.
By the conclusion of this training, participants will be able to:
- Establish governance models and policies for safe agentic AI deployments.
- Design non-human identity and authentication flows for agents, ensuring least-privilege access.
- Implement access controls, audit trails, and observability mechanisms tailored to autonomous agents.
- Plan and conduct red-team exercises to identify misuses, escalation paths, and data exfiltration risks.
- Mitigate common threats to agentic systems through policy, engineering controls, and monitoring.
Course Format
- Interactive lectures and threat-modeling workshops.
- Hands-on labs covering identity provisioning, policy enforcement, and adversary simulation.
- Red-team/blue-team exercises and an end-of-course assessment.
Customization Options
- To request customized training for this course, please contact us to make arrangements.