Learn about the two common types of adversarial attacks on AI systems – poisoning and evasion. Discover how these attacks manipulate data to deceive AI models and compromise their performance. Gain insights into protecting your AI systems from these threats.
Question
Which of the following are examples of adversarial attacks on an AI system?
Select the two that apply.
A. Terminal execution
B. Poisoning
C. Denial of Service (DoS) attack
D. Evasion
Answer
The two examples of adversarial attacks on an AI system are:
B. Poisoning
D. Evasion
Explanation
Adversarial attacks on AI systems are malicious attempts to manipulate or deceive AI models into making incorrect predictions or decisions. Among the options provided, poisoning and evasion are the two adversarial attack types. An AI model demonstrates robustness when it still produces accurate results despite inaccurate data an adversary has inserted into its training set.
- Poisoning attacks target the training data used to build AI models. An attacker introduces carefully crafted malicious data points into the training dataset, aiming to manipulate the model’s learning process. By poisoning the data, the attacker can influence the model’s behavior, causing it to make incorrect predictions or classifications. For example, an attacker might add mislabeled samples to a dataset used to train a spam email classifier, causing the model to misclassify spam as legitimate (the first sketch after this list demonstrates the idea).
- Evasion attacks, typically carried out using adversarial examples, involve crafting input data that appears normal to humans but is specifically designed to deceive AI models. The attacker manipulates the input in a way that exploits the model’s vulnerabilities, causing it to make incorrect predictions. For instance, an attacker might add imperceptible perturbations to an image of a stop sign, causing an autonomous vehicle’s AI system to misclassify it as a speed limit sign (see the second sketch below).
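To make the poisoning idea concrete, here is a minimal sketch of a label-flipping poisoning attack, assuming scikit-learn and a synthetic stand-in for email features. The dataset, flip rate, and model choice are all illustrative placeholders, not a real attack recipe.

```python
# Toy sketch of a label-flipping poisoning attack on a "spam" classifier.
# Data, flip rate, and model are hypothetical; real attacks are subtler.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic "email feature" data: class 1 = spam, class 0 = legitimate.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip the labels of 30% of the spam samples,
# nudging the model toward treating similar spam as legitimate.
rng = np.random.default_rng(0)
spam_idx = np.where(y_train == 1)[0]
flip_idx = rng.choice(spam_idx, size=int(0.3 * len(spam_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Comparing the two accuracy numbers shows how corrupted labels in the training set degrade the model without the attacker ever touching the deployed system.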
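The second sketch shows an evasion attack using the Fast Gradient Sign Method (FGSM), one well-known way to craft adversarial examples, assuming PyTorch. The tiny untrained model and random input are placeholders for a trained classifier and a real image.

```python
# Minimal sketch of an evasion attack via the Fast Gradient Sign Method
# (FGSM). The toy untrained model and random "image" are placeholders;
# in practice the attacker targets a trained classifier with real inputs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for an image
true_label = torch.tensor([0])

# Forward pass and loss with respect to the correct label.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# FGSM: nudge every pixel by epsilon in the direction that increases loss.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The perturbation is bounded by epsilon per pixel, which is why the modified input can look unchanged to a human while still flipping the model’s prediction.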
Terminal execution and Denial of Service (DoS) attacks, while serious security threats, are not adversarial attacks in the machine learning sense. Terminal execution refers to the ability to run arbitrary commands on a system, while DoS attacks aim to make a system or network resource unavailable to its intended users.
To protect AI systems from adversarial attacks, it is essential to implement robust security measures, such as input validation, anomaly detection, and adversarial training. Regularly monitoring and updating AI models, as well as keeping training data secure, can help mitigate the risks posed by poisoning and evasion attacks.
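As one example of these defenses, below is a hedged sketch of adversarial training that reuses the FGSM idea from above, assuming PyTorch. The model, data, loop length, and hyperparameters are illustrative placeholders, not a production defense.

```python
# Sketch of adversarial training: at each step, generate FGSM examples
# on the fly and train on them alongside clean data. Model, data, and
# hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.functional.cross_entropy
epsilon = 0.03

def fgsm(x, y):
    """Craft FGSM adversarial examples against the current model."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

for step in range(100):                  # placeholder training loop
    x = torch.rand(32, 3, 32, 32)        # stand-in batch of images
    y = torch.randint(0, 10, (32,))      # stand-in labels
    x_adv = fgsm(x, y)                   # attack the model as it trains
    optimizer.zero_grad()
    # Train on clean and adversarial inputs so the model learns to
    # classify both correctly.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

The key design choice is that adversarial examples are regenerated every step against the current weights, so the model is continually trained against its own latest weaknesses.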
This IBM Artificial Intelligence Fundamentals certification exam practice question and answer (Q&A), with detailed explanation and references, is available free and is intended to help you pass the Artificial Intelligence Fundamentals graded quizzes and final assessment and earn the IBM Artificial Intelligence Fundamentals digital credential and badge.