Publication | Open Access
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Citations: 488
Year: 2018
Keywords: Artificial Intelligence, Engineering, Information Security, Potential Security Threats, AI Safety, Intelligent Systems, Threat Landscape, Responsible AI, Data Science, Threat (Computer), Threat Detection, Predictive Analytics, Computer Science, Forecasting, Data Security, Threat Hunting, Security, AI Researchers, Cyber Threat Intelligence, Technology, Safe Artificial Intelligence
Artificial intelligence is expanding rapidly and powers many beneficial applications, yet its potential for malicious use has received comparatively little scrutiny. This report surveys potential security threats from malicious uses of AI, proposes ways to forecast, prevent, and mitigate them, and examines the kinds of attacks likely in the near term if adequate defenses are not developed.
Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.