Artificial intelligence is a powerful tool in fields such as health, science, and technology. But when it falls into the wrong hands, it can have dire consequences. The worst part is that we don't know when or where the threat will come from.
Griffin is a computer science researcher at University College London (UCL). Together with his colleagues, he compiled a list of 20 illegal activities that AI could enable and ranked them by potential harm, criminal gain or profit, ease of implementation, and difficulty of detection and prevention.
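The kind of multi-criteria ranking the team performed can be sketched in a few lines of code. This is purely illustrative: the study's actual scoring scales, weights, and values are not given here, so the threat entries and the composite-score formula below are hypothetical.

```python
# Hypothetical sketch of ranking threats by several criteria, as in the UCL study.
# Scores (1-5) and the severity formula are invented for illustration only.
threats = [
    {"name": "deepfake videos",   "harm": 5, "profit": 4, "ease": 4, "defeatability": 1},
    {"name": "tailored phishing", "harm": 4, "profit": 4, "ease": 5, "defeatability": 2},
    {"name": "burglar robots",    "harm": 1, "profit": 2, "ease": 2, "defeatability": 4},
]

def severity(threat):
    # Higher harm, profit, and ease raise severity; being easy to defeat lowers it.
    return threat["harm"] + threat["profit"] + threat["ease"] - threat["defeatability"]

# Sort from most to least severe.
ranked = sorted(threats, key=severity, reverse=True)
for t in ranked:
    print(f'{t["name"]}: severity {severity(t)}')
```

With these made-up numbers, deepfake videos come out on top, which matches the article's grouping of them among the most serious threats.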
Artificial intelligence: serious threats
Fake videos: Impersonating someone by making them appear to say or do things they never said or did, in order to gain access to secure data, manipulate public opinion, or damage someone’s reputation. These faked videos are virtually undetectable.
Self-driving car hacking: Taking control of a self-driving vehicle in order to use it as a weapon (for example, perpetrating a terrorist attack, causing an accident, etc.).
Tailored phishing: Creating personalized, automated messages to increase the effectiveness of phishing aimed at stealing information or installing malware.
Hacking AI-controlled systems: Disrupting infrastructure, for example causing widespread power outages, traffic jams, or disruption of food logistics.
Large-scale blackmail: Collecting personal data to send automated threat messages. Artificial intelligence can also be used to generate false evidence.
False information written by artificial intelligence: Writing propaganda articles that appear to have been published by a reliable source. Artificial intelligence can also be used to create multiple versions of certain content to increase its visibility and credibility.
Artificial intelligence: moderate threats
Military robots: Taking control of military robots or weapons for criminal ends. A potentially very dangerous threat, but one that is difficult to carry out, since military equipment is usually heavily guarded.
Fraud: Selling fraudulent services using artificial intelligence. There are many famous historical examples of scams successfully selling expensive counterfeit technologies to large organizations, including national governments and the military.
Data corruption: Intentionally modifying or injecting inaccurate data to introduce certain biases. For example, making a detector fail to flag weapons, or steering an algorithm toward bad investments.
Learning-based cyberattack: Performing both specific and large-scale attacks, such as using artificial intelligence to look for weaknesses in systems before launching multiple attacks simultaneously.
Autonomous strike drones: Hijacking autonomous drones or using them to attack a target. These drones are particularly threatening when they act en masse in self-organized swarms.
Denial of access: Blocking or damaging users’ access to a financial service, employment, a public service, or social activity. Not profitable in itself, this technique can be used as leverage for blackmail.
Facial recognition: Hijacking facial recognition systems, for example by using forged ID photos to unlock a smartphone, defeat security cameras, or pass passenger checks.
Manipulation of financial markets: Harming competitors, corrupting trading algorithms to artificially lower or raise a price, or triggering a financial collapse.
Artificial intelligence: low-level threats
Bias exploitation: Leveraging existing biases in algorithms, such as YouTube recommendations or Google rankings, to promote a product or defame a competitor.
Burglar robots: Using small autonomous robots that slip into mailboxes or through windows to retrieve keys or open doors. The potential damage is low because such attacks are highly localized and small in scale.
AI detection blocking: Thwarting AI-based detection and data collection in order to erase evidence or conceal criminal activity.
Fake reviews written by artificial intelligence: Generating fake reviews on sites such as Amazon or Tripadvisor to harm a product or artificially promote it.
Artificial intelligence-assisted tracking: Using location systems to monitor a person’s movements and activity.
Counterfeiting: Producing counterfeit content such as images or music that can be sold under false authorship.