Artificial intelligence (AI) is transforming our world in exciting ways, from smart assistants to autonomous vehicles. However, we must remain vigilant about the risks of its uncontrolled or unethical application. Two emerging threats are deepfakes and large-scale AI-powered disinformation campaigns. This article examines the dark side of AI through these examples and suggests potential solutions.
What are Deepfakes and How Do They Work?
Deepfakes leverage a set of AI techniques called deep learning to fabricate or alter video, image, and audio content with startling realism. They work by training neural networks on large datasets of images, videos, and audio clips to learn how to mimic the features of a source.
For example, to create a deepfake video of a politician, a model could be trained on hundreds of hours of their speeches and interviews. The neural net would learn to synthesise new footage that replicates the politician’s facial expressions, lip movements, voice, and mannerisms. A separate dataset containing the desired fake speech or action then drives the model to combine the realistic impersonation with custom words and actions.
The resulting deepfake can be extremely difficult for humans to distinguish from genuine footage, although forensic analysis may still reveal subtle anomalies that betray manipulation.
Potential Dangers of Malicious Deepfakes
Is AI dangerous? Deepfakes pose serious dangers because convincing fakes can now be created with minimal resources and expertise. Some key AI risks include:
Spreading False Information at Scale
Deepfakes make it possible to fabricate events or put words in people’s mouths. This can be used to spread false news, hoaxes, and doctored evidence, and deepfakes are hard to debunk once they go viral.
Impersonation, Coercion and Fraud
Identity theft using deepfakes can enable financial fraud, market manipulation, and reputation sabotage. Criminals could impersonate executives to issue fraudulent directives or politicians to sway elections. Victims may also be coerced with threats of leaking fake explicit videos.
Violating Privacy and Facilitating Harassment
Deepfakes have already been used to create explicit videos without consent. This constitutes a gross violation of privacy and enables extortion and psychological abuse. The victim’s suffering is amplified as the deepfake spreads virally.
Undermining Evidence and Accountability
Deepfakes also make it possible to dismiss genuine evidence as fake. Authoritarian regimes can wave away footage of human rights violations as deepfakes, and criminals can escape justice by claiming video evidence is doctored. This erodes accountability.
National Security Risks
State-sponsored disinformation using deepfakes poses national security AI risks. Fabricated events could be used to justify military action or extract political concessions, and diplomatic relations may suffer from deepfake impersonations of world leaders.
AI-Powered Disinformation Campaigns
In addition to deepfakes, AI is turbocharging the spread of disinformation and fake news. Natural language processing and social media bots make it possible to generate and disseminate false narratives with unprecedented scale and precision.
Key AI Capabilities Facilitating Mass Disinformation
Automated Content Generation
AI text generators like GPT-3 can churn out millions of fake news articles, comments, and social media posts, efficiently publishing huge volumes of natural-sounding disinformation.
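Large models like GPT-3 are needed for fluent output, but the economics of automated generation can be illustrated with something far simpler. The following minimal sketch uses a toy Markov-chain generator (not GPT-3; the sample corpus and all names are hypothetical) to show how cheaply variations on a message can be mass-produced:

```python
import random

def build_chain(corpus, order=2):
    """Map each word-tuple prefix to the words observed after it."""
    words = corpus.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Emit a plausible-sounding word sequence by walking the chain."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length - len(out)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:        # reached a dead end in the chain
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical seed text standing in for scraped news-style prose.
corpus = ("officials confirmed the report was accurate and the report "
          "was shared widely before officials confirmed the story")
chain = build_chain(corpus)
print(generate(chain, length=12))
```

Each call with a different seed yields a slightly different output, which is precisely what makes automated disinformation cheap to vary and hard to filter.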
Precision Targeting
AI profiling and optimisation algorithms can identify the demographics most likely to engage with and spread tailored disinformation, enhancing a campaign’s impact.
Coordinated Inauthentic Activity
Thousands of AI bots automate liking, sharing, and reposting of disinformation to artificially boost engagement metrics and reach, masking coordination as organic “viral” activity.
Disinformation tactics also evolve to avoid detection, with AI helping to mask coordinated campaigns as generic chatter. Tactics such as introducing slight message variations across bot accounts are far harder to filter than identical copy-pasted messages.
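Why slight variations defeat naive filters can be seen in a short sketch: exact matching misses paraphrased copies, while similarity scoring (here via Python’s standard-library difflib, with made-up example messages) still groups them together.

```python
from difflib import SequenceMatcher

# Hypothetical messages: two bot variants of one claim, plus an unrelated post.
messages = [
    "Breaking: the election results were faked, share before it's deleted!",
    "BREAKING - the election results were faked... share before its deleted",
    "Totally unrelated post about gardening tips for the spring season.",
]

def exact_duplicates(msgs):
    """Naive filter: only flags byte-identical repeats."""
    seen, dupes = set(), []
    for m in msgs:
        if m in seen:
            dupes.append(m)
        seen.add(m)
    return dupes

def similar_pairs(msgs, threshold=0.8):
    """Similarity filter: flags near-duplicate pairs above a ratio threshold."""
    pairs = []
    for i in range(len(msgs)):
        for j in range(i + 1, len(msgs)):
            ratio = SequenceMatcher(None, msgs[i].lower(), msgs[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs

print(exact_duplicates(messages))  # the varied bot copies slip through
print(similar_pairs(messages))     # but similarity scoring still pairs them
```

Real platform defences use far more robust techniques (embeddings, network analysis), but the asymmetry is the same: cheap variation on the attacker’s side forces expensive fuzzy matching on the defender’s side.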
AI monitors responses to disinformation campaigns and continuously adjusts tactics and targets to improve propagation. Humans alone cannot match this pace and scale.
Potential Solutions and Mitigation Strategies
Addressing these complex AI disinformation threats requires a multi-pronged approach with collaboration between policymakers, tech companies, media, and the public. Some potential mitigation strategies include:
- Ethical Guidelines: Establish clear ethics standards and screening practices for companies building any public-facing AI applications to identify and prevent AI risks.
- Laws and Regulations: Enact laws prohibiting malicious uses of AI like non-consensual deepfakes. Require social media platforms to remove AI-generated disinformation.
- Improved Detection: Support ongoing research into forensic techniques that use AI pattern recognition to detect deepfakes and coordinated disinformation campaigns. This aids policy enforcement.
- Transparency and Provenance Tracking: Provide clear indicators on user interfaces revealing AI-generated or manipulated media. Maintain provenance trails for generated assets to improve accountability.
- Investments in Countering Disinformation: Fund high-quality investigative journalism, fact-checkers, and research into disinformation tactics to counter false narratives.
- Media Literacy Initiatives: Educate the public on how to spot AI manipulation techniques and exercise discernment. Critical thinking is the best defence.
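The provenance-tracking idea above can be sketched as a hash-chained log of generated assets. This is a minimal illustration with hypothetical field names, not any specific standard (efforts such as C2PA pursue this far more rigorously):

```python
import hashlib
import json

def record_asset(prev_hash, asset_bytes, metadata):
    """Append-only provenance entry: hash of the content plus the previous entry."""
    content_hash = hashlib.sha256(asset_bytes).hexdigest()
    entry = {
        "content_hash": content_hash,
        "metadata": metadata,   # e.g. generating tool, model, timestamp
        "prev": prev_hash,      # links entries into a tamper-evident chain
    }
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry, entry_hash

# Chain two hypothetical generated assets together.
e1, h1 = record_asset("genesis", b"fake-video-bytes", {"tool": "gen-v1"})
e2, h2 = record_asset(h1, b"edited-video-bytes", {"tool": "edit-v2"})

# Any tampering with an earlier asset changes its hash and breaks the link.
assert e2["prev"] == h1
```

Because each entry commits to the previous one, altering any asset after the fact invalidates every later hash, giving platforms and fact-checkers a verifiable trail for generated media.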
The Bottom Line
AI is a transformative technology that will reshape our world in the coming decades. However, preserving truth, trust, and accountability requires overcoming emerging threats from irresponsible applications. With collective vigilance, wisdom, and responsibility, we can steer AI progress toward the common good.