This short film by Thales demonstrates the impact of cybersecurity on our daily lives by imagining a world without it. Discussion: What systems in your daily routine depend on cybersecurity?
https://learn.microsoft.com/en-us/training/modules/cybersecurity-careers-minecraft-education-cyber-defender/what-is-cybersecurity
Which of these threats fundamentally changes when AI is added, and which ones remain essentially the same? Is social engineering now a software problem rather than a human one?
DOUBAO VIDEO (2 min)
- Show ~60-90 seconds of the video if time permits
- Key point: AI capabilities are advancing rapidly across the globe
- ByteDance context: same company facing US national security concerns over TikTok
- Tension: impressive technology vs. data governance concerns

WHAT TO EMPHASIZE:
- This AI was trained on massive amounts of user data
- The same company has access to data from 150M+ US TikTok users
- Capability and data collection go hand in hand

TRANSITION: "This impressive AI exists because of data. Let's talk about how data became so valuable..."
Which AI capability here creates the largest asymmetry between attackers and defenders? Is that asymmetry reversible?
What kinds of attacks does ATLAS capture well, and what does it miss entirely?
These tactics represent the adversary's lifecycle against AI systems. Understanding them helps in identifying potential weak points and implementing effective defensive strategies across the AI pipeline.
## ATLAS Analysis

1. **Reconnaissance:**
   - Leveraging AI tools, the attacker gathers extensive information about the CFO, including their communication style, professional network, and ongoing projects. This data is used to refine the phishing attempt.
2. **Initial Access (Phishing):**
   - The attacker uses AI to craft a highly convincing phishing email targeting the CFO. By mimicking a colleague's tone and referencing specific projects, the email bypasses traditional detection mechanisms.
3. **Defense Evasion:**
   - AI-generated content is designed to evade spam filters and security systems, ensuring the phishing email reaches the target's inbox.
4. **Exfiltration:**
   - Once the CFO is deceived, the attacker gains access to sensitive financial data, completing their objective.

Discussion: If AI makes phishing nearly indistinguishable from legitimate communication, does the concept of "user training" still make sense? What replaces training as the primary control?
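The scenario's steps can be captured as data, which makes it easy to ask "which controls cover which tactic?" The sketch below is illustrative only: the tactic names follow the ATLAS-style lifecycle used above, but the step descriptions and control suggestions are assumptions for discussion, not an official MITRE ATLAS artifact.

```python
# Illustrative mapping (not an official MITRE ATLAS artifact) of the
# phishing scenario's steps to lifecycle tactics and candidate controls.

SCENARIO_STEPS = [
    {"step": "AI-assisted profiling of the CFO", "tactic": "Reconnaissance",
     "control": "limit public exposure of org charts and project details"},
    {"step": "Tailored phishing email", "tactic": "Initial Access",
     "control": "phishing-resistant MFA, out-of-band payment verification"},
    {"step": "Content tuned to slip past filters", "tactic": "Defense Evasion",
     "control": "sender authentication (DMARC), behavioral anomaly detection"},
    {"step": "Theft of financial data", "tactic": "Exfiltration",
     "control": "DLP rules, least-privilege access to finance systems"},
]

def controls_for(tactic: str) -> list[str]:
    """Return the candidate controls mapped to a given tactic."""
    return [s["control"] for s in SCENARIO_STEPS if s["tactic"] == tactic]

if __name__ == "__main__":
    for s in SCENARIO_STEPS:
        print(f'{s["tactic"]:>15}: {s["step"]} -> {s["control"]}')
```

A useful classroom exercise: have students add a fifth step of their own and argue about which tactic it belongs to.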
## ATLAS Analysis of Deepfakes in Disinformation Campaigns

1. **Resource Development (Content Creation):**
   - The attacker uses AI tools such as GANs to generate realistic deepfake videos of a political candidate making controversial statements.
2. **Initial Access (Amplification):**
   - The attacker spreads the deepfake across social media platforms, leveraging bots and fake accounts to amplify its reach and credibility.
3. **Defense Evasion:**
   - The deepfake is crafted to bypass AI-based content moderation systems, ensuring it remains online long enough to influence public opinion.
4. **Impact (Erosion of Trust):**
   - The widespread dissemination of the deepfake erodes trust in the targeted candidate and in the authenticity of media generally, achieving the attacker's disinformation goal.
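One defensive counterpart to the campaign above is content provenance: a publisher signs a hash of the media, and anyone can later check whether the bytes were altered. The sketch below illustrates the idea behind standards such as C2PA; it is a minimal teaching example using an HMAC with a shared secret (`PUBLISHER_KEY` is a made-up demo value) in place of the real public-key signatures such systems use.

```python
# Minimal sketch of cryptographic content provenance: sign a hash of the
# media, then verify it later. HMAC with a shared secret stands in for the
# public-key signatures real provenance systems (e.g., C2PA) would use.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-shared-secret"  # illustrative only

def sign(media: bytes) -> str:
    """Publisher side: sign a SHA-256 digest of the media bytes."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify(media: bytes, signature: str) -> bool:
    """Verifier side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(media), signature)

original = b"authentic campaign video bytes"
sig = sign(original)
assert verify(original, sig)                   # untampered content passes
assert not verify(original + b" edited", sig)  # any edit breaks verification
```

The design point for discussion: provenance does not detect fakes, it authenticates originals, which sidesteps the arms race between generation and detection described later.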
## What Are Deepfakes?

AI-generated media (video/audio/image) that appears authentic but is entirely fake or manipulated.

Core technology:
- Generative Adversarial Networks (GANs)
- Large Language Models (LLMs)

### How Do GANs Work?

A GAN is a system of two competing neural networks that improve through adversarial training:
- **Generator:** creates fake content (e.g., fake videos).
- **Discriminator:** tries to distinguish the generator's output from real data; its feedback drives the generator to produce ever more convincing fakes.

### How Deepfakes Cause Harm

Deepfakes are weaponized tools in disinformation campaigns, manipulating public opinion, spreading propaganda, and causing tangible harm.

- **Political Manipulation.** Goal: influence elections or create social unrest. Impact: spreads outrage rapidly on social media, potentially swaying results before verification.
- **Reputational Damage.** Goal: destroy personal or corporate reputations. Impact: leads to job loss, brand damage, financial losses, and emotional distress.
- **Financial Fraud.** Goal: trick individuals into transferring funds or revealing data. Impact: CEO fraud and voice cloning lead to significant financial losses.
- **Undermining Trust in Media.** Goal: make people question the authenticity of all media. Impact: breaks down trust in reliable sources, making it hard to distinguish fact from fiction.

### Detection Challenges

- **Rapid advancement:** detection tools struggle to keep up with fast-evolving generation techniques.
- **Accessibility:** user-friendly tools enable creation with minimal technical skill.
- **Subtle manipulations:** inconsistencies are often too subtle for untrained eyes.