========================================================================
LECTURE GUIDE: Political & Societal Security
========================================================================

TOTAL TIME: ~50 minutes (online pacing is slightly faster — no room transitions, but more chat management)

LEARNING GOALS:
1. Explain why AI changes the economics and mechanics of disinformation
2. Describe the "governance trap": doing nothing vs. controlling speech
3. Compare governance approaches and their failure modes
4. Identify practical resilience strategies at citizen + system levels

ONLINE PREPARATION CHECKLIST:
- [ ] Launch a Zoom poll in advance for the "Which concerns you most?" prompt (deepfakes / AI text / personas / targeting / bots)
- [ ] Prepare a second Zoom poll for the closing line-drawing discussion (remove / leave up / label / nothing)
- [ ] Have a shared doc or Jamboard link ready if you want to run the optional whiteboard activity
- [ ] Test screen sharing before class; have slides and any links open in browser tabs
- [ ] Decide which discussion to run as breakout rooms vs. chat waterfall — both take ~4 min; pick your format ahead of time
- [ ] Post the day's agenda and discussion prompts in the chat at the start of class (paste them as text so students can refer back)
- [ ] Bring 1 timely example of synthetic media or election misinformation — a tweet/screenshot you can share on screen works best
- [ ] Prepare 3 neutral definitions: "misinformation" vs. "disinformation" vs. "propaganda" — paste into chat when you introduce them

SECTION BREAKDOWN (TARGET TIMES):
1. Opening + democratic dilemma: 5 min
2. Why AI changes everything + toolkit: 8 min
3. Anatomy of a campaign + deepfake case: 8 min
4. Discussion: information diet (chat waterfall): 4 min
5. Governance trap + spectrum (includes video clip): 10 min
6. Moderation vs. censorship: 4 min
7. Resilience principles + citizen actions: 4 min
8. Discussion: where to draw the line (breakout rooms or poll): 4 min
9. Takeaways + closing question: 2 min

ADAPTATION FOR 30-MIN VERSION:
- Merge "AI changes everything" + "toolkit" into one slide discussion
- Skip the spectrum table OR use only 3 rows (laissez-faire, transparency, regulation)
- Keep exactly one discussion prompt — prefer the "line-drawing" scenario (more concrete for online)
- Run that discussion as a quick poll + 90-sec debrief instead of a full breakout

ONLINE FACILITATION NOTES:
- Use the chat actively: at the start of every slide, paste the key question so students can react before you speak
- "Chat waterfall" format: ask a question, tell everyone to TYPE but NOT SEND, then say "send now" — responses arrive together, so there is no anchoring effect
- Breakout rooms: 2-3 min is enough online; tell students the prompt BEFORE sending them out, and paste it in the main chat
- Keep examples non-partisan; emphasize tactics and system incentives over party politics
- Watch for "epistemic defeatism" ("nothing is true") — redirect to resilience + institutions
- Silence is common online; if no one responds, call on someone by name or reframe the question

WHITEBOARD (OPTIONAL DIGITAL ADAPTATION):
- Share a Jamboard or slide with a triangle labeled Truth / Freedom / Legitimacy
- Ask students to drop a sticky note or annotate where each policy approach sits on the triangle
- Alternatively, paste the triangle as a text diagram in chat and ask students to respond with a letter (A=Truth, B=Freedom, C=Legitimacy) for the trade-off that worries them most

========================================================================
OPENING (3 min)
- Democracy assumes citizens can access reliable information and make informed decisions
- AI breaks this assumption at scale
- The solutions (content moderation, fact-checking, regulation) all risk becoming tools of control
- This is THE governance challenge of our era

ONLINE OPENER — paste into chat at the start:
"Who should decide what's true online?"
Tell students to type one word or phrase in chat (but hold until you say "send").

QUICK DEFINITIONS (paste into chat, 30 sec):
- Misinformation: false, shared without intent to deceive
- Disinformation: false, shared with intent to deceive
- Propaganda: persuasive messaging serving a political goal (may be true, false, or mixed)

DIGITAL BOARD PROMPT (optional):
Share a simple slide: "Truth ←————→ Freedom" and ask students to drop a reaction emoji — 👍 for the Truth side, ❤️ for the Freedom side — to see where the class sits.

TRANSITION: "Before we talk solutions, let's understand what we're actually up against."
AI TRANSFORMATION (3 min)
- This isn't just "more of the same" — it's a qualitative shift
- The economics of disinformation have inverted
- Defense was already losing; AI tips the balance further

KEY INSIGHT: We're entering a world where seeing is no longer believing, and "proof" can be fabricated on demand.

CALL-OUT:
- The hardest part isn't generating fakes — it's distribution + credibility. AI helps there too (personalization + scale).

ONLINE TIP:
- Read the table out loud — students may be on mobile, where the table can be hard to read; verbally walk each row.

TRANSITION: "Here's what that toolkit actually looks like in practice."
TOOLKIT (3 min)
- Walk through each capability
- Emphasize: these aren't theoretical — they exist NOW
- The barrier to entry keeps dropping

DEEPFAKES: voice cloning needs only a few seconds of audio; video is getting there
AI TEXT: GPT-era models write more fluently than most humans
PERSONAS: StyleGAN faces, AI-written bios, years of fake posting history
TARGETING: Cambridge Analytica was primitive compared to what's possible now
BOTS: coordinated inauthentic behavior at scale

ONLINE POLL (run it now):
"Which of these concerns you most?" — launch the pre-built Zoom poll
(Options: Deepfakes / AI-generated text / Synthetic personas / Personalized targeting / Amplification bots)
Show results on screen and comment briefly — 30 sec.

PITFALL TO AVOID:
- Students may fixate on deepfakes; remind them that cheap, boring, high-volume text often moves more opinion than flashy video.

TRANSITION: "Let's see how these tools fit together into a full campaign."
ANATOMY (3 min)
- This is the modern playbook — fully automated influence operations
- A small team (or individual) can run campaigns that once required state resources
- The feedback loop means campaigns get better over time

KEY INSIGHT: This isn't about one viral post — it's about systematically shifting what people believe over time.

CONNECT TO PREVIOUS CONTENT: Remember the disinformation pipeline from cybersecurity? AI supercharges every step.

ONLINE TIP:
- Walk through each numbered step slowly and wait a beat — students need a moment to absorb the scale
- Optional: paste the 6 steps in chat so students can follow along

OPTIONAL EMPHASIS:
- The real "weapon" is attention: platforms optimize for engagement; campaigns optimize for triggering it.

TRANSITION: "This isn't hypothetical — let's look at what this already looks like in elections."
CASE: DEEPFAKES (3 min)
- Slovakia is the clearest case to date of AI directly affecting an election
- The audio was fake, but voters couldn't know that in time
- Pattern: last-minute releases when there's no time to verify

LIAR'S DIVIDEND:
- This is the flip side: real evidence gets dismissed
- "That video of me? Obviously a deepfake."
- We lose the ability to hold leaders accountable

ONLINE — paste in chat:
"How would you verify whether a video of a politician is real? Type one method."
Read 2-3 responses aloud, then move on — don't let this become a full discussion (that's the next slide).

FOLLOW-UP IF TIME:
- "If you can't verify quickly, what do you do: share with a caveat, wait, or ignore?"

TRANSITION: "This gets personal — let's talk about your own information habits."
DISCUSSION (4 min) — CHAT WATERFALL FORMAT

Instructions to give students:
"I'll give you 60 seconds to type a response to one of these questions — but don't hit send yet. When I say 'go,' everyone sends at once."
After responses flood in, read 3-4 aloud and respond briefly.

FACILITATION:
- Make this personal — everyone has been fooled
- No shame — the point is that these systems are designed to fool us
- Focus on the DIFFICULTY of verification, not individual failure

POSSIBLE RESPONSES:
- "I check multiple sources" → Ask: what if multiple sources are wrong?
- "I trust certain outlets" → Ask: how did you decide which to trust?
- "I'm skeptical of everything" → Ask: is that sustainable? Does democracy work if no one trusts anything?

INSIGHT TO DRAW OUT: Perfect skepticism is as dangerous as perfect credulity. We need heuristics, but AI is breaking them.

IF THE CHAT IS QUIET:
- Offer 3 heuristics to debate (not endorse): source reputation, corroboration across independent outlets, and "what would change my mind?"
- Or call on a student by name: "[Name], what's your gut check when you see something surprising?"

TRANSITION: "So if we can't trust our own judgment, can regulation help? Let's look at the trap governments face."
GOVERNANCE TRAP (3 min)
- This is the central tension of the lecture
- Both paths lead to bad outcomes
- Authoritarian regimes LOVE content regulation — they use "fighting disinformation" to silence critics

EXAMPLES:
- Singapore's POFMA law: used against political opposition
- Russia's "fake news" laws: jail time for contradicting the government
- Even well-intentioned laws get abused

KEY QUESTION: Can you design a system that stops lies but protects dissent?

ONLINE ADAPTATION:
- Paste into chat: "Do nothing vs. regulate — which risk worries you MORE? React: 👍 for 'do nothing is worse' or ❤️ for 'regulate is worse'"
- Show the split on screen, then say: "This is exactly the dilemma. Let's see what options governments have tried."

TRANSITION: "There's a whole spectrum of approaches — all with problems."
---

## Governance Approaches: A Spectrum

| Approach | Mechanism | Risk |
|----------|-----------|------|
| Laissez-faire | Let the market decide | Disinformation wins |
| Platform self-regulation | Trust & Safety teams | Inconsistent, unaccountable |
| Transparency mandates | Require disclosure | Doesn't stop content |
| Content regulation | Government mandates removal | Censorship, overreach |
| Liability for platforms | Legal consequences | Over-removal, chilling effects |
SPECTRUM (3 min)
- Walk through each approach and its failure mode
- Point: EVERY approach has serious problems

LAISSEZ-FAIRE: the "marketplace of ideas" assumes truth wins. It doesn't.
SELF-REGULATION: Facebook/X/TikTok make inconsistent, opaque decisions
TRANSPARENCY: the EU's DSA approach — useful but insufficient
CONTENT REGULATION: Germany's NetzDG, the UK's Online Safety Act — who defines what's illegal?
LIABILITY: the Section 230 debate — if platforms are liable, they'll over-remove

ONLINE TIP:
- This table can be hard to read online — consider sharing a zoomed-in version on screen, OR
- Walk the table verbally, at a slow pace: "Row 1: Laissez-faire. Mechanism is [X]. Risk is [Y]."
- Ask in chat: "Which approach is closest to what the US has now?" (Answer: mostly self-regulation plus some transparency; Section 230 enables laissez-faire)

KEY INSIGHT: This is genuinely hard. Anyone who says they have the answer is selling something.

TRANSITION: "Want to see what the far end of that spectrum actually looks like? Let's watch a few minutes."
*NYT Opinion — ~8 min | Play 0:00–4:00 as a minimum; full video if time allows*
VIDEO (4-8 min depending on clip length)
URL: https://www.youtube.com/watch?v=CLo3e1Pak-Y

ONLINE SETUP:
- Open the video in a browser tab before class and have it queued to 0:00
- Share your screen and enable "Share computer sound" in Zoom before playing
- Paste the URL in chat so students can follow along if your stream lags

WHAT TO WATCH FOR (paste in chat before playing):
"As you watch, note: (1) what data is collected, (2) who has access, and (3) what happens to people flagged by the system."

MINIMUM CLIP (0:00–4:00): covers the surveillance infrastructure and the social credit framing
EXTENDED CLIP (0:00–8:00): adds the human impact and international spread — recommended if you have time

AFTER THE VIDEO — chat prompt:
"One word: what's the most alarming thing you just saw?"
Read 3-4 responses aloud, then bridge to the next slide.

DISCUSSION QUESTIONS (pick 1 if time allows):
- "China frames this as public safety and efficiency. Where's the line between smart-city infrastructure and a surveillance state?"
- "Could something like this happen in a democracy? What would have to change first?"
- "The US collects a lot of data too — what's actually different here?"

KEY INSIGHT TO LAND: This isn't a dystopian future — it's operational now. The spectrum slide showed "content regulation" as an approach; this is what that approach looks like taken to its logical end.

TRANSITION: "So here's the hard question that connects China's system to debates we're having right now..."
---

## When Does Moderation Become Censorship?

Content removal is justified when:

- Inciting imminent violence?
- Demonstrably false health information?
- Election disinformation?
- "Hate speech" (defined how, by whom)?
- Foreign propaganda?
- Unpopular political opinions?

**The slope is real:** every power created to stop "disinformation" can be used against legitimate speech
CENSORSHIP QUESTION (3 min)
- Walk down the list — each step is harder than the last
- Violence: most agree removal is OK here
- Health misinfo: COVID showed how contested this is
- Election disinfo: who certifies what's true about an election?
- Hate speech: definitions vary wildly by culture
- Foreign propaganda: RT? BBC? Al Jazeera? Where's the line?
- Unpopular opinions: this is where it breaks down

HISTORICAL EXAMPLES:
- The "disinformation" label has been applied to: the lab-leak hypothesis, the Hunter Biden laptop, vaccine side effects
- Some of these turned out to be true, partially true, or legitimately debatable
- Moderation decisions were made in haste, with incomplete information

ONLINE ENGAGEMENT:
- Read the list slowly; after each item, pause and say "thumbs up if you think this should be removed"
- OR paste the list in chat and ask students to type the last category they'd remove (where they'd draw their line)
- Read 2-3 student "lines" aloud and note the disagreement: "Notice we don't all agree — and we're all in the same class."

SAFETY NOTE:
- Keep this discussion on process (epistemics, incentives, institutions), not on relitigating the underlying controversies.

TRANSITION: "If perfect content moderation is impossible, what else can we design for?"
RESILIENCE (3 min)
- These are design principles, not complete solutions
- Focus on SYSTEM design, not content policing

FRICTION: "Share" buttons are too easy. What if you had to wait? Or read the article first?
TRANSPARENCY: know when you're seeing AI content, ads, or boosted posts
PLURALISM: it's dangerous when one platform (Facebook in Myanmar) IS the internet
REDUNDANCY: local news, multiple fact-checkers, a diverse media ecosystem
INOCULATION: if people know the tactics, they're more resistant (prebunking research supports this)
ACCOUNTABILITY: focus on coordinated behavior, not individual posts

ONLINE QUICK POLL (optional):
Paste in chat: "Which resilience lever do you think is most promising? Type one word: Friction / Transparency / Pluralism / Redundancy / Inoculation / Accountability"
Tally responses live on a visible note, or just call out what you're seeing: "I'm seeing a lot of Inoculation — let's keep that in mind."

KEY INSIGHT: Resilience means the system survives attack, not that attacks don't happen.

OPTIONAL CONNECTOR:
- Compare to cybersecurity: you assume breach; you design for graceful failure and rapid recovery.

TRANSITION: "And there's a role for all of us as individuals too."
CITIZEN ACTION (2 min)
- Individual behavior matters but isn't sufficient
- Collective action is necessary

PERSONAL: The "pause" is the most important habit. Outrage is the goal of manipulation.
COLLECTIVE: Democracy is a team sport. Institutions need active support.

ONLINE TIP:
- This slide is a good moment to invite students to turn cameras on if they've been off — frame it as a brief check-in before the final discussion
- You can also use this moment for a quick chat prompt: "What's ONE thing from this list you actually do already?"
- Read 2-3 responses — it signals that students already practice some of these habits and makes the material feel actionable

ACKNOWLEDGE THE LIMITS:
- Individual vigilance can't solve systemic problems
- But apathy guarantees failure
- The goal is informed, engaged skepticism — not paranoid distrust

ONE-LINER: "Your job isn't to be a fact-checker; it's to be a good node in the network."

TRANSITION: "Let's put all of this to a concrete test."
DISCUSSION (4 min) — BREAKOUT ROOMS (preferred) OR POLL + DEBRIEF

Paste the scenario in chat BEFORE sending students to breakout rooms.

BREAKOUT ROOM FORMAT (2-3 min rooms, groups of 3-4):
- Prompt: "Agree on one answer to 'what should happen,' then identify your biggest concern about that answer."
- When you bring them back, ask 2-3 groups to share their answer AND their concern (the concern is the interesting part)

POLL FORMAT (if breakouts feel too long):
- Launch the pre-built Zoom poll: "What should happen?" with the 4 options
- Show results on screen (results will be split — that's the point)
- Debrief: "Notice we disagree. Now: who should break the tie, and how?"

FACILITATION:
- If they say "remove it" → Ask: who verifies it's false? What if the government is lying?
- If they say "leave it up" → Ask: what about real harm from false beliefs?
- If they say "labels" → Ask: do labels work? The evidence is mixed.

DRAW OUT THE TENSION:
- Every option has costs
- The question isn't "what's best" but "what's least bad"
- Different societies will answer differently based on their histories and values

OPTIONAL WRAP (30 sec):
- Ask 1-2 students to state (1) their policy and (2) their biggest worry about it.
- Or paste in chat: "Type your policy in one word, then a comma, then your biggest worry."

TRANSITION: "Let's close with the big picture."
---

## Key Takeaways

1. **AI fundamentally changes disinformation** - scale, speed, and quality make old defenses obsolete
2. **The governance dilemma is real** - every solution risks becoming the problem it solves
3. **There is no neutral ground** - inaction is also a choice with consequences
4. **Resilience beats defense** - design systems that survive manipulation; don't try to eliminate it
5. **Democracy requires active maintenance** - informed, engaged citizens are the last line of defense
TAKEAWAYS (2 min)
- Read each takeaway slowly; add a brief elaboration

ONLINE TIP:
- Ask students to open their notes and write down the one takeaway that surprised them most
- Cold-call 1-2 students by name: "[Name], which one landed for you?"
- This keeps engagement high in the final stretch, when online attention drifts

CLOSING THOUGHT:
"The question isn't whether AI will be used to manipulate democracy — it already is. The question is whether we can build systems and citizens resilient enough to survive it."
This is genuinely uncertain. We might fail. But giving up guarantees failure.
CLOSING QUESTION
- Leave this on screen
- Don't answer it — let it sit
- This is the question their generation will have to answer

ONLINE EXIT TICKET (paste in chat, 1 min):
"Drop one word in chat: the resilience lever you think matters most — Friction, Transparency, Pluralism, Redundancy, Inoculation, or Accountability."
Read the distribution aloud; thank students; end class.

OPTIONAL: Ask students to post their answer to the Canvas discussion board as a short reflection — useful if you want to assess engagement in the async format.