========================================================================
LECTURE GUIDE: Environmental Security
========================================================================
TOTAL TIME: ~40 minutes (without exercise) | ~50 minutes (with exercise)

LEARNING GOALS:
1. Define environmental security and explain why it is a human security issue, not just an ecological one
2. Describe AI's dual role: a significant contributor to environmental stress AND a powerful tool for environmental governance
3. Analyze the carbon and resource footprint of AI systems using real data
4. Evaluate competing claims about individual vs. systemic responsibility for environmental harm
5. Connect current US policy rollbacks to global environmental security risk

PREPARATION CHECKLIST:
- [ ] Open electricitymaps.com/map before class — have it live on screen for the carbon intensity demonstration
- [ ] Open the nature.org carbon footprint calculator in a second tab for the individual action discussion
- [ ] Check actonclimate.com/trumptracker for the latest policy rollbacks — the list updates frequently
- [ ] Review the Google Drive reading (https://drive.google.com/file/d/1GlzBwdWZAeaDpW1s3fMdIqqYrcLER_mK/view) for key arguments to reference
- [ ] Identify 1-2 recent news examples of extreme weather, AI data center energy controversy, or climate policy change
- [ ] Prepare the optional mini-exercise group assignments if using

SECTION BREAKDOWN (TARGET TIMES):
1. Opening + framing environmental security: 5 min
2. Threat landscape: 5 min
3. AI's environmental footprint: 8 min (includes live demo)
4. AI as environmental tool: 6 min
5. Discussion: carbon responsibility: 4 min
6. Policy context + Trump rollbacks: 6 min
7. Governance + who pays: 4 min
8. Closing question: 2 min

NOTE: Case studies (AI & the Energy Grid; Climate as Infrastructure Risk) are now a Canvas assignment — students find their own real case within each theme.
ADAPTATION FOR 30-MIN VERSION:
- Skip the "Threat Landscape" slide (students have prior exposure to climate basics)
- Skip the optional mini-exercise
- Reduce discussions to 90 seconds each
- Merge "AI as Tool" and "Who Bears Responsibility" into a single combined slide

KEY MESSAGES TO REINFORCE THROUGHOUT:
1. Environmental insecurity is the threat multiplier — it amplifies every other human security dimension
2. AI has a real, measurable environmental footprint that scales with adoption; this is not abstract
3. The hardest governance question is not technical but political: who pays and who decides?
========================================================================
- Defined as: access to a stable, safe, and sustainable natural environment
- Environmental threats are **threat multipliers** — they destabilize all other security dimensions
> "When the climate breaks, everything else breaks with it."
- The IPAT equation (I = P × A × T) states that humanity's environmental Impact is the product of three factors: Population, Affluence (per-capita consumption), and Technology (the environmental impact per unit of consumption)

OPENING (3 min)
- Remind students where this sits in the human security framework from Week 1
- Environmental security is not just "saving trees" — it's about what happens to food, water, health, economic stability, and community cohesion when ecosystems fail

CONNECT TO PRIOR LECTURES:
- Economic security (Week 5): climate disasters destroy regional economies
- Political security (Week 6): resource scarcity drives conflict and displacement, which drives disinformation and authoritarian opportunity

KEY MOVE:
- Ask: "Name a security problem from a previous week that climate change makes worse."
- Expected: food supply, migration pressure, political instability, disease outbreak
- Every answer is correct — that's the point

TRANSITION: "Before we talk about AI, let's map the threat landscape."
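If a numeric illustration of IPAT helps in class, here is a minimal sketch; all figures are invented placeholders for illustration, not real-world data.

```python
# IPAT: Impact = Population x Affluence x Technology.
# Every number below is an illustrative placeholder, not real data.

def ipat(population: float, affluence: float, technology: float) -> float:
    """Environmental impact as the product of the three IPAT factors."""
    return population * affluence * technology

# Holding population and per-capita consumption fixed, halving the
# impact-per-unit-of-consumption (cleaner technology) halves total impact.
baseline = ipat(population=8e9, affluence=10_000, technology=0.5)
cleaner = ipat(population=8e9, affluence=10_000, technology=0.25)
print(cleaner / baseline)  # → 0.5
```

The multiplicative structure is the teaching point: no single factor determines impact, and a gain on one axis can be erased by growth on another.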
THREAT LANDSCAPE (3 min)
- Walk the table briskly — students have climate literacy; don't spend too long here
- Focus on the "Security Impact" column: connect each to human security dimensions they've studied

KEY EMPHASIS:
- Climate is not one threat — it's a threat amplifier operating across all columns simultaneously
- "Irreversible" is the crucial word for biodiversity and certain carbon tipping points

CALL-OUT:
- The Timescale column matters: "accelerating now" is not 2050 — the IPCC 2023 synthesis report says we are already in the impacts phase

TRANSITION: "Now let's talk about something that doesn't appear in the 1994 UNDP framework: AI. Because AI has entered the picture on both sides of this equation."
Sources: Strubell et al. 2019 (Energy and Policy Considerations for Deep Learning); IEA Data Centres and Data Transmission Networks Report 2023; Goldman Sachs Power Up report 2024
AI FOOTPRINT (5 min)
- These numbers shock students — lean into that
- The Strubell et al. comparison is vivid: one large model training run ≈ driving 5 cars for their entire usable life
- The query comparison is actionable: every time you ask ChatGPT something, you're burning roughly 10x the energy of a Google search

LIVE DEMO (2 min):
- Open electricitymaps.com/map on screen
- Show the carbon intensity map in real time
- Ask: "Where would you locate a data center if you wanted to minimize carbon footprint?" (Pacific Northwest hydro, the Nordics — not the Southeast US coal corridor)
- Point: the SAME compute in different locations has wildly different carbon impact

CRITICAL NUANCE:
- "But tech companies buy renewable energy credits!" — yes, but that's not the same as running on renewables; additionality matters
- Inference (running the model every time someone asks it something) now exceeds training in aggregate energy use because of scale

ONLINE TIP:
- Share the electricitymaps link in chat so students can explore on their own screens
- Ask: "Where is your college's data center, and what's the grid mix there?"

TRANSITION: "So AI has a carbon problem. But here's the paradox — it's also one of our best tools for solving the climate crisis."
Source: Oak Ridge National Laboratory Frontier system documentation; TVA 2023 generation mix
FRONTIER SLIDE (2 min)
- This slide is a visceral anchor — the photo makes the abstract concrete
- "This is what compute looks like at scale. This is not a server rack in a closet."

KEY POINTS:
- Frontier is a best-case scenario: publicly funded, scientifically justified, on a relatively clean grid, with published energy data
- Private AI training runs often have none of those properties: undisclosed location, undisclosed energy source, no published carbon accounting
- The comparison is useful: "If Frontier had to publish its energy use, why don't private AI labs?"

PERSONAL CONNECTION (if appropriate):
- If you visited Oak Ridge, share what it felt like to stand next to it — the scale, the cooling infrastructure, the physical weight of compute

DISCUSSION SEED:
- "Frontier is publicly funded and its energy use is public record. Should the same transparency apply to private AI training runs?"

TRANSITION: "Now let's look at the other side of this picture — real-time carbon intensity of the grid powering systems like this."
LIVE DEMO SLIDE (2 min)
- This slide is just a launch point — switch to the actual website
- Show: Virginia (where much of AWS East is located), versus Iceland, versus Norway
- The visual makes the abstract concrete: "The same AI query emits 10x more carbon depending on which server answers it"

QUESTIONS TO ASK LIVE:
- "Find Virginia on this map. What color is it?" (typically orange/yellow — mixed grid)
- "Find Iceland. What about Norway?" (green — largely hydro/geothermal)
- "Where would you site a data center if you were trying to be green?"

POLICY HOOK:
- Some tech companies are publicly committed to 24/7 carbon-free energy (Google's moonshot)
- But most renewable energy purchases are RECs — accounting tricks, not physics
- The grid still has to balance in real time; your data center still draws from whatever's available

TRANSITION: "Now let's look at the other side: AI as a climate solution."
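The demo's core point (same query, different grid, very different emissions) can be put in back-of-envelope numbers. The ~3 Wh-per-query figure and the grid intensities below are rough assumed values for illustration, not measured data:

```python
# Per-query emissions = energy per query (kWh) x grid carbon intensity (gCO2/kWh).
# Both inputs are rough illustrative assumptions, not measured values.
WH_PER_QUERY = 3.0  # assumed energy for one LLM query, in watt-hours

GRID_INTENSITY_G_PER_KWH = {  # approximate grid carbon intensity, gCO2/kWh
    "Iceland (hydro/geothermal)": 30,
    "Norway (hydro)": 30,
    "Virginia (mixed grid)": 350,
    "coal-heavy grid": 800,
}

for region, intensity in GRID_INTENSITY_G_PER_KWH.items():
    grams_co2 = (WH_PER_QUERY / 1000) * intensity  # Wh -> kWh, then x gCO2/kWh
    print(f"{region}: {grams_co2:.2f} g CO2 per query")
```

On these assumed numbers, the spread between the cleanest and dirtiest grid is well over 10x, which is the demo's lesson: siting, not the query itself, dominates per-query emissions.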
AI AS TOOL (4 min)
- This is the "paradox" side: the same technology that creates carbon can help manage it
- Don't oversell: these are genuine contributions, but they don't offset AI's own footprint yet

EXAMPLES TO DEVELOP:
1. Global Forest Watch: uses ML to analyze satellite imagery for deforestation alerts — real-time, global coverage that was impossible before
2. DeepMind cooling: a 40% reduction in cooling energy is massive at Google scale; this kind of optimization is being extended to grid management
3. Climate modeling: AI can run ensemble simulations in hours that previously took weeks — enabling better adaptation planning

CRITICAL THINKING PUSH:
- "If AI helps us be more efficient, does that efficiency gain get reinvested in more AI, creating a rebound effect?" (Jevons paradox)
- The history of efficiency tech is that it usually leads to more consumption, not less

AVOID THE TECHNO-OPTIMISM TRAP:
- AI cannot solve a political problem
- Better climate models don't change policy if there is no political will
- Precision agriculture doesn't help if farmers can't access or afford the technology

TRANSITION: "Which brings us to a harder question: who is responsible for the carbon footprint of your AI use?"
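If the Jevons-paradox push question needs grounding, toy arithmetic makes it concrete; every number below is invented for illustration:

```python
# Jevons-paradox toy arithmetic: efficiency per task doubles, but cheaper
# compute triples demand, so aggregate energy use still rises.
# All numbers are invented for illustration.
energy_per_task_before, tasks_before = 1.0, 100  # arbitrary units
energy_per_task_after, tasks_after = 0.5, 300    # 2x more efficient, 3x more demand

total_before = energy_per_task_before * tasks_before  # 100.0
total_after = energy_per_task_after * tasks_after     # 150.0

# Efficiency improved, yet aggregate consumption grew by 50%.
print(total_after > total_before)  # → True
```

The point for discussion: per-unit efficiency gains and aggregate consumption can move in opposite directions, so "AI made it more efficient" is not by itself evidence of lower total emissions.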
DISCUSSION (4 min)
Format: 1 min individual think, 3 min open discussion or chat waterfall

FRAMING:
- Show the Nature Conservancy calculator link
- Ask: "If you calculate your personal carbon footprint, what does that tell you about AI policy?"

FACILITATION TIPS:
- The "personal responsibility vs. systemic change" tension is the core debate in climate politics
- Students often split: some want individual action, some dismiss it as corporate deflection
- Push both sides:
  * To the "individual responsibility" camp: "If every person optimized perfectly, would it solve the structural problem?"
  * To the "systemic change" camp: "Does that mean individual choices are irrelevant? What would it take to make systemic change happen?"

CRITICAL CONTEXT:
- The concept of the "personal carbon footprint" was popularized by a BP advertising campaign in 2004
- This is not a conspiracy theory — it's documented PR strategy to shift responsibility from fossil fuel companies to individuals
- That doesn't make personal choices meaningless, but it should inform how we weight them

POSSIBLE STUDENT ANSWERS:
- "Companies should pay" → follow-up: "How do we make them? Through regulation, litigation, market pressure?"
- "Individuals should choose" → follow-up: "What percentage of emissions are actually in individual consumer control?"
- "It's all hypocritical anyway" → follow-up: "Is that a reason to do nothing, or a reason to fix the system?"

IF DISCUSSION STALLS:
- "Should ChatGPT tell you the carbon cost of every query, the way nutrition labels tell you calories? Why or why not?"

TRANSITION: "Let's look at what's happening at the policy level right now — because the governance picture just changed dramatically."
POLICY CONTEXT (4 min)
- This slide is intentionally factual, not partisan: these are documented policy actions
- The actonclimate.com tracker is a useful live resource — consider pulling it up on screen

KEY FRAMING:
- "These are policy choices with security consequences — that's our analytical lens today"
- Not: "this is bad" — but: "what are the security implications of withdrawing from international climate governance?"

SECURITY IMPLICATIONS TO DISCUSS:
1. Global coordination vacuum: US withdrawal from Paris weakens the enforcement architecture; other major emitters may adjust commitments
2. Geopolitical: China and the EU now compete to lead climate governance — what does that mean for US soft power?
3. Domestic: rollbacks on methane, EV, and renewable energy policy affect local communities differently; coal country vs. wind country
4. Military: the US military identifies climate change as a security threat (DoD Climate Risk Analysis 2021) — there's a civil-military tension here

AVOID:
- Don't make this a political debate about the current administration's merits
- Focus: "What are the security consequences of any government withdrawing from international environmental governance?"

TRACKER NOTE:
- The actonclimate tracker is updated regularly — preview it before class to reference 1-2 specific recent actions
- Presenting live data grounds the lecture in current events

TRANSITION: "So if the US pulls back, who fills the governance gap? And what role does AI play in that question?"
GOVERNANCE (4 min)
- Walk the table; emphasize the "Limitation" column — this is where the real analytical work happens
- The governance gap is the core policy challenge

KEY INSIGHT:
- This is the same coordination problem as climate governance generally — but faster
- AI energy demand is on an exponential curve; governance moves linearly
- By the time binding frameworks emerge, the infrastructure is locked in

ANALOGY:
- Compare to the internet: governance frameworks for privacy, liability, and harm took 25 years after commercialization
- Can we afford 25 years for AI-energy governance?

CONNECT TO PREVIOUS LECTURES:
- Week 3 (Ethics): who is accountable when AI systems create environmental harm?
- Week 6 (Political): environmental governance is politically contested; AI adds another layer

PUSH QUESTION:
- "If a company trains a large model on a coal-heavy grid to avoid EU carbon regulations, and that model is then used globally — where should carbon accountability sit?"

ESG CRITIQUE:
- ESG (Environmental, Social, Governance) reporting is currently voluntary and inconsistent
- Major AI companies' sustainability reports use different methodologies, making comparison impossible
- This is a transparency and accountability gap, not just a technical one

TRANSITION: "Let's close with the big picture question."
CASE STUDIES NOTE: The two concrete cases (AI and the Energy Grid; Climate Change as Infrastructure Risk) have been moved to the Canvas case study assignment for Week 7. Students research their own real-world example within one of those two theme areas and apply the analytical framework from this lecture.
---

## Mini-Exercise (Optional, 5 min): Environmental Security Audit

Pick one use case (or assign groups):
1) **AI Infrastructure Company** — assess the carbon exposure of a hyperscale data center
2) **Climate NGO** — design an AI-powered deforestation monitoring system
3) **City Government** — use AI to model climate risk for infrastructure planning

Deliverable (1 slide / whiteboard):
- What is the environmental security benefit or risk?
- Who bears the cost and who bears the benefit?
- What governance mechanism would you propose?
MINI-EXERCISE (5 min total: 3 min work + 2 min share) [OPTIONAL]
- SKIP THIS if running short on time — the closing discussion is more important

SETUP (30 sec):
- Assign groups or let students self-select
- Point to the three deliverable questions — paste in chat if online

USE CASE NOTES:
1) AI Infrastructure Company:
   - Expected: identify energy source, water usage, carbon accounting methodology
   - Expected governance: carbon tax, mandatory disclosure, grid coordination agreements
   - Push: "A carbon tax applied to data centers — would that change where companies build?"
2) Climate NGO:
   - Expected: satellite ML for forest monitoring, low-power edge inference nodes
   - Expected benefit: real-time illegal logging alerts; governance hook for enforcement
   - Push: "Who gets this data? Governments, communities, NGOs — and who acts on it?"
3) City Government:
   - Expected: flood risk mapping, infrastructure stress modeling, heat island analysis
   - Expected governance: integration with building codes, zoning, emergency planning
   - Push: "What happens when the city's AI risk model is wrong? Who is accountable?"

SHARE-OUTS (2 min):
- Ask each group: "What's your proposed governance mechanism, and its biggest limitation?"
- Highlight the common theme: governance gaps arise everywhere that technical capability outpaces institutional design

DEBRIEF: "Every solution creates a new governance question. That's the pattern."
TAKEAWAYS (2 min)
- Read each slowly; add one sentence of elaboration per point
- The "multiplier" framing is the core theoretical contribution of this lecture — make sure it lands

ONLINE TIP:
- Ask students to put in chat: "Which takeaway surprised you most?"
- Or: "Rate the AI-environment paradox on a scale of 1–5: how much does it change how you think about using AI tools?"

CLOSING THOUGHT: "We are building the most energy-intensive computing infrastructure in human history at the exact moment we need to decarbonize. That's not a coincidence — it's a governance challenge. And governance challenges have political solutions, not just technical ones."

OPTIONAL EXIT TICKET: Paste in chat: "One word: what should we prioritize — AI's environmental benefits, or its environmental costs? React: 👍 benefits outweigh costs, ❤️ costs outweigh benefits, 🤔 genuinely uncertain"
Read the distribution and close.
---
## If AI accelerates both climate solutions and climate emissions — who decides which wins?
CLOSING QUESTION
- Leave on screen, don't answer
- This is the question that sits at the intersection of technology governance, climate policy, and political will
- It connects back to every governance discussion this semester

ONLINE EXIT TICKET (paste in chat): "Type one actor — a government, a company, an institution, a movement — that you think is best positioned to answer that question."
Read 3-4 responses, note any patterns, thank students.

CONNECTION TO NEXT CLASS:
- Mar 5 guest lecture with Emerald (Emma) Dudzinski — connect to her expertise in advance if known
- Environmental security connects to community security, health security, and food security in coming weeks
========================================================================
POST-LECTURE NOTES
========================================================================
COMMON STUDENT QUESTIONS:
- "Aren't tech companies already going green?" — Address: the difference between renewable energy credits (RECs) and actual 24/7 carbon-free energy; the additionality problem
- "Isn't nuclear the answer?" — Address: low-carbon, yes, but it creates different security tradeoffs (waste, proliferation, cost); not a silver bullet
- "Should I feel guilty for using ChatGPT?" — Address: individual use is small; the policy question is at the infrastructure and regulatory level; don't internalize systemic problems as personal moral failures

CONNECTIONS TO OTHER COURSE CONTENT:
- Week 1 (Human Security): environmental security as one of seven UNDP dimensions; the threat multiplier concept
- Week 2 (AI/Cyber): AI infrastructure as a security-critical system
- Week 3 (Ethics): accountability for environmental harm from AI systems
- Week 5 (Economic): climate disruption as economic shock; energy costs as a factor in AI competitiveness
- Week 6 (Political): climate governance as a geopolitical contest; US withdrawal from international frameworks

ASSESSMENT CONNECTION:
- This lecture is relevant to any case study analysis involving AI deployment decisions, infrastructure planning, or policy evaluation
- The "who bears the cost / who bears the benefit" framework is directly applicable to the course project

RESOURCES FOR DEEPER EXPLORATION:
- IEA Data Centres and Data Transmission Networks (2023): https://www.iea.org/reports/data-centres-and-data-transmission-networks
- Strubell et al. (2019), "Energy and Policy Considerations for Deep Learning in NLP": https://arxiv.org/abs/1906.02243
- Goldman Sachs "Power Up" report on AI energy demand (2024)
- Electricity Maps live carbon intensity: https://app.electricitymaps.com/map
- Nature Conservancy carbon footprint calculator: https://www.nature.org/en-us/get-involved/how-to-help/carbon-footprint-calculator/
- Act on Climate Trump Tracker: https://www.actonclimate.com/trumptracker/
- Course reading: https://drive.google.com/file/d/1GlzBwdWZAeaDpW1s3fMdIqqYrcLER_mK/view
========================================================================