🤖 AI in Education – Week 4

Equity and Ethics in AI

Big Idea:
AI systems can act as distorted mirrors that reflect and often magnify existing educational inequalities.

Essential Question:
How does AI reflect bias—and how can we design for equity?

🎯 Today’s Learning Goals

  • Identify sources of bias in AI systems
  • Analyze AI’s impact on educational equity
  • Apply ethical frameworks to real examples
  • Develop recommendations for more equitable AI

🔍 Opening Hook (10 min)

📺 Watch: "How algorithms become biased"

🗣️ Debrief:

  • What stood out?
  • How could this impact education?
  • Who should be responsible for fixing these issues?

🧠 Core Concepts

Sources of Bias

  • 📂 Data
  • ⚙️ Model Design
  • 🛠️ Implementation

📂 Source of Bias: Data

  • AI learns from historical data — if the past is biased, so is the model
  • Missing or underrepresented groups = inaccurate predictions
  • Labels may reflect human prejudice (e.g., test scores, behavior reports)
  • Data collection often lacks transparency or consent
  • Garbage in = garbage out (even with a great model)
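For a concrete feel, here is a toy sketch (hypothetical made-up data, not any real system) of the first bullet: a naive model trained only on a biased historical record simply turns the past disparity into future policy.

```python
from collections import Counter

# Hypothetical historical records: (group, admitted) pairs.
# Group "B" was historically admitted far less often than group "A".
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def train(data):
    """A naive 'model': predict each group's most common historical outcome."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 1, 'B': 0} — the past bias becomes the prediction rule
```

Nothing in the code is "prejudiced"; the model is faithfully summarizing a biased past, which is exactly the garbage-in, garbage-out point.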

⚙️ Source of Bias: Model Design

  • Algorithms prioritize some variables over others — who decides what matters?
  • Weighting, thresholds, and defaults affect outcomes
  • “Neutral” models can still amplify disparities
  • Lack of transparency (black-box models) makes it hard to spot bias
  • Design teams often don’t reflect the communities their tools affect
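The thresholds bullet can be sketched in a few lines (again with invented toy scores): a single "neutral" cutoff, applied to groups whose score distributions differ, produces very different pass rates.

```python
# Hypothetical screening scores for two groups (e.g., from a test that
# advantages one group). The cutoff itself is a design decision.
group_a = [72, 75, 80, 85, 90]
group_b = [60, 65, 70, 74, 88]
threshold = 75  # the designer's "neutral" default

def pass_rate(scores, cutoff):
    """Fraction of scores at or above the cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

print(pass_rate(group_a, threshold))  # 0.8 — 4 of 5 pass
print(pass_rate(group_b, threshold))  # 0.2 — 1 of 5 passes
```

The same rule, applied identically to everyone, still amplifies the underlying disparity; "who decides what matters" includes who decides where the threshold sits.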

🛠️ Source of Bias: Implementation

  • AI can function correctly in technical terms yet still cause harm in practice
  • Unequal rollout across schools (who gets the tool?)
  • Lack of teacher training or student input leads to misuse
  • Systems may be used in ways not originally intended
  • Ethical gaps in oversight, consent, and accountability

🧭 Four Corners Activity

👉 Who holds the most responsibility for fixing AI bias?

  • 🔧 Developers
  • 👩‍🏫 Educators
  • 🏛️ Policymakers
  • 👨‍👩‍👧‍👦 Students/Families

💬 Think-Pair-Share

🧠 Have you ever seen tech or data being used in a biased or unfair way in education?

  • Reflect individually
  • Share with a partner
  • Discuss as a group

🧪 Buzz Groups: Ethical Frameworks

Small group challenge:
📝 Apply an ethical framework (e.g., Microsoft, EU, FATML) to a mini AI-in-education scenario.

  • What does the framework emphasize?
  • What actions would it suggest?

☕ Break Time (10 min)

🧩 Applied Learning Project

Case Study Analysis (65 min)

Step 1: Analyze a Case

🗂️ Choose a real-world AI bias case:

  • Amazon hiring tool (📂 data)
  • Scotland facial recognition (🛠️ implementation)
  • Apple Card credit limits (⚙️ design)

🔍 Use your chosen ethical framework to assess:

  • Type of AI
  • Bias source(s)
  • Who was affected
  • How bias was discovered/addressed

Step 2: Stakeholder Role Play

👥 Take on a role:

  • Student
  • Teacher
  • Administrator
  • Parent
  • Developer

🎙️ From your role:

  • How are you affected?
  • What are your concerns?
  • What would you want changed?

Step 3: Policy & Redesign

🛠️ With your group, develop:

  • 1–2 policy recommendations
  • A redesign proposal
  • A fair implementation plan

📣 Be ready to share one key takeaway!

📋 Formative Assessment

Stakeholder Statements (10 min)

✍️ Write a short reflection (100–150 words) from your stakeholder role.

In your group, discuss:

  • Common concerns
  • Whose voice gets prioritized
  • How to center equity in decisions

🧘‍♀️ Closing Reflection

💬 What concerns you most about AI bias in education?
💡 What’s one action you can take in your role to support equity?

🔮 Next Week:

Personalization – Promise or Pitfall?

📅 Assignment:
Apply an ethical framework to an AI tool or policy in your own educational context