AI Bias & Algorithmic Justice: When Technology Perpetuates Inequality
🌅 Karakia & Cultural Opening
"Kia tōtika ai te taiao" - That the world may be just and balanced
Opening Protocol (5 minutes)
- Justice Intention: Setting commitment to understand how technology can either perpetuate or eliminate inequality
- Cultural Grounding: Acknowledging responsibility to ensure technology serves all people fairly
- Critical Lens: Preparing minds to see beyond surface claims of technological neutrality
🎯 Learning Objectives & Success Criteria
By the end of this lesson, ākonga will be able to:
- Identify: Recognize AI bias in real-world systems and understand its sources
- Analyze: Examine how algorithmic bias affects different communities disproportionately
- Evaluate: Assess AI systems using justice frameworks including Te Ao Māori perspectives
- Design: Propose solutions for creating more equitable AI systems
Success Criteria - Ākonga will demonstrate:
- ✓ Ability to identify bias in AI systems through case study analysis
- ✓ Understanding of how bias affects Māori and other marginalized communities
- ✓ Application of justice principles to evaluate algorithmic fairness
- ✓ Creative problem-solving for algorithmic justice solutions
Phase 1: AI Bias Investigation - Uncovering Hidden Inequality (25 minutes)
Bias Detective Challenge
20 minutes investigation + 5 minutes synthesis
Real-World AI Bias Cases (15 minutes):
Working in groups of 4, investigate one of these documented cases of AI bias:
Case 1: Facial Recognition & Race
The Problem: Commercial facial recognition systems showed error rates of up to 34.7% for darker-skinned women versus 0.8% for lighter-skinned men (MIT Gender Shades study)
Investigation Questions:
- Why might training data cause this bias?
- How does this affect surveillance and policing?
- What are the implications for Māori and Pasifika communities?
- How could this be fixed?
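The Gender Shades finding emerged from disaggregating accuracy by subgroup rather than reporting one overall number. The sketch below shows that auditing technique with invented evaluation data (the group labels echo the study; the counts are illustrative, not the study's actual figures):

```python
# Hypothetical audit sketch: a single overall accuracy figure can hide
# large per-group disparities, so we disaggregate by subgroup.
# All counts below are invented for illustration.

def error_rate(results):
    """Fraction of misclassified faces in a list of True/False outcomes."""
    return sum(1 for ok in results if not ok) / len(results)

# Simulated per-face classification outcomes (True = correct match)
evaluation = {
    "lighter-skinned men":  [True] * 99 + [False] * 1,
    "darker-skinned women": [True] * 65 + [False] * 35,
}

overall = [ok for results in evaluation.values() for ok in results]
print(f"overall: {error_rate(overall):.1%} error rate")  # looks acceptable
for group, results in evaluation.items():
    print(f"{group}: {error_rate(results):.1%} error rate")  # tells the real story
```

A system can report a respectable overall error rate while failing one group far more often, which is why disaggregated evaluation is a standard first step in a bias audit.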
Case 2: Hiring Algorithms & Gender
The Problem: Amazon's AI hiring tool systematically discriminated against women, downgrading resumes with words like "women's" (as in "women's chess club captain")
Investigation Questions:
- How did historical hiring patterns bias the AI?
- What careers or opportunities could be affected?
- How might this impact wāhine Māori in particular?
- What safeguards should exist?
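A minimal sketch of the mechanism behind this case: a text model trained on historical hiring decisions learns which words correlate with past hires and rejections. The resumes and labels below are invented for illustration; Amazon's actual system is not public.

```python
# Toy illustration of how historical hiring labels bias a word-based model.
# A word appearing mostly in historically rejected resumes (here "women's")
# acquires a negative weight, regardless of the candidate's actual merit.
from collections import Counter

hired = ["captain chess club", "software engineer python", "rugby captain"]
rejected = ["women's chess club captain", "women's coding society lead"]

def word_scores(hired, rejected):
    """Score each word: positive = associated with past hires, negative = rejections."""
    h = Counter(w for doc in hired for w in doc.split())
    r = Counter(w for doc in rejected for w in doc.split())
    return {w: h[w] / len(hired) - r[w] / len(rejected) for w in set(h) | set(r)}

scores = word_scores(hired, rejected)
print(scores["women's"])  # negative: the model learns to penalise the word
```

The model never "decides" to discriminate; it faithfully reproduces the pattern in its training labels, which is why biased historical data produces biased predictions.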
Case 3: Health AI & Indigenous Communities
The Problem: AI health tools trained primarily on Pākehā/European populations show reduced accuracy for Māori patients, potentially missing health conditions
Investigation Questions:
- Why are medical AI systems often biased against Indigenous people?
- How does this connect to historical medical racism?
- What health equity issues could this create?
- How could mātauranga Māori inform better health AI?
Case 4: Predictive Policing & Communities of Color
The Problem: Predictive policing AI uses historical arrest data, creating feedback loops that increase police presence in already over-policed communities
Investigation Questions:
- How do historical policing biases get amplified by AI?
- What are the impacts on Māori communities?
- How does this relate to justice system inequality?
- What would restorative justice-based AI look like?
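The feedback loop in this case can be simulated in a few lines. In the toy model below (all numbers invented), both suburbs have identical true offence rates, but patrols are allocated from recorded arrests, and offences are only recorded where patrols go, so the historical disparity never self-corrects:

```python
# Toy predictive-policing feedback loop. Both suburbs have the SAME true
# offence rate, but the biased arrest history perpetuates itself because
# patrols follow records, and records follow patrols.
TRUE_OFFENCES = {"suburb A": 50, "suburb B": 50}  # identical underlying rates
history = {"suburb A": 60, "suburb B": 40}        # biased historical records

def patrol_shares(history):
    """Allocate patrol effort in proportion to recorded arrests."""
    total = sum(history.values())
    return {s: n / total for s, n in history.items()}

for year in range(1, 6):
    shares = patrol_shares(history)
    # Offences are only recorded where patrols are present:
    recorded = {s: TRUE_OFFENCES[s] * shares[s] for s in history}
    history = {s: history[s] + recorded[s] for s in history}
    print(year, {s: round(v, 3) for s, v in patrol_shares(history).items()})
```

Year after year, suburb A keeps 60% of the patrols despite committing exactly half the offences: the system "confirms" its own bias with the data it generates.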
Bias Pattern Analysis (5 minutes):
Each group presents its case in one minute, then the class identifies common patterns:
- Where does bias come from in AI systems?
- Who is most likely to be harmed by AI bias?
- How does AI bias connect to existing social inequalities?
- Why might companies and institutions be slow to address these issues?
Phase 2: Algorithmic Justice Analysis - Te Ao Māori Frameworks for Fair AI (25 minutes)
Justice-Centered AI Evaluation Workshop
Te Ao Māori Values-Based AI Assessment (15 minutes):
Using the same groups, apply Te Ao Māori values to evaluate AI systems:
Mana Motuhake (Self-Determination)
Applied to AI: Does this AI system support or undermine people's ability to make decisions about their own lives?
- Who gets to control the AI and its decisions?
- Can people understand and challenge AI decisions affecting them?
- Does the AI respect people's autonomy and choice?
- How does it affect community self-determination?
Whanaungatanga (Collective Responsibility)
Applied to AI: How does this AI system affect collective wellbeing and shared responsibility?
- Does it strengthen or weaken community bonds?
- How are benefits and risks distributed across different groups?
- Does it encourage individual competition or collective flourishing?
- What are the system's responsibilities to society?
Kotahitanga (Unity & Solidarity)
Applied to AI: Does this AI system bring people together or divide them?
- Does it create or reduce social divisions?
- How does it handle cultural differences and diversity?
- Does it promote understanding between groups?
- What voices are centered vs marginalized in its design?
Tika (Justice & Rightness)
Applied to AI: Is this AI system fundamentally just and right in its impacts?
- Are its outcomes fair across different groups?
- Does it address or perpetuate historical injustices?
- How does it handle power imbalances?
- What would make this system more tika?
Comparative Justice Framework Analysis (10 minutes):
Groups also apply one additional justice framework to their AI case:
Distributive Justice
Key Question: How are benefits and harms distributed?
- Who benefits most/least from this AI system?
- Are resources allocated fairly?
- How are risks shared across different groups?
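One common way to put numbers on distributive fairness is to compare each group's rate of receiving the favourable outcome; the "four-fifths rule" of thumb from US employment law treats a ratio below 0.8 as a red flag. The decisions below are invented for illustration:

```python
# Sketch of a distributive-justice check: per-group selection rates and the
# disparate-impact ratio between them. All decisions are invented.
def selection_rate(decisions):
    """Fraction of a group that received the favourable outcome (1 = yes)."""
    return sum(decisions) / len(decisions)

outcomes = {
    "group A": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # e.g. loans approved
    "group B": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
}

rates = {g: selection_rate(d) for g, d in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")  # well below the 0.8 threshold
```

A single metric like this cannot settle whether a system is just, but it makes "who benefits most/least" a concrete, checkable question rather than a matter of impression.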
Procedural Justice
Key Question: Are the processes fair and transparent?
- Can people understand how AI decisions are made?
- Is there accountability for AI decisions?
- Can affected people participate in AI governance?
Recognition Justice
Key Question: Are all groups recognized and valued?
- Whose knowledge and experiences are valued in AI design?
- How does the AI handle cultural differences?
- What voices are missing from AI development?
Phase 3: Design Justice Solutions - Creating Equitable AI (20 minutes)
AI Justice Design Challenge
Solution Design Process (15 minutes):
Groups redesign their AI system to embody justice principles:
Step 1: Problem Reframing (3 minutes)
- What is the real problem this AI should solve?
- Whose needs should be centered in the solution?
- What would success look like for marginalized communities?
Step 2: Inclusive Design Principles (5 minutes)
- Community Participation: How would affected communities shape AI development?
- Cultural Responsiveness: How would the AI respect different cultural values and ways of knowing?
- Transparency: How would people understand and challenge AI decisions?
- Accountability: Who would be responsible for AI impacts and how?
Step 3: Justice Implementation (4 minutes)
- What safeguards would prevent bias?
- How would benefits be distributed fairly?
- What ongoing monitoring would ensure equity?
- How would the system address past harms?
Step 4: Cultural Integration (3 minutes)
- How would mātauranga Māori inform the AI's operation?
- What traditional values would guide its decision-making?
- How would it strengthen rather than threaten cultural practices?
Solution Presentations (5 minutes):
Each group presents their redesigned AI system in 1 minute, focusing on:
- The key change that makes their AI more just
- How it embodies Te Ao Māori values
- What makes it different from current systems
- Who would benefit most from these changes
🌅 Whakamutunga - Reflection & Closing
Algorithmic Justice Commitment (5 minutes)
Personal AI Ethics Reflection:
Students complete individual reflection:
- What is one example of AI bias that you now recognize affects your life or community?
- Which justice framework (Te Ao Māori or others) do you find most useful for evaluating AI? Why?
- What is one action you could take to promote more equitable AI in your community?
- How has understanding AI bias changed your perspective on technology and justice?
Closing Circle - Justice Wisdom Sharing:
Students share one insight about AI justice, followed by teacher reflection:
"Kia tōtika ai te taiao"
AI systems are not neutral - they reflect the values and biases of their creators and the data they're trained on. Understanding this gives us power to demand better systems that serve justice rather than perpetuate inequality. Your critical analysis helps build a more equitable digital future.
📊 Assessment & Next Steps
Formative Assessment - Today's Evidence:
- Bias Recognition: Accuracy in identifying and explaining AI bias in case studies
- Justice Analysis: Quality of applying Te Ao Māori and other justice frameworks
- Design Thinking: Creativity and feasibility of proposed AI justice solutions
- Critical Reflection: Depth of personal and systemic understanding of AI impacts
Preparation for Lesson 3:
- AI Audit: Find one AI system you use regularly and analyze it for potential bias
- Community Research: Investigate how AI might affect your local Māori/Indigenous community
- Solution Thinking: Research one organization working on AI ethics or algorithmic justice
🛠️ Teacher Resources & Adaptations
AI Bias Case Study Resources:
- Algorithmic Justice League: Real-world bias examples and research
- MIT Technology Review: Regular AI bias reporting and analysis
- AI Now Institute: Academic research on AI social impacts
- Māori Data Sovereignty Network: Indigenous perspectives on AI and data
Cultural Consultation Support:
- Local Kaumātua: Guidance on applying Te Ao Māori values to technology
- Te Mana Raraunga: Māori data sovereignty experts
- Indigenous Tech Networks: Connect with Indigenous technology practitioners
- Justice Frameworks: Adapt frameworks to local community contexts
Differentiation Strategies:
- Technical Levels: Adjust case study complexity for different tech familiarity
- Cultural Connections: Include bias examples relevant to diverse student backgrounds
- Justice Frameworks: Offer choice in which additional framework to apply
- Solution Complexity: Allow varying levels of technical detail in design solutions