Stochastic Parrots 🦜
Learning goals
After completing this assignment, students should be able to:
- Recount the basic facts surrounding Timnit Gebru’s departure from Google
- Identify the issues raised in Gebru’s FAccT ’21 paper
Content
- Bender et al. (2021)
- Metz, "Who Is Making Sure the A.I. Machines Aren't Racist?" (The New York Times)
- Dastin and Dave, "Google AI scientist Bengio resigns after colleagues' firings: email" (Reuters)
In-class activity
Questions to consider:
- How do different groups of people experience the (positive and/or negative) effects of the development of LLMs?
- How do LLMs cement certain language or ideas? Do LLM engineers have a responsibility to train their models on a robust set of ideas?
- What does it mean for an LLM to lack “communicative intent”? Are there use cases where it would be important for LLMs to understand the user’s input and their own output?
- What do Bender et al. (2021) mean by “coherence is [in] the eye of the beholder”?
- Who should be held responsible for an LLM’s output?
- How do Bender et al. (2021) propose researchers work towards mitigating the effects of increasingly large LLMs?
Divide into groups corresponding to the five sections of the paper:
- Environmental cost
- Unfathomable training data
- Down the garden path
- Stochastic parrots
- Paths forward
Each group summarizes the key points of its section.
Assignment
Respond to the following prompts on a single piece of paper.
In one paragraph, summarize the argument made in Bender et al. (2021). What are the authors’ primary concerns?
In one paragraph, hypothesize motivations for Google to fire Timnit Gebru.
References
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜." In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. https://doi.org/10.1145/3442188.3445922.