[2206.04615] Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
A landmark collaborative benchmarking effort establishing BIG-bench; highly relevant to AI safety because it documents emergent, unpredictable capabilities at scale, which complicate capability forecasting and risk assessment.
Paper Details
Metadata
Abstract
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 450 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Summary
Introduces BIG-bench, a collaborative benchmark of 204 diverse tasks designed to probe language model capabilities beyond standard benchmarks, including tasks believed to be beyond current model abilities. The paper evaluates models across scales and finds that performance is often unpredictable, with some tasks showing discontinuous 'emergent' improvements at certain model sizes, while others remain flat regardless of scale.
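Concretely, most BIG-bench tasks are distributed as JSON files whose `examples` pair an `input` string with a `target` or per-choice `target_scores`. Below is a minimal sketch of scoring such a multiple-choice task, assuming a hypothetical `model_log_prob` backend (a placeholder, not an API from the paper or the BIG-bench repository):

```python
import json

def model_log_prob(prompt: str, continuation: str) -> float:
    """Hypothetical placeholder: log-probability the model assigns to
    `continuation` given `prompt`. Plug in any LM scoring backend."""
    raise NotImplementedError

def score_json_task(path: str) -> float:
    """Accuracy on a BIG-bench-style multiple-choice JSON task: the
    prediction is the choice with the highest model log-probability."""
    with open(path) as f:
        task = json.load(f)
    correct = 0
    for ex in task["examples"]:
        choices = ex["target_scores"]  # maps choice text -> gold score
        pred = max(choices, key=lambda c: model_log_prob(ex["input"], c))
        # A prediction counts as correct when it carries the highest gold score.
        correct += choices[pred] == max(choices.values())
    return correct / len(task["examples"])
```

BIG-bench also contains programmatic tasks that interact with the model through an API rather than a fixed example list, so a sketch like this covers only the JSON task format.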
Key Points
- BIG-bench comprises 204 tasks contributed by over 450 researchers, covering reasoning, knowledge, creativity, and social understanding to stress-test LLMs
- Documents 'emergent' capabilities: behaviors that appear suddenly at certain model scales rather than improving gradually, raising concerns for capability forecasting (a toy illustration follows this list)
- Finds that larger models generally outperform smaller ones, but with high variance; some tasks show inverse scaling, where larger models perform worse
- Introduces BIG-bench Lite, a 24-task subset that gives a small, canonical measure of performance at far lower evaluation cost than the full benchmark
- Highlights limitations of current evaluations and provides tools for extrapolating future model capabilities from scaling trends
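The paper distinguishes tasks whose scores climb smoothly with scale from tasks that jump at a critical size. As a toy illustration of that distinction (a heuristic of our own, not the paper's linearity/breakthroughness analysis), one can compare the largest single-step gain on a scaling curve against its overall slope in log-parameter space:

```python
import math

def breakthrough_score(params: list[float], scores: list[float]) -> float:
    """Ratio of the steepest single-step gain to the overall slope of
    score vs. log10(parameter count). Values near 1 suggest smooth
    scaling; large values suggest a breakthrough-like jump.
    A toy heuristic, not the metric used in the BIG-bench paper."""
    x = [math.log10(p) for p in params]
    step_slopes = [
        (scores[i + 1] - scores[i]) / (x[i + 1] - x[i])
        for i in range(len(x) - 1)
    ]
    total_slope = (scores[-1] - scores[0]) / (x[-1] - x[0])
    if total_slope <= 0:
        return 0.0  # flat or inverse scaling: no breakthrough to flag
    return max(step_slopes) / total_slope

# Example: hypothetical scores for models from 10M to 100B parameters.
smooth = breakthrough_score([1e7, 1e8, 1e9, 1e10, 1e11],
                            [0.20, 0.30, 0.40, 0.50, 0.60])  # ~= 1.0
jumpy = breakthrough_score([1e7, 1e8, 1e9, 1e10, 1e11],
                           [0.20, 0.21, 0.22, 0.60, 0.62])   # >> 1.0
```

A curve like `jumpy` is exactly the kind that makes extrapolation from small-model trends unreliable, which is the forecasting concern the paper raises.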
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Epistemic Virtue Evals | Approach | 45.0 |
| Emergent Capabilities | Risk | 61.0 |
Cached Content Preview
# Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Alphabetic author list:
Aarohi Srivastava,
Abhinav Rastogi,
Abhishek Rao,
Abu Awal Md Shoeb,
Abubakar Abid,
Adam Fisch,
Adam R. Brown,
Adam Santoro,
Aditya Gupta,
Adrià Garriga-Alonso,
Agnieszka Kluska,
Aitor Lewkowycz,
Akshat Agarwal,
Alethea Power,
Alex Ray,
Alex Warstadt,
Alexander W. Kocurek,
Ali Safaya,
Ali Tazarv,
Alice Xiang,
Alicia Parrish,
Allen Nie,
Aman Hussain,
Amanda Askell,
Amanda Dsouza,
Ambrose Slone,
Ameet Rahane,
Anantharaman S. Iyer,
Anders Andreassen,
Andrea Madotto,
Andrea Santilli,
Andreas Stuhlmüller,
Andrew Dai,
Andrew La,
Andrew Lampinen,
Andy Zou,
Angela Jiang,
Angelica Chen,
Anh Vuong,
Animesh Gupta,
Anna Gottardi,
Antonio Norelli,
Anu Venkatesh,
Arash Gholamidavoodi,
Arfa Tabassum,
Arul Menezes,
Arun Kirubarajan,
Asher Mullokandov,
Ashish Sabharwal,
Austin Herrick,
Avia Efrat,
Aykut Erdem,
Ayla Karakaş,
B. Ryan Roberts,
Bao Sheng Loe,
Barret Zoph,
Bartłomiej Bojanowski,
Batuhan Özyurt,
Behnam Hedayatnia,
Behnam Neyshabur,
Benjamin Inden,
Benno Stein,
Berk Ekmekci,
Bill Yuchen Lin,
Blake Howald,
Bryan Orinion,
Cameron Diao,
Cameron Dour,
Catherine Stinson,
Cedrick Argueta,
César Ferri Ramírez,
Chandan Singh,
Charles Rathkopf,
Chenlin Meng,
Chitta Baral,
Chiyu Wu,
Chris Callison-Burch,
Chris Waites,
Christian Voigt,
Christopher D. Manning,
Christopher Potts,
Cindy Ramirez,
Clara E. Rivera,
Clemencia Siro,
Colin Raffel,
Courtney Ashcraft,
Cristina Garbacea,
Damien Sileo,
Dan Garrette,
Dan Hendrycks,
Dan Kilman,
Dan Roth,
Daniel Freeman,
Daniel Khashabi,
Daniel Levy,
Daniel Moseguí González,
Danielle Perszyk,
Danny Hernandez,
Danqi Chen,
Daphne Ippolito,
Dar Gilboa,
David Dohan,
David Drakard,
David Jurgens,
Debajyoti Datta,
Deep Ganguli,
Denis Emelin,
Denis Kleyko,
Deniz Yuret,
Derek Chen,
Derek Tam,
Dieuwke Hupkes,
Diganta Misra,
Dilyar Buzan,
Dimitri Coelho Mollo,
Diyi Yang,
Dong-Ho Lee,
Dylan Schrader,
Ekaterina Shutova,
Ekin Dogus Cubuk,
Elad Segal,
Eleanor Hagerman,
Elizabeth Barnes,
Elizabeth Donoway,
Ellie Pavlick,
Emanuele Rodola,
Emma Lam,
Eric Chu,
Eric Tang,
Erkut Erdem,
Ernie Chang,
Ethan A. Chi,
Ethan Dyer,
Ethan Jerzak,
Ethan Kim,
Eunice Engefu Manyasi,
Evgenii Zheltonozhskii,
Fanyue Xia,
Fatemeh Siar,
Fernando Martínez-Plumed,
Francesca Happé,
Francois Chollet,
Frieda Rong,
Gaurav Mishra,
Genta Indra Winata,
Gerard de Melo,
Germán Kruszewski,
Giambattista Parascandolo,
Giorgio Mariani,
Gloria Wang,
Gonzalo Jaimovitch-López,
Gregor Betz,
Guy Gur-Ari,
Hana Galijasevic,
Hannah Kim,
Hannah Rashkin,
Hannaneh Hajishirzi,
Harsh Mehta,
Hayden Bogar,
Henry Shevlin,
Hinrich Schütze,
Hiromu Yakura,
Hongming Zhang,
Hugh Mee Wong,
Ian Ng,
Isaac Noble,
Jaap Jumelet,
Jack Geissinger,
Jackson Kernion,
Jacob Hilton,
Jaehoon Lee,
Jaime Fernández Fisac,
James B. Simon,
James Koppel,
James Zheng,
James Zou,
Jan Kocoń,
Jana Thompson,
Janelle Wingfield,
Jared Kaplan,
Jarema Radom,
Jascha Sohl-Dickstein,
Jason Phang,
Jason Wei,
Jason
... (truncated, 98 KB total)