Longterm Wiki

Vision-language models trained on internet data

paper

Authors

Alec Radford·Jong Wook Kim·Chris Hallacy·Aditya Ramesh·Gabriel Goh·Sandhini Agarwal·Girish Sastry·Amanda Askell·Pamela Mishkin·Jack Clark·Gretchen Krueger·Ilya Sutskever

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Foundational research on vision-language models trained on internet data, relevant to AI safety concerns about large-scale model training, data quality, and alignment with human intentions.

Paper Details

Citations
45,741
9,144 influential
Year
2021

Metadata

arXiv preprint · primary source

Abstract

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
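The zero-shot transfer the abstract describes works by scoring an image embedding against one caption embedding per candidate class and taking the best match. A minimal sketch of that idea in NumPy, where `embed_text` is a hypothetical stand-in for CLIP's trained text encoder (not the released API):

```python
import numpy as np

def zero_shot_classify(image_emb, class_names, embed_text):
    """Pick the class whose caption embedding is most similar to the image.

    `embed_text` is a hypothetical stand-in for a trained text encoder that
    maps a string to a d-dimensional vector. One caption is built per class
    (e.g. "a photo of a dog"), and the cosine-similarity argmax is returned.
    """
    prompts = [f"a photo of a {name}" for name in class_names]
    text_emb = np.stack([embed_text(p) for p in prompts])
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb)
    scores = text_emb @ image_emb  # cosine similarity, one score per class
    return class_names[int(np.argmax(scores))]
```

Because the classifier is assembled from text at inference time, no dataset-specific training is needed: swapping the class-name list re-targets the model to a new task.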

Summary

This paper introduces CLIP (Contrastive Language-Image Pre-training), a method for learning visual representations by training on 400 million image-text pairs from the internet using a simple objective of matching images with their captions. The approach enables zero-shot transfer to downstream vision tasks by leveraging natural language descriptions, eliminating the need for task-specific labeled data. The model achieves competitive performance across 30+ computer vision benchmarks, including matching ResNet-50's ImageNet accuracy without using any of its 1.28 million training examples, demonstrating that internet-scale image-text data provides effective supervision for learning generalizable visual concepts.
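The caption-matching objective the summary mentions can be sketched in plain NumPy. This is a simplified illustration, not the paper's implementation: `temperature` is a fixed constant here, whereas CLIP learns it, and the real model computes embeddings with trained image and text encoders rather than taking them as inputs.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    image_emb, text_emb: (n, d) arrays where row i of each forms a true
    (image, caption) pair. Both sides are L2-normalized, all n*n pairwise
    similarities are scored, and cross-entropy pushes the diagonal (the
    true pairs) to dominate each row and each column.
    """
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (n, n) similarity matrix
    labels = np.arange(len(logits))                # pair i matches caption i

    def cross_entropy(logits, labels):
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Average the image-to-text and text-to-image directions.
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2
```

The symmetry matters: each image must pick out its caption among all captions in the batch, and each caption must pick out its image, which is what makes the learned embedding space usable in both directions at inference time.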

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Value Lock-in | Risk | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# Learning Transferable Visual Models From Natural Language Supervision

Alec Radford
Jong Wook Kim
Chris Hallacy
Aditya Ramesh
Gabriel Goh
Sandhini Agarwal
Girish Sastry
Amanda Askell
Pamela Mishkin
Jack Clark
Gretchen Krueger
Ilya Sutskever

###### Abstract

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on.
We release our code and pre-trained model weights at [https://github.com/OpenAI/CLIP](https://github.com/OpenAI/CLIP "").

## 1 Introduction and Motivating Work

Pre-training methods which learn directly from raw text have revolutionized NLP over the last few years (Dai & Le, [2015](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib31 ""); Peters et al., [2018](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib150 ""); Howard & Ruder, [2018](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib79 ""); Radford et al., [2018](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib153 ""); Devlin et al., [2018](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib36 ""); Raffel et al., [2019](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib155 "")). Task-agnostic objectives such as autoregressive and masked language modeling have scaled across many orders of magnitude in compute, model capacity, and data, steadily improving capabilities. The development of “text-to-text” as a standardized input-output interface (McCann et al., [2018](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib124 ""); Radford et al., [2019](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib154 ""); Raffel et al., [2019](https://ar5iv.labs.arxiv.org/html/2103.00020#bib.bib155 "")) has enabled task-agnostic architectures to zero-shot transfer to downstream d

... (truncated, 98 KB total)
Resource ID: 3d7d444bcdb3d33d | Stable ID: YjJkOTY1Yz