Scaling vision transformers to 22 billion parameters – Google AI Blog


Large Language Models (LLMs) like PaLM or GPT-3 showed that scaling transformers to hundreds of billions of parameters improves performance and unlocks emergent abilities. The biggest dense models for image understanding, however, have reached only 4 billion parameters, despite research indicating that promising multimodal models like PaLI continue to benefit from scaling vision models alongside their language counterparts. Motivated by this, and by the results from scaling LLMs, we decided to undertake the next step in the journey of scaling the Vision Transformer.

In “Scaling Vision Transformers to 22 Billion Parameters”, we introduce the biggest dense vision model, ViT-22B. It is 5.5x larger than the previous largest vision backbone, ViT-e, which has 4 billion parameters. To enable this scaling, ViT-22B incorporates ideas from scaling text models like PaLM, with improvements to both training stability (using QK normalization) and training efficiency (with a novel approach called asynchronous parallel linear operations). As a result of its modified architecture, efficient sharding recipe, and bespoke implementation, it could be trained on Cloud TPUs with high hardware utilization1. ViT-22B advances the state of the art on many vision tasks using frozen representations, or with full fine-tuning. Further, the model has also been successfully used in PaLM-e, which showed that a large model combining ViT-22B with a language model can significantly advance the state of the art in robotics tasks.


Our work builds on many advances from LLMs, such as PaLM and GPT-3. Compared to the standard Vision Transformer architecture, we use parallel layers, an approach in which the attention and MLP blocks are executed in parallel, instead of sequentially as in the standard Transformer. This approach was used in PaLM and reduced training time by 15%.

Secondly, ViT-22B omits the biases in the QKV projections, part of the self-attention mechanism, and in the LayerNorms, which increases utilization by 3%. The diagram below shows the modified transformer architecture used in ViT-22B:

The ViT-22B transformer encoder architecture uses parallel feed-forward layers, omits biases in the QKV and LayerNorm layers, and normalizes the Query and Key projections.
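A single-head sketch of this modified attention, with bias-free QKV projections and LayerNorm applied to the queries and keys, can look like the following. This is illustrative only; the actual model uses multi-head attention with learnable normalization scales.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def qk_norm_attention(x, Wq, Wk, Wv):
    """Single-head self-attention: bias-free projections, LayerNorm on Q and K.

    Normalizing the queries and keys bounds the magnitude of the attention
    logits, which is what stabilizes training at large scale.
    """
    q = layer_norm(x @ Wq)
    k = layer_norm(x @ Wk)
    v = x @ Wv  # no bias term in any projection
    logits = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(logits) @ v
```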

Models at this scale necessitate “sharding”: distributing the model parameters over different compute devices. Alongside this, we also shard the activations (the intermediate representations of an input). Even something as simple as a matrix multiplication necessitates extra care, as both the input and the matrix itself are distributed across devices. We develop an approach called asynchronous parallel linear operations, whereby communication of activations and weights between devices occurs at the same time as computations in the matrix multiply unit (the part of the TPU holding the vast majority of the computational capacity). This asynchronous approach minimizes the time spent waiting on incoming communication, thus increasing device efficiency. The animation below shows an example computation and communication pattern for a matrix multiplication.

Asynchronous parallel linear operation. The goal is to compute the matrix multiplication y = Ax, but both the matrix A and the activation x are distributed across different devices. Here we illustrate how this can be done with overlapping communication and computation across devices. The matrix A is column-sharded across the devices, each holding a contiguous slice, with each block represented as Aij. More details are in the paper.
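Setting the asynchrony aside, the underlying decomposition is simple: with A column-sharded into blocks A_j and x split accordingly, y = Ax = Σ_j A_j x_j, so each device can form a partial product while the next activation chunk is still in flight. A single-host NumPy sketch of just the math:

```python
import numpy as np

def sharded_matvec(A_blocks, x_chunks):
    """Compute y = A @ x from column shards: y = sum_j A_blocks[j] @ x_chunks[j].

    On a real TPU mesh each term lives on a different device, and each partial
    product is computed while the next activation chunk is being communicated.
    Here we simulate only the arithmetic on one host.
    """
    y = np.zeros(A_blocks[0].shape[0])
    for A_j, x_j in zip(A_blocks, x_chunks):
        y += A_j @ x_j  # in ViT-22B this step overlaps with communication
    return y

# Shard A column-wise across 4 "devices" and compare against the dense result.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 16))
x = rng.normal(size=16)
A_blocks = np.split(A, 4, axis=1)  # contiguous column slices, one per device
x_chunks = np.split(x, 4)
```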

At first, the new model scale resulted in severe training instabilities. The normalization approach of Gilmer et al. (2023, upcoming) resolved these issues, enabling smooth and stable model training; this is illustrated below with example training progressions.

The effect of normalizing the queries and keys (QK normalization) in the self-attention layer on the training dynamics. Without QK normalization (red), the gradients become unstable and the training loss diverges.


Here we highlight some results for ViT-22B. Note that in the paper we also explore several other problem domains, like video classification, depth estimation, and semantic segmentation.

To illustrate the richness of the learned representation, we train a text model to produce representations that align text and image representations (using LiT-tuning). Below we show several results for out-of-distribution images generated by Parti and Imagen:

Examples of image+text understanding for ViT-22B paired with a text model. The graph shows the normalized probability distribution for each description of an image.
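In contrastively aligned models of this kind (e.g., LiT), the normalized probability distribution over candidate descriptions is typically a softmax over temperature-scaled cosine similarities between the image embedding and each description's text embedding. A sketch under that assumption, with an illustrative temperature value:

```python
import numpy as np

def description_probs(image_emb, text_embs, temperature=0.01):
    """Softmax over cosine similarities between one image and N descriptions."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = (txt @ img) / temperature  # cosine similarity, sharpened
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()
```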

Human object recognition alignment

To find out how aligned ViT-22B classification decisions are with human classification decisions, we evaluated ViT-22B fine-tuned at different resolutions on out-of-distribution (OOD) datasets for which human comparison data is available via the model-vs-human toolbox. This toolbox measures three key metrics: How well do models cope with distortions (accuracy)? How different are human and model accuracies (accuracy difference)? Finally, how similar are human and model error patterns (error consistency)? While not all fine-tuning resolutions perform equally well, ViT-22B variants are state of the art for all three metrics. Furthermore, the ViT-22B models also have the highest shape bias ever recorded in vision models. This means that they mostly use object shape, rather than object texture, to inform classification decisions, a strategy known from human perception (which has a shape bias of 96%). Standard models (e.g., ResNet-50, which has a ~20–30% shape bias) typically classify images like the cat with elephant texture below according to the texture (elephant); models with a high shape bias tend to focus on the shape instead (cat). While there are still many important differences between human and model perception, ViT-22B shows increased similarities to human visual object recognition.

Cat or elephant? Car or clock? Bird or bicycle? Example images with the shape of one object and the texture of another object, used to measure shape/texture bias.
Shape bias evaluation (higher = more shape-biased). Many vision models have a low shape / high texture bias, whereas ViT-22B fine-tuned on ImageNet (red, green, blue; trained on 4B images as indicated by brackets after the model names, unless trained on ImageNet only) has the highest shape bias recorded in an ML model to date, bringing it closer to a human-like shape bias.
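The shape bias number can be computed from cue-conflict decisions as the fraction of trials decided by shape among those decided by either cue. A small sketch, following the model-vs-human convention of excluding trials where the model picked neither the shape nor the texture class:

```python
def shape_bias(decisions):
    """Fraction of cue-conflict trials decided by shape.

    `decisions` is a list of 'shape' / 'texture' / 'other' labels, one per
    cue-conflict image (e.g., a cat shape rendered with elephant texture).
    Trials decided by neither cue ('other') are excluded from the ratio.
    """
    shape = sum(d == "shape" for d in decisions)
    texture = sum(d == "texture" for d in decisions)
    return shape / (shape + texture)
```

Under this metric, humans score about 0.96, ResNet-50 about 0.2–0.3, and ViT-22B variants score higher than any previously measured model.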

Out-of-distribution performance

Measuring performance on OOD datasets helps assess generalization. In this experiment we construct label-maps (mappings of labels between datasets) from JFT to ImageNet, and also from ImageNet to different out-of-distribution datasets like ObjectNet (results after pre-training on this data are shown in the left curve below). Then the models are fully fine-tuned on ImageNet.

We observe that scaling Vision Transformers increases OOD performance: even though ImageNet accuracy saturates, we see a significant increase on ObjectNet from ViT-e to ViT-22B (shown by the three orange dots in the upper right below).

Although ImageNet accuracy saturates, we see a big enhance in efficiency on ObjectNet from ViT-e/14 to ViT-22B.

Linear probe

Linear probing is a technique where a single linear layer is trained on top of a frozen model. Compared to full fine-tuning, this is much cheaper to train and easier to set up. We observed that the linear probe performance of ViT-22B approaches that of state-of-the-art full fine-tuning of smaller models using high-resolution images (training at higher resolution is generally far more expensive, but for many tasks it yields better results). Below are results for a linear probe trained on the ImageNet dataset and evaluated on the ImageNet validation dataset and other OOD ImageNet datasets.
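A minimal sketch of a linear probe: a single softmax classifier trained with gradient descent on frozen backbone features. The hyperparameters here are illustrative, not those used in the paper.

```python
import numpy as np

def train_linear_probe(features, labels, num_classes, lr=0.5, steps=300):
    """Train one linear layer (softmax + cross-entropy) on frozen features.

    Only the probe weights W are updated; the backbone that produced
    `features` stays frozen, which is what makes this setup cheap.
    """
    n, d = features.shape
    W = np.zeros((d, num_classes))
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        logits = features @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = features.T @ (probs - onehot) / n
        W -= lr * grad
    return W
```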


Distillation

The knowledge of the bigger model can be transferred to a smaller model using distillation. This is helpful, as big models are slower and more expensive to use. We found that ViT-22B knowledge can be transferred to smaller models like ViT-B/16 and ViT-L/16, achieving a new state of the art on ImageNet for those model sizes.
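A common formulation of distillation, and a reasonable sketch of this transfer (the exact recipe in the paper may differ), is minimizing the KL divergence between temperature-softened teacher and student output distributions:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The teacher (e.g., ViT-22B) provides soft targets; the student
    (e.g., ViT-B/16) is trained to match them. The T*T factor keeps
    gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```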

Fairness and bias

ML models can be susceptible to unintended unfair biases, such as picking up spurious correlations (measured using demographic parity) or having performance gaps across subgroups. We show that scaling up the model size helps in mitigating such issues.

First, scale offers a more favorable tradeoff frontier: performance improves with scale even when the model is post-processed after training to control its level of demographic parity below a prescribed, tolerable level. Importantly, this holds not only when performance is measured in terms of accuracy, but also for other metrics, such as calibration, which is a statistical measure of the truthfulness of the model's estimated probabilities. Second, classification of all subgroups tends to improve with scale, as demonstrated below. Third, ViT-22B reduces the performance gap across subgroups.
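The demographic parity referenced here can be measured as the difference in positive-prediction rates between two subgroups; a minimal sketch:

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two subgroups.

    `predictions` are binary model outputs (0/1); `groups` assigns each
    example to one of exactly two subgroups. A gap of 0 means the model
    predicts the positive class at the same rate for both subgroups.
    """
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return abs(rates[0] - rates[1])
```

Post-processing the model to keep this gap below a tolerable threshold is the kind of control under which the scale benefit described above still holds.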

Top: Accuracy for each subgroup in CelebA prior to debiasing. Bottom: The y-axis shows the absolute difference in performance across the two specific subgroups highlighted in this example: females and males. ViT-22B has a small gap in performance compared to smaller ViT architectures.


We have presented ViT-22B, currently the largest vision transformer model, at 22 billion parameters. With small but critical changes to the original architecture, we achieved excellent hardware utilization and training stability, yielding a model that advances the state of the art on several benchmarks. Great performance can be achieved using the frozen model to produce embeddings and then training thin layers on top. Our evaluations further show that ViT-22B shows increased similarities to human visual perception regarding shape and texture bias, and offers benefits in fairness and robustness when compared to existing models.


This is joint work of Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin Fathy Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetić, Dustin Tran, Thomas Kipf, Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, and Neil Houlsby.

We would like to thank Jasper Uijlings, Jeremy Cohen, Arushi Goel, Radu Soricut, Xingyi Zhou, Lluis Castrejon, Adam Paszke, Joelle Barral, Federico Lebron, Blake Hechtman, and Peter Hawkins. Their expertise and unwavering support played a crucial role in the completion of this paper. We also acknowledge the collaboration and dedication of the talented researchers and engineers at Google Research.

1Note: ViT-22B achieves 54.9% model FLOPs utilization (MFU), while PaLM reported 46.2% MFU and we measured 44.0% MFU for ViT-e on the same hardware.

