Abstract: In the rapidly evolving field of vision-language models (VLMs), contrastive language-image pre-training (CLIP) has made significant strides, becoming a foundation for various downstream tasks.