About the talk: ProText (Prompt Learning with Text Only Supervision) fine-tunes CLIP by leveraging contextual knowledge derived from Large Language Models (LLMs), without relying on any visual samples. ProText exhibits strong transferability to unseen datasets and classes, effectively overcoming the transferability limitations of LLM-based prompt ensembling methods. Through text-only training, ProText improves over previous prompt-ensembling and image-supervised methods in challenging cross-dataset transfer settings.

We have open-sourced our checkpoints and source code. ProText offers a drop-in replacement for CLIP's text transformer, and we look forward to its impact on applications beyond those tested in the manuscript.

abs: https://arxiv.org/abs/2401.02418
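To illustrate what "drop-in replacement for CLIP's text transformer" means in practice, here is a minimal inference sketch in the style of CoOp-like prompt learning: learned context vectors are spliced into CLIP's token embeddings before the text transformer runs. The checkpoint path `protext_ctx.pt`, the context tensor layout, and the placeholder-token trick are assumptions for illustration; consult the released ProText code for the actual loading interface.

```python
import torch
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/16", device=device)

# Hypothetical checkpoint of learned context vectors with shape
# (n_ctx, d_model), e.g. (4, 512); path and layout are assumptions.
ctx = torch.load("protext_ctx.pt", map_location=device)
n_ctx = ctx.shape[0]

classnames = ["golden retriever", "tabby cat"]
# Tokenize prompts with "X" placeholders where the learned context goes.
prompts = [" ".join(["X"] * n_ctx) + " " + name + "." for name in classnames]
tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    emb = model.token_embedding(tokens).type(model.dtype)
    # Overwrite the placeholder embeddings (positions 1..n_ctx, after SOS)
    # with the learned context vectors.
    emb[:, 1 : 1 + n_ctx, :] = ctx.to(model.dtype)
    # Standard CLIP text-encoder forward pass from embeddings onward.
    x = emb + model.positional_embedding.type(model.dtype)
    x = model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)  # LND <-> NLD
    x = model.ln_final(x).type(model.dtype)
    # Pool at the EOT token (highest token id), as CLIP does.
    text_features = x[torch.arange(x.shape[0]), tokens.argmax(dim=-1)]
    text_features = text_features @ model.text_projection
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
```

Because only the text-side context is replaced, these features slot directly into any existing CLIP zero-shot pipeline (cosine similarity against image features), which is what makes the replacement "drop-in".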