Update app.py
app.py
CHANGED
@@ -36,7 +36,7 @@ def shot(image, labels_text, model_name, hypothesis_template):
 iface = gr.Interface(shot,
                      inputs,
                      "label",
-                     examples=[["festival.jpg", "lantern, firecracker, couplet", "ViT/B-16", "a photo of a {}"],
+                     examples=[["festival.jpg", "lantern, firecracker, couplet", "ViT/B-16", "a photo of a {}"]],
                      # ["cat-dog-music.png", "音乐表演, 体育运动", "ViT/B-16", "a photo of a {}"],
                      # ["football-match.jpg", "梅西, C罗, 马奎尔", "ViT/B-16", "a photo of a {}"]],
                      description="""<p>Chinese CLIP is a contrastive-learning-based vision-language foundation model pretrained on large-scale Chinese data. For more information, please refer to the paper and official github. Also, Chinese CLIP has already been merged into Huggingface Transformers! <br><br>
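The hunk above fixes a parse error: in the old version, the brackets that close the `examples` list sit inside a commented-out line, so the list is never closed and the `description=` keyword that follows is a syntax error. A minimal sketch of why, using simplified stand-in strings for the real `gr.Interface(...)` call (names and literals here are illustrative, not the actual file; only the built-in `compile` is needed to show the parse failure):

```python
# Stand-in for the old app.py: the `]]` that should close the examples
# list is inside a comment, so the list stays open and the keyword
# argument after it cannot be parsed. (Hypothetical simplified snippet.)
broken = (
    'iface = f(shot,\n'
    '          examples=[["festival.jpg", "a photo of a {}"],\n'
    '          # ["cat-dog-music.png", "a photo of a {}"]],\n'
    '          description="demo")\n'
)
# The fix in the diff: move the closing `]]` onto the live line, after
# which the trailing comment is harmless.
fixed = broken.replace('"a photo of a {}"],\n', '"a photo of a {}"]],\n', 1)

def parses(src: str) -> bool:
    """Return True if `src` compiles as a Python module."""
    try:
        compile(src, "app.py", "exec")
        return True
    except SyntaxError:
        return False

print(parses(broken))  # False: `description=` appears inside an open list
print(parses(fixed))   # True
```

Note that commenting out entries at the end of a literal is a common source of this bug: the comment removes the closing delimiters along with the data, so the remaining live line must pick them up.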