Zhong committed: "Update README.md"
Even listener-based scores (REC) fail to consistently match human preferences, …
| Model     | Instr. | BLEU-1    | BLEU-4   | ROUGE-1   | ROUGE-L   | METEOR    | CIDEr     | SPICE     | BERT      | CLIP      | REC       | Human     | Irrel%    |
| --------- | ------ | --------- | -------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |
| LLaVA-7B  | Dft.   | 13.27     | 1.60     | 18.09     | 16.30     | 19.29     | 2.10      | 10.50     | 85.51     | 79.02     | 32.41     | 39.46     | 87.30     |
|           | Brf.   | 28.74     | 6.05     | **36.46** | 35.50     | 19.15     | 10.80     | 24.59     | 89.02     | 70.72     | 25.51     | 30.57     | 41.95     |
| LLaVA-13B | Dft.   | 8.17      | 1.07     | 11.98     | 10.94     | 16.89     | 0.77      | 7.92      | 84.61     | 79.85     | 30.13     | 46.40     | 91.85     |
|           | Brf.   | 28.96     | 5.81     | 36.44     | **35.64** | 20.13     | 8.14      | 21.63     | 88.42     | 72.99     | 28.92     | 32.53     | 49.65     |
| LLaVA-34B | Dft.   | 6.29      | 0.78     | 9.82      | 9.11      | 16.15     | 0.07      | 7.61      | 84.39     | 79.86     | 33.42     | 46.53     | 92.90     |
|           | Brf.   | 28.55     | 6.38     | 32.99     | 31.67     | 20.48     | 9.60      | 16.50     | 88.50     | 74.95     | 35.24     | 36.77     | 56.11     |
| XComposer | Dft.   | 5.25      | 0.65     | 8.38      | 7.81      | 14.58     | 3.10      | 6.37      | 84.11     | 79.86     | 38.06     | 52.19     | 92.81     |
|           | Brf.   | 13.59     | 2.17     | 17.77     | 16.69     | 19.95     | 5.52      | 10.63     | 85.52     | 79.66     | 38.47     | 51.65     | 80.36     |
| MiniCPM-V | Dft.   | 6.38      | 0.67     | 9.86      | 8.78      | 15.28     | 0.05      | 6.30      | 84.29     | 80.38     | 37.93     | 45.12     | 92.97     |
|           | Brf.   | 16.03     | 3.15     | 19.56     | 18.19     | 18.77     | 6.36      | 11.16     | 86.29     | 78.55     | 35.04     | 45.79     | 72.87     |
| GLaMM     | Dft.   | 15.01     | 3.32     | 16.69     | 16.29     | 11.49     | 9.08      | 3.90      | 86.42     | 58.26     | 5.78      | 3.84      | 74.68     |
|           | Brf.   | 18.46     | 4.45     | 20.92     | 20.46     | 14.18     | 10.48     | 4.44      | 86.65     | 58.60     | 5.72      | 4.85      | 70.52     |
| CogVLM    | Dft.   | 31.13     | **8.70** | 33.89     | 32.32     | 23.50     | **41.62** | 24.09     | 89.78     | 66.54     | 33.29     | 26.67     | **26.39** |
|           | Brf.   | **31.39** | 8.69     | 34.70     | 32.94     | **24.87** | 41.41     | **24.74** | **90.00** | 69.15     | 38.80     | 33.53     | 29.88     |
| GPT-4o    | Dft.   | 7.47      | 0.85     | 11.61     | 10.43     | 17.39     | 0.03      | 7.21      | 84.57     | **80.81** | **41.29** | **59.80** | 89.81     |
|           | Brf.   | 25.30     | 5.78     | 28.76     | 27.36     | 19.02     | 8.17      | 15.31     | 88.11     | 76.58     | 40.08     | 51.72     | 52.75     |
| Human     | Spk.   | 66.18     | 22.58    | 70.15     | 66.45     | 48.28     | 112.04    | 42.35     | 93.89     | 71.60     | 64.56     | 92.20     | 9.15      |
|           | Wrt.   | -         | -        | -         | -         | -         | -         | -         | -         | 70.43     | 63.69     | 89.29     | 7.29      |
Model performance under different **Instr.** (Instruction) settings: the **Dft.** (Default) prompt and the **Brf.** (Brief) prompt. All model predictions are evaluated against the Human **Wrt.** (Written) results as the reference texts. We also score the Human **Spk.** (Spoken) data against the human-written references. **Irrel%** is the percentage of irrelevant words in the referring expressions of the examples evaluated as successful.
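
As context for the reference-based columns, here is a minimal sketch of how the text-overlap metrics could be reproduced with common open-source packages (`nltk`, `rouge-score`, `bert-score`). This is not the benchmark's actual evaluation code: the example strings are hypothetical, and the ×100 scaling is inferred from how the table reports scores.

```python
# A sketch of the reference-based text metrics, assuming the `nltk`,
# `rouge-score`, and `bert-score` packages. The example strings are
# hypothetical; this is not the benchmark's actual evaluation code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from bert_score import score as bert_score

prediction = "the man in the red shirt on the left"            # model output
reference = "the man wearing a red shirt standing on the left" # Human Wrt.

# BLEU-1 / BLEU-4: n-gram precision against the human-written reference.
smooth = SmoothingFunction().method1
bleu1 = sentence_bleu([reference.split()], prediction.split(),
                      weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu4 = sentence_bleu([reference.split()], prediction.split(),
                      weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)

# ROUGE-1 / ROUGE-L: unigram and longest-common-subsequence F-measures.
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge_scores = rouge.score(reference, prediction)

# BERTScore: token-level similarity in contextual embedding space.
_, _, f1 = bert_score([prediction], [reference], lang="en")

# The table appears to report each score scaled by 100.
print(f"BLEU-1  {100 * bleu1:.2f}")
print(f"BLEU-4  {100 * bleu4:.2f}")
print(f"ROUGE-1 {100 * rouge_scores['rouge1'].fmeasure:.2f}")
print(f"ROUGE-L {100 * rouge_scores['rougeL'].fmeasure:.2f}")
print(f"BERT    {100 * f1.item():.2f}")
```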
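
For the **REC** column, the usual listener-based convention (assumed here; this README does not spell out the protocol) is grounding accuracy: a comprehension model localizes each generated expression, and the expression counts as successful when its box overlaps the ground truth at IoU ≥ 0.5. A sketch under that assumption, with a hypothetical `listener.ground` call standing in for whatever grounding model is used:

```python
# A sketch of listener-based REC accuracy under the standard IoU >= 0.5
# convention (an assumption; not confirmed by this README). The `listener`
# object and its `ground(image, expression)` method are hypothetical.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def rec_accuracy(examples, listener, threshold=0.5):
    """Percentage of expressions the listener grounds to the right object.

    `examples` is an iterable of (image, expression, gt_box) triples.
    """
    hits = 0
    for image, expression, gt_box in examples:
        pred_box = listener.ground(image, expression)  # hypothetical API
        if iou(pred_box, gt_box) >= threshold:
            hits += 1
    return 100.0 * hits / len(examples)
```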