Update README.md
README.md CHANGED
@@ -28,7 +28,7 @@ Please place the downloaded PDF files in the ./pdf directory.
 (You will need to create the jgraphqa directory if it does not already exist.)
 
 ### Optional
-- If you would like to evaluate
+- If you would like to evaluate r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1, please modify [lmms_eval/models/llava_onevision.py](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py) as follows.
 - Please add the following code below line [284](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L284).
 ```python
 # image_tensor = process_images(visual, self._image_processor, self._config)  # This part is already present in the original code
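The diff window ends after the first line of the Python snippet, so the code the README actually tells you to insert is not visible here. Purely as a hypothetical illustration of an insertion below that line (not the repository's actual patch; the dtype/device handling is an assumption), it might look like this:

```python
# Hypothetical sketch only -- the real snippet is truncated by the diff window.
# `torch` is already imported at the top of llava_onevision.py.
# image_tensor = process_images(visual, self._image_processor, self._config)  # already present in the original code
if isinstance(image_tensor, list):
    # Move each per-image tensor onto the model's device in half precision (assumed behavior).
    image_tensor = [t.to(dtype=torch.float16, device=self.device) for t in image_tensor]
else:
    image_tensor = image_tensor.to(dtype=torch.float16, device=self.device)
```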
@@ -70,7 +70,7 @@ Please place the downloaded PDF files in the ./pdf directory.
 ```bash
 CUDA_VISIBLE_DEVICES=0,1 python -m lmms_eval \
     --model llava_onevision \
-    --model_args pretrained="
+    --model_args pretrained="r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1",model_name=llava_llama_3,conv_template=llava_llama_3,device_map=auto \
     --tasks jgraphqa \
     --batch_size=1 \
     --log_samples \
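The command in this hunk is also cut off: the trailing backslash after --log_samples implies further arguments outside the diff window. For orientation only, a complete invocation in standard lmms-eval style might end as follows (the --output_path value is an assumption, not taken from this diff):

```bash
# Sketch of a full invocation; everything after --log_samples is assumed,
# following common lmms-eval usage rather than this repository's README.
CUDA_VISIBLE_DEVICES=0,1 python -m lmms_eval \
    --model llava_onevision \
    --model_args pretrained="r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1",model_name=llava_llama_3,conv_template=llava_llama_3,device_map=auto \
    --tasks jgraphqa \
    --batch_size=1 \
    --log_samples \
    --output_path ./logs/  # assumed output directory
```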