Akira Kinoshita committed "Update README.md"

README.md
Please place the downloaded PDF files in the ./pdf directory.

- Run "create_dataset_for_lmms-eval.ipynb" to generate "jgraphqa.parquet".
- Copy "jgraphqa.yaml", "utils.py", and the generated "jgraphqa.parquet" file into the [lmms_eval/tasks](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks)/jgraphqa directory. (You will need to create the jgraphqa directory if it does not already exist.)
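The copy step above can be sketched in shell. `LMMS_EVAL_ROOT` is an assumed placeholder for your local clone of lmms-eval, and the three source files are assumed to sit in the current working directory; adjust both to your setup.

```shell
# Assumed location of your lmms-eval clone -- change as needed.
LMMS_EVAL_ROOT=./lmms-eval
TASK_DIR="$LMMS_EVAL_ROOT/lmms_eval/tasks/jgraphqa"

# Create the jgraphqa task directory if it does not already exist.
mkdir -p "$TASK_DIR"

# Copy the task config, helpers, and generated dataset into it
# (each file is copied only if it exists in the current directory).
for f in jgraphqa.yaml utils.py jgraphqa.parquet; do
  if [ -f "$f" ]; then cp "$f" "$TASK_DIR/"; fi
done
```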
### Optional

- If you would like to evaluate akirakinoshita/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1, please modify [lmms_eval/models/llava_onevision.py](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py) as follows.
- Add the following code below line [284](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L284):

```python
# image_tensor = process_images(visual, self._image_processor, self._config)  # This part is already present in the original code
```