Update README.md
README.md CHANGED
@@ -28,41 +28,7 @@ Please place the downloaded PDF files in the ./pdf directory.

(You will need to create the jgraphqa directory if it does not already exist.)

### Optional
-- If you would like to evaluate r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1, please make the following changes to [lmms_eval/models/llava_onevision.py](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py).
-- Please add the following code below line [284](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L284).
-```python
-# image_tensor = process_images(visual, self._image_processor, self._config)  # This part is already present in the original code
-inputs = self._image_processor(visual)
-image_tensor = torch.tensor(inputs['pixel_values']).to(dtype=torch.float16, device=self.device)
-image_tensor = [image_tensor]
-# if type(image_tensor) is list:
-#     image_tensor = [_image.to(dtype=torch.float16, device=self.device) for _image in image_tensor]
-# else:
-#     image_tensor = image_tensor.to(dtype=torch.float16, device=self.device)
-
-task_type = "image"  # This part is already present in the original code
-```
-- Please add the following code below line [342](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L342).
-```python
-kwargs["image_sizes"] = [[v.size[0], v.size[1]] for v in visual] if isinstance(visual, list) else [[visual.size[0], visual.size[1]]]  # This part is already present in the original code
-_image_grid_thw = torch.tensor(inputs['image_grid_thw'], dtype=torch.long)
-kwargs["image_grid_thws"] = [_image_grid_thw]
-elif task_type == "video":  # This part is already present in the original code
-```
-- Please add the following code below line [455](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L455).
-```python
-# image_tensor = process_images(visual, self._image_processor, self._config)  # This part is already present in the original code
-inputs = self._image_processor(visual)
-image_tensor = torch.tensor(inputs['pixel_values']).to(dtype=torch.float16, device=self.device)
-image_tensor = [image_tensor]
-if type(image_tensor) is list:  # This part is already present in the original code
-```
-- Please add the following code below line [539](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L539).
-```python
-gen_kwargs["image_sizes"] = [batched_visuals[0][idx].size for idx in range(len(batched_visuals[0]))]  # This part is already present in the original code
-_image_grid_thw = torch.tensor(inputs['image_grid_thw'], dtype=torch.long)
-gen_kwargs["image_grid_thws"] = [_image_grid_thw]
-```
+- If you would like to evaluate r-g2-2024/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1, please overwrite [lmms_eval/models/llava_onevision.py](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py) with the attached "llava_onevision.py".
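
A minimal sketch of that overwrite step is below. It assumes the attached file sits in your current directory and that lmms-eval is checked out next to it; both paths are assumptions about your local layout, not paths given in the README, so adjust them as needed.

```python
# Minimal sketch of the overwrite step described above.
# Both paths are assumptions about the local layout, not paths given in the README.
import shutil
from pathlib import Path

attached_file = Path("llava_onevision.py")  # attached replacement file (assumed location)
target_file = Path("lmms-eval/lmms_eval/models/llava_onevision.py")  # file shipped with lmms-eval (assumed checkout path)

if not target_file.exists():
    raise FileNotFoundError(f"lmms-eval checkout not found at {target_file.parent}")

shutil.copyfile(attached_file, target_file)  # overwrite the original model wrapper
print(f"Replaced {target_file} with {attached_file}")
```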

## Usage