Akira Kinoshita committed: Update README.md

README.md
- Please place the downloaded PDF files in the ./pdf directory.
- Copy jgraphqa.yaml, utils.py, and the generated jgraphqa.parquet file into the [lmms_eval/tasks](https://github.com/EvolvingLMMs-Lab/lmms-eval/tree/main/lmms_eval/tasks)/jgraphqa directory.
  (You will need to create the jgraphqa directory if it does not already exist.)
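  A minimal sketch of this step in Python (the source filenames and the path to your lmms-eval checkout are assumptions; adjust them to your setup):

```python
import shutil
from pathlib import Path

# Assumed layout: run from the directory containing the generated files,
# with lmms-eval cloned alongside it. Adjust the paths to your environment.
task_dir = Path("lmms-eval/lmms_eval/tasks/jgraphqa")
task_dir.mkdir(parents=True, exist_ok=True)  # create the jgraphqa directory if it does not exist

for name in ["jgraphqa.yaml", "utils.py", "jgraphqa.parquet"]:
    shutil.copy(name, task_dir / name)
```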
- Please modify [lmms_eval/models/llava_onevision.py](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py) as follows.
- Please add the following code below line [284](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L284).
```python
# image_tensor = process_images(visual, self._image_processor, self._config) # This part is already present in the original code
inputs = self._image_processor(visual)
image_tensor = torch.tensor(inputs['pixel_values']).to(dtype=torch.float16, device=self.device)
image_tensor = [image_tensor]
# if type(image_tensor) is list:
#     image_tensor = [_image.to(dtype=torch.float16, device=self.device) for _image in image_tensor]
# else:
#     image_tensor = image_tensor.to(dtype=torch.float16, device=self.device)

task_type = "image" # This part is already present in the original code
```
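  To clarify what this replacement relies on, here is a small self-contained sketch of the same conversion pattern with dummy data. It assumes the image processor returns a dict-like object with `pixel_values` and `image_grid_thw` entries (the keys the snippets above and below index into); the values and shapes here are placeholders, not real processor output.

```python
import torch

# Dummy stand-in for the assumed output of self._image_processor(visual).
inputs = {
    "pixel_values": [[0.0, 0.1, 0.2], [0.3, 0.4, 0.5]],  # dummy per-patch features
    "image_grid_thw": [[1, 2, 2]],  # dummy (temporal, height, width) patch grid for one image
}

device = "cuda" if torch.cuda.is_available() else "cpu"
image_tensor = torch.tensor(inputs["pixel_values"]).to(dtype=torch.float16, device=device)
image_tensor = [image_tensor]  # the patched code wraps the tensor in a list
image_grid_thw = torch.tensor(inputs["image_grid_thw"], dtype=torch.long)

print(image_tensor[0].shape, image_tensor[0].dtype)  # torch.Size([2, 3]) torch.float16
print(image_grid_thw.shape)  # torch.Size([1, 3])
```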
- Please add the following code below line [342](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L342).
```python
    kwargs["image_sizes"] = [[v.size[0], v.size[1]] for v in visual] if isinstance(visual, list) else [[visual.size[0], visual.size[1]]] # This part is already present in the original code
    _image_grid_thw = torch.tensor(inputs['image_grid_thw'], dtype=torch.long)
    kwargs["image_grid_thws"] = [_image_grid_thw]
elif task_type == "video": # This part is already present in the original code
```
- Please add the following code below line [455](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L455).
```python
# image_tensor = process_images(visual, self._image_processor, self._config) # This part is already present in the original code
inputs = self._image_processor(visual)
image_tensor = torch.tensor(inputs['pixel_values']).to(dtype=torch.float16, device=self.device)
image_tensor = [image_tensor]
if type(image_tensor) is list: # This part is already present in the original code
```
- Please add the following code below line [539](https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/main/lmms_eval/models/llava_onevision.py#L539).
```python
gen_kwargs["image_sizes"] = [batched_visuals[0][idx].size for idx in range(len(batched_visuals[0]))] # This part is already present in the original code
_image_grid_thw = torch.tensor(inputs['image_grid_thw'], dtype=torch.long)
gen_kwargs["image_grid_thws"] = [_image_grid_thw]
```
## 🤗Usage
- Using the lmms-eval framework, please run the following command: