language:
- ja
size_categories:
- n<1K
---

# JGraphQA

## Introduction
We introduce JGraphQA, a multimodal benchmark designed to evaluate the chart understanding capabilities of Large Multimodal Models (LMMs) in Japanese.
To create JGraphQA, we first conducted a detailed analysis of the existing ChartQA benchmark. Then, focusing on Japanese investor relations (IR) materials, we collected a total of 100 images of four types: pie charts, line charts, bar charts, and tables. For each image, we created two question-answer pairs.
All questions and answers were manually crafted and verified to ensure an accurate and meaningful evaluation.

## 🔔News
- Access the URLs listed in the `citation_pdf_url` column of `source.csv` and download the corresponding PDF files.
- Rename each downloaded file according to the file name specified in the `local_file_name` column of `source.csv`. (Alternatively, you may keep the original file names of the downloaded files and instead update the file names in the `local_file_name` column accordingly.)
- Place the downloaded PDF files in the `./pdf` directory.
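The download-and-rename steps above can be automated with a short script. This is only a sketch, not part of the official repository; it assumes `source.csv` sits in the current directory and contains the `citation_pdf_url` and `local_file_name` columns described above.

```python
import csv
import os
import urllib.request

def download_pdfs(source_csv="source.csv", out_dir="./pdf"):
    """Download every PDF listed in source.csv into out_dir, saving each
    file under the name given in its local_file_name column."""
    os.makedirs(out_dir, exist_ok=True)
    with open(source_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            dest = os.path.join(out_dir, row["local_file_name"])
            if not os.path.exists(dest):  # skip files already downloaded
                urllib.request.urlretrieve(row["citation_pdf_url"], dest)

if __name__ == "__main__":
    download_pdfs()
```

Note that some IR sites may block automated downloads; in that case, download the PDFs manually in a browser and rename them as described above.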

## 🤗Usage
```shell
CUDA_VISIBLE_DEVICES=0,1 python -m lmms_eval \
    --model llava_onevision \
    --model_args pretrained="akirakinoshita/Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1",model_name=llava_llama_3,conv_template=llava_llama_3,device_map=auto \
    --tasks jgraphqa \
    --batch_size=1 \
    --log_samples \
    --log_samples_suffix llava-onevision \
    --output_path ./logs/ \
    --wandb_args=project=lmms-eval,job_type=eval,name=Llama-3.1-70B-Instruct-multimodal-JP-Graph-v0.1
```