scikkk committed — Commit 0d12af2 (verified) · Parent: 7cf8a39

Add files using upload-large-folder tool

Files changed (1):
  README.md (+21 −3)
README.md CHANGED
@@ -1,13 +1,13 @@
  ---
  license: apache-2.0
  language:
- - en
+ - en
  metrics:
- - accuracy
+ - accuracy
  pipeline_tag: image-text-to-text
  library_name: transformers
  base_model:
- - OpenGVLab/InternVL2-8B
+ - OpenGVLab/InternVL2-8B
  ---
  # MathCoder-VL: Bridging Vision and Code for Enhanced Multimodal Mathematical Reasoning

@@ -23,12 +23,30 @@ We introduce MathCoder-VL, a series of open-source large multimodal models (LMMs
  |-------------------------------------------------------------------|-----------------------------------------------------------------------|
  | [Mini-InternVL-Chat-2B-V1-5](https://huggingface.co/OpenGVLab/Mini-InternVL-Chat-2B-V1-5) | [MathCoder-VL-2B](https://huggingface.co/MathLLMs/MathCoder-VL-2B) |
  | [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B) | [MathCoder-VL-8B](https://huggingface.co/MathLLMs/MathCoder-VL-8B)|
+ | [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B) | [FigCodifier-8B](https://huggingface.co/MathLLMs/FigCodifier)|



  ## Usage
  For training and inference code, please refer to [InternVL](https://github.com/OpenGVLab/InternVL).

+ ```
+ from datasets import load_dataset
+
+ mm_mathinstruct = load_dataset("MathLLMs/MM-MathInstruct")
+ print(mm_mathinstruct)
+ ```
+
+ It should print:
+ ```
+ DatasetDict({
+     train: Dataset({
+         features: ['id', 'image', 'question', 'solution', 'image_path'],
+         num_rows: 2871988
+     })
+ })
+ ```
+

  ## Motivation
52