update readme
README.md CHANGED
@@ -1,6 +1,6 @@
-# 🤖 Multi-
+# 🤖 Multi-modal GPT
 
-Train a multi-
+Train a multi-modal chatbot with visual and language instructions!
 
 Based on the open-source multi-modal model [OpenFlamingo](https://github.com/mlfoundations/open_flamingo), we create various **visual instruction** data with open datasets, including VQA, Image Captioning, Visual Reasoning, Text OCR, and Visual Dialogue. Additionally, we also train the language model component of OpenFlamingo using only **language-only instruction** data.
 
@@ -37,7 +37,7 @@ conda env create -f environment.yml
 
 Download the OpenFlamingo pre-trained model from [openflamingo/OpenFlamingo-9B](https://huggingface.co/openflamingo/OpenFlamingo-9B)
 
-Download our LoRA Weight from [here](
+Download our LoRA Weight from [here](https://download.openmmlab.com/mmgpt/v0/mmgpt-lora-v0-release.pt)
 
 Then place these models in checkpoints folders like this:
 
@@ -61,7 +61,8 @@ conda env create -f environment.yml
 # Examples
 
 ### Recipe:
-
+
+
 ### Travel plan:
 
 ### Movie:
@@ -135,4 +136,4 @@ torchrun --nproc_per_node=8 mmgpt/train/instruction_finetune.py \
 - [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
 - [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4)
 - [LLaVA](https://github.com/haotian-liu/LLaVA/tree/main)
-- [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
+- [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
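The changed lines above amount to a two-checkpoint download step. Below is a minimal sketch of that step, assuming a `checkpoints/` directory and standard tooling (`wget`, `git-lfs`); the LoRA URL is taken verbatim from the diff, while the actual folder layout lives in a part of the README this diff does not show.

```bash
# Sketch only: directory names are assumptions; the README's own
# checkpoint tree (outside these hunks) is authoritative.
mkdir -p checkpoints

# LoRA weights, URL copied verbatim from the diff
wget -P checkpoints/ \
  https://download.openmmlab.com/mmgpt/v0/mmgpt-lora-v0-release.pt

# OpenFlamingo-9B pre-trained model from the Hugging Face Hub
# (cloning the model repo requires git-lfs)
git lfs install
git clone https://huggingface.co/openflamingo/OpenFlamingo-9B checkpoints/OpenFlamingo-9B
```

`wget -P` drops the LoRA file inside `checkpoints/`, and cloning with git-lfs enabled pulls the large model shards that a plain `git clone` would leave as pointer files.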