Upload 9 files
- .gitattributes +1 -0
- DeepSeek_VL2_paper.pdf +3 -0
- LICENSE-CODE +21 -0
- LICENSE-MODEL +91 -0
- Makefile +97 -0
- README.md +404 -14
- inference.py +167 -0
- pyproject.toml +54 -0
- requirements.txt +20 -3
- web_demo.py +674 -0
.gitattributes
ADDED
@@ -0,0 +1 @@
+DeepSeek_VL2_paper.pdf filter=lfs diff=lfs merge=lfs -text
DeepSeek_VL2_paper.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a79c31356d7a353ef25df880d983527ed0843aa9b160e568942001f40630ddbe
+size 5888873
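The paper PDF is stored through Git LFS: the rule added to `.gitattributes` above tells Git to keep only this small pointer file (spec version, SHA-256 object id, and size in bytes, here about 5.9 MB) in the repository, while the binary itself lives in LFS storage.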
LICENSE-CODE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2023 DeepSeek
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
LICENSE-MODEL
ADDED
@@ -0,0 +1,91 @@
+DEEPSEEK LICENSE AGREEMENT
+
+Version 1.0, 23 October 2023
+
+Copyright (c) 2023 DeepSeek
+
+Section I: PREAMBLE
+
+Large generative models are being widely adopted and used, and have the potential to transform the way individuals conceive and benefit from AI or ML technologies.
+
+Notwithstanding the current and potential benefits that these artifacts can bring to society at large, there are also concerns about potential misuses of them, either due to their technical limitations or ethical considerations.
+
+In short, this license strives for both the open and responsible downstream use of the accompanying model. When it comes to the open character, we took inspiration from open source permissive licenses regarding the grant of IP rights. Referring to the downstream responsible use, we added use-based restrictions not permitting the use of the model in very specific scenarios, in order for the licensor to be able to enforce the license in case potential misuses of the Model may occur. At the same time, we strive to promote open and responsible research on generative models for content generation.
+
+Even though downstream derivative versions of the model could be released under different licensing terms, the latter will always have to include - at minimum - the same use-based restrictions as the ones in the original license (this license). We believe in the intersection between open and responsible AI development; thus, this agreement aims to strike a balance between both in order to enable responsible open-science in the field of AI.
+
+This License governs the use of the model (and its derivatives) and is informed by the model card associated with the model.
+
+NOW THEREFORE, You and DeepSeek agree as follows:
+
+1. Definitions
+"License" means the terms and conditions for use, reproduction, and Distribution as defined in this document.
+"Data" means a collection of information and/or content extracted from the dataset used with the Model, including to train, pretrain, or otherwise evaluate the Model. The Data is not licensed under this License.
+"Output" means the results of operating a Model as embodied in informational content resulting therefrom.
+"Model" means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding to the model architecture as embodied in the Complementary Material, that have been trained or tuned, in whole or in part on the Data, using the Complementary Material.
+"Derivatives of the Model" means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model.
+"Complementary Material" means the accompanying source code and scripts used to define, run, load, benchmark or evaluate the Model, and used to prepare data for training or evaluation, if any. This includes any accompanying documentation, tutorials, examples, etc, if any.
+"Distribution" means any transmission, reproduction, publication or other sharing of the Model or Derivatives of the Model to a third party, including providing the Model as a hosted service made available by electronic or other remote means - e.g. API-based or web access.
+"DeepSeek" (or "we") means Beijing DeepSeek Artificial Intelligence Fundamental Technology Research Co., Ltd., Hangzhou DeepSeek Artificial Intelligence Fundamental Technology Research Co., Ltd. and/or any of their affiliates.
+"You" (or "Your") means an individual or Legal Entity exercising permissions granted by this License and/or making use of the Model for whichever purpose and in any field of use, including usage of the Model in an end-use application - e.g. chatbot, translator, etc.
+"Third Parties" means individuals or legal entities that are not under common control with DeepSeek or You.
+
+Section II: INTELLECTUAL PROPERTY RIGHTS
+
+Both copyright and patent grants apply to the Model, Derivatives of the Model and Complementary Material. The Model and Derivatives of the Model are subject to additional terms as described in Section III.
+
+2. Grant of Copyright License. Subject to the terms and conditions of this License, DeepSeek hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model.
+
+3. Grant of Patent License. Subject to the terms and conditions of this License and where and as applicable, DeepSeek hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this paragraph) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model and the Complementary Material, where such license applies only to those patent claims licensable by DeepSeek that are necessarily infringed by its contribution(s). If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model and/or Complementary Material constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for the Model and/or works shall terminate as of the date such litigation is asserted or filed.
+
+
+Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
+
+4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications, provided that You meet the following conditions:
+a. Use-based restrictions as referenced in paragraph 5 MUST be included as an enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5. This provision does not apply to the use of Complementary Material.
+b. You must give any Third Party recipients of the Model or Derivatives of the Model a copy of this License;
+c. You must cause any modified files to carry prominent notices stating that You changed the files;
+d. You must retain all copyright, patent, trademark, and attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of the Model.
+e. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions - respecting paragraph 4.a. - for use, reproduction, or Distribution of Your modifications, or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of the Model otherwise complies with the conditions stated in this License.
+
+5. Use-based restrictions. The restrictions set forth in Attachment A are considered Use-based restrictions. Therefore You cannot use the Model and the Derivatives of the Model for the specified restricted uses. You may use the Model subject to this License, including only for lawful purposes and in accordance with the License. Use may include creating any content with, finetuning, updating, running, training, evaluating and/or reparametrizing the Model. You shall require all of Your users who use the Model or a Derivative of the Model to comply with the terms of this paragraph (paragraph 5).
+
+6. The Output You Generate. Except as set forth herein, DeepSeek claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License.
+
+Section IV: OTHER PROVISIONS
+
+7. Updates and Runtime Restrictions. To the maximum extent permitted by law, DeepSeek reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this License.
+
+8. Trademarks and related. Nothing in this License permits You to make use of DeepSeek's trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and any rights not expressly granted herein are reserved by DeepSeek.
+
+9. Personal information, IP rights and related. This Model may contain personal information and works with IP rights. You commit to complying with applicable laws and regulations in the handling of personal information and the use of such works. Please note that DeepSeek's license granted to you to use the Model does not imply that you have obtained a legitimate basis for processing the related information or works. As an independent personal information processor and IP rights user, you need to ensure full compliance with relevant legal and regulatory requirements when handling personal information and works with IP rights that may be contained in the Model, and are willing to assume solely any risks and consequences that may arise from that.
+
+10. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, DeepSeek provides the Model and the Complementary Material on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model, Derivatives of the Model, and the Complementary Material and assume any risks associated with Your exercise of permissions under this License.
+
+11. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall DeepSeek be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model and the Complementary Material (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if DeepSeek has been advised of the possibility of such damages.
+
+12. Accepting Warranty or Additional Liability. While redistributing the Model, Derivatives of the Model and the Complementary Material thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of DeepSeek, and only if You agree to indemnify, defend, and hold DeepSeek harmless for any liability incurred by, or claims asserted against, DeepSeek by reason of your accepting any such warranty or additional liability.
+
+13. If any provision of this License is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.
+
+14. Governing Law and Jurisdiction. This agreement will be governed and construed under PRC laws without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this agreement. The courts located in the domicile of Hangzhou DeepSeek Artificial Intelligence Fundamental Technology Research Co., Ltd. shall have exclusive jurisdiction of any dispute arising out of this agreement.
+
+END OF TERMS AND CONDITIONS
+
+Attachment A
+
+Use Restrictions
+
+You agree not to use the Model or Derivatives of the Model:
+
+- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
+- For military use in any way;
+- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
+- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
+- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
+- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
+- To defame, disparage or otherwise harass others;
+- For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation;
+- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
+- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
+- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
Makefile
ADDED
@@ -0,0 +1,97 @@
+print-% : ; @echo $* = $($*)
+PROJECT_NAME = DeepSeek-VL
+COPYRIGHT = "DeepSeek."
+PROJECT_PATH = deepseek_vl
+SHELL = /bin/bash
+SOURCE_FOLDERS = deepseek_vl
+PYTHON_FILES = $(shell find $(SOURCE_FOLDERS) -type f -name "*.py" -o -name "*.pyi") cli_chat.py inference.py
+COMMIT_HASH = $(shell git log -1 --format=%h)
+PATH := $(HOME)/go/bin:$(PATH)
+PYTHON ?= $(shell command -v python3 || command -v python)
+PYTESTOPTS ?=
+
+.PHONY: default
+default: install
+
+# Tools Installation
+
+check_pip_install = $(PYTHON) -m pip show $(1) &>/dev/null || (cd && $(PYTHON) -m pip install $(1) --upgrade)
+check_pip_install_extra = $(PYTHON) -m pip show $(1) &>/dev/null || (cd && $(PYTHON) -m pip install $(2) --upgrade)
+
+pylint-install:
+	$(call check_pip_install_extra,pylint,pylint[spelling])
+	$(call check_pip_install,pyenchant)
+
+flake8-install:
+	$(call check_pip_install,flake8)
+	$(call check_pip_install,flake8-bugbear)
+	$(call check_pip_install,flake8-comprehensions)
+	$(call check_pip_install,flake8-docstrings)
+	$(call check_pip_install,flake8-pyi)
+	$(call check_pip_install,flake8-simplify)
+
+py-format-install:
+	$(call check_pip_install,isort)
+	$(call check_pip_install_extra,black,black[jupyter])
+
+ruff-install:
+	$(call check_pip_install,ruff)
+
+mypy-install:
+	$(call check_pip_install,mypy)
+
+pre-commit-install:
+	$(call check_pip_install,pre-commit)
+	$(PYTHON) -m pre_commit install --install-hooks
+
+go-install:
+	# requires go >= 1.16
+	command -v go || (sudo apt-get install -y golang && sudo ln -sf /usr/lib/go/bin/go /usr/bin/go)
+
+addlicense-install: go-install
+	command -v addlicense || go install github.com/google/addlicense@latest
+
+addlicense: addlicense-install
+	addlicense -c $(COPYRIGHT) -ignore tests/coverage.xml -l mit -y 2023-$(shell date +"%Y") -check $(SOURCE_FOLDERS)
+
+# Python linters
+
+pylint: pylint-install
+	$(PYTHON) -m pylint $(PROJECT_PATH)
+
+flake8: flake8-install
+	$(PYTHON) -m flake8 --count --show-source --statistics
+
+py-format: py-format-install
+	$(PYTHON) -m isort --project $(PROJECT_PATH) --check $(PYTHON_FILES) && \
+	$(PYTHON) -m black --check $(PYTHON_FILES)
+
+ruff: ruff-install
+	$(PYTHON) -m ruff check .
+
+ruff-fix: ruff-install
+	$(PYTHON) -m ruff check . --fix --exit-non-zero-on-fix
+
+mypy: mypy-install
+	$(PYTHON) -m mypy $(PROJECT_PATH) --install-types --non-interactive
+
+pre-commit: pre-commit-install
+	$(PYTHON) -m pre_commit run --all-files
+
+# Utility functions
+
+lint: ruff flake8 py-format mypy pylint addlicense
+
+format: py-format-install ruff-install addlicense-install
+	$(PYTHON) -m isort --project $(PROJECT_PATH) $(PYTHON_FILES)
+	$(PYTHON) -m black $(PYTHON_FILES)
+	$(PYTHON) -m ruff check . --fix --exit-zero
+	addlicense -c $(COPYRIGHT) -ignore tests/coverage.xml -l mit -y 2023-$(shell date +"%Y") $(SOURCE_FOLDERS) cli_chat.py inference.py
+
+clean-py:
+	find . -type f -name '*.py[co]' -delete
+	find . -depth -type d -name "__pycache__" -exec rm -r "{}" +
+	find . -depth -type d -name ".ruff_cache" -exec rm -r "{}" +
+	find . -depth -type d -name ".mypy_cache" -exec rm -r "{}" +
+
+clean: clean-py
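The main developer entry points defined above are `make lint` (runs ruff, flake8, the isort/black checks, mypy, pylint, and the license-header check), `make format` (applies isort, black, and ruff fixes and adds license headers), and `make clean` (removes Python bytecode and linter caches).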
README.md
CHANGED
@@ -1,14 +1,404 @@
+<!-- markdownlint-disable first-line-h1 -->
+<!-- markdownlint-disable html -->
+<!-- markdownlint-disable no-duplicate-header -->
+
+<div align="center">
+  <img src="images/logo.svg" width="60%" alt="DeepSeek AI" />
+</div>
+<hr>
+<div align="center">
+
+  <a href="https://www.deepseek.com/" target="_blank">
+    <img alt="Homepage" src="images/badge.svg" />
+  </a>
+  <a href="https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small" target="_blank">
+    <img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20VL-536af5?color=536af5&logoColor=white" />
+  </a>
+  <a href="https://huggingface.co/deepseek-ai" target="_blank">
+    <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" />
+  </a>
+
+</div>
+
+
+<div align="center">
+
+  <a href="https://discord.gg/Tc7c45Zzu5" target="_blank">
+    <img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" />
+  </a>
+  <a href="images/qr.jpeg" target="_blank">
+    <img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" />
+  </a>
+  <a href="https://twitter.com/deepseek_ai" target="_blank">
+    <img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" />
+  </a>
+
+</div>
+
+<div align="center">
+
+  <a href="LICENSE-CODE">
+    <img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53">
+  </a>
+  <a href="LICENSE-MODEL">
+    <img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53">
+  </a>
+</div>
+
+
+<p align="center">
+  <a href="https://github.com/deepseek-ai/DeepSeek-VL2/tree/main?tab=readme-ov-file#3-model-download"><b>📥 Model Download</b></a> |
+  <a href="https://github.com/deepseek-ai/DeepSeek-VL2/tree/main?tab=readme-ov-file#4-quick-start"><b>⚡ Quick Start</b></a> |
+  <a href="https://github.com/deepseek-ai/DeepSeek-VL2/tree/main?tab=readme-ov-file#5-license"><b>📜 License</b></a> |
+  <a href="https://github.com/deepseek-ai/DeepSeek-VL2/tree/main?tab=readme-ov-file#6-citation"><b>📖 Citation</b></a> <br>
+  <a href="./DeepSeek_VL2_paper.pdf"><b>📄 Paper Link</b></a> |
+  <a href="https://arxiv.org/abs/2412.10302"><b>📄 Arxiv Paper Link</b></a> |
+  <a href="https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small"><b>👁️ Demo</b></a>
+</p>
+
+## 1. Introduction
+
+Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Our model series is composed of three variants: DeepSeek-VL2-Tiny, DeepSeek-VL2-Small and DeepSeek-VL2, with 1.0B, 2.8B and 4.5B activated parameters respectively.
+DeepSeek-VL2 achieves competitive or state-of-the-art performance with similar or fewer activated parameters compared to existing open-source dense and MoE-based models.
+
+
+[DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding]()
+
+Zhiyu Wu*, Xiaokang Chen*, Zizheng Pan*, Xingchao Liu*, Wen Liu**, Damai Dai, Huazuo Gao, Yiyang Ma, Chengyue Wu, Bingxuan Wang, Zhenda Xie, Yu Wu, Kai Hu, Jiawei Wang, Yaofeng Sun, Yukun Li, Yishi Piao, Kang Guan, Aixin Liu, Xin Xie, Yuxiang You, Kai Dong, Xingkai Yu, Haowei Zhang, Liang Zhao, Yisong Wang, Chong Ruan*** (* Equal Contribution, ** Project Lead, *** Corresponding author)
+
+
+
+## 2. Release
+✅ <b>2025-2-6</b>: Naive Gradio demo implementation on Hugging Face Space [deepseek-vl2-small](https://huggingface.co/spaces/deepseek-ai/deepseek-vl2-small).
+
+✅ <b>2024-12-25</b>: Gradio Demo Example, Incremental Prefilling and VLMEvalKit Support.
+
+✅ <b>2024-12-13</b>: DeepSeek-VL2 family released, including <code>DeepSeek-VL2-tiny</code>, <code>DeepSeek-VL2-small</code>, <code>DeepSeek-VL2</code>.
+
+## 3. Model Download
+
+We release the DeepSeek-VL2 family, including <code>DeepSeek-VL2-tiny</code>, <code>DeepSeek-VL2-small</code>, and <code>DeepSeek-VL2</code>,
+to support a broader and more diverse range of research within both academic and commercial communities.
+Please note that the use of this model is subject to the terms outlined in the [License section](#5-license).
+
+### Huggingface
+
+| Model | Sequence Length | Download |
+|--------------|-----------------|-----------------------------------------------------------------------------|
+| DeepSeek-VL2-tiny | 4096 | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/deepseek-vl2-tiny) |
+| DeepSeek-VL2-small | 4096 | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/deepseek-vl2-small) |
+| DeepSeek-VL2 | 4096 | [🤗 Hugging Face](https://huggingface.co/deepseek-ai/deepseek-vl2) |
+
+
+## 4. Quick Start
+
+### Installation
+
+In a `Python >= 3.8` environment, install the necessary dependencies by running the following command:
+
+```shell
+pip install -e .
+```
+
+### Simple Inference Example with One Image
+
+**Note: You may need 80GB GPU memory to run this script with deepseek-vl2-small and even larger for deepseek-vl2.**
+
+```python
+import torch
+from transformers import AutoModelForCausalLM
+
+from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
+from deepseek_vl2.utils.io import load_pil_images
+
+
+# specify the path to the model
+model_path = "deepseek-ai/deepseek-vl2-tiny"
+vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
+tokenizer = vl_chat_processor.tokenizer
+
+vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
+vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
+
+## single image conversation example
+## Please note that <|ref|> and <|/ref|> are designed specifically for the object localization feature. These special tokens are not required for normal conversations.
+## If you would like to experience the grounded captioning functionality (responses that include both object localization and reasoning), you need to add the special token <|grounding|> at the beginning of the prompt. Examples could be found in Figure 9 of our paper.
+conversation = [
+    {
+        "role": "<|User|>",
+        "content": "<image>\n<|ref|>The giraffe at the back.<|/ref|>.",
+        "images": ["./images/visual_grounding_1.jpeg"],
+    },
+    {"role": "<|Assistant|>", "content": ""},
+]
+
+# load images and prepare for inputs
+pil_images = load_pil_images(conversation)
+prepare_inputs = vl_chat_processor(
+    conversations=conversation,
+    images=pil_images,
+    force_batchify=True,
+    system_prompt=""
+).to(vl_gpt.device)
+
+# run image encoder to get the image embeddings
+inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
+
+# run the model to get the response
+outputs = vl_gpt.language.generate(
+    inputs_embeds=inputs_embeds,
+    attention_mask=prepare_inputs.attention_mask,
+    pad_token_id=tokenizer.eos_token_id,
+    bos_token_id=tokenizer.bos_token_id,
+    eos_token_id=tokenizer.eos_token_id,
+    max_new_tokens=512,
+    do_sample=False,
+    use_cache=True
+)
+
+answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=False)
+print(f"{prepare_inputs['sft_format'][0]}", answer)
+```
+
+And the output is something like:
+```
+<|User|>: <image>
+<|ref|>The giraffe at the back.<|/ref|>.
+
+<|Assistant|>: <|ref|>The giraffe at the back.<|/ref|><|det|>[[580, 270, 999, 900]]<|/det|><|end▁of▁sentence|>
+```
+
+### Simple Inference Example with Multiple Images
+
+**Note: You may need 80GB GPU memory to run this script with deepseek-vl2-small and even larger for deepseek-vl2.**
+
+```python
+import torch
+from transformers import AutoModelForCausalLM
+
+from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
+from deepseek_vl2.utils.io import load_pil_images
+
+
+# specify the path to the model
+model_path = "deepseek-ai/deepseek-vl2-tiny"
+vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
+tokenizer = vl_chat_processor.tokenizer
+
+vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
+vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
+
+# multiple images/interleaved image-text
+conversation = [
+    {
+        "role": "<|User|>",
+        "content": "This is image_1: <image>\n"
+                   "This is image_2: <image>\n"
+                   "This is image_3: <image>\n Can you tell me what are in the images?",
+        "images": [
+            "images/multi_image_1.jpeg",
+            "images/multi_image_2.jpeg",
+            "images/multi_image_3.jpeg",
+        ],
+    },
+    {"role": "<|Assistant|>", "content": ""}
+]
+
+# load images and prepare for inputs
+pil_images = load_pil_images(conversation)
+prepare_inputs = vl_chat_processor(
+    conversations=conversation,
+    images=pil_images,
+    force_batchify=True,
+    system_prompt=""
+).to(vl_gpt.device)
+
+# run image encoder to get the image embeddings
+inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
+
+# run the model to get the response
+outputs = vl_gpt.language.generate(
+    inputs_embeds=inputs_embeds,
+    attention_mask=prepare_inputs.attention_mask,
+    pad_token_id=tokenizer.eos_token_id,
+    bos_token_id=tokenizer.bos_token_id,
+    eos_token_id=tokenizer.eos_token_id,
+    max_new_tokens=512,
+    do_sample=False,
+    use_cache=True
+)
+
+answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=False)
+print(f"{prepare_inputs['sft_format'][0]}", answer)
+```
+
+And the output is something like:
+```
+<|User|>: This is image_1: <image>
+This is image_2: <image>
+This is image_3: <image>
+Can you tell me what are in the images?
+
+<|Assistant|>: The images show three different types of vegetables. Image_1 features carrots, which are orange with green tops. Image_2 displays corn cobs, which are yellow with green husks. Image_3 contains raw pork ribs, which are pinkish-red with some marbling.<|end▁of▁sentence|>
+```
+
+### Simple Inference Example with Incremental Prefilling
+
+**Note: We use incremental prefilling to run inference on a 40GB GPU with deepseek-vl2-small.**
+
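Incremental prefilling feeds the long image-plus-text prompt through the model in fixed-size chunks (512 tokens in the example below) instead of in a single forward pass, so the peak memory used during prefill stays bounded; generation then continues from the cached key/value states, at the cost of some extra latency.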
+```python
+import torch
+from transformers import AutoModelForCausalLM
+
+from deepseek_vl2.models import DeepseekVLV2Processor, DeepseekVLV2ForCausalLM
+from deepseek_vl2.utils.io import load_pil_images
+
+
+# specify the path to the model
+model_path = "deepseek-ai/deepseek-vl2-small"
+vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
+tokenizer = vl_chat_processor.tokenizer
+
+vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
+vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
+
+# multiple images/interleaved image-text
+conversation = [
+    {
+        "role": "<|User|>",
+        "content": "This is image_1: <image>\n"
+                   "This is image_2: <image>\n"
+                   "This is image_3: <image>\n Can you tell me what are in the images?",
+        "images": [
+            "images/multi_image_1.jpeg",
+            "images/multi_image_2.jpeg",
+            "images/multi_image_3.jpeg",
+        ],
+    },
+    {"role": "<|Assistant|>", "content": ""}
+]
+
+# load images and prepare for inputs
+pil_images = load_pil_images(conversation)
+prepare_inputs = vl_chat_processor(
+    conversations=conversation,
+    images=pil_images,
+    force_batchify=True,
+    system_prompt=""
+).to(vl_gpt.device)
+
+with torch.no_grad():
+    # run image encoder to get the image embeddings
+    inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
+
+    # incremental_prefilling when using 40G GPU for vl2-small
+    inputs_embeds, past_key_values = vl_gpt.incremental_prefilling(
+        input_ids=prepare_inputs.input_ids,
+        images=prepare_inputs.images,
+        images_seq_mask=prepare_inputs.images_seq_mask,
+        images_spatial_crop=prepare_inputs.images_spatial_crop,
+        attention_mask=prepare_inputs.attention_mask,
+        chunk_size=512  # prefilling size
+    )
+
+    # run the model to get the response
+    outputs = vl_gpt.generate(
+        inputs_embeds=inputs_embeds,
+        input_ids=prepare_inputs.input_ids,
+        images=prepare_inputs.images,
+        images_seq_mask=prepare_inputs.images_seq_mask,
+        images_spatial_crop=prepare_inputs.images_spatial_crop,
+        attention_mask=prepare_inputs.attention_mask,
+        past_key_values=past_key_values,
+
+        pad_token_id=tokenizer.eos_token_id,
+        bos_token_id=tokenizer.bos_token_id,
+        eos_token_id=tokenizer.eos_token_id,
+        max_new_tokens=512,
+
+        do_sample=False,
+        use_cache=True,
+    )
+
+    answer = tokenizer.decode(outputs[0][len(prepare_inputs.input_ids[0]):].cpu().tolist(), skip_special_tokens=False)
+
+print(f"{prepare_inputs['sft_format'][0]}", answer)
+```
+
+And the output is something like:
+```
+<|User|>: This is image_1: <image>
+This is image_2: <image>
+This is image_3: <image>
+Can you tell me what are in the images?
+
+<|Assistant|>: The first image contains carrots. The second image contains corn. The third image contains meat.<|end▁of▁sentence|>
+```
+
+To parse the bounding box coordinates, please refer to [parse_ref_bbox](https://github.com/deepseek-ai/DeepSeek-VL2/blob/main/deepseek_vl2/serve/app_modules/utils.py#L270-L298).
+
+
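As a rough, self-contained sketch of what that parsing involves (this is not the repository's implementation), the grounding output can be split into `<|ref|>...<|/ref|><|det|>[[...]]<|/det|>` pairs and the boxes rescaled to pixels, assuming the coordinates are reported on a 0-999 normalized grid as the sample output above suggests:

```python
import re
from typing import List, Tuple

def extract_boxes(answer: str, width: int, height: int) -> List[Tuple[str, List[Tuple[int, int, int, int]]]]:
    """Pull (label, pixel boxes) pairs out of a grounding response.

    Hypothetical helper for illustration only; the authoritative logic lives in
    deepseek_vl2/serve/app_modules/utils.py (parse_ref_bbox). Assumes the model
    reports box corners normalized to [0, 999].
    """
    results = []
    pattern = re.compile(r"<\|ref\|>(.*?)<\|/ref\|><\|det\|>(\[\[.*?\]\])<\|/det\|>")
    for label, coords in pattern.findall(answer):
        boxes = []
        for box in re.findall(r"\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]", coords):
            x1, y1, x2, y2 = (int(v) for v in box)
            # rescale the normalized 0-999 grid to pixel coordinates
            boxes.append((x1 * width // 999, y1 * height // 999,
                          x2 * width // 999, y2 * height // 999))
        results.append((label, boxes))
    return results

# example, using the grounding output shown earlier and an assumed 1280x720 image
print(extract_boxes(
    "<|ref|>The giraffe at the back.<|/ref|><|det|>[[580, 270, 999, 900]]<|/det|>",
    width=1280, height=720,
))
```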
+### Full Inference Example
+```shell
+# without incremental prefilling
+CUDA_VISIBLE_DEVICES=0 python inference.py --model_path "deepseek-ai/deepseek-vl2"
+
+# with incremental prefilling, when using 40G GPU for vl2-small
+CUDA_VISIBLE_DEVICES=0 python inference.py --model_path "deepseek-ai/deepseek-vl2-small" --chunk_size 512
+
+```
+
+
+### Gradio Demo
+
+* Install the necessary dependencies:
+```shell
+pip install -e .[gradio]
+```
+
+* then run the following command:
+
+```shell
+# vl2-tiny, 3.37B-MoE in total, activated 1B, can be run on a single GPU < 40GB
+CUDA_VISIBLE_DEVICES=2 python web_demo.py \
+    --model_name "deepseek-ai/deepseek-vl2-tiny" \
+    --port 37914
+
+
+# vl2-small, 16.1B-MoE in total, activated 2.4B
+# If running on a 40GB A100 GPU, set `--chunk_size 512` to enable incremental prefilling, which saves memory but may respond more slowly.
+# On GPUs with more than 40GB of memory, you can omit `--chunk_size 512` for faster responses.
+CUDA_VISIBLE_DEVICES=2 python web_demo.py \
+    --model_name "deepseek-ai/deepseek-vl2-small" \
+    --port 37914 \
+    --chunk_size 512
+
+# vl2, 27.5B-MoE in total, activated 4.2B
+CUDA_VISIBLE_DEVICES=2 python web_demo.py \
+    --model_name "deepseek-ai/deepseek-vl2" \
+    --port 37914
+```
+
+* **Important**: This is a basic, native demo implementation without any deployment optimizations, which may result in slower performance. For production environments, consider using optimized deployment solutions such as vllm, sglang, or lmdeploy. These optimizations will help achieve faster response times and better cost efficiency.
+
+## 5. License
+
+This code repository is licensed under the [MIT License](./LICENSE-CODE). The use of DeepSeek-VL2 models is subject to the [DeepSeek Model License](./LICENSE-MODEL). The DeepSeek-VL2 series supports commercial use.
+
+## 6. Citation
+
+```
+@misc{wu2024deepseekvl2mixtureofexpertsvisionlanguagemodels,
+      title={DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding},
+      author={Zhiyu Wu and Xiaokang Chen and Zizheng Pan and Xingchao Liu and Wen Liu and Damai Dai and Huazuo Gao and Yiyang Ma and Chengyue Wu and Bingxuan Wang and Zhenda Xie and Yu Wu and Kai Hu and Jiawei Wang and Yaofeng Sun and Yukun Li and Yishi Piao and Kang Guan and Aixin Liu and Xin Xie and Yuxiang You and Kai Dong and Xingkai Yu and Haowei Zhang and Liang Zhao and Yisong Wang and Chong Ruan},
+      year={2024},
+      eprint={2412.10302},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2412.10302},
+}
+```
+
+## 7. Contact
+
+If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
inference.py
ADDED
@@ -0,0 +1,167 @@
+# Copyright (c) 2023-2024 DeepSeek.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of
+# this software and associated documentation files (the "Software"), to deal in
+# the Software without restriction, including without limitation the rights to
+# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
+# the Software, and to permit persons to whom the Software is furnished to do so,
+# subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
+# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
+# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
+# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+from argparse import ArgumentParser
+from typing import List, Dict
+import torch
+from transformers import AutoModelForCausalLM
+import PIL.Image
+
+from deepseek_vl2.models import DeepseekVLV2ForCausalLM, DeepseekVLV2Processor
+from deepseek_vl2.serve.app_modules.utils import parse_ref_bbox
+
+
+def load_pil_images(conversations: List[Dict[str, str]]) -> List[PIL.Image.Image]:
+    """
+
+    Args:
+        conversations (List[Dict[str, str]]): the conversations with a list of messages. An example is :
+            [
+                {
+                    "role": "User",
+                    "content": "<image>\nExtract all information from this image and convert them into markdown format.",
+                    "images": ["./examples/table_datasets.png"]
+                },
+                {"role": "Assistant", "content": ""},
+            ]
+
+    Returns:
+        pil_images (List[PIL.Image.Image]): the list of PIL images.
+
+    """
+
+    pil_images = []
+
+    for message in conversations:
+        if "images" not in message:
+            continue
+
+        for image_path in message["images"]:
+            pil_img = PIL.Image.open(image_path)
+            pil_img = pil_img.convert("RGB")
+            pil_images.append(pil_img)
+
+    return pil_images
+
+
+def main(args):
+
+    dtype = torch.bfloat16
+
+    # specify the path to the model
+    model_path = args.model_path
+    vl_chat_processor: DeepseekVLV2Processor = DeepseekVLV2Processor.from_pretrained(model_path)
+    tokenizer = vl_chat_processor.tokenizer
+
+    vl_gpt: DeepseekVLV2ForCausalLM = AutoModelForCausalLM.from_pretrained(
+        model_path,
+        trust_remote_code=True,
+        torch_dtype=dtype
+    )
+    vl_gpt = vl_gpt.cuda().eval()
+
+    # multiple images conversation example
+    # Please note that <|grounding|> token is specifically designed for the grounded caption feature. It is not needed for normal conversations.
+    conversation = [
+        {
+            "role": "<|User|>",
+            "content": "<image>\n<image>\n<|grounding|>In the first image, an object within the red rectangle is marked. Locate the object of the same category in the second image.",
+            "images": [
+                "images/incontext_visual_grounding_1.jpeg",
+                "images/icl_vg_2.jpeg"
+            ],
+        },
+        {"role": "<|Assistant|>", "content": ""},
+    ]
+
+
+    # load images and prepare for inputs
+    pil_images = load_pil_images(conversation)
+    print(f"len(pil_images) = {len(pil_images)}")
+
+    prepare_inputs = vl_chat_processor.__call__(
+        conversations=conversation,
+        images=pil_images,
+        force_batchify=True,
+        system_prompt=""
+    ).to(vl_gpt.device, dtype=dtype)
+
+    with torch.no_grad():
+
+        if args.chunk_size == -1:
+            inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
+            past_key_values = None
+        else:
+            # incremental_prefilling when using 40G GPU for vl2-small
+            inputs_embeds, past_key_values = vl_gpt.incremental_prefilling(
+                input_ids=prepare_inputs.input_ids,
+                images=prepare_inputs.images,
+                images_seq_mask=prepare_inputs.images_seq_mask,
+                images_spatial_crop=prepare_inputs.images_spatial_crop,
+                attention_mask=prepare_inputs.attention_mask,
+                chunk_size=args.chunk_size
+            )
+
+        # run the model to get the response
+        outputs = vl_gpt.generate(
+            # inputs_embeds=inputs_embeds[:, -1:],
+            # input_ids=prepare_inputs.input_ids[:, -1:],
+            inputs_embeds=inputs_embeds,
+            input_ids=prepare_inputs.input_ids,
+            images=prepare_inputs.images,
+            images_seq_mask=prepare_inputs.images_seq_mask,
+            images_spatial_crop=prepare_inputs.images_spatial_crop,
+            attention_mask=prepare_inputs.attention_mask,
+            past_key_values=past_key_values,
+
+            pad_token_id=tokenizer.eos_token_id,
+            bos_token_id=tokenizer.bos_token_id,
+            eos_token_id=tokenizer.eos_token_id,
+            max_new_tokens=512,
+
+            # do_sample=False,
+            # repetition_penalty=1.1,
+
+            do_sample=True,
+            temperature=0.4,
+            top_p=0.9,
+            repetition_penalty=1.1,
+
+            use_cache=True,
+        )
+
+        answer = tokenizer.decode(outputs[0][len(prepare_inputs.input_ids[0]):].cpu().tolist(), skip_special_tokens=False)
+        print(f"{prepare_inputs['sft_format'][0]}", answer)
+
+        vg_image = parse_ref_bbox(answer, image=pil_images[-1])
+        if vg_image is not None:
+            vg_image.save("./vg.jpg", format="JPEG", quality=85)
+
+
+if __name__ == "__main__":
+    parser = ArgumentParser()
+    parser.add_argument("--model_path", type=str, required=True,
+                        default="deepseek-ai/deepseek-vl2",
+                        help="model name or local path to the model")
+    parser.add_argument("--chunk_size", type=int, default=-1,
+                        help="chunk size for the model for prefilling. "
+                             "When using 40G gpu for vl2-small, set a chunk_size for incremental_prefilling."
+                             "Otherwise, default value is -1, which means we do not use incremental_prefilling.")
+    args = parser.parse_args()
+    main(args)
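A typical invocation, matching the Quick Start commands above, is `python inference.py --model_path deepseek-ai/deepseek-vl2-small --chunk_size 512`; leaving `--chunk_size` at its default of -1 prefills the whole prompt in a single pass instead of using incremental prefilling.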
pyproject.toml
ADDED
@@ -0,0 +1,54 @@
+[build-system]
+requires = ["setuptools>=40.6.0", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "deepseek_vl2"
+version = "1.0.0"
+description = "DeepSeek-VL2"
+authors = [{name = "DeepSeek-AI"}]
+license = {file = "LICENSE-CODE"}
+urls = {homepage = "https://github.com/deepseek-ai/DeepSeek-VL2"}
+readme = "README.md"
+requires-python = ">=3.8"
+dependencies = [
+    "torch==2.0.1",
+    "transformers==4.38.2",
+    "timm>=0.9.16",
+    "xformers>=0.0.21",
+    "accelerate",
+    "sentencepiece",
+    "attrdict",
+    "einops",
+]
+
+[project.optional-dependencies]
+gradio = [
+    "gradio==3.48.0",
+    "gradio-client==0.6.1",
+    "mdtex2html==1.3.0",
+    "pypinyin==0.50.0",
+    "tiktoken==0.5.2",
+    "tqdm==4.64.0",
+    "colorama==0.4.5",
+    "Pygments==2.12.0",
+    "markdown==3.4.1",
+    "SentencePiece==0.1.96"
+]
+lint = [
+    "isort",
+    "black[jupyter] >= 22.6.0",
+    "pylint[spelling] >= 2.15.0",
+    "flake8",
+    "flake8-bugbear",
+    "flake8-comprehensions",
+    "flake8-docstrings",
+    "flake8-pyi",
+    "flake8-simplify",
+    "ruff",
+    "pyenchant",
+    "pre-commit",
+]
+
+[tool.setuptools]
+packages = {find = {exclude = ["images"]}}
requirements.txt
CHANGED
@@ -1,3 +1,20 @@
+torch==2.0.1
+transformers==4.38.2
+xformers>=0.0.21
+timm>=0.9.16
+accelerate
+sentencepiece
+attrdict
+einops
+
+# for gradio demo
+gradio==3.48.0
+gradio-client==0.6.1
+mdtex2html==1.3.0
+pypinyin==0.50.0
+tiktoken==0.5.2
+tqdm==4.64.0
+colorama==0.4.5
+Pygments==2.12.0
+markdown==3.4.1
+SentencePiece==0.1.96
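These pins mirror the core dependencies declared in `pyproject.toml` together with the `[gradio]` extras, so `pip install -r requirements.txt` sets up the same environment without an editable install of the package.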
web_demo.py
ADDED
@@ -0,0 +1,674 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
# Copyright (c) 2023-2024 DeepSeek.
|
2 |
+
#
|
3 |
+
# Permission is hereby granted, free of charge, to any person obtaining a copy of
|
4 |
+
# this software and associated documentation files (the "Software"), to deal in
|
5 |
+
# the Software without restriction, including without limitation the rights to
|
6 |
+
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
|
7 |
+
# the Software, and to permit persons to whom the Software is furnished to do so,
|
8 |
+
# subject to the following conditions:
|
9 |
+
#
|
10 |
+
# The above copyright notice and this permission notice shall be included in all
|
11 |
+
# copies or substantial portions of the Software.
|
12 |
+
#
|
13 |
+
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
14 |
+
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
|
15 |
+
# FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
|
16 |
+
# COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
|
17 |
+
# IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
18 |
+
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
19 |
+
|
20 |
+
# -*- coding:utf-8 -*-
|
21 |
+
from argparse import ArgumentParser
|
22 |
+
|
23 |
+
import io
|
24 |
+
import sys
|
25 |
+
import base64
|
26 |
+
from PIL import Image
|
27 |
+
|
28 |
+
import gradio as gr
|
29 |
+
import torch
|
30 |
+
|
31 |
+
from deepseek_vl2.serve.app_modules.gradio_utils import (
|
32 |
+
cancel_outputing,
|
33 |
+
delete_last_conversation,
|
34 |
+
reset_state,
|
35 |
+
reset_textbox,
|
36 |
+
wrap_gen_fn,
|
37 |
+
)
|
38 |
+
from deepseek_vl2.serve.app_modules.overwrites import reload_javascript
|
39 |
+
from deepseek_vl2.serve.app_modules.presets import (
|
40 |
+
CONCURRENT_COUNT,
|
41 |
+
MAX_EVENTS,
|
42 |
+
description,
|
43 |
+
description_top,
|
44 |
+
title
|
45 |
+
)
|
46 |
+
from deepseek_vl2.serve.app_modules.utils import (
|
47 |
+
configure_logger,
|
48 |
+
is_variable_assigned,
|
49 |
+
strip_stop_words,
|
50 |
+
parse_ref_bbox,
|
51 |
+
pil_to_base64,
|
52 |
+
display_example
|
53 |
+
)
|
54 |
+
|
55 |
+
from deepseek_vl2.serve.inference import (
|
56 |
+
convert_conversation_to_prompts,
|
57 |
+
deepseek_generate,
|
58 |
+
load_model,
|
59 |
+
)
|
60 |
+
from deepseek_vl2.models.conversation import SeparatorStyle
|
61 |
+
|
62 |
+
logger = configure_logger()
|
63 |
+
|
64 |
+
MODELS = [
|
65 |
+
"DeepSeek-VL2-tiny",
|
66 |
+
"DeepSeek-VL2-small",
|
67 |
+
"DeepSeek-VL2",
|
68 |
+
|
69 |
+
"deepseek-ai/deepseek-vl2-tiny",
|
70 |
+
"deepseek-ai/deepseek-vl2-small",
|
71 |
+
"deepseek-ai/deepseek-vl2",
|
72 |
+
]
|
73 |
+
|
74 |
+
DEPLOY_MODELS = dict()
|
75 |
+
IMAGE_TOKEN = "<image>"
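# DEPLOY_MODELS caches loaded (tokenizer, model, processor) tuples per model name so that an
# already-served model is not reloaded; IMAGE_TOKEN is the placeholder that marks where each
# uploaded image is spliced into the text prompt.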
|
76 |
+
|
77 |
+
examples_list = [
|
78 |
+
# visual grounding - 1
|
79 |
+
[
|
80 |
+
["images/visual_grounding_1.jpeg"],
|
81 |
+
"<|ref|>The giraffe at the back.<|/ref|>",
|
82 |
+
],
|
83 |
+
|
84 |
+
# visual grounding - 2
|
85 |
+
[
|
86 |
+
["images/visual_grounding_2.jpg"],
|
87 |
+
"找到<|ref|>淡定姐<|/ref|>",
|
88 |
+
],
|
89 |
+
|
90 |
+
# visual grounding - 3
|
91 |
+
[
|
92 |
+
["images/visual_grounding_3.png"],
|
93 |
+
"Find all the <|ref|>Watermelon slices<|/ref|>",
|
94 |
+
],
|
95 |
+
|
96 |
+
# grounding conversation
|
97 |
+
[
|
98 |
+
["images/grounding_conversation_1.jpeg"],
|
99 |
+
"<|grounding|>I want to throw out the trash now, what should I do?",
|
100 |
+
],
|
101 |
+
|
102 |
+
# in-context visual grounding
|
103 |
+
[
|
104 |
+
[
|
105 |
+
"images/incontext_visual_grounding_1.jpeg",
|
106 |
+
"images/icl_vg_2.jpeg"
|
107 |
+
],
|
108 |
+
"<|grounding|>In the first image, an object within the red rectangle is marked. Locate the object of the same category in the second image."
|
109 |
+
],
|
110 |
+
|
111 |
+
# vqa
|
112 |
+
[
|
113 |
+
["images/vqa_1.jpg"],
|
114 |
+
"Describe each stage of this image in detail",
|
115 |
+
],
|
116 |
+
|
117 |
+
# multi-images
|
118 |
+
[
|
119 |
+
[
|
120 |
+
"images/multi_image_1.jpeg",
|
121 |
+
"images/multi_image_2.jpeg",
|
122 |
+
"images/multi_image_3.jpeg"
|
123 |
+
],
|
124 |
+
"能帮我用这几个食材做一道菜吗?",
|
125 |
+
]
|
126 |
+
|
127 |
+
]
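# Each example pairs a list of image paths with a prompt. <|ref|>...<|/ref|> marks a phrase the
# model should localize with a bounding box, and <|grounding|> asks for a grounded answer;
# parse_ref_bbox later draws any predicted boxes onto the last uploaded image.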
|
128 |
+
|
129 |
+
|
130 |
+
def fetch_model(model_name: str, dtype=torch.bfloat16):
|
131 |
+
global args, DEPLOY_MODELS
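# args is the argparse namespace built in __main__; when --local_path is given it overrides the
# model name so a local checkpoint directory is loaded instead of the Hugging Face id.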
|
132 |
+
|
133 |
+
if args.local_path:
|
134 |
+
model_path = args.local_path
|
135 |
+
else:
|
136 |
+
model_path = model_name
|
137 |
+
|
138 |
+
if model_name in DEPLOY_MODELS:
|
139 |
+
model_info = DEPLOY_MODELS[model_name]
|
140 |
+
print(f"{model_name} has been loaded.")
|
141 |
+
else:
|
142 |
+
print(f"{model_name} is loading...")
|
143 |
+
DEPLOY_MODELS[model_name] = load_model(model_path, dtype=dtype)
|
144 |
+
print(f"Load {model_name} successfully...")
|
145 |
+
model_info = DEPLOY_MODELS[model_name]
|
146 |
+
|
147 |
+
return model_info
|
148 |
+
|
149 |
+
|
150 |
+
def generate_prompt_with_history(
|
151 |
+
text, images, history, vl_chat_processor, tokenizer, max_length=2048
|
152 |
+
):
|
153 |
+
"""
|
154 |
+
Generate a prompt with history for the deepseek application.
|
155 |
+
|
156 |
+
Args:
|
157 |
+
text (str): The text prompt.
|
158 |
+
images (list[PIL.Image.Image]): The image prompt.
|
159 |
+
history (list): List of previous conversation messages.
|
160 |
+
tokenizer: The tokenizer used for encoding the prompt.
|
161 |
+
max_length (int): The maximum length of the prompt.
|
162 |
+
|
163 |
+
Returns:
|
164 |
+
Conversation: A copy of the conversation with the new user message appended, ready for generation. If the prompt could not be fit within the max_length limit, returns None.
|
165 |
+
"""
|
166 |
+
global IMAGE_TOKEN
|
167 |
+
|
168 |
+
sft_format = "deepseek"
|
169 |
+
user_role_ind = 0
|
170 |
+
bot_role_ind = 1
|
171 |
+
|
172 |
+
# Initialize conversation
|
173 |
+
conversation = vl_chat_processor.new_chat_template()
|
174 |
+
|
175 |
+
if history:
|
176 |
+
conversation.messages = history
|
177 |
+
|
178 |
+
if images is not None and len(images) > 0:
|
179 |
+
|
180 |
+
num_image_tags = text.count(IMAGE_TOKEN)
|
181 |
+
num_images = len(images)
|
182 |
+
|
183 |
+
if num_images > num_image_tags:
|
184 |
+
pad_image_tags = num_images - num_image_tags
|
185 |
+
image_tokens = "\n".join([IMAGE_TOKEN] * pad_image_tags)
|
186 |
+
|
187 |
+
# append the <image> in a new line after the text prompt
|
188 |
+
text = image_tokens + "\n" + text
|
189 |
+
elif num_images < num_image_tags:
|
190 |
+
remove_image_tags = num_image_tags - num_images
|
191 |
+
text = text.replace(IMAGE_TOKEN, "", remove_image_tags)
|
192 |
+
|
193 |
+
# print(f"prompt = {text}, len(images) = {len(images)}")
|
194 |
+
text = (text, images)
|
195 |
+
|
196 |
+
conversation.append_message(conversation.roles[user_role_ind], text)
|
197 |
+
conversation.append_message(conversation.roles[bot_role_ind], "")
|
198 |
+
|
199 |
+
# Create a copy of the conversation to avoid history truncation in the UI
|
200 |
+
conversation_copy = conversation.copy()
|
201 |
+
logger.info("=" * 80)
|
202 |
+
logger.info(get_prompt(conversation))
|
203 |
+
|
204 |
+
rounds = len(conversation.messages) // 2
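# Drop the oldest user/assistant pair until the encoded prompt fits within max_length; the copy
# made above is returned so the UI keeps the full, untruncated history.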
|
205 |
+
|
206 |
+
for _ in range(rounds):
|
207 |
+
current_prompt = get_prompt(conversation)
|
208 |
+
current_prompt = (
|
209 |
+
current_prompt.replace("</s>", "")
|
210 |
+
if sft_format == "deepseek"
|
211 |
+
else current_prompt
|
212 |
+
)
|
213 |
+
|
214 |
+
if torch.tensor(tokenizer.encode(current_prompt)).size(-1) <= max_length:
|
215 |
+
return conversation_copy
|
216 |
+
|
217 |
+
if len(conversation.messages) % 2 != 0:
|
218 |
+
gr.Error("The messages between user and assistant are not paired.")
|
219 |
+
return
|
220 |
+
|
221 |
+
try:
|
222 |
+
for _ in range(2): # pop out two messages in a row
|
223 |
+
conversation.messages.pop(0)
|
224 |
+
except IndexError:
|
225 |
+
gr.Error("Input text processing failed, unable to respond in this round.")
|
226 |
+
return None
|
227 |
+
|
228 |
+
gr.Error("Prompt could not be generated within max_length limit.")
|
229 |
+
return None
|
230 |
+
|
231 |
+
|
232 |
+
def to_gradio_chatbot(conv):
|
233 |
+
"""Convert the conversation to gradio chatbot format."""
|
234 |
+
ret = []
|
235 |
+
for i, (role, msg) in enumerate(conv.messages[conv.offset:]):
|
236 |
+
if i % 2 == 0:
|
237 |
+
if type(msg) is tuple:
|
238 |
+
msg, images = msg
|
239 |
+
|
240 |
+
if isinstance(images, list):
|
241 |
+
for j, image in enumerate(images):
|
242 |
+
if isinstance(image, str):
|
243 |
+
with open(image, "rb") as f:
|
244 |
+
data = f.read()
|
245 |
+
img_b64_str = base64.b64encode(data).decode()
|
246 |
+
image_str = (f'<img src="data:image/png;base64,{img_b64_str}" '
|
247 |
+
f'alt="user upload image" style="max-width: 300px; height: auto;" />')
|
248 |
+
else:
|
249 |
+
image_str = pil_to_base64(image, f"user upload image_{j}", max_size=800, min_size=400)
|
250 |
+
|
251 |
+
# replace the <image> tag in the message
|
252 |
+
msg = msg.replace(IMAGE_TOKEN, image_str, 1)
|
253 |
+
|
254 |
+
else:
|
255 |
+
pass
|
256 |
+
|
257 |
+
ret.append([msg, None])
|
258 |
+
else:
|
259 |
+
ret[-1][-1] = msg
|
260 |
+
return ret
|
261 |
+
|
262 |
+
|
263 |
+
def to_gradio_history(conv):
|
264 |
+
"""Convert the conversation to gradio history state."""
|
265 |
+
return conv.messages[conv.offset:]
|
266 |
+
|
267 |
+
|
268 |
+
def get_prompt(conv) -> str:
|
269 |
+
"""Get the prompt for generation."""
|
270 |
+
system_prompt = conv.system_template.format(system_message=conv.system_message)
|
271 |
+
if conv.sep_style == SeparatorStyle.DeepSeek:
|
272 |
+
seps = [conv.sep, conv.sep2]
|
273 |
+
if system_prompt == "" or system_prompt is None:
|
274 |
+
ret = ""
|
275 |
+
else:
|
276 |
+
ret = system_prompt + seps[0]
|
277 |
+
for i, (role, message) in enumerate(conv.messages):
|
278 |
+
if message:
|
279 |
+
if type(message) is tuple: # multimodal message
|
280 |
+
message, _ = message
|
281 |
+
ret += role + ": " + message + seps[i % 2]
|
282 |
+
else:
|
283 |
+
ret += role + ":"
|
284 |
+
return ret
|
285 |
+
else:
|
286 |
+
return conv.get_prompt()
|
287 |
+
|
288 |
+
|
289 |
+
def transfer_input(input_text, input_images):
|
290 |
+
print("transferring input text and input image")
|
291 |
+
|
292 |
+
return (
|
293 |
+
input_text,
|
294 |
+
input_images,
|
295 |
+
gr.update(value=""),
|
296 |
+
gr.update(value=None),
|
297 |
+
gr.Button(visible=True)
|
298 |
+
)
|
299 |
+
|
300 |
+
|
301 |
+
@wrap_gen_fn
|
302 |
+
def predict(
|
303 |
+
text,
|
304 |
+
images,
|
305 |
+
chatbot,
|
306 |
+
history,
|
307 |
+
top_p,
|
308 |
+
temperature,
|
309 |
+
repetition_penalty,
|
310 |
+
max_length_tokens,
|
311 |
+
max_context_length_tokens,
|
312 |
+
model_select_dropdown,
|
313 |
+
):
|
314 |
+
"""
|
315 |
+
Function to predict the response based on the user's input and selected model.
|
316 |
+
|
317 |
+
Parameters:
|
318 |
+
text (str): The input text from the user.
|
319 |
+
images (list): The images uploaded by the user.
|
320 |
+
chatbot (list): The gradio chatbot message pairs.
|
321 |
+
history (list): The conversation history state.
|
322 |
+
top_p (float): The top-p sampling parameter for the model.
|
323 |
+
temperature (float): The temperature parameter for the model.
repetition_penalty (float): The repetition penalty applied during generation.
|
324 |
+
max_length_tokens (int): The maximum number of tokens to generate.
|
325 |
+
max_context_length_tokens (int): The maximum number of context tokens kept from history.
|
326 |
+
model_select_dropdown (str): The selected model from the dropdown.
|
327 |
+
|
328 |
+
Returns:
|
329 |
+
generator: A generator that yields the chatbot outputs, history, and status.
|
330 |
+
"""
|
331 |
+
print("running the prediction function")
|
332 |
+
try:
|
333 |
+
tokenizer, vl_gpt, vl_chat_processor = fetch_model(model_select_dropdown)
|
334 |
+
|
335 |
+
if text == "":
|
336 |
+
yield chatbot, history, "Empty context."
|
337 |
+
return
|
338 |
+
except KeyError:
|
339 |
+
yield [[text, "No Model Found"]], [], "No Model Found"
|
340 |
+
return
|
341 |
+
|
342 |
+
if images is None:
|
343 |
+
images = []
|
344 |
+
|
345 |
+
# load images
|
346 |
+
pil_images = []
|
347 |
+
for img_or_file in images:
|
348 |
+
try:
|
349 |
+
# load as pil image
|
350 |
+
if isinstance(img_or_file, Image.Image):
|
351 |
+
pil_images.append(img_or_file)
|
352 |
+
else:
|
353 |
+
image = Image.open(img_or_file.name).convert("RGB")
|
354 |
+
pil_images.append(image)
|
355 |
+
except Exception as e:
|
356 |
+
print(f"Error loading image: {e}")
|
357 |
+
|
358 |
+
conversation = generate_prompt_with_history(
|
359 |
+
text,
|
360 |
+
pil_images,
|
361 |
+
history,
|
362 |
+
vl_chat_processor,
|
363 |
+
tokenizer,
|
364 |
+
max_length=max_context_length_tokens,
|
365 |
+
)
|
366 |
+
all_conv, last_image = convert_conversation_to_prompts(conversation)
|
367 |
+
|
368 |
+
stop_words = conversation.stop_str
|
369 |
+
gradio_chatbot_output = to_gradio_chatbot(conversation)
|
370 |
+
|
371 |
+
full_response = ""
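# Stream the answer: deepseek_generate yields text chunks, which are accumulated into
# full_response and pushed to the chatbot so the UI updates incrementally.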
|
372 |
+
with torch.no_grad():
|
373 |
+
for x in deepseek_generate(
|
374 |
+
conversations=all_conv,
|
375 |
+
vl_gpt=vl_gpt,
|
376 |
+
vl_chat_processor=vl_chat_processor,
|
377 |
+
tokenizer=tokenizer,
|
378 |
+
stop_words=stop_words,
|
379 |
+
max_length=max_length_tokens,
|
380 |
+
temperature=temperature,
|
381 |
+
repetition_penalty=repetition_penalty,
|
382 |
+
top_p=top_p,
|
383 |
+
chunk_size=args.chunk_size
|
384 |
+
):
|
385 |
+
full_response += x
|
386 |
+
response = strip_stop_words(full_response, stop_words)
|
387 |
+
conversation.update_last_message(response)
|
388 |
+
gradio_chatbot_output[-1][1] = response
|
389 |
+
|
390 |
+
# sys.stdout.write(x)
|
391 |
+
# sys.stdout.flush()
|
392 |
+
|
393 |
+
yield gradio_chatbot_output, to_gradio_history(conversation), "Generating..."
|
394 |
+
|
395 |
+
if last_image is not None:
|
396 |
+
# TODO always render the last image's visual grounding image
|
397 |
+
vg_image = parse_ref_bbox(response, last_image)
|
398 |
+
if vg_image is not None:
|
399 |
+
vg_base64 = pil_to_base64(vg_image, f"vg", max_size=800, min_size=400)
|
400 |
+
gradio_chatbot_output[-1][1] += vg_base64
|
401 |
+
yield gradio_chatbot_output, to_gradio_history(conversation), "Generating..."
|
402 |
+
|
403 |
+
print("flushed result to gradio")
|
404 |
+
torch.cuda.empty_cache()
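# Release cached GPU memory after each generation to lower the resident footprint between requests.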
|
405 |
+
|
406 |
+
if is_variable_assigned("x"):
|
407 |
+
print(f"{model_select_dropdown}:\n{text}\n{'-' * 80}\n{x}\n{'=' * 80}")
|
408 |
+
print(
|
409 |
+
f"temperature: {temperature}, "
|
410 |
+
f"top_p: {top_p}, "
|
411 |
+
f"repetition_penalty: {repetition_penalty}, "
|
412 |
+
f"max_length_tokens: {max_length_tokens}"
|
413 |
+
)
|
414 |
+
|
415 |
+
yield gradio_chatbot_output, to_gradio_history(conversation), "Generate: Success"
|
416 |
+
|
417 |
+
|
418 |
+
# @wrap_gen_fn
|
419 |
+
def retry(
|
420 |
+
text,
|
421 |
+
images,
|
422 |
+
chatbot,
|
423 |
+
history,
|
424 |
+
top_p,
|
425 |
+
temperature,
|
426 |
+
repetition_penalty,
|
427 |
+
max_length_tokens,
|
428 |
+
max_context_length_tokens,
|
429 |
+
model_select_dropdown,
|
430 |
+
):
|
431 |
+
if len(history) == 0:
|
432 |
+
yield (chatbot, history, "Empty context")
|
433 |
+
return
|
434 |
+
|
435 |
+
chatbot.pop()
|
436 |
+
history.pop()
|
437 |
+
text = history.pop()[-1]
|
438 |
+
if type(text) is tuple:
|
439 |
+
text, image = text
|
440 |
+
|
441 |
+
yield from predict(
|
442 |
+
text,
|
443 |
+
images,
|
444 |
+
chatbot,
|
445 |
+
history,
|
446 |
+
top_p,
|
447 |
+
temperature,
|
448 |
+
repetition_penalty,
|
449 |
+
max_length_tokens,
|
450 |
+
max_context_length_tokens,
|
451 |
+
model_select_dropdown,
|
453 |
+
)
|
454 |
+
|
455 |
+
|
456 |
+
def preview_images(files):
|
457 |
+
if files is None:
|
458 |
+
return []
|
459 |
+
|
460 |
+
image_paths = []
|
461 |
+
for file in files:
|
462 |
+
# use file.name to get the file path
|
463 |
+
# image = Image.open(file.name)
|
464 |
+
image_paths.append(file.name)
|
465 |
+
return image_paths  # return all image paths for preview
|
466 |
+
|
467 |
+
|
468 |
+
def build_demo(args):
|
469 |
+
# fetch model
|
470 |
+
if not args.lazy_load:
|
471 |
+
fetch_model(args.model_name)
|
472 |
+
|
473 |
+
with open("deepseek_vl2/serve/assets/custom.css", "r", encoding="utf-8") as f:
|
474 |
+
customCSS = f.read()
|
475 |
+
|
476 |
+
with gr.Blocks(theme=gr.themes.Soft()) as demo:
|
477 |
+
history = gr.State([])
|
478 |
+
input_text = gr.State()
|
479 |
+
input_images = gr.State()
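# gr.State holds per-session values (history plus the transferred text/images) that are passed
# between the transfer_input and predict callbacks without being rendered.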
|
480 |
+
|
481 |
+
with gr.Row():
|
482 |
+
gr.HTML(title)
|
483 |
+
status_display = gr.Markdown("Success", elem_id="status_display")
|
484 |
+
gr.Markdown(description_top)
|
485 |
+
|
486 |
+
with gr.Row(equal_height=True):
|
487 |
+
with gr.Column(scale=4):
|
488 |
+
with gr.Row():
|
489 |
+
chatbot = gr.Chatbot(
|
490 |
+
elem_id="deepseek_chatbot",
|
491 |
+
show_share_button=True,
|
492 |
+
bubble_full_width=False,
|
493 |
+
height=600,
|
494 |
+
)
|
495 |
+
with gr.Row():
|
496 |
+
with gr.Column(scale=4):
|
497 |
+
text_box = gr.Textbox(
|
498 |
+
show_label=False, placeholder="Enter text", container=False
|
499 |
+
)
|
500 |
+
with gr.Column(
|
501 |
+
min_width=70,
|
502 |
+
):
|
503 |
+
submitBtn = gr.Button("Send")
|
504 |
+
with gr.Column(
|
505 |
+
min_width=70,
|
506 |
+
):
|
507 |
+
cancelBtn = gr.Button("Stop")
|
508 |
+
with gr.Row():
|
509 |
+
emptyBtn = gr.Button(
|
510 |
+
"🧹 New Conversation",
|
511 |
+
)
|
512 |
+
retryBtn = gr.Button("🔄 Regenerate")
|
513 |
+
delLastBtn = gr.Button("🗑️ Remove Last Turn")
|
514 |
+
|
515 |
+
with gr.Column():
|
516 |
+
upload_images = gr.Files(file_types=["image"], show_label=True)
|
517 |
+
gallery = gr.Gallery(columns=[3], height="200px", show_label=True)
|
518 |
+
|
519 |
+
upload_images.change(preview_images, inputs=upload_images, outputs=gallery)
|
520 |
+
|
521 |
+
with gr.Tab(label="Parameter Setting") as parameter_row:
|
522 |
+
top_p = gr.Slider(
|
523 |
+
minimum=0.0,
|
524 |
+
maximum=1.0,
|
525 |
+
value=0.9,
|
526 |
+
step=0.05,
|
527 |
+
interactive=True,
|
528 |
+
label="Top-p",
|
529 |
+
)
|
530 |
+
temperature = gr.Slider(
|
531 |
+
minimum=0,
|
532 |
+
maximum=1.0,
|
533 |
+
value=0.1,
|
534 |
+
step=0.1,
|
535 |
+
interactive=True,
|
536 |
+
label="Temperature",
|
537 |
+
)
|
538 |
+
repetition_penalty = gr.Slider(
|
539 |
+
minimum=0.0,
|
540 |
+
maximum=2.0,
|
541 |
+
value=1.1,
|
542 |
+
step=0.1,
|
543 |
+
interactive=True,
|
544 |
+
label="Repetition penalty",
|
545 |
+
)
|
546 |
+
max_length_tokens = gr.Slider(
|
547 |
+
minimum=0,
|
548 |
+
maximum=4096,
|
549 |
+
value=2048,
|
550 |
+
step=8,
|
551 |
+
interactive=True,
|
552 |
+
label="Max Generation Tokens",
|
553 |
+
)
|
554 |
+
max_context_length_tokens = gr.Slider(
|
555 |
+
minimum=0,
|
556 |
+
maximum=8192,
|
557 |
+
value=4096,
|
558 |
+
step=128,
|
559 |
+
interactive=True,
|
560 |
+
label="Max History Tokens",
|
561 |
+
)
|
562 |
+
model_select_dropdown = gr.Dropdown(
|
563 |
+
label="Select Models",
|
564 |
+
choices=[args.model_name],
|
565 |
+
multiselect=False,
|
566 |
+
value=args.model_name,
|
567 |
+
interactive=True,
|
568 |
+
)
|
569 |
+
|
570 |
+
# show images, but not visible
|
571 |
+
show_images = gr.HTML(visible=False)
|
572 |
+
# show_images = gr.Image(type="pil", interactive=False, visible=False)
|
573 |
+
|
574 |
+
def format_examples(examples_list):
|
575 |
+
examples = []
|
576 |
+
for images, texts in examples_list:
|
577 |
+
examples.append([images, display_example(images), texts])
|
578 |
+
|
579 |
+
return examples
|
580 |
+
|
581 |
+
gr.Examples(
|
582 |
+
examples=format_examples(examples_list),
|
583 |
+
inputs=[upload_images, show_images, text_box],
|
584 |
+
)
|
585 |
+
|
586 |
+
gr.Markdown(description)
|
587 |
+
|
588 |
+
input_widgets = [
|
589 |
+
input_text,
|
590 |
+
input_images,
|
591 |
+
chatbot,
|
592 |
+
history,
|
593 |
+
top_p,
|
594 |
+
temperature,
|
595 |
+
repetition_penalty,
|
596 |
+
max_length_tokens,
|
597 |
+
max_context_length_tokens,
|
598 |
+
model_select_dropdown,
|
599 |
+
]
|
600 |
+
output_widgets = [chatbot, history, status_display]
|
601 |
+
|
602 |
+
transfer_input_args = dict(
|
603 |
+
fn=transfer_input,
|
604 |
+
inputs=[text_box, upload_images],
|
605 |
+
outputs=[input_text, input_images, text_box, upload_images, submitBtn],
|
606 |
+
show_progress=True,
|
607 |
+
)
|
608 |
+
|
609 |
+
predict_args = dict(
|
610 |
+
fn=predict,
|
611 |
+
inputs=input_widgets,
|
612 |
+
outputs=output_widgets,
|
613 |
+
show_progress=True,
|
614 |
+
)
|
615 |
+
|
616 |
+
retry_args = dict(
|
617 |
+
fn=retry,
|
618 |
+
inputs=input_widgets,
|
619 |
+
outputs=output_widgets,
|
620 |
+
show_progress=True,
|
621 |
+
)
|
622 |
+
|
623 |
+
reset_args = dict(
|
624 |
+
fn=reset_textbox, inputs=[], outputs=[text_box, status_display]
|
625 |
+
)
|
626 |
+
|
627 |
+
predict_events = [
|
628 |
+
text_box.submit(**transfer_input_args).then(**predict_args),
|
629 |
+
submitBtn.click(**transfer_input_args).then(**predict_args),
|
630 |
+
]
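# Pressing Enter or clicking Send first moves the textbox/file inputs into the gr.State slots
# (transfer_input), then streams predict's outputs into the chatbot, history and status widgets.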
|
631 |
+
|
632 |
+
emptyBtn.click(reset_state, outputs=output_widgets, show_progress=True)
|
633 |
+
emptyBtn.click(**reset_args)
|
634 |
+
retryBtn.click(**retry_args)
|
635 |
+
|
636 |
+
delLastBtn.click(
|
637 |
+
delete_last_conversation,
|
638 |
+
[chatbot, history],
|
639 |
+
output_widgets,
|
640 |
+
show_progress=True,
|
641 |
+
)
|
642 |
+
|
643 |
+
cancelBtn.click(cancel_outputing, [], [status_display], cancels=predict_events)
|
644 |
+
|
645 |
+
return demo
|
646 |
+
|
647 |
+
|
648 |
+
if __name__ == "__main__":
|
649 |
+
parser = ArgumentParser()
|
650 |
+
parser.add_argument("--model_name", type=str, required=True, choices=MODELS, help="model name")
|
651 |
+
parser.add_argument("--local_path", type=str, default="", help="huggingface ckpt, optional")
|
652 |
+
parser.add_argument("--ip", type=str, default="0.0.0.0", help="ip address")
|
653 |
+
parser.add_argument("--port", type=int, default=37913, help="port number")
|
654 |
+
parser.add_argument("--root_path", type=str, default="", help="root path")
|
655 |
+
parser.add_argument("--lazy_load", action='store_true')
|
656 |
+
parser.add_argument("--chunk_size", type=int, default=-1,
|
657 |
+
help="chunk size for the model for prefiiling. "
|
658 |
+
"When using 40G gpu for vl2-small, set a chunk_size for incremental_prefilling."
|
659 |
+
"Otherwise, default value is -1, which means we do not use incremental_prefilling.")
|
660 |
+
args = parser.parse_args()
|
661 |
+
|
662 |
+
demo = build_demo(args)
|
663 |
+
demo.title = "DeepSeek-VL2 Chatbot"
|
664 |
+
|
665 |
+
reload_javascript()
|
666 |
+
demo.queue(concurrency_count=CONCURRENT_COUNT, max_size=MAX_EVENTS).launch(
|
667 |
+
# share=False,
|
668 |
+
share=True,
|
669 |
+
favicon_path="deepseek_vl2/serve/assets/favicon.ico",
|
670 |
+
inbrowser=False,
|
671 |
+
server_name=args.ip,
|
672 |
+
server_port=args.port,
|
673 |
+
root_path=args.root_path
|
674 |
+
)
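# Example launch commands, assuming this script is saved as web_demo.py and the deepseek_vl2
# package with its serve assets is importable (values below are illustrative):
#   python web_demo.py --model_name deepseek-ai/deepseek-vl2-tiny
#   python web_demo.py --model_name deepseek-ai/deepseek-vl2-small --chunk_size 512
# The second form enables incremental prefilling for GPUs with less memory, per the
# --chunk_size help text above.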
|