ValueError: InternVLRewardModel has no vLLM implementation and the Transformers implementation is not compatible with vLLM. Try setting VLLM_USE_V1=0.

#2
by JH9 - opened

I downloaded this model and launched it with vLLM (v0.8.4),
but I got this error:
ValueError: InternVLRewardModel has no vLLM implementation and the Transformers implementation is not compatible with vLLM. Try setting VLLM_USE_V1=0.
How can I solve this problem?

OpenGVLab org

Thank you for your interest in our work.
VisualPRM outputs step scores in a single forward pass and does not need vLLM acceleration.
We evaluate our model with VLMEvalKit; you can refer to the evaluation code there to see how to use VisualPRM to select the best response.
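If you still want to serve the checkpoint with vLLM, the error message itself points at the V0 engine fallback. A minimal sketch, assuming the `OpenGVLab/VisualPRM-8B` model id (substitute your local download path if different):

```shell
# Fall back to vLLM's V0 engine, as the error message suggests.
export VLLM_USE_V1=0
# Model id is an assumption -- point this at your downloaded checkpoint if needed.
vllm serve OpenGVLab/VisualPRM-8B --trust-remote-code
```

Note this only silences the V1 incompatibility check; whether the model actually runs under the V0 engine depends on vLLM having an implementation for the architecture, so the VLMEvalKit route above is the safer option.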
