Qwen: Qwen2.5-VL 7B Instruct
Qwen2.5-VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements:

- SoTA understanding of images at various resolutions and aspect ratios: Qwen2.5-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
- Understanding of videos over 20 minutes long: Qwen2.5-VL can process videos of 20+ minutes for high-quality video-based question answering, dialog, content creation, etc.
- Agent capabilities for operating mobile devices, robots, etc.: with its complex reasoning and decision-making abilities, Qwen2.5-VL can be integrated with devices such as mobile phones and robots for automatic operation based on the visual environment and text instructions.
- Multilingual support: to serve global users, besides English and Chinese, Qwen2.5-VL supports understanding of text in many languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

For more details, see this [blog post](https://qwenlm.github.io/blog/qwen2-vl/) and the [GitHub repo](https://github.com/QwenLM/Qwen2-VL). Usage of this model is subject to the [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).
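As a minimal sketch of sending an image to the model, the snippet below builds an OpenAI-style chat-completions payload with an inline base64 image. The model slug `qwen/qwen2.5-vl-7b-instruct` and the content-part structure are assumptions; check your provider's documentation for the exact identifier and request schema.

```python
import base64
import json

# Hypothetical model slug -- verify the exact identifier with your provider.
MODEL = "qwen/qwen2.5-vl-7b-instruct"

def build_vision_request(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build an OpenAI-style chat-completions payload with an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:{mime};base64,{b64}"},
                    },
                ],
            }
        ],
    }

# Illustrative call with placeholder bytes standing in for a real image file.
payload = build_vision_request("What text appears in this image?", b"\x89PNG")
print(json.dumps(payload)[:40])
```

The same payload can then be POSTed to any OpenAI-compatible chat-completions endpoint.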
Pricing per 1M Tokens

| Type | Price |
| --- | --- |
| Input (Prompt) | $0.20 |
| Output (Completion) | $0.20 |
| Cache Read | Free |
| Cache Write | Free |
| Image | N/A |
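At $0.20 per 1M tokens for both input and output, per-request cost is straightforward to estimate. A quick sketch (the token counts in the example are illustrative, not from the source):

```python
INPUT_PRICE_PER_M = 0.20   # USD per 1M prompt tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1M completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request; cache reads/writes are free."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 1k-token completion:
print(f"${request_cost(10_000, 1_000):.4f}")  # → $0.0022
```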
Specifications

| Specification | Value |
| --- | --- |
| Context Length | 33K |
| Max Output Tokens | N/A |
| Input Modalities | Text + Image |
| Output Modalities | Text |
| Tokenizer | Qwen |
| Instruct Type | N/A |
| Top Provider Context | 33K |
| Top Provider Max Output | N/A |
| Moderated | No |
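Since the prompt and the requested completion share the context window, it can be useful to check a request against the limit before sending it. A minimal sketch, assuming the listed "33K" context means 32,768 tokens (an assumption; verify the exact figure with your provider):

```python
# Assumed interpretation of the "33K" context length -- verify with your provider.
CONTEXT_LIMIT = 32_768

def fits_context(prompt_tokens: int, max_new_tokens: int) -> bool:
    """Check that prompt plus requested completion stays within the window."""
    return prompt_tokens + max_new_tokens <= CONTEXT_LIMIT

print(fits_context(30_000, 2_000))  # → True
print(fits_context(31_000, 2_000))  # → False
```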
Last updated: March 23, 2026
First tracked: March 23, 2026