NVIDIA: Nemotron Nano 12B 2 VL (free)
NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba's memory-efficient sequence modeling for significantly higher throughput and lower latency. The model accepts text and multi-image document inputs and produces natural-language outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical character recognition, chart reasoning, and multimodal comprehension. Nemotron Nano 2 VL achieves leading results on OCRBench v2 and an average score of ≈74 across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open VL baselines. With Efficient Video Sampling (EVS), it handles long-form videos while reducing inference cost. Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, with deployment supported across NeMo, NIM, and major inference runtimes.
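Since the model accepts image and text inputs and is served through standard inference runtimes, a request typically pairs image content with a text question in a single user message. The sketch below builds such a payload in the OpenAI-compatible chat-completions shape many runtimes accept; the model ID, endpoint behavior, and `max_tokens` value are illustrative assumptions, not confirmed specifics — check your provider's catalog for the exact identifier.

```python
def build_vision_request(image_url: str, question: str,
                         model: str = "nvidia/nemotron-nano-12b-v2-vl") -> dict:
    """Build a multimodal chat-completions payload (OpenAI-compatible shape).

    The default model ID is a hypothetical placeholder; verify it against
    your provider before sending the request.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # One message can carry both the image and the question.
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
        # Cap the completion well below the 128K output limit listed below.
        "max_tokens": 1024,
    }


payload = build_vision_request(
    "https://example.com/invoice.png",
    "Extract the total amount from this invoice.",
)
```

A document-intelligence workflow would POST this payload as JSON to the runtime's chat-completions endpoint and read the answer from the returned message content.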
Pricing per 1M Tokens
| Token Type | Price |
| --- | --- |
| Input (Prompt) | Free |
| Output (Completion) | Free |
| Cache Read | Free |
| Cache Write | Free |
| Image | N/A |
Specifications
| Specification | Value |
| --- | --- |
| Context Length | 128K |
| Max Output Tokens | 128K |
| Input Modalities | Image + Text + Video |
| Output Modalities | Text |
| Tokenizer | Other |
| Instruct Type | N/A |
| Top Provider Context | 128K |
| Top Provider Max Output | 128K |
| Moderated | No |
Last updated: March 23, 2026
First tracked: March 23, 2026