Liquid AI: LFM2-24B-A2B

Model ID: liquid/lfm-2-24b-a2b

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
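The figures above (24B total parameters, 2B active per token, fitting in 32 GB of RAM) can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, assuming 8-bit weight quantization (one byte per parameter — an assumption for illustration, not stated in the listing):

```python
# Back-of-envelope memory and sparsity estimate for LFM2-24B-A2B.
TOTAL_PARAMS = 24e9   # 24B total parameters (from the listing)
ACTIVE_PARAMS = 2e9   # 2B active parameters per token (from the listing)
BYTES_PER_PARAM = 1   # assumption: 8-bit quantized weights

# Rough weight footprint in GB; at 1 byte/param, 24B params ~ 24 GB,
# which leaves headroom within a 32 GB RAM budget.
weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9

# Fraction of the model's weights exercised per token by the MoE router.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS

print(f"~{weights_gb:.0f} GB weights, ~{active_fraction:.1%} active per token")
```

Note that activations, KV cache, and runtime overhead add to the weight footprint, which is why the full 32 GB budget matters rather than just the ~24 GB of weights.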

Pricing per 1M Tokens

Input (Prompt): $0.03
Output (Completion): $0.12
Cache Read: Free
Cache Write: Free
Image: N/A
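The per-token rates above make request-level cost estimation straightforward. A minimal sketch (the function name and example token counts are illustrative, not from the listing):

```python
# Rates from the pricing table (USD per 1M tokens); cache reads/writes are free.
INPUT_PER_M = 0.03
OUTPUT_PER_M = 0.12

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of a single request at the listed rates."""
    return (prompt_tokens * INPUT_PER_M
            + completion_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 10K-token prompt producing a 2K-token completion.
print(f"${request_cost(10_000, 2_000):.6f}")  # → $0.000540
```

At these rates, even a prompt filling most of the 33K context costs well under a tenth of a cent.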

Specifications

Context Length: 33K
Max Output Tokens: N/A
Input Modalities: Text
Output Modalities: Text
Tokenizer: Other
Instruct Type: N/A
Top Provider Context: 33K
Top Provider Max Output: N/A
Moderated: No

Last updated: March 23, 2026

First tracked: March 23, 2026