Liquid AI Releases World’s Fastest and Best-Performing Open-Source Small Foundation Models


Author: Business Wire

Next-generation edge models outperform top global competitors; now available open source on Hugging Face

CAMBRIDGE, Mass.: Liquid AI announced today the launch of its next-generation Liquid Foundation Models (LFM2), which set new records in speed, energy efficiency, and quality in the edge model class. This release builds on Liquid AI’s first-principles approach to model design. Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference and better generalization – especially in long-context or resource-constrained scenarios.

Liquid AI has open-sourced LFM2, releasing the novel architecture in full transparency to the world. LFM2's weights can now be downloaded from Hugging Face and are also available through the Liquid Playground for testing. Liquid AI also announced that the models will be integrated into its Edge AI platform and an iOS-native consumer app for testing in the coming days.

“At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,” said Ramin Hasani, co-founder and CEO of Liquid AI. “LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in the series of powerful models we will be releasing in the coming months.”

The release of LFM2 marks a milestone in global AI competition and is the first time a U.S. company has publicly demonstrated clear efficiency and quality gains over China’s leading open-source small language models, including those developed by Alibaba and ByteDance.

In head-to-head evaluations, LFM2 models outperform state-of-the-art competitors across speed, latency, and instruction-following benchmarks.

Shifting large generative models from distant clouds to lean, on-device LLMs unlocks millisecond latency, offline resilience, and data-sovereign privacy: capabilities essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must reason in real time. Aggregating high-growth verticals such as the edge AI stack in consumer electronics, robotics, smart appliances, finance, e-commerce, and education (before counting defense, space, and cybersecurity allocations) pushes the TAM for compact, private foundation models toward $1 trillion by 2035.

Liquid AI is engaged with a large number of Fortune 500 companies in these sectors. The company offers ultra-efficient small multimodal foundation models together with a secure, enterprise-grade deployment stack that turns every device into an AI device, locally. This positions Liquid AI to capture an outsized share of the market as enterprises pivot from cloud LLMs to cost-efficient, fast, private, on-prem intelligence.

About Liquid AI:

Liquid AI is at the forefront of artificial intelligence innovation, developing foundation models that set new standards for performance and efficiency. With the mission to build efficient, general-purpose AI systems at every scale, Liquid AI continues to push the boundaries of how much intelligence can be packed into phones, laptops, cars, satellites, and other devices. Learn more at www.liquid.ai.

Source: Business Wire

