
Cerebras and Neural Magic Unlock the Power of Sparse LLMs for Faster, More Power Efficient, Lower Cost AI Model Training and Deployment


Business Wire

70% Sparse Models Are 3x Faster with No Loss of Accuracy

SUNNYVALE, Calif. & CAMBRIDGE, Mass.: Cerebras Systems, the pioneer in accelerating generative AI, and Neural Magic, a leader in high-performance enterprise inference servers, today announced the groundbreaking results of their collaboration on sparse training and deployment of large language models (LLMs). The collaboration achieved an unprecedented 70% parameter reduction with full accuracy recovery: training on Cerebras CS-3 systems and deploying on Neural Magic inference server solutions yields significantly faster, more efficient, and lower-cost LLMs, making them accessible to a broader range of organizations and industries.
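To give a rough sense of why a 70% parameter reduction translates into speed and memory savings, here is a minimal NumPy sketch (not the companies' actual implementation) of a sparse matrix-vector product in CSR form: a runtime that stores and multiplies only the nonzero weights does roughly 30% of the work of the dense baseline.

```python
import numpy as np

def sparse_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector, touching only nonzeros."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

rng = np.random.default_rng(1)
dense = rng.normal(size=(4, 10))
# zero out the ~70% smallest-magnitude weights (magnitude pruning)
dense[np.abs(dense) < np.quantile(np.abs(dense), 0.70)] = 0.0

# hand-rolled CSR conversion: store only the remaining ~30% of weights
values, col_idx, row_ptr = [], [], [0]
for row in dense:
    nz = np.flatnonzero(row)
    values.extend(row[nz])
    col_idx.extend(nz)
    row_ptr.append(len(values))

x = rng.normal(size=10)
y = sparse_matvec(np.array(values), np.array(col_idx), np.array(row_ptr), x)

print(f"multiply-adds: {len(values)} sparse vs {dense.size} dense")
```

The sketch only illustrates the arithmetic savings; the accuracy-recovery techniques (sparse pretraining and fine-tuning) described below are what make such aggressive pruning usable in practice.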

“For the first time ever, we achieved up to 70% sparsity for a foundational model, such as Llama, with full accuracy recovery for challenging downstream tasks,” said Sean Lie, CTO and co-founder of Cerebras. “This breakthrough enables scalable training and accelerated inference – our CS-3 system provides near theoretical acceleration for training sparse LLMs, and Neural Magic’s inference server, DeepSparse, delivers up to 8.6x faster inference than dense, baseline models.”

With native hardware support for unstructured sparsity, the Cerebras CS-3 system accelerates training of models at 70% sparsity and beyond – well past what GPUs such as the H100 and B100 can exploit. GPU sparsity support is limited and rigid: it tops out at 50%, using a fixed 2:4 structured pattern. With the CS-3 system, purpose-built for sparse models and offering the industry's highest memory bandwidth, AI practitioners can apply novel techniques from Neural Magic, such as sparse pretraining and sparse fine-tuning on their own datasets, to create highly sparse LLMs without sacrificing accuracy. The result is faster, smaller models that retain the full accuracy of their slower, dense counterparts.
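The contrast between the two sparsity styles can be sketched in a few lines of NumPy (an illustration under simplified assumptions, not either vendor's code): unstructured pruning may zero any weight and can reach arbitrary sparsity levels such as 70%, while the GPU-style 2:4 structured pattern must keep exactly 2 of every 4 consecutive weights, capping sparsity at 50%.

```python
import numpy as np

def unstructured_prune(w, sparsity):
    """Zero the smallest-magnitude weights anywhere in the tensor (any ratio)."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def structured_2_4_prune(w):
    """GPU-style 2:4 sparsity: in every group of 4 weights, keep only the 2 largest."""
    flat = w.reshape(-1, 4)
    # indices of the two smallest-magnitude entries in each group of 4
    idx = np.argsort(np.abs(flat), axis=1)[:, :2]
    out = flat.copy()
    np.put_along_axis(out, idx, 0.0, axis=1)
    return out.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))

w70 = unstructured_prune(w, 0.70)   # ~70% of weights zeroed, anywhere
w24 = structured_2_4_prune(w)       # exactly 50% zeroed, 2 per group of 4

print(f"unstructured sparsity: {np.mean(w70 == 0):.2f}")
print(f"2:4 structured sparsity: {np.mean(w24 == 0):.2f}")
```

Because the unstructured pattern places zeros wherever the smallest-magnitude weights happen to be, it prunes less important weights than a fixed per-group quota can, which is part of why accuracy can be recovered at higher sparsity levels.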

“Together with Cerebras and their purpose-built AI hardware, we created sparse, foundational models that deliver lightning-fast inference through our sparsity-aware software platform,” said Mark Kurtz, CTO of Neural Magic. “This paradigm shift provides enterprises and researchers alike with much more efficient, cost-effective, and accessible deployment of LLMs across a wide range of industries and real-world applications.”

To facilitate the adoption and further development of sparse LLMs, Cerebras and Neural Magic have released the models, recipes, implementations, and documentation of this sparsity breakthrough. For more information, please visit https://neuralmagic.com/blog/unlocking-affordable-and-sustainable-ai-through-sparse-foundational-llms/.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world’s largest and fastest AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to build the largest AI supercomputers in the world, and they make placing models on those supercomputers dead simple by avoiding the complexity of distributed computing. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit https://www.cerebras.net.

About Neural Magic

Neural Magic accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As a software-delivered solution, Neural Magic optimizes open-source models, such as large language models, to run efficiently on commodity hardware. Organizations can spend less to bring AI initiatives to production without sacrificing the performance and accuracy of their models. Founded by an MIT professor and an AI research scientist who were challenged by the constraints of existing hardware, Neural Magic enables a future where developers and IT can tap into the power of state-of-the-art, open-source AI with none of the friction.

Source: Business Wire
