
Cerebras and Neural Magic Unlock the Power of Sparse LLMs for Faster, More Power Efficient, Lower Cost AI Model Training and Deployment


Business Wire

70% Sparse Models Are 3x Faster with No Loss of Accuracy

SUNNYVALE, Calif. & CAMBRIDGE, Mass.: Cerebras Systems, the pioneer in accelerating generative AI, and Neural Magic, a leader in high-performance enterprise inference servers, today announced the groundbreaking results of their collaboration for sparse training and deployment of large language models (LLMs). Achieving an unprecedented 70% parameter reduction with full accuracy recovery, training on Cerebras CS-3 systems and deploying on Neural Magic inference server solutions enables significantly faster, more efficient, and lower cost LLMs, making them accessible to a broader range of organizations and industries.

“For the first time ever, we achieved up to 70% sparsity for a foundational model, such as Llama, with full accuracy recovery for challenging downstream tasks,” said Sean Lie, CTO and co-founder of Cerebras. “This breakthrough enables scalable training and accelerated inference – our CS-3 system provides near theoretical acceleration for training sparse LLMs, and Neural Magic’s inference server, DeepSparse, delivers up to 8.6x faster inference than dense, baseline models.”

With native hardware support for unstructured sparsity, the Cerebras CS-3 system accelerates training of models at 70% sparsity and higher – far beyond what GPUs such as the H100 and B100 can exploit, because GPU sparsity support is limited and rigid, handling only 50% sparsity at a fixed ratio. With the CS-3 system, purpose-built for sparse models and offering the industry's highest memory bandwidth, AI practitioners can apply novel techniques from Neural Magic, such as sparse pretraining and sparse fine-tuning on their own datasets, to create highly sparse LLMs without sacrificing accuracy. The result is faster, smaller models that retain the full accuracy of their slower, dense counterparts.
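To make the idea of unstructured sparsity concrete, here is a minimal sketch of one-shot magnitude pruning at 70% sparsity using numpy. This is an illustration of the general technique only – it is not Cerebras's or Neural Magic's actual training recipe, which recovers accuracy through sparse pretraining and fine-tuning rather than simple one-shot pruning; the function name and shapes are hypothetical.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity`
    fraction of the weights are zero (unstructured sparsity:
    any entry may be pruned, no fixed pattern is imposed)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Prune a toy weight matrix to ~70% sparsity.
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512))
w_sparse = magnitude_prune(w, 0.70)
achieved = 1.0 - np.count_nonzero(w_sparse) / w_sparse.size
print(f"achieved sparsity: {achieved:.2%}")
```

Unlike the fixed-ratio structured sparsity supported on GPUs, nothing here constrains which entries of each small block may be zeroed – which is why exploiting such patterns efficiently requires hardware, like the CS-3, or software, like DeepSparse, designed for arbitrary sparsity patterns.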

“Together with Cerebras and their purpose-built AI hardware, we created sparse, foundational models that deliver lightning-fast inference through our sparsity-aware software platform,” said Mark Kurtz, CTO of Neural Magic. “This paradigm shift provides enterprises and researchers alike with much more efficient, cost-effective, and accessible deployment of LLMs across a wide range of industries and real-world applications.”

To facilitate the adoption and further development of sparse LLMs, Cerebras and Neural Magic have released the models, recipes, implementations, and documentation of this sparsity breakthrough. For more information, please visit https://neuralmagic.com/blog/unlocking-affordable-and-sustainable-ai-through-sparse-foundational-llms/.

About Cerebras Systems

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types. We have come together to accelerate generative AI by building from the ground up a new class of AI supercomputer. Our flagship product, the CS-3 system, is powered by the world's largest and fastest AI processor, our Wafer-Scale Engine-3. CS-3s are quickly and easily clustered together to form the largest AI supercomputers in the world, and they make placing models on those supercomputers simple by avoiding the complexity of distributed computing. Leading corporations, research institutions, and governments use Cerebras solutions to develop pathbreaking proprietary models and to train open-source models with millions of downloads. Cerebras solutions are available through the Cerebras Cloud and on premises. For further information, visit https://www.cerebras.net.

About Neural Magic

Neural Magic accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As a software-delivered solution, Neural Magic optimizes open-source models, such as large language models, to run efficiently on commodity hardware. Organizations can spend less to bring AI initiatives to production without sacrificing model performance or accuracy. Founded by an MIT professor and an AI research scientist who were challenged by the constraints of existing hardware, Neural Magic enables a future where developers and IT can tap into the power of state-of-the-art, open-source AI with none of the friction.

