
Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium


World’s Leading AI Inference Selected by Innovation Zone Attendees at TSMC’s North America Technology Symposium

SUNNYVALE, Calif.: Cerebras Systems, makers of the fastest AI infrastructure, today announced that Cerebras AI Inference has been named Demo of the Year at the 2025 TSMC North America Technology Symposium. Voted on by attendees from TSMC’s customers and partners, the award recognizes the most compelling and impactful innovation demonstrated in the Innovation Zone at TSMC’s annual Technology Symposium.

“Wafer-scale computing was considered impossible for fifty years, and together with TSMC we proved it could be done,” said Dhiraj Mallick, COO, Cerebras Systems. “Since that initial milestone, we’ve built an entire technology platform to run today’s most important AI workloads more than 20x faster than GPUs, transforming a semiconductor breakthrough into a product breakthrough used around the world.”

“At TSMC, we support all our customers, from pioneering startups to established industry leaders, with industry-leading semiconductor manufacturing technologies and capacities, helping turn their transformative ideas into reality,” said Lucas Tsai, Vice President of Business Management, TSMC North America. “We are glad to work with industry innovators like Cerebras to enable their semiconductor success and drive advancements in AI.”

In 2019, Cerebras introduced the industry’s first functional wafer-scale processor, a single-die chip 50 times larger than conventional processors, breaking a half-century of semiconductor assumptions through its partnership with TSMC. The Cerebras CS-3 extends this lineage and continues a scaling law unique to Cerebras.

A Showcase of Innovation and Partnership

Cerebras demonstrated CS-3 inference in the Innovation Zone at the TSMC North America Technology Symposium, a curated exhibition area highlighting breakthrough technologies from TSMC’s emerging customers. Cerebras AI Inference received the highest number of attendee votes at the North America event, reflecting both the technical achievement and the excitement the demonstration generated.

Cerebras AI Inference Leading the Industry

Cerebras AI Inference is now used across the world’s most demanding environments. It is available through AWS, IBM, Hugging Face, and other cloud platforms. It supports cutting-edge national scientific research at U.S. Department of Energy laboratories and the Department of Defense, and global enterprises across healthcare, biotech, finance, and design have adopted Cerebras to accelerate their most complex AI workloads with real-time performance that GPUs cannot deliver.

Cerebras is also the fastest platform for AI coding, one of the fastest-growing and most strategic AI verticals, generating code more than 20 times faster than competing solutions.

Cerebras has been a pioneer in supporting open-source models from OpenAI, Meta, G42, and others, consistently achieving the fastest inference speeds as verified by the independent benchmarking firm Artificial Analysis.

Cerebras now serves trillions of tokens per month across the Cerebras Cloud, on-premises deployments, and leading partner platforms.

For more information on Cerebras AI Inference, please visit www.cerebras.ai.

About Cerebras Systems

Cerebras Systems builds the fastest AI infrastructure in the world. We are a team of pioneering computer architects, computer scientists, AI researchers, and engineers of all types. We have come together to make AI blisteringly fast through innovation and invention, because we believe that when AI is fast it will change the world. Our flagship technology, the Wafer Scale Engine 3 (WSE-3), is the world’s largest and fastest AI processor. At 56 times the size of the largest GPU, the WSE uses a fraction of the power per unit of compute while delivering inference and training more than 20 times faster than the competition. Leading corporations, research institutes, and governments on four continents choose Cerebras to run their AI workloads. Cerebras solutions are available on premises and in the cloud. For further information, visit cerebras.ai or follow us on LinkedIn, X, and Threads.

Source: Business Wire
