
Shanghai Stonehill Technology Unveils the First Non-Attention-Based Large Model in China: Faster, Stronger, More Economical


Business Wire

SHANGHAI: On January 24th, at the "New Architecture of Large Language Model" press conference, Rock AI (a subsidiary of Shanghai Stonehill Technology Co., Ltd.) officially unveiled the Yan Model, the first domestic general-purpose large language model built without an attention mechanism. It is also one of the few large models in the industry that does not rely on the Transformer architecture. Compared with Transformer models of equivalent parameter count, the Yan Model offers 7 times the training efficiency, 5 times the inference throughput, and 3 times the memory capacity. It also supports lossless operation on CPUs, exhibits reduced hallucination, and fully supports private deployment.

At the meeting, Liu Fanping, CEO of Rock AI, said: "We hope the Yan architecture can serve as infrastructure for the artificial intelligence field and help establish a developer ecosystem in the AI domain. Ultimately, we aim to enable anyone to use general-purpose large models on any device, providing more economical, convenient, and secure AI services and promoting the construction of an inclusive artificial intelligence future."

The Transformer, the foundational architecture behind large models such as ChatGPT, has achieved significant success, but it still has notable shortcomings: high computational power consumption, extensive memory usage, high cost, and difficulty processing long sequences. To address these issues, the Yan Model replaces the Transformer with Rock AI's newly developed generative "Yan architecture". This architecture enables lossless inference over arbitrarily long sequences on consumer-grade CPUs, achieves performance comparable to a model with hundreds of billions of parameters using only tens of billions, and meets enterprises' practical need for low-cost, easily deployed large models.

At the press conference, the research team presented extensive empirical comparisons between the Yan Model and a Transformer model of the same parameter scale. The experimental data showed that, under identical resource conditions, the Yan architecture achieved 7 times the training efficiency and 5 times the inference throughput of the Transformer architecture, and improved memory capacity 3-fold. On the long-sequence challenge faced by Transformers, the Yan Model also performed well, being theoretically capable of inference over sequences of unlimited length.

Additionally, the research team has developed an associative feature function and a memory operator that, combined with linear computation methods, reduce the complexity of the model's internal structure. The new architecture also attempts to open the previously "uninterpretable black box" of natural language processing, aiding the adoption of large models in high-stakes areas such as healthcare, finance, and law. At the same time, the Yan Model's ability to run on mainstream consumer-grade CPUs without compression or pruning significantly broadens the possibilities for deploying large models across industries.
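Rock AI has not published the Yan architecture's internals, so the following is only a generic illustration of why replacing quadratic self-attention with a fixed-size linear memory operator changes the cost profile. The function names, the exponential-moving-average update, and the decay parameter are assumptions for the sketch, not the actual Yan design: attention must compare every token with every other token (cost growing quadratically with sequence length), while a linear recurrence carries a constant-size memory state forward one token at a time, which is what makes arbitrarily long sequences tractable on a CPU.

```python
import numpy as np

def attention_mix(x):
    # Standard self-attention over n tokens: the n-by-n score matrix
    # makes time and memory grow as O(n^2) in sequence length.
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def recurrent_memory_mix(x, decay=0.9):
    # A linear "memory operator" sketch: one fixed-size state vector is
    # updated per token, so time is O(n) and state is O(1) regardless of
    # sequence length -- no cache that grows with context.
    memory = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t, token in enumerate(x):
        memory = decay * memory + (1 - decay) * token
        out[t] = memory
    return out

x = np.random.default_rng(0).standard_normal((6, 4))
print(attention_mix(x).shape)         # (6, 4)
print(recurrent_memory_mix(x).shape)  # (6, 4)
```

Both functions map a sequence of token vectors to an equally shaped output, but only the recurrent version can be streamed token by token with constant memory, which is the property the article attributes to the Yan architecture's long-sequence inference.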

Liu Fanping stated, "In the next phase, Rock AI aims to create a full-modality real-time human-computer interaction system, achieve end-side training, and integrate training and inference. We plan to fully connect perception, cognition, decision-making, and action to construct an intelligent loop for general artificial intelligence. This will provide more options for the foundational platform of large models in research areas such as general-purpose robots and embodied intelligence."

Source: Business Wire
