InnovationOpenLab

Shanghai Stonehill Technology Unveils the First Non-Attention-Based Large Model in China: Faster, Stronger, More Economical


Business Wire

SHANGHAI: On January 24th, at the "New Architecture of Large Language Model" launch event, Rock AI (a subsidiary of Shanghai Stonehill Technology Co., Ltd.) officially unveiled the Yan Model, the first domestic general-purpose large language model built without an Attention mechanism. It is also one of the few large models in the industry that does not rely on the Transformer architecture. Compared with Transformer models of equivalent parameter count, the Yan Model offers 7 times the training efficiency, 5 times the inference throughput, and 3 times the memory capacity. It also runs losslessly on CPUs, produces fewer hallucinations in its output, and fully supports private deployment.

At the meeting, Liu Fanping, CEO of Rock AI, said in his speech: "We hope that the Yan architecture can serve as infrastructure for the artificial intelligence field and help establish a developer ecosystem in the AI domain. Ultimately, we aim to enable anyone to use general-purpose large models on any device, providing more economical, convenient, and secure AI services, and to promote the construction of an inclusive artificial intelligence future."

The Transformer, the foundational architecture behind large models such as ChatGPT, has achieved significant success, but it still has notable shortcomings, including high computational power consumption, extensive memory usage, high costs, and difficulty processing long sequences. To address these issues, the Yan Model replaces the Transformer with Rock AI's newly developed generative "Yan architecture". This architecture enables lossless inference over arbitrarily long sequences on consumer-grade CPUs, achieves performance comparable to a model with hundreds of billions of parameters while using only tens of billions, and meets enterprises' practical need for low-cost, easily deployed large models.
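The cost gap described above can be illustrated with a back-of-the-envelope sketch. Rock AI has not published the Yan architecture's internals, so this is not its code; it only shows why pairwise self-attention, whose cost grows quadratically with sequence length, struggles on long inputs while a fixed-state recurrent update grows only linearly:

```python
# Generic illustration (not Rock AI's implementation): asymptotic cost of
# pairwise self-attention versus a fixed-size recurrent state update.

def attention_ops(seq_len: int) -> int:
    """Self-attention scores every token against every other token: O(n^2)."""
    return seq_len * seq_len

def recurrent_ops(seq_len: int) -> int:
    """A fixed-size state updated once per token: O(n)."""
    return seq_len

for n in (1_000, 10_000, 100_000):
    # The ratio shows how quickly long sequences overwhelm attention-based models.
    ratio = attention_ops(n) // recurrent_ops(n)
    print(f"n={n}: attention is {ratio}x the cost of a recurrent update")
```

At 100,000 tokens the quadratic term is already 100,000 times the linear one, which is the practical reason architectures with constant per-token state are attractive for long-context, CPU-bound inference.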

At the press conference, the research team presented extensive empirical comparisons between the Yan Model and a Transformer model of the same parameter scale. The experimental data showed that, under the same resource conditions, the Yan architecture achieved 7 times the training efficiency and 5 times the inference throughput of the Transformer architecture, along with 3 times the memory capacity. The Yan Model also performed well on the long-sequence challenge faced by the Transformer, being theoretically capable of inference over sequences of unlimited length.

Additionally, the research team has pioneered an associative feature function and a memory operator which, combined with linear computation methods, reduce the complexity of the model's internal structure. The newly architected Yan Model will attempt to open up the previously "uninterpretable black box" of natural language processing, aiding the adoption of large models in high-risk areas such as healthcare, finance, and law. At the same time, the Yan Model's hardware advantage of running on mainstream consumer-grade CPUs without compression or pruning significantly broadens the possibilities for deploying large models across industries.
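Since Rock AI has not published the Yan architecture's details, the following is only a hedged sketch of the general class of techniques the paragraph alludes to: a linear-complexity token mixer (in the spirit of linear attention and recurrent state models) that replaces pairwise attention with a fixed-size associative memory. All names here (`linear_mixer`, `decay`) are illustrative assumptions, not Yan Model APIs:

```python
# Hedged sketch, NOT the Yan Model: a linear-complexity token mixer that
# maintains a fixed-size associative memory instead of attending over
# every pair of tokens, giving O(n) time and O(1) state per sequence.

from typing import List

def linear_mixer(keys: List[List[float]], values: List[List[float]],
                 queries: List[List[float]], decay: float = 0.9) -> List[List[float]]:
    """Process a sequence token by token with a fixed-size state matrix.

    The state accumulates the outer product of each key and value; each
    output reads the state through the query. Memory use does not grow
    with sequence length, so arbitrarily long inputs can be streamed.
    """
    d_k, d_v = len(keys[0]), len(values[0])
    state = [[0.0] * d_v for _ in range(d_k)]  # fixed-size associative memory
    outputs = []
    for k, v, q in zip(keys, values, queries):
        # Write: decay old memory, then store the new key/value association.
        for i in range(d_k):
            for j in range(d_v):
                state[i][j] = decay * state[i][j] + k[i] * v[j]
        # Read: project the query against the accumulated state.
        outputs.append([sum(q[i] * state[i][j] for i in range(d_k))
                        for j in range(d_v)])
    return outputs
```

Per-token cost here is O(d_k × d_v), independent of sequence length, which is the property that makes streaming inference over very long inputs feasible on modest hardware such as consumer-grade CPUs.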

Liu Fanping stated, "In the next phase, Rock AI aims to create a full-modality real-time human-computer interaction system, achieve end-side training, and integrate training and inference. We plan to fully connect perception, cognition, decision-making, and action to construct an intelligent loop for general artificial intelligence. This will provide more options for the foundational platform of large models in research areas such as general-purpose robots and embodied intelligence."

Source: Business Wire
