
FORT LAUDERDALE, Fla.: rolv, LLC today announced breakthrough benchmarks for rolvsparse©, a patent‑pending software compute primitive delivering 20–177× AI inference speedups and up to 99.5% energy reduction on unmodified models, compared to vendor‑optimized dense and sparse libraries. All energy numbers use real hardware power readings (50 ms polling). No hardware changes, retraining, or precision loss. Most results are independently validated by the University of Miami Frost Institute (some benchmarks are very recent), with bit‑identical SHA‑256 hashes across all platforms.
rolvsparse© achieves 20–177× speedups and 98–99.5% energy savings on real Hugging Face production models. Even fully dense workloads - 0% sparsity - reach 63× acceleration, beating vendor-optimized GPU libraries on CPU alone. Five patents are pending.
“The core idea for rolvsparse came to me on a bike ride in May 2025. Everything since - patents, prototypes, self‑taught stacks, benchmarks, and university validation - has been relentless execution. Why build more data centers when your existing ones can achieve 83× faster performance and 99% greener operations?” - Rolv E. Heggenhougen, Founder & CEO, rolv.ai
Benchmark Highlights (Real Hardware Power Readings): Speedup and Energy Savings
- Frontier‑Scale LLMs & MoEs (GPU)
- Qwen Family (GPU + TPU)
- Specialized & Classical Models
- CPU Benchmarks (Real FFNs)
All outputs are SHA‑256 verified for deterministic correctness.
Note: In most benchmarks, CPU with rolvsparse© outperforms GPU without it.
How rolvsparse© Works
Vendor libraries waste cycles on zeros - the “zero‑FLOP bottleneck.” rolvsparse© restructures arithmetic at the primitive level, skipping meaningless operations while guaranteeing exact outputs (e.g., deterministic hash 8dbe5f139f…dd8dd). It deploys as a drop‑in software layer across all hardware. Users run the open verifier at rolv.ai to generate baselines; rolv returns personalized comparison reports with real power readings.
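rolv has not published its primitive, so the following is only a minimal Python sketch of the generic zero-skipping idea the paragraph describes, paired with the SHA‑256 output hashing the release mentions. All function names and the toy matrix are illustrative assumptions, not rolv's API:

```python
import hashlib
import struct

def dense_matvec(w, x):
    # Baseline: multiplies every entry, including zeros ("wasted FLOPs").
    return [sum(w[r][c] * x[c] for c in range(len(x))) for r in range(len(w))]

def sparse_matvec(w, x):
    # Zero-skipping: index the nonzeros once, then accumulate only those.
    nz = [(r, c, v) for r, row in enumerate(w) for c, v in enumerate(row) if v != 0.0]
    out = [0.0] * len(w)
    for r, c, v in nz:
        out[r] += v * x[c]
    return out

# Toy weight matrix, mostly zeros (illustrative, not a real model layer).
w = [[0.0, 2.0, 0.0],
     [1.5, 0.0, 0.0],
     [0.0, 0.0, -3.0]]
x = [1.0, 2.0, 3.0]

dense = dense_matvec(w, x)
sparse = sparse_matvec(w, x)
assert dense == sparse  # identical outputs; only the zero work was skipped

# Cross-platform verification can hash the output bytes, as the release describes.
digest = hashlib.sha256(struct.pack(f"{len(sparse)}d", *sparse)).hexdigest()
```

In a real kernel the nonzero index would be built once per weight matrix and reused across inference calls, so the per-call cost scales with the number of nonzeros rather than the full matrix size.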
Market Impact & Applications
AI data centers may reach 9% of U.S. electricity by 2030; hyperscalers have committed $700B+ in AI capex. rolvsparse© reduces energy use by 98–99.5%, boosts throughput on existing infrastructure, and enables edge viability (mobile processors: 70× sparse). Applications include LLMs/MoEs, agents, mobile inference, engineering simulation, RecSys/finance, sustainability, and sovereign AI.
Recent benchmarks on Substack (incl. JSON, FLOPs, tokens): rolv.substack.com
Source: Business Wire