New serverless Inference-as-a-Service offering available from Vultr across six continents and 32 locations worldwide
WEST PALM BEACH, Fla.: Vultr, the world’s largest privately-held cloud computing platform, today announced the launch of Vultr Cloud Inference. This new serverless platform revolutionizes AI scalability and reach by offering global AI model deployment and AI inference capabilities. Leveraging Vultr’s global infrastructure spanning six continents and 32 locations, Vultr Cloud Inference provides customers with seamless scalability, reduced latency, and enhanced cost efficiency for their AI deployments.
Today's rapidly evolving digital landscape has challenged businesses across sectors to deploy and manage AI models efficiently and effectively. This has created a growing need for inference-optimized cloud infrastructure platforms with both global reach and scalability to ensure consistently high performance. Priorities are shifting as organizations move their models into production and focus increasingly on inference spending. But bigger models bring greater complexity: developers must optimize AI models for different regions, manage distributed server infrastructure, and ensure high availability and low latency.
With that in mind, Vultr created Cloud Inference. Vultr Cloud Inference will accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making while delivering a compelling user experience across diverse regions. Users can simply bring their own model, trained on any platform, cloud, or on-premises, and it can be seamlessly integrated and deployed on Vultr’s global NVIDIA GPU-powered infrastructure. With dedicated compute clusters available on six continents, Vultr Cloud Inference ensures that businesses can comply with local data sovereignty, data residency, and privacy regulations by deploying their AI applications in regions that align with legal requirements and business objectives.
“Training provides the foundation for AI to be effective, but it's inference that converts AI’s potential into impact. As an increasing number of AI models move from training into production, the volume of inference workloads is exploding, but the majority of AI infrastructure is not optimized to meet the world’s inference needs,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant. “The launch of Vultr Cloud Inference enables AI innovations to have maximum impact by simplifying AI deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach.”
With the capability to self-optimize and auto-scale globally in real time, Vultr Cloud Inference ensures AI applications provide consistent, cost-effective, low-latency experiences to users worldwide. Moreover, its serverless architecture eliminates the complexities of managing and scaling infrastructure.
“Demand is rapidly increasing for cutting-edge AI technologies that can power AI workloads worldwide,” said Matt McGrigg, director of global business development, cloud partners at NVIDIA. “The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally.”
As AI continues to push the limits of what's possible and change the way organizations think about cloud and edge computing, the scale of infrastructure needed to train large AI models and support globally distributed inference has never been greater. Following the recent launch of Vultr CDN to scale media and content delivery worldwide, Vultr Cloud Inference will provide the technological foundation to enable innovation, increase cost efficiency, and expand global reach for organizations across industries, making the power of AI accessible to all.
Vultr Cloud Inference is now available for early access via registration. Learn more about Vultr Cloud Inference at NVIDIA GTC, and contact sales to get started.
About Constant and Vultr
Constant, the creator and parent company of Vultr, is on a mission to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers worldwide. Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Founded by David Aninowsky and completely bootstrapped, Vultr has become the world’s largest privately-held cloud computing company without ever raising equity financing. Learn more at: www.vultr.com.
Source: Business Wire