No-Headquarters/BOZEMAN, Mont.: #SnowflakeBUILD--Snowflake (NYSE: SNOW), the AI Data Cloud company, today announced at its annual developer conference, BUILD 2024, new advancements that accelerate the path for organizations to deliver easy, efficient, and trusted AI into production with their enterprise data. With Snowflake’s latest innovations, developers can effortlessly build conversational apps for structured and unstructured data with high accuracy, efficiently run batch large language model (LLM) inference for natural language processing (NLP) pipelines, and train custom models with GPU-powered containers — all with built-in governance, access controls, observability, and safety guardrails to help ensure AI security and trust remain at the forefront.
“For enterprises, AI hallucinations are simply unacceptable. Today’s organizations require accurate, trustworthy AI in order to drive effective decision-making, and this starts with access to high-quality data from diverse sources to power AI models,” said Baris Gultekin, Head of AI, Snowflake. “The latest innovations to Snowflake Cortex AI and Snowflake ML enable data teams and developers to accelerate the path to delivering trusted AI with their enterprise data, so they can build chatbots faster, improve the cost and performance of their AI initiatives, and accelerate ML development.”
Snowflake Enables Enterprises to Build High-Quality Conversational Apps, Faster
Thousands of global enterprises leverage Cortex AI to seamlessly scale and productionize their AI-powered apps, with adoption more than doubling¹ in the past six months. Snowflake’s latest innovations make it easier for enterprises to build reliable AI apps with more diverse data sources, simplified orchestration, and built-in evaluation and monitoring — all from within Snowflake Cortex AI, Snowflake’s fully managed AI service that provides a suite of generative AI features. These advancements span end-to-end conversational app development, from connecting diverse data sources through orchestration, evaluation, and monitoring.
Snowflake Empowers Users to Run Cost-Effective Batch LLM Inference for Natural Language Processing
Batch inference allows businesses to process massive datasets with LLMs simultaneously, as opposed to the individual, one-by-one approach used for most conversational apps. In turn, NLP pipelines for batch data offer a structured approach to processing and analyzing various forms of natural language data, including text, speech, and more. To help enterprises with both, Snowflake is unveiling more customization options for large batch text processing, so data teams can build NLP pipelines with high processing speeds at scale, while optimizing for both cost and performance.
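As an illustration of the batch pattern described above, such a pipeline can be expressed as a single set-based query rather than per-row calls. The sketch below assumes a configured Snowpark Python session; the SUPPORT_TICKETS table, its columns, and the chosen model are hypothetical, while SNOWFLAKE.CORTEX.COMPLETE is an existing Cortex AI SQL function.

```python
from snowflake.snowpark import Session

# Connection details are illustrative placeholders.
session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<db>", "schema": "<schema>",
}).create()

# One set-based statement runs the model over every row as a batch job,
# instead of issuing one request per ticket.
session.sql("""
    CREATE OR REPLACE TABLE ticket_sentiment AS
    SELECT
        ticket_id,
        SNOWFLAKE.CORTEX.COMPLETE(
            'llama3.1-8b',
            'Classify the sentiment of this support ticket as positive, '
            || 'negative, or neutral. Reply with one word: ' || body
        ) AS sentiment
    FROM support_tickets
""").collect()
```

Because the model call runs inside the query engine, the warehouse parallelizes the work across rows, which is where the cost and throughput advantages over one-by-one conversational calls come from.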
Snowflake is adding a broader selection of pre-trained LLMs, embedding model sizes, context window lengths, and supported languages to Cortex AI, giving organizations more choice and flexibility when selecting an LLM while maximizing performance and reducing cost. This includes Voyage’s multilingual embedding model, Meta’s Llama 3.1 and 3.2 models (including multimodal variants), and long-context Jamba models, all for serverless inference. To help organizations choose the best LLM for their specific use case, Snowflake is introducing Cortex Playground (now in public preview), an integrated chat interface designed to generate and compare responses from different LLMs so users can easily find the best model for their needs.
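Outside the Playground UI, the same model-by-model comparison can be scripted. A minimal sketch, assuming the Complete helper from the snowflake-ml-python package and an active session; the prompt and model choices are illustrative, and exact helper signatures should be checked against current documentation.

```python
from snowflake.cortex import Complete
from snowflake.snowpark.context import get_active_session

session = get_active_session()  # e.g., inside a Snowflake notebook or worksheet

prompt = "Summarize the key risks in this contract clause: <clause text>"

# Run the same prompt against two candidate models and eyeball the outputs.
for model in ("mistral-large2", "llama3.1-70b"):
    print(f"--- {model} ---")
    print(Complete(model, prompt, session=session))
```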
When using an LLM for tasks at scale, consistent outputs become even more crucial for interpreting results. To that end, Snowflake is unveiling Cortex Serverless Fine-Tuning (generally available soon), allowing developers to customize models with proprietary data so they generate more accurate outputs. For enterprises that need to process large inference jobs with guaranteed throughput, the new Provisioned Throughput (public preview soon) reserves dedicated inference capacity for those workloads.
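A minimal sketch of launching such a fine-tuning job with the Cortex FINETUNE SQL function; the tuned-model name, base model, and training tables (which supply prompt/completion pairs) are hypothetical, and the exact argument list should be verified against current documentation.

```python
from snowflake.snowpark.context import get_active_session

session = get_active_session()  # assumes an active Snowpark session

# Start a serverless fine-tuning job; the function returns a job id
# that can be polled later, e.g. with FINETUNE('DESCRIBE', <job_id>).
row = session.sql("""
    SELECT SNOWFLAKE.CORTEX.FINETUNE(
        'CREATE',
        'support_tuned_llama',                        -- name for the tuned model
        'llama3.1-8b',                                -- base model to customize
        'SELECT prompt, completion FROM train_pairs',
        'SELECT prompt, completion FROM val_pairs'    -- optional validation set
    ) AS job_id
""").collect()[0]
print(row["JOB_ID"])
```

Once the job completes, the tuned model name can be passed to COMPLETE in place of the base model.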
Customers Can Now Expedite Reliable ML with GPU-Powered Notebooks and Enhanced Monitoring
Having easy access to scalable and accelerated compute significantly impacts how quickly teams can iterate and deploy models, especially when working with large datasets or using advanced deep learning frameworks. To support these compute-intensive workflows and speed up model development, Snowflake ML now supports Container Runtime (now in public preview on AWS and public preview soon on Microsoft Azure), enabling users to efficiently execute distributed ML training jobs on GPUs. Container Runtime is a fully managed container environment accessible through Snowflake Notebooks (now generally available) and preconfigured with access to distributed processing on both CPUs and GPUs. ML development teams can now build powerful ML models at scale, using any Python framework or language model of their choice, on top of their Snowflake data.
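As a sketch of what this looks like in practice: inside a Snowflake Notebook on Container Runtime, data is pulled with Snowpark and trained with any open-source framework. The FEATURES table, column names, and hyperparameters below are hypothetical, and the device="cuda" setting assumes the notebook is attached to a GPU-backed compute pool.

```python
import xgboost as xgb
from snowflake.snowpark.context import get_active_session

session = get_active_session()  # provided inside Snowflake Notebooks
df = session.table("FEATURES").to_pandas()  # hypothetical feature table

X, y = df.drop(columns=["LABEL"]), df["LABEL"]

# Any open-source library works here; XGBoost with CUDA is one example
# of pushing training onto the runtime's GPU.
model = xgb.XGBClassifier(
    tree_method="hist",
    device="cuda",       # train on the notebook's GPU, if one is attached
    n_estimators=500,
)
model.fit(X, y)
```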
“As an organization connecting over 700,000 healthcare professionals to hospitals across the US, we rely on machine learning to accelerate our ability to place medical providers into temporary and permanent jobs. Using GPUs from Snowflake Notebooks on Container Runtime turned out to be the most cost-effective solution for our machine learning needs, enabling us to drive faster staffing results with higher success rates,” said Andrew Christensen, Data Scientist, CHG Healthcare. “We appreciate the ability to take advantage of Snowflake's parallel processing with any open source library in Snowflake ML, offering flexibility and improved efficiency for our workflows.”
Organizations also often require GPU compute for inference. As a result, Snowflake is providing customers with the new Model Serving in Containers (now in public preview on AWS). This enables teams to deploy both internally and externally trained models, including open source LLMs and embedding models, from the Model Registry into Snowpark Container Services (now generally available on AWS and Microsoft Azure) for production using distributed CPUs or GPUs — without complex resource optimizations.
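A minimal sketch of that registry-to-service path, assuming a trained model object (such as the XGBoost classifier from the earlier sketch) and the Registry API from snowflake-ml-python; the model, service, compute pool, and image repository names are hypothetical, and the create_service arguments should be checked against current documentation.

```python
from snowflake.ml.registry import Registry
from snowflake.snowpark.context import get_active_session

session = get_active_session()
reg = Registry(session=session)

# Log a trained model version, then deploy it as a container service.
mv = reg.log_model(model, model_name="churn_model", version_name="v1")
mv.create_service(
    service_name="churn_service",
    service_compute_pool="my_compute_pool",   # hypothetical compute pool
    image_repo="my_image_repo",               # hypothetical image repository
    ingress_enabled=True,                     # expose an inference endpoint
)
```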
In addition, users can quickly detect model degradation in production with the new Observability for ML Models (now in public preview), which integrates technology from TruEra to monitor performance and other metrics for any model running inference in Snowflake. Snowflake’s new Model Explainability (now in public preview) further allows users to easily compute Shapley values — a widely used approach that explains how each feature impacts a model’s overall output — for models logged in the Model Registry. Users can now understand how a model arrives at its conclusions and detect model weaknesses by spotting unintuitive behavior in production.
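As a generic illustration of the Shapley-value idea (using the open-source shap package rather than Snowflake's built-in API), the sketch below assumes the binary classifier and feature frame from the training sketch above: each value quantifies how much one feature pushed a single prediction away from the baseline expectation.

```python
import numpy as np
import shap

# model and X are assumed from the earlier training sketch
# (a binary XGBoost classifier and its pandas feature frame).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.head(100))

# Mean absolute Shapley value per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```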
¹As of October 31, 2024.
Forward-Looking Statements
This press release contains express and implied forward-looking statements, including statements regarding (i) Snowflake’s business strategy, (ii) Snowflake’s products, services, and technology offerings, including those that are under development or not generally available, (iii) market growth, trends, and competitive considerations, and (iv) the integration, interoperability, and availability of Snowflake’s products with and on third-party platforms. These forward-looking statements are subject to a number of risks, uncertainties and assumptions, including those described under the heading “Risk Factors” and elsewhere in the Quarterly Reports on Form 10-Q and the Annual Reports on Form 10-K that Snowflake files with the Securities and Exchange Commission. In light of these risks, uncertainties, and assumptions, actual results could differ materially and adversely from those anticipated or implied in the forward-looking statements. As a result, you should not rely on any forward-looking statements as predictions of future events.
© 2024 Snowflake Inc. All rights reserved. Snowflake, the Snowflake logo, and all other Snowflake product, feature and service names mentioned herein are registered trademarks or trademarks of Snowflake Inc. in the United States and other countries. All other brand names or logos mentioned or used herein are for identification purposes only and may be the trademarks of their respective holder(s). Snowflake may not be associated with, or be sponsored or endorsed by, any such holder(s).
About Snowflake
Snowflake makes enterprise AI easy, efficient, and trusted. Thousands of companies around the globe, including hundreds of the world’s largest, use Snowflake’s AI Data Cloud to share data, build applications, and power their business with AI. The era of enterprise AI is here. Learn more at snowflake.com (NYSE: SNOW).
Source: Business Wire