
Responsible AI Institute Launches the AI Policy Template to Help Organizations Build Foundational Responsible AI Policies and Governance


Business Wire

Informed by NIST AI RMF and ISO/IEC 42001, the AI Policy Template is Available Now Inside RAI Institute's Responsible AI Hub

AUSTIN, Texas: Responsible AI Institute (RAI Institute), a prominent non-profit organization dedicated to facilitating the responsible use of AI worldwide, has launched the AI Policy Template to help businesses develop their own enterprise-wide responsible AI policies. This initial release was developed with RAI Institute's deep expertise and understanding of emerging business and assurance environments for AI use cases and is informed by evolving global and local standards, including the NIST AI Risk Management Framework (RMF) and ISO/IEC 42001.

The AI Policy Template is available inside RAI Institute’s recently launched Responsible AI Hub, a comprehensive portal for individual and corporate members. The RAI Hub gives members access to cutting-edge assessments to benchmark their responsible AI maturity, in-depth guidebooks to help them navigate the evolving AI governance landscape, as well as curated educational resources to keep members apprised of the latest AI regulations and policies.

Organizations Need a Starting Point for Developing Responsible AI Guidelines

AI technology adoption is already widespread: 97% of organizations are actively engaging with AI, and 74% are already incorporating generative AI (GenAI) technologies in production. However, responsible AI frameworks consistently lag behind the pace of innovation. Amid the surge of complex AI solutions, businesses struggle to execute a critical step in their AI journeys: developing and establishing AI guardrails and ethical policies within their organizations. Notably, 74% of organizations admit they still lack a comprehensive, organization-wide approach to responsible AI, and only 44% of companies using AI are developing ethical AI policies.

To address this, RAI Institute’s AI Policy Template is an industry-agnostic and detailed “plug-and-play” policy document that organizations can adapt to quickly establish foundational responsible AI policies aligned with their business needs and risks. The Template also helps enterprises calibrate organizational policies according to their objectives to procure, develop, use and/or sell AI systems.

“RAI Institute recognizes that the development of responsible AI management processes is both an ethical imperative and a strategic business priority,” said Hadassah Drukarch, director of policy and delivery at RAI Institute. “Our new AI Policy Template provides organizations with a comprehensive framework for establishing robust internal AI governance policies and practices from the ground up that align with evolving global and local AI policies and recommendations. Our team looks forward to iterating on the current version of the Template with members and plans to release an updated version in late August.”

Leaders can use the AI Policy Template to compose their organization's AI principles, objectives and overall management strategy. The Template also covers common policy and governance functions and processes, including data management, risk management and procurement, enabling enterprises to more readily integrate AI-specific guidance from the NIST AI Risk Management Framework (RMF) and ISO/IEC 42001 into their existing corporate policy structures. The policy elements in the Template align with RAI Institute's Organizational Maturity Assessment framework and cover most of RAI Institute's recommendations for building baseline responsible AI organizational maturity. According to RAI Institute and other authoritative sources, organizations that use the Template, whether directly or as inspiration, early in their RAI journey can accelerate their progress toward a fully developed and effective RAI organizational strategy.

The RAI Institute team will be accepting feedback from members on the current version of the AI Policy Template until the end of July.

Become a RAI Institute Member

RAI Institute invites new members to join in driving innovation and advancing responsible AI. Collaborating with esteemed organizations, RAI Institute develops practical approaches to mitigate AI-related risks and fosters the growth of responsible AI practices. Members gain exclusive benefits and direct access to the RAI Hub. To download the AI Policy Template and learn more about member benefits, visit this page.

About Responsible AI Institute (RAI Institute)

Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as Amazon Web Services, Boston Consulting Group, Booz Allen Hamilton, Shell, Chevron, KPMG, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.

Follow RAI Institute on Social Media

LinkedIn
X
Slack


