Connect with the presenters
Mike Cullen, CISA, CISSP, CIPP/US
Principal, Risk Advisory
Baker Tilly
Samantha Boterman, CISA, CCSFP
Senior Manager, Risk Advisory
Baker Tilly
Jeremy Huval
Chief Innovation Officer
HITRUST
Since OpenAI publicly released its ChatGPT tool in the fall of 2022, artificial intelligence (AI)—and, specifically, generative AI—has proliferated. With ever-increasing speed and complexity, new AI systems, vendors, platforms and capabilities have burst onto the scene.
Drawn by its promise to reduce inefficiencies, streamline operations and improve customer-facing products and services, organizations are employing AI in bolder and more creative ways each day.
By 2026, more than 80% of organizations will have used generative AI-enabled systems—a dramatic increase from less than 5% in 2023. (“Real Power of Generative AI: Bringing Knowledge to All.” Gartner, 17 Oct. 2023.)
AI can accelerate decision-making processes, increase employee productivity, interact with customers, collect and organize vast amounts of data, make accurate predictions/forecasts and more.
Generative AI has already come a long way. In its comparatively short existence, it can already produce text, images, videos, presentations and even music in mere seconds. With all this creative ability, the question is not SHOULD your organization use AI, but WHEN and HOW.
Through these capabilities, generative AI can help organizations achieve their top objectives: reducing costs and increasing revenue in their core business, creating new businesses and/or sources of revenue, and increasing the value of offerings by integrating AI-based features or insights. In essence, it promises more value without extraordinary effort.
And as with most technological innovations, this new creative power comes with new challenges and risks. Even if your organization is not implementing bleeding edge tools and techniques with AI, such capabilities are likely already in use by the software and service vendors that your organization relies on to accomplish its goals.
AI has brought about new risks and compliance challenges—and elevated existing ones—that are increasingly specific to AI platforms. Organizations need to be proactive so they can leverage AI in a timely, efficient and responsible manner. They should treat AI risk considerations as more than afterthoughts and should appropriately adapt practices and approaches from existing control frameworks.
At the forefront of these risk considerations are two main themes: cybersecurity and data governance. As you consider the relevant risks facing your organization, ask these questions:
At the most fundamental level, these questions will illuminate the need to update your organization’s approach to cybersecurity and data governance. But where do you start?
Your organization should manage AI risk by employing controls that are appropriate and adequate for its level of AI adoption and its risk tolerance.
To do that effectively, you should first establish an AI governance framework. An ad hoc approach may work for the first few AI use cases, but it won’t scale as your organization’s use of AI grows.
A robust approach to AI governance incorporates people, processes and technology—for each possible AI use case or scenario—in accordance with your organization’s risk tolerance. A proper approach to AI governance seeks to ensure transparency and risk management (including cybersecurity, privacy, compliance and operational risks), fairness and inclusiveness (by avoiding detrimental bias and discrimination) and accountability (especially for the safety of humans).
This approach should include a few key activities:
HITRUST has developed a way to approach AI and data governance through its existing focus on cybersecurity and regulatory compliance.
HITRUST’s AI assurance program delivers practical and scalable assurance for AI risk and security management. It helps organizations that use AI shape their AI risk management efforts, understand best practices in AI security and AI governance, evaluate their AI control environment through self-assessments and validated assessments, and achieve an AI security certification that can be shared with internal and external stakeholders. The HITRUST AI assurance program is built around five components: industry collaboration, harmonization of AI resources, shared AI responsibilities, AI security certification and AI risk management reporting.
HITRUST is actively engaging with AI industry leaders and external assessors (professional services firms like Baker Tilly) to identify new AI risks and threats, refine the safeguards needed to manage identified risks and continually improve AI assurance offerings.
HITRUST is incorporating over four dozen AI-centric authoritative sources—i.e., externally developed, information-protection-focused frameworks, standards, guidelines, regulations or laws—into one harmonized framework (the HITRUST CSF). These sources include ISO standards, NIST special publications, the Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR) and even state-level laws such as the California Consumer Privacy Act (CCPA). Incorporating all these sources into one harmonized omni-framework allows organizations to focus on implementation instead of monitoring updates and combing through the fine print themselves.
Coming in Q4 of 2024, HITRUST will leverage its proven shared responsibility model and inheritance program to:
Also coming in Q4 of 2024, HITRUST will offer an AI security certification to enable organizations to demonstrate the strength of their AI security program. The entire HITRUST assessment and certification portfolio is expanding to support the addition of AI controls and the issuance of an accompanying AI security certification to proactively address questions and concerns over AI security.
Coming in Q3 of 2024, HITRUST’s AI risk management insights report will provide key inputs to specific conversations about an organization’s AI risk management program. Adding a new “AI risk management” authoritative source to a HITRUST CSF assessment will add roughly 50 new AI risk management-focused requirements and include scorecards, assessment results and narratives about the organization’s AI risk program against both of the following:
In the ever-evolving world of AI, your organization has many elements to consider. And the considerations of tomorrow will likely differ substantially from those of today. Where should your organization start?
Baker Tilly recommends the following initial steps to establish AI governance:
As organizations integrate AI, it becomes imperative to navigate this complex landscape with a comprehensive governance program. Baker Tilly offers a full scope of AI services—from strategy and governance to design and implementation—to help your organization navigate AI complexity, embrace a proactive approach to risk management and harness the transformative power of AI.