Embracing AI in Internal Audit: An Audit, Risk and Compliance Officer's Guide

Feb 6, 2025 9:00 AM ET

Authored by Baker Tilly’s Mike Cullen, Himanshu Sharma

Audit, risk and compliance officers understand that artificial intelligence is here to stay. Regardless of your industry, location, organization size or structure, AI is going to be part of your business for the foreseeable future.

The global AI market exceeded $184 billion in 2024, according to Statista’s “AI market size worldwide from 2020–2030.” In fact, AI is developing so rapidly that if we included AI use cases in this article, those examples would be outdated by the time you read it. We’re clearly in the midst of an AI revolution.

While not all organizations or individuals will be at the forefront of AI development, every professional in the audit, risk and compliance fields must acknowledge its impacts. We must embrace the ways AI can be used to our advantage. We must acknowledge the extent to which our co-workers, vendors and competitors are using it.

Even if your organization is not currently utilizing AI – and roughly 50% of organizations have yet to go down that path (according to AccountancyAge, “Internal auditors 'flying blind' on AI risks”) – you need to be prepared to address AI from a risk and audit perspective in the near future. And if you’re a newcomer to the world of AI, know that you’re not alone: More than 60% of internal audit leaders cite a lack of AI expertise, as reported in AuditBoard’s “2025 Focus on the Future” report.

At the very least, you need to understand how AI generally works, where to find it, how to examine it, and what questions to ask along the way.

With that in mind, the first challenge is: Where do you even start?

How to view AI through an internal audit lens

When it comes to evaluating the intersection of AI and internal audit, the recommended place to start is to develop an understanding of the areas within an organization that are likely to use AI in some fashion. Generally speaking, organizations are using or experimenting with AI in the following primary areas:

  • Customer service
  • Cybersecurity
  • Fraud detection
  • Human resources
  • Marketing
  • Predictive analytics
  • Supply chain management

Obviously, there are others – and the list is ever-expanding. However, there is no AI guidebook. There is no universal app that you can download. There are no guarantees on where you can find AI within your organization or how to audit the use of AI from the lowest levels of your company to the highest ranks of leadership.

What we can say for certain is this: Organizations should start to address AI risks now. Pertaining to organizations’ current usage and future plans for AI, Baker Tilly recommends the following assessments to get started.

AI readiness assessment

  • What is it? An assessment of an organization's preparedness for implementing AI technologies. It helps identify strengths, weaknesses and opportunities, providing a clear roadmap for AI implementation and integration.
  • What does it review? The technological infrastructure, organizational culture and the skillset of the existing workforce. It also assesses how well leadership is aligned with the organization’s existing AI integration and long-term strategy.
  • What are we hearing? AI may be more prevalent than people realize. Many companies have AI embedded within certain existing systems that have been used for years, but they might not call it “AI,” or they don’t even realize that it is a form of AI.
  • How can Baker Tilly help? By conducting an AI readiness assessment, we can analyze your existing systems and processes and help the organization develop a strategy to effectively integrate and implement AI, if one is not already in place. Additionally, we help identify and recommend the necessary resources (e.g., skills/workforce training, technologies) and risk management strategies needed for successful AI implementation. This process also helps ensure that internal audits can assess AI’s role in your business.

AI governance assessment

  • What is it? An assessment of an organization's practices, people and systems for governing AI technologies. It helps identify the practices needed to support an effective, risk-managed rollout of AI across the organization, aligned with strategy and goals.
  • What does it review? The organization’s practices for embedding appropriate and right-sized risk management activities into operations. It assesses how risk framing, risk measurement, risk tolerance and risk prioritization are used with AI systems/technologies. This review is typically based on emerging frameworks, such as the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF).
  • What are we hearing? AI is already in the environment, whether deployed purposefully or introduced by users experimenting with free tools. However, governance is unclear or fragmented across the organization. Many leaders and stakeholders are unsure of how to best use AI technologies without running afoul of existing policies and standards.
  • How can Baker Tilly help? By conducting an AI governance assessment, we can identify and review the appropriateness and adequacy of existing practices, then recommend prioritized actions to mature AI governance following a reasonable roadmap. The recommendations can support risk management without stifling innovation.

Data governance assessment

  • What is it? An assessment of how well an organization structures and manages data to generate additional value. Data governance is a foundational component for effectively using AI systems/technologies, as these new tools need good, clean data to effectively support strategy.
  • What does it review? The organizational practices for areas such as strategy, policy, standards, oversight and compliance. These practices support data quality – the accuracy, consistency and completeness of data used in AI models – as well as the security and privacy measures. This assessment is typically based on leading frameworks, such as the Data Management Body of Knowledge (DMBOK).
  • What are we hearing? Organizations are realizing they can’t take full advantage of AI because their data is siloed, inconsistent and/or incomplete. Using public AI models trained on the Internet can only benefit organizations so much; the real power of these new systems/technologies is leveraging the organization’s own data. Another important consideration is how much data governance can vary by industry. For instance, a healthcare organization’s governance process will be far more stringent due to operational needs (e.g., life safety) and regulatory challenges (e.g., security and privacy regulations).
  • How can Baker Tilly help? By conducting data governance assessments, we can confirm that organizational practices support the best use of data. Additionally, we can recommend improvements to activities that support the reliable, secure and ethical use of data. Along the way, we consider what regulatory requirements are pertinent for each specific industry and, in some cases, for a specific jurisdiction.
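The data quality dimensions described above (accuracy, consistency and completeness) lend themselves to automated checks. Below is a minimal Python sketch of how such checks might look; the record structure, field names and allowed values are all hypothetical examples, not a prescribed framework:

```python
# Minimal sketch of automated data quality checks on records feeding an AI model.
# Field names ("customer_id", "email", "region") are hypothetical examples.

def completeness(records, required_fields):
    """Fraction of records with every required field present and non-empty."""
    if not records:
        return 0.0
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)

def consistency(records, field, allowed_values):
    """Fraction of records whose field value falls within an allowed set."""
    if not records:
        return 0.0
    ok = sum(1 for r in records if r.get(field) in allowed_values)
    return ok / len(records)

records = [
    {"customer_id": 1, "email": "a@example.com", "region": "NA"},
    {"customer_id": 2, "email": "",              "region": "EU"},  # missing email
    {"customer_id": 3, "email": "c@example.com", "region": "ZZ"},  # invalid region
]

print(completeness(records, ["customer_id", "email"]))        # 2 of 3 complete
print(consistency(records, "region", {"NA", "EU", "APAC"}))   # 2 of 3 valid
```

In practice, thresholds for acceptable completeness and consistency would be set by the organization’s data governance standards and the sensitivity of the AI use case.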

Model validation process assessment

  • What is it? An assessment of management’s processes for validating the accuracy, reliability and ethics of AI implementations used by the organization, especially for high-risk automated decision-making technology (ADMT). The goal is to validate management’s work by thoroughly testing AI implementations for accuracy, fairness, transparency and compliance, helping foster stakeholder trust.
  • What does it review? This assessment reviews management’s controls for testing and approving AI systems or technologies before implementing them in production environments. The review could assess performance (e.g., accuracy, precision, recall and other performance metrics), transparency (i.e., the explainability of results), fairness (i.e., bias detection) and compliance (i.e., adherence to legal standards).
  • What are we hearing? AI is a double-edged sword: it presents nearly unlimited possibilities for use, but it is also new and complicated, introducing inherent risks. Additionally, technology vendors are supercharging their deployment of these AI systems/technologies, sometimes deploying to customers without appropriate testing or validation. Therefore, model validation practices are critical for organizations to confirm AI is serving its intended purpose effectively and ethically.
  • How can Baker Tilly help? By conducting model validation process assessments, we help confirm that the organization deploys AI that is safe, secure, explainable, privacy-enhanced, fair, valid and accountable. Additionally, we help organizations identify potential biases, data integrity issues or performance flaws, allowing them to correct problems before AI systems/technologies are deployed. This ultimately improves model reliability and protects against negative consequences.
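The performance and fairness dimensions of model validation noted above can be illustrated with lightweight checks. The following Python sketch computes precision and recall from predictions and compares positive-outcome rates across groups as a simple bias signal; the data, group labels and threshold choices are illustrative only, not a complete validation suite:

```python
# Minimal sketch of model validation checks: performance (precision/recall)
# and a simple demographic-parity comparison. All data here is illustrative.

def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def approval_rate_by_group(y_pred, groups):
    """Positive-outcome rate per group; large gaps may signal bias."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground-truth outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical cohorts

print(precision_recall(y_true, y_pred))      # → (0.75, 0.75)
print(approval_rate_by_group(y_pred, groups))
```

A validation process would run checks like these against agreed-upon thresholds before a model is approved for production, and document the results for the audit trail.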

Internal audits and AI: An organizational checklist

As AI continues to evolve, organizations that fail to address its impacts and risks may fall behind their competitors or face unforeseen challenges. The complexities of AI make it crucial to act now – before the technology becomes too embedded in your operations to assess thoroughly or too costly to modify.

With any complicated topic, the smartest way to get answers is to begin by asking questions. When it comes to your organization’s use of AI, you should consider asking probing questions around AI readiness, data governance and model validation.

We created an initial checklist to get you started:

AI readiness/governance

  • Do we have a clearly defined AI governance structure within our organization?
  • Do we have a defined AI strategy or specific AI adoption goals and outcomes for use cases?
  • Do we have the necessary internal expertise or access to external AI specialists to guide our AI strategy and governance?
  • Do we have any vendors already using AI or new AI contracts and, if so, did we agree to any notable terms and conditions?
  • Do we have leadership commitment to AI adoption, and how is that commitment communicated across the organization?
  • Do we know which technologies in our current technology stack are already deploying AI?
  • Do we have a mapped flow of AI connections and/or data across departments to ensure appropriate access controls are in place?
  • Do we have a process and personnel to stay updated on relevant AI regulations and compliance requirements specific to our industry or geographic location?
  • Do we have processes to ensure ongoing employee training and awareness of AI risks and ethical concerns?
  • Do we have a method to consider the potential impact of AI on job roles, employee relations and workplace dynamics?

Data governance

  • Does our organization’s data infrastructure support AI, including scalability, data integrity and security?
  • Does our organization have clear data ownership and accountability protocols in place, particularly for data used in AI systems?
  • Does our organization have data ready for ethical use, particularly in areas like bias detection and privacy protection?
  • Does our organization have a completed risk assessment related to the data used in AI models (e.g., accuracy, completeness, timeliness)?

Model validation

  • Are AI systems subject to continuous monitoring and validation, even after deployment into operational settings?
  • Are AI systems undergoing periodic independent audits to assess their fairness, accuracy and potential biases?
  • Are AI systems transparent, and can we explain their decision-making processes?
  • Are AI system decisions documented to track any potential deviations from expected behavior?
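The monitoring and documentation questions above can be made concrete with a small sketch: log each AI decision for the audit trail and flag when the rolling outcome rate drifts from the rate observed during validation. This Python illustration assumes arbitrary baseline, tolerance and window values; real thresholds would come from the organization’s validated model behavior:

```python
# Minimal sketch of post-deployment monitoring: document each AI decision and
# flag deviations of the rolling approval rate from a validated baseline.
# Baseline, tolerance and window values are illustrative assumptions.
from collections import deque

class DecisionMonitor:
    def __init__(self, baseline_rate, tolerance=0.10, window=100):
        self.baseline = baseline_rate      # approval rate seen at validation
        self.tolerance = tolerance         # acceptable drift band
        self.recent = deque(maxlen=window) # rolling window of recent outcomes
        self.log = []                      # documented decisions (audit trail)

    def record(self, input_id, approved):
        """Document one decision and add it to the rolling window."""
        self.recent.append(1 if approved else 0)
        self.log.append({"input_id": input_id, "approved": approved})

    def deviation_flag(self):
        """True when the rolling approval rate leaves the tolerance band."""
        if not self.recent:
            return False
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

monitor = DecisionMonitor(baseline_rate=0.5, tolerance=0.1, window=10)
for i in range(10):
    monitor.record(i, approved=True)   # sustained run of approvals
print(monitor.deviation_flag())        # rate 1.0 vs baseline 0.5 -> flagged
```

A flagged deviation would trigger the periodic independent review contemplated in the checklist, with the decision log providing the documentation needed to investigate.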

As you examine this checklist and begin thinking about your organization’s responses, it is imperative to keep an open mind regarding the realities of AI within your business environment. Remember that you don’t need to have all the answers today. You just need to connect with a team of AI and internal audit specialists who can help you prepare for tomorrow. To discuss these questions, your answers and all the possibilities, contact a Baker Tilly specialist.