AI and Compliance

May 5, 2026
-
Alexandria Poulin

This past weekend, I was skiing with my family, and something struck me.

AI isn’t just being discussed at conferences or in the tech field; it is part of everyday conversation.

On the lift, someone was talking about using AI to analyze skiing metrics and performance data. Later that day, I was chatting with a restaurant owner who was using a tablet with AI to help them shape a new restaurant concept and draft business plans. Neither conversation was about “this new technology.” They were about practical tools helping people move faster.

That’s when it became clear that AI is no longer experimental. It is here, and it has embedded itself into our daily lives and business operations.

And that’s exactly why security and compliance conversations matter.

AI is Here to Stay

The question is no longer “Should we adopt AI?” It’s “How do we embrace it?”

The question we, at Cloud Security Partners, prefer is “How do we embrace AI without creating unmanaged security and compliance risks?”

We believe organizations should lean into AI, but as Uncle Ben in Spider-Man said, “with great power comes great responsibility.” AI is not something to block, but something to use and manage securely.

AI is Changing the Risk Surface

A major concern is that companies will feel pressure to adopt AI quickly and will skip critical implementation steps like employee training, security controls, security reviews, and governance. Like a house, a secure environment needs a solid foundation before anything is built on top of it. If you skip that step, there will be costly problems down the road.

What we are seeing now is an expanded risk surface. It is so easy to copy data into AI tools without realizing the potential security implications. Does anyone really know where the data goes, or who can access it?

Shadow AI is something we are also starting to see more frequently. Departments are picking up AI tools on their own with no security review and no vendor assessments. Leadership may endorse the use of AI, but with a blanket statement that lacks clear guidelines. This leaves the door wide open for employees to use unsafe public models and upload sensitive data. The tools get implemented, they get access to data, and then the hard questions start. Once you give a tool the data, how do you get it back? Can you? Where did it go? Most organizations don't have a good answer to these questions, and that is exactly the problem.

Security researchers have already documented a range of risks tied to AI adoption, from data leakage and prompt injection attacks to model manipulation and the unintended exposure of sensitive information. (PurpleSec, AI Security Risks)

When organizations input information into a public AI model, that data may be used to further train the model, passed along to third parties, or used in unclear ways. Most organizations assume there is a straightforward answer to where their data goes. However, this is generally not the case. Additionally, there are some concerns around bias and hallucinations, but these models are quickly getting more sophisticated, and we are seeing those issues reduce as the technology matures. The output risk is becoming more manageable. The data risk is not.

You may notice the same questions and facts repeated here, and for good reason: organizations do not have visibility. Many public AI vendors provide limited transparency into how their models are trained, how long they retain your data, what data they retain, and who has access to it.


A 2025 study from researchers at Stanford, Berkeley, Princeton, and MIT found that major AI companies are becoming less transparent, not more. Scoring companies on a 100-point scale, the Foundation Model Transparency Index found an average score of just 40, with most companies disclosing little about how their models are trained, what data they use, or what impact they have on society.

(Stanford HAI, Transparency in AI Is on the Decline)

What about Compliance?

AI is moving fast, and compliance frameworks weren’t all written with generative AI in mind. This rush and gap in guidelines create uncertainty.

The reality is that most major security and compliance standards (e.g., SOC 2, PCI DSS, GDPR) were not written with generative AI in mind. However, that doesn’t put AI out of scope.

AI is now:

  • Processing personal data
  • Handling health records
  • Touching credit card information
  • Reviewing credentials
  • Influencing business decisions
  • Connecting directly to production systems

Therefore, compliance obligations still apply, even if they do not yet fall under an AI-specific standard. That guidance is likely coming; it's only a matter of time before regulations catch up with the technology.

This concern is not theoretical. According to IBM's 2025 security research, 13% of organizations have already reported breaches involving AI models or AI-powered applications, and 97% of those organizations reported lacking proper AI access controls.

(IBM Security Report)

What now?

Build your foundation around the use of AI! Don’t let your house crumble.

  • Get eyes on it: You can’t manage what you can’t see. Inventory your AI tools.
  • Set the Guardrails: Decide what data can and cannot be entered into AI tools. Put it in writing. A formal AI usage policy removes ambiguity and sets expectations before problems arise.
  • Trust, But Verify: AI vendors should go through the same security and compliance scrutiny as any other third party.
  • Give AI a Captain: Someone should be accountable for AI risk oversight, vendor coordination, monitoring, and executive visibility.
  • Watch It Like You Mean It: If AI systems are interacting with sensitive data or production workflows, their activity should be logged and reviewed just like any other critical system in your environment.
  • All aboard: Train your employees on usage and guidelines. Policy without training is just another document.
  • Don’t Set It and Forget It: AI is evolving quickly. The structure you build today should be reviewed and refined regularly. Governance is ongoing.
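To make the “Set the Guardrails” and “Watch It Like You Mean It” points concrete, here is a minimal sketch of a pre-submission check that blocks obviously sensitive data before a prompt leaves the organization and logs every attempt for review. The patterns, function names, and logging format are illustrative assumptions, not a specific product or our recommended ruleset; a real policy engine would cover far more data types.

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrail")

# Hypothetical patterns for obviously sensitive data. A real guardrail
# would be broader (names, health records, internal identifiers, etc.).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def submit_to_ai(prompt: str, user: str) -> bool:
    """Gate a prompt before it reaches an external AI tool; log every attempt."""
    findings = check_prompt(prompt)
    if findings:
        log.warning("BLOCKED prompt from %s: matched %s", user, findings)
        return False
    log.info("ALLOWED prompt from %s", user)
    # ...forward to the approved AI tool here...
    return True
```

The point of the sketch is the shape, not the patterns: decisions are made against a written policy, and both allowed and blocked prompts leave an audit trail that can be reviewed like any other critical system's logs.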

What comes next?

In the next parts of this series, we will cover:

  • An outline of an AI guidance framework
  • How to map AI systems to governance standards
  • How to manage shadow AI and data leakage risk
  • Practical AI security and compliance guidance

The goal of security is not to slam the brakes on AI adoption. It is to ensure that when you move fast, you do not leave your organization exposed. Companies that treat AI as a gray area will eventually feel it in their audits, their compliance reviews, and their incident response calls. As stated earlier, you need to build the foundation before you build the house. The companies that take the time to build a real structure around AI now are the ones that will move efficiently. This is not about slowing down. It is about making sure the progress you make today does not become a problem you have to fix tomorrow.

About the Author

Alexandria is the Director of Service Delivery at Cloud Security Partners, where she oversees the professional services division, leading project management and operations. She joined CSP in 2021 as a Project Coordinator and has grown alongside the company, advancing through multiple roles to her current position. Prior to entering the tech industry, Alexandria spent nearly a decade in higher education, where she served as a Laboratory Coordinator and Lecturer, coordinating multiple courses, managing student employees, overseeing graduate teaching assistants, and teaching in the classroom.

Outside of work, Alexandria is passionate about her community and the cybersecurity industry. She is actively involved in running OWASP Maine and BangorBeerSec, helping to foster local security communities in Maine. In her free time, she enjoys spending time with her family, traveling, skiing, and playing in adult volleyball and softball leagues.

