Don’t Let GenAI Regulations Catch Your Business Off Guard
The swift adoption of GenAI technologies has far outpaced the implementation of data governance and security measures around their use, but governing bodies are closing the gap. Enterprises that are ill-prepared for the onslaught of new regulations face the prospect of considerable work – and the possibility of hefty fines – in the very near future.
The Global Patchwork: Where GenAI Regulation Stands
AI regulation worldwide remains fragmented, but common themes are emerging. Regulators broadly agree on the need for robust data governance, often building on existing data protection laws such as the European Union’s GDPR and China’s PIPL. Security and risk management is another point of agreement, reflected in the rise of AI Safety/Security Institutes and the influence of guidelines like the NIST AI Risk Management Framework (RMF).
Today’s companies – especially those operating in multiple countries where cross-border controls are essential – must build guardrails to extend security and data governance to the use of GenAI or risk compromising their own policies as well as their compliance with governing bodies.
By 2027, more than 40% of AI-related data breaches will be caused by the improper use of generative AI (GenAI) across borders, according to Gartner, Inc.1
Why GenAI Compliance Matters
The cost of noncompliance can be steep, leading to significant financial, reputational, and legal consequences for businesses. GDPR violations, for example, can result in fines reaching up to 4% of a company’s global annual revenue or €20 million, whichever is higher.
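The “whichever is higher” rule means exposure scales with revenue once a company is large enough. As a quick illustrative sketch of the upper bound (function name and inputs are our own, not from any regulator):

```python
def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious violations:
    the greater of EUR 20 million or 4% of global annual revenue."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# For a company with EUR 1B in revenue, the cap is EUR 40M, not EUR 20M.
```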
Any enterprise that does business across regions has to take compliance seriously, as unintended cross-border data transfers can occur, particularly when GenAI is integrated into existing products. Organizations must extend their data governance frameworks to cover AI-processed data, monitor for these unintended transfers, and ensure compliance with international regulations.
GenAI Regulation Landscape
The global landscape of AI regulation in 2025 is characterized by significant fragmentation. While jurisdictions like the EU and South Korea have enacted comprehensive, legally binding frameworks, many others, including major players like the U.S. (at the federal level), the UK, Canada, and Australia, as well as developing economies in the Middle East and Africa, are pursuing more flexible, principles-based, sector-specific, or guideline-driven approaches. National strategies are proliferating, tailored to specific economic ambitions (UAE, KSA, Japan, Singapore), developmental needs (India, African nations), or balancing acts between innovation and perceived risks (UK, South Korea, Australia).
Despite the diversity, common themes are emerging. There is widespread recognition of the need for robust data governance, often building upon existing data protection laws (including GDPR, PIPL, PDPL, and POPIA). Principles of transparency and fairness/non-discrimination are almost universally acknowledged as important, although their implementation varies from binding requirements in high-risk contexts (EU, South Korea) to voluntary ethical guidelines (Australia, Japan, India).
However, areas like intellectual property (especially concerning training data and AI-generated outputs) and liability/accountability for AI-induced harms remain largely unresolved globally, marked by significant legal uncertainty and divergent national stances. The enforcement mechanisms also vary dramatically, from dedicated AI regulators with substantial fining powers (EU) to reliance on existing sectoral bodies (UK, Australia), or relatively light-touch approaches (South Korea’s low penalties, voluntary codes in Canada).
The influence of the “big three” models – the EU’s comprehensive regulation, the U.S.’s market-driven approach (currently deregulatory at the federal level), and China’s state-controlled adaptive system – is evident, with other nations often positioning their strategies in relation to these poles.
Key trends to watch:
- Continued Divergence vs. Harmonization: While international forums (G7, AI Safety Summit) and standards bodies (ISO, OECD) promote dialogue, achieving binding global consensus is unlikely in the near term. National interests and differing legal/political systems will perpetuate fragmentation. Regional efforts, such as the Australian strategy, may foster some alignment, but national specificities will remain dominant.
- Generative AI Focus: Expect more specific regulations targeting generative AI, particularly concerning transparency (labeling), content moderation, IP issues (training data), and potential misuse (deepfakes, misinformation).
- Maturation of Enforcement: As regulations come into full effect (such as the EU’s phased-in AI Act), focus will shift towards practical enforcement, interpretation of rules, and the development of case law, clarifying ambiguities.
- IP and Liability Resolution: These remain the most critical unresolved areas. Pressure from rights holders and the reality of AI-related harms will likely force legislatures and courts globally to develop clearer rules, though consensus may be slow to emerge.
- The Flexibility vs. Certainty Trade-off: Jurisdictions will continue to grapple with balancing the need for adaptable regulations that can keep pace with technology against the need for legal certainty to encourage investment and compliance.
- Impact of US Policy: The longevity and impact of the U.S. federal shift towards deregulation remain to be seen. It could spur faster innovation domestically but potentially create compliance challenges for American firms operating globally and accelerate state-level regulatory activity.
What You Can Do Now
Global regulations are of particular concern if your company operates in multiple countries. Unfortunately, the lack of consistent global best practices and standards for AI and data governance exacerbates compliance challenges by forcing enterprises to develop region-specific strategies. But it’s important to establish governance frameworks that not only comply with new and emerging requirements but also enable the responsible and accelerated adoption of AI.
Due to the complexity of aligning AI with the evolving regulatory landscape, businesses generally anticipate a minimum of 18 months to implement AI governance models effectively.2 Still, it’s important to move swiftly, as Gartner predicts that by 2027, AI governance will become a requirement of all sovereign AI laws and regulations worldwide.
How Menlo Can Help
One of the biggest challenges in maintaining any type of regulatory compliance is proving that your policies are in place and working. Because so much GenAI activity happens in the browser, we recommend taking these steps now to be ready for audits later:
- Create and maintain a register of approved AI browser apps and extensions.
- Ensure that users are aware of these tools, the danger of free-tier tools, and why sanctioned tools have been specified. Any block or redirect page can be configured with a link to more specific guidance; Menlo customers can take advantage of Adaptive Web Modules to configure GenAI-specific block and redirect pages.
- Ensure that users’ browser policies follow corporate guidelines, as well as any required compliance frameworks, such as the CIS Benchmarks. Menlo Benchmarks includes CIS Level 1 and 2 guidelines, as well as new features introduced by browser vendors themselves.
- Isolate all traffic categorized as GenAI, as well as all traffic on the “.ai” TLD. Menlo enables these actions with just a few clicks from our central admin dashboard.
- Enable Browsing Forensics to get a view of user actions. Recordings of browser sessions can be triggered by traffic or by DLP rules.
- Determine how to analyze traffic to GenAI subdomains that are not classified as GenAI. Menlo Insights can give you a close look at this non-categorized traffic.
- Ensure that you have deployed DLP protections preventing copy/paste and upload actions, as well as watermarking to emphasize the sensitivity of GenAI use. Use content dictionaries or build something custom for your use case. Menlo deploys these rules automatically, inline.
- Deploy Menlo HEAT Shield AI to keep users away from fake AI impersonation sites.
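The register and isolation steps above can be expressed as a simple policy check. The following is a minimal, illustrative sketch only – not Menlo’s implementation – and the approved hostnames are placeholders:

```python
from urllib.parse import urlparse

# Hypothetical register of sanctioned GenAI apps (placeholder hostnames).
APPROVED_GENAI_HOSTS = {
    "chat.example-approved.com",
    "copilot.example-corp.net",
}

# Hosts categorized as GenAI but not on the approved register.
KNOWN_GENAI_HOSTS = {
    "chat.openai.com",
    "gemini.google.com",
}

def policy_action(url: str) -> str:
    """Return 'allow' or 'isolate' for a browser request, per the
    register-plus-.ai-TLD policy described above."""
    host = (urlparse(url).hostname or "").lower()
    if host in APPROVED_GENAI_HOSTS:
        return "allow"
    # Isolate anything on the .ai TLD or any known GenAI host.
    if host.endswith(".ai") or host in KNOWN_GENAI_HOSTS:
        return "isolate"
    return "allow"
```

In a real deployment these decisions are made by the secure enterprise browser or gateway, not application code; the sketch only shows the shape of the policy an auditor would expect to see documented.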
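A content dictionary for the DLP step above can be as simple as a set of regular expressions matched against text a user tries to paste or upload. A minimal, hypothetical sketch (the rule names and patterns are examples, not a shipped dictionary):

```python
import re

# Hypothetical content dictionary: patterns for data that should never
# reach a GenAI prompt. Real dictionaries are tuned per use case.
CONTENT_DICTIONARY = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"(?i)\bconfidential\b"),
}

def dlp_violations(text: str) -> list[str]:
    """Return the names of dictionary rules the text triggers."""
    return [name for name, pattern in CONTENT_DICTIONARY.items()
            if pattern.search(text)]
```

A paste that triggers any rule would be blocked inline; the triggered rule names give you the audit trail that proves the policy is working.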
To find out more about GenAI in the workspace and how effective telemetry and visibility can help you stay compliant, read our 2025 Report: How AI is Shaping the Modern Workspace.
————-
1 Gartner Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027, February 17, 2025
2 From Hype to Impact: How Enterprises Can Unlock Real Business Value with AI, EPAM Systems, April 2025