AI Risks Are Multiplying. Last Year’s Rulebook Might as Well Be a Rotary Phone
March 31, 2025 – Published in Corporate Counsel Business Journal
Authors: Ronald J. Levine, Herrick senior litigator, and Richard Torrenzano of The Torrenzano Group
Deepfakes, phishing attacks, algorithmic bias, backdoors, trojans, data poisoning, supply chain vulnerabilities, side-channel attacks, and reputation attacks through artificial intelligence-enabled manipulation.
These aren’t plot points in the latest dystopian thriller. They’re boardroom-level threats, happening in real time.
Artificial intelligence (AI) isn’t just transforming business; it’s ballooning risk and exposing organizations to new, complex vulnerabilities that demand fresh approaches to crisis management.
Companies that treat AI risks as an IT problem are sleepwalking towards disaster. The question isn’t whether AI will create governance challenges—it’s whether leadership will get ahead of them or get steamrolled.
The old model, with lawyers quietly in the background reviewing contracts and ensuring compliance with glacially evolving regulations, is obsolete.
Legal counsel now chases AI risk like a firefighter sprinting toward a five-alarm blaze.
Putting out fires isn’t a strategy; it’s a last resort. Organizations need active crisis plans that are not just drafted but battle-tested, so they’re ready when AI-driven risks explode. If last year’s plan is still in the digital file, delete it now. Technology has already made it obsolete.
What’s at stake? Everything.
Who is responsible? Is it software engineers, data scientists or compliance teams? Or is it a game of corporate musical chairs—where the last one standing gets stuck holding the regulatory grenade?
If an AI-driven lending system suddenly starts rejecting mortgage applications for an entire zip code, whose inbox does that problem land in?
The answer is almost always, "We’ll get back to you on that." Without a clear crisis response plan, these issues spiral into reputational disasters.
Imagine an AI-powered medical diagnosis tool that suggests different treatments for patients with the same symptoms—without providing a clear explanation. Or a fraud detection system that flags transactions from one neighborhood while ignoring identical patterns in another. Or a hiring review program that systematically rejects equally qualified candidates based solely on race or national origin.
If these systems function like a chef’s secret recipe—delivering results without revealing the ingredients—good luck explaining the “secret sauce” when regulators or media come calling.
Companies need real oversight, not feel-good mission statements about responsible AI.
Legal teams should ensure AI models are auditable, explainable and capable of passing regulatory scrutiny.
Because if no one inside the organization understands how an AI system makes decisions… how can they explain it to stakeholders, media or a judge during a crisis?
Compliance meets AI—and the fine print just got smaller
If companies think AI regulations are a headache now, they’re in for a migraine.
Regulators aren’t buying the “but the algorithm did it” excuse. Global and regional agencies, eager to make a name for themselves, are sharpening their regulatory knives. If AI is running the show, someone will be held accountable for its decisions.
Elected and regulatory bodies worldwide are scrambling to rein in AI, rolling out sweeping accountability, bias and privacy laws at breakneck speed.
But while policymakers are drafting rules, lawsuits are piling up—actions over AI-generated content, discrimination and misinformation are making their way through agencies and courts.
Meanwhile, AI is busy writing news articles, designing logos and producing music that sounds suspiciously like existing artists' work. Battles are, and will be, raging with copyright holders, data privacy regulators and entire industries demanding clarity…and compensation.
We have learned from countless examples that multi-million-dollar regulatory fines and damage awards can and do create reputation and market meltdowns overnight. In short, AI doesn’t just automate processes; it accelerates risk.
AI’s Wild Wild West days are fading into the sunset. Say hello to the new sheriffs in town.
The legal environment is shifting at ludicrous speed—with tightening data privacy laws, expanding antitrust cases and a growing body of deepfake and cybersecurity regulations.
In the past two years, the U.S. Securities and Exchange Commission (SEC) levied $1.5 billion in fines against financial firms for compliance failures, including failures to monitor private messaging, social media and unauthorized devices. European regulators have likewise imposed heavy fines for data breaches.
Domestically, Congress continues to advance consumer privacy and disclosure legislation that seeks to establish comprehensive federal standards for protecting consumer data, but no such federal law has been enacted yet.
These moves echo the European Union’s (EU) General Data Protection Regulation (GDPR), the California Consumer Privacy Act and the recently enacted Texas Data Privacy and Security Act, among others.
And it looks like they are just in the opening act.
As AI technology rockets ahead like a spacecraft slingshotting around a star, regulators scramble to chart its trajectory, sketching out policies like last-minute blueprints for a ship that’s already left orbit.
Smart companies aren’t waiting for impact. They’re rewriting their governance playbooks now—embedding legal oversight, enforcing AI ethics and making sure they don’t wake up to a social media or news scandal about how their chatbot just insulted an entire demographic.
And then there’s the data problem.
Existing privacy and intellectual property laws weren’t designed for an AI-driven global business arena. Who actually owns data AI scrapes from the internet? What happens when an AI generates false, damaging information about a customer?
The legal gray areas are multiplying—and regulators are sharpening their pens. Fasten your seatbelts. AI compliance is about to become one of the biggest headaches of the decade.
Vendor Dilemma: Buying AI Is Easy. Governing It Is Hard.
Most companies aren’t building AI from scratch—they’re buying it from third-party vendors… and that’s where risks multiply.
If a company implements an AI-driven HR system that accidentally violates employment laws, who takes the fall—the vendor or the company using it? Regulators won’t care that the AI was outsourced.
Crisis response teams must be prepared to manage such challenges before they escalate into legal and communications nightmares.
Legal and communications teams must scrutinize AI contracts more rigorously than ever. If the vendor’s terms don’t include liability protections, indemnities and compliance guarantees, then congratulations: the company just signed up to take the legal hit when its AI malfunctions.
"The price of greatness is responsibility."— Winston Churchill, October 16, 1943, after receiving an honorary degree from Harvard University.
In the age of AI, those words have never been more relevant.
The smartest companies will get ahead of AI risks early. They’ll integrate legal oversight from day one, ensuring transparency, accountability, and regulatory compliance. Their employees won’t just marvel at AI’s potential—they’ll be trained to manage its dangers.
The rest? They’ll learn the hard way. They’ll be the ones scrambling when regulators come knocking, the ones issuing public apologies when AI goes rogue, the ones watching their market cap nosedive as customers lose trust.
In an AI-driven world, governance is not a checkbox—it determines whether a company leads the industry or becomes its next cautionary tale. Any organization ignoring AI governance today can be sure someone else isn’t. Most likely a regulator. Probably with a subpoena.
AI isn’t just another innovation—it’s a legal and reputational battleground.
Companies that thrive will be those that recognize it as a high-stakes risk, a regulatory flashpoint and a potential reputation liability nightmare.