From Tactical Tool to Strategic Imperative: Why PR Must Guide an Intelligent AI Revolution

The generative AI market is poised to create trillions in economic value. Yet a crucial disconnect is emerging in the C-suite, captured in McKinsey’s recent report: while generative AI adoption is soaring, most organisations have yet to see a meaningful impact on their bottom line. This isn't just a technology gap; it's a strategic one. It reveals a rush to adopt AI as a tactical tool for efficiency without embedding it intelligently and ethically into the core of the business. This oversight presents one of the most significant reputational risks of the next decade.

As public relations leaders, we are the stewards of corporate reputation. We are uniquely positioned at the intersection of innovation, communication, and trust. The question is no longer if we will use AI, but whether we will guide its implementation with strategic foresight or simply manage the fallout from its misuse.

Let's talk about what's really at stake here.

The True Cost of Unchecked Innovation

The reputational risks of poorly governed AI are no longer hypothetical. From DPD’s chatbot being taught to criticise its own company to Google’s search tool advising users to put glue on pizza, these are not isolated glitches. They are symptoms of deploying powerful technology without adequate human oversight and ethical guardrails.

This rush to innovate ignores two systemic costs. The first is algorithmic bias. When AI models are trained on flawed or unrepresentative data, they learn to perpetuate and even amplify societal biases. We have seen this in hiring tools that discriminate against female candidates and facial recognition systems that are less accurate for people of colour. For any organisation with a public commitment to Diversity, Equity, and Inclusion (DEI), deploying biased technology is an act of reputational hypocrisy.

The second is the profound environmental debt. While the carbon cost of training a single model is high, the ongoing energy consumption of the data centres that power AI is staggering. The International Energy Agency (IEA) reports that data centres already account for roughly 1-1.5% of global electricity use and, with the boom in generative AI, their total consumption could double by 2026—consuming as much electricity as Japan. As PR counsellors crafting ESG narratives, we cannot ignore this glaring contradiction.

Governance as a Competitive Advantage

The solution lies in robust, C-suite-led governance. McKinsey’s report also found a strong correlation between CEO-level oversight of AI and higher financial returns. Yet, only 28% of organisations have this in place. This leadership vacuum is where the PR function must step in, framing ethical governance not as a constraint, but as a competitive advantage.

Reputation is built on trust, a principle often at odds with the opaque, "black box" nature of AI. Research from institutions like the USC Annenberg Center for Public Relations confirms that practitioners are grappling daily with AI’s ethical dilemmas. Proactively addressing issues like bias, misinformation, and privacy is how we turn a potential vulnerability into a powerful differentiator.

Reclaiming Our Authority: A Framework for Action

We’ve been working with AI companies that are pushing the boundaries of what the technology can and should do. Combining those experiences with nearly two decades in this business, here's what I believe we need to do:

Get in the room where it happens: We need mandatory AI governance councils with senior comms people at the table—not as an afterthought, but as decision-makers. Before any AI tool touches a customer, it needs a reputational risk assessment. Full stop.

Come clean about AI use: Your customers aren't stupid. They know when they're talking to a machine, and pretending otherwise insults their intelligence. We should be leading the push for clear standards on disclosing AI use. If you're using AI to write your CEO's LinkedIn posts, say so.

Keep humans in charge: The best AI outcomes I've seen come from augmenting human judgment, not replacing it. Every critical communication workflow needs breakpoints where a human being—preferably one who understands reputational risk—reviews what's happening.

Do the maths properly: We need to broaden the ROI calculation for AI. Before adopting a new tool, the assessment must include its full environmental footprint and potential ethical risks, such as bias, alongside projected productivity gains. If the reputational or environmental cost outweighs the efficiency benefit, it is not a smart investment.

The organisations we advise are at a crossroads. The temptation to chase the efficiencies promised by AI is immense. However, true leadership lies not in the speed of adoption, but in the intelligence of its integration. Our role is to ensure that in the rush to innovate, we do not sacrifice the very principles of trust, transparency, and humanity that underpin a strong and enduring reputation.
