Communication Without Communicating: The Peril of AI Outpacing Strategic Judgment
By Ishola N. Ayodele
In the digital age, where images can
be conjured, enhanced, or manipulated with a few keystrokes, a single
photograph has ignited a firestorm of debate in Nigeria and beyond.
On January 4, 2026, the Nigerian
Presidency shared a picture on X (formerly Twitter) depicting President Bola
Tinubu in a private lunch meeting with Rwandan President Paul Kagame in Paris.
The image showed the two leaders seated amicably, discussing global affairs and
Africa's future.
Yet, what should have been a routine
diplomatic snapshot quickly spiraled into controversy due to a subtle but
unmistakable detail: a "Grok" watermark embedded in the corner,
hinting at AI involvement.
Social media erupted with hashtags like #ForgerInChief and #ParisLunchHoax. One angry
user wrote, "What they cannot forge does not exist, artful forgers."
Posts from critics questioned whether the
meeting had occurred at all, linking it to broader allegations of
deception in governance.
Even as the Presidency swiftly
clarified through Senior Special Assistant Temitope Ajayi that the photo was
authentic, captured on a phone in poor lighting and merely enhanced using Grok
AI for clarity, the damage was done.
Reactions remained polarized: some
accepted the explanation as a harmless tech boost, while others dismissed it as
a cover-up, fueling cynicism about government transparency.
This incident isn't isolated; it's a microcosm of the ethical quagmire
surrounding AI in public relations (PR). In my 2025 article, "AI, Ethics
and the Soul of Public Relations," I delved into how responsible AI use
upholds the integrity of communication, drawing on frameworks like the
International Public Relations Association's (IPRA) Five AI and PR Guidelines, which
emphasize honesty, transparency, and avoiding harm, and the Global Alliance for
Public Relations and Communication Management's Seven Responsible AI Guiding
Principles, including human-led governance, accountability, and ethical
innovation. These principles, born from global consultations, underscore that
AI should augment, not undermine, communication.
The
Tinubu-Kagame photo saga vividly illustrates why ignoring them invites chaos.
Here are two quick lessons from this self-inflicted paracrisis.
1. Transparency is the Antidote
to Suspicion
A simple observance of transparency guidelines could have averted this
hullabaloo. The Global Alliance's principle of "Transparency and
Explainability" mandates clear disclosure when AI touches content, much
like labeling genetically modified foods. In this case, had the Presidency preemptively
noted the enhancement, perhaps with a caption like "Photo enhanced for
clarity using Grok AI," the watermark might have been a non-issue.
Instead, its unannounced presence bred doubt.
PR practitioners should disclose AI interventions to "inform" the
public; without such disclosure, trust erodes. Statistics bear this out: a 2024 Getty Images
study found that 98% of global consumers view authentic visuals as crucial for
building trust, with 87% demanding transparency on AI-generated or altered
images. In Nigeria, where misinformation already plagues politics, such opacity
only amplifies skepticism.
Borrowing from psychology, this ties into "confirmation
bias"—people interpret ambiguous evidence to confirm preexisting beliefs.
Critics of the administration saw the watermark as proof of forgery, while
supporters rationalized it away. As AI ethicist Timnit Gebru warns, "If we
don't build transparency into AI systems from the start, we risk amplifying
existing inequalities and eroding societal trust."
2. Strategy
Must Lead Technology: The Value of Certified Communication Experts
Too often, organizations adopt tools
before they adopt frameworks for ethical use. AI can enhance imagery, but without
strategic judgment, it can also undermine messaging. This is where professional
communication expertise becomes indispensable.
Trained communicators bring two
essential assets:
- Contextual judgment:
The ability to anticipate how a message will be interpreted, not merely how
it is constructed. This requires cultural, political, and
psychological situational awareness. For example, the Nigerian Institute of Public Relations'
adoption of the Global Alliance Responsible AI Guidelines emphasizes "Expertise and Professionalism,"
urging practitioners to understand not only AI's capabilities but also its
limitations, risks, and ethical implications within sensitive communication
environments.
- Ethical foresight:
The discipline of applying ethical standards proactively to protect credibility and
public trust. This includes early disclosure of AI involvement and ensuring
narrative alignment. A certified PR professional trained under frameworks such
as the IPRA and Global Alliance AI guidelines would likely have flagged the
reputational risk of an unexplained AI watermark and recommended either its
removal or transparent disclosure, thereby averting the backlash before it
escalated. This is why I emphasize the principle of Human-in-the-Loop
(HITL) in my article "Strategic Use
of AI Tools in PR Campaigns and Reputation Management."
Conclusion:
When Strategy Lags Behind Technology, Credibility Becomes
Collateral Damage
The Tinubu–Kagame photo did not fail
because it was enhanced; it failed because its creators ignored how it would be
perceived. In public communication, perception is not secondary; it is
decisive.
The danger today is not AI, but communication
without judgment. Technology magnifies both message and mistake. Without
strategic oversight, AI doesn't clarify truth; it multiplies doubt.
Credibility cannot be edited back
in.
This Grok image saga reminds us that
AI cannot confer trust; only authenticity can. The digital public is not
irrational; its skepticism is earned. In an ecosystem crowded with manipulated
media, ambiguity becomes the currency of doubt.
