Reputation Management in the Age of AI and Viral Storms: Lessons from the Oshiomhole Jet Saga
By Ishola N. Ayodele
Warren Buffett once said, "It takes 20 years to build a reputation and
5 minutes to ruin it." Now, in this age of AI and social media, it can take just 30 seconds to destroy one.
This is because a 30-second video can now do what years of political
opposition could not.
It can:
i. End a career.
ii. Damage a reputation.
iii. Rewrite a public narrative.
No press conference.
No court ruling.
No investigation.
Just one clip. One upload. One share.
And it’s everywhere.
Recently, Nigerians watched this play out.
A grainy video surfaced, allegedly showing a man resembling Senator Adams
Oshiomhole inside a luxury private-jet cabin, massaging the leg of a woman who
was not his wife (The Whistler Newspaper, 2026). Shared rapidly on platforms
including SaharaReporters’ Facebook page, it sparked outrage, memes, and calls
for accountability, amplified against the backdrop of economic frustrations
(The Whistler Newspaper, 2026).
Senator Oshiomhole’s media office, through aide Oseni Momodu, issued a
strong denial, describing the clip as “a poorly crafted and edited fake AI
video,” a “fabricated contraption” created by “malicious actors” for blackmail,
reputation damage, or clicks. They pointed to alleged technical
irregularities, particularly a mismatch between the opening 0–0.6 seconds,
which show the woman boarding via airstairs, and the remaining interior
footage, and urged social media regulation (Daily Post Nigeria, 2026; The
Whistler Newspaper, 2026b).
The woman has been widely identified in media reports as South African
lifestyle influencer and adult-content creator Leshaan Dagama (also spelled
Leeshan Da Gama). On her Instagram Stories, she reportedly responded amid
trolling: “Your senator is the problem, go be mad at him, not me.” Her
statement, which did not deny the encounter, added fuel to speculation without
resolving authenticity questions (Premium Times Nigeria, 2026).
A detailed fact-check by The Whistler applied multiple professional deepfake-detection tools (Deepware Scanner, Zhuque AI Detection Assistant, AU Video Detector, Sight Engine, and Hive AI Deepfake Detection) and found no clear hallmarks of generative AI, such as unnatural blending, lighting inconsistencies, or lip-sync errors, concluding that the video is not AI-generated (The Whistler Newspaper, 2026a).
Another investigation, by Edo Broadcasting Service (EBS), flagged potential
deepfake indicators and leaned towards the video being synthetic.
No independent, court-admissible forensic analysis has been publicly
released by any party.
Authenticity therefore remains disputed.
This saga exemplifies a broader shift: social media has dismantled traditional
gatekeepers. Once, spin doctors, paid journalists, or political muscle could
“kill the story” or sweep it under the carpet. Today, one leaked clip reaches
millions instantly. A landmark randomised controlled trial demonstrated this
power: Facebook mobilisation messages that showed which of a user's friends
had voted increased real-world turnout far more than impersonal informational
messages, with the indirect effect spreading through close personal networks
estimated at roughly four times the direct effect (Bond et al., 2012).
Case studies worldwide illustrate the peril and the new accountability.
A. Hong Kong deepfake financial fraud (2024)
In early 2024, a finance worker at a multinational company in Hong Kong
was tricked into transferring approximately $25 million (some reports cite up
to $39 million) during a video conference call. The scammers used deepfake
technology to impersonate the company’s chief financial officer and other
senior executives, convincing the employee the transfer was legitimate and
urgent. The fraud was only discovered after the money had been moved to
multiple overseas accounts. The case highlighted the growing risk of real-time
deepfake video being used for high-value corporate fraud. (Source: Hong Kong
Police, CNN, Bloomberg, 2024)
B. Indian politician Palanivel Thiagarajan audio leak (2023)
In April 2023, leaked audio recordings emerged allegedly showing Tamil
Nadu DMK lawmaker and former Finance Minister Palanivel Thiagarajan making
critical remarks about his own party, praising the rival BJP, and discussing
internal corruption. Thiagarajan immediately denied the authenticity of the
clips, calling them “fabricated” and suggesting they were AI-generated deepfake
audio created by a blackmail group to damage his reputation. Independent audio
forensic analysis by journalists and experts found no signs of synthetic
manipulation (consistent waveforms, natural background noise, absence of AI
artifacts), confirming the recordings were genuine. The incident became a
widely cited example of the “liar’s dividend” tactic. (Source: Rest of World,
Indian media investigations, 2023)
Lessons for Public Officials & Leaders
In the age of smartphones, instant
sharing, and increasingly sophisticated AI tools, the old rules of political
survival have been permanently rewritten. Spin, delay, denial, and distraction
no longer outrun reality. The only reliable defense left is character itself.
- The safest, most bulletproof strategy is ethical living: whatever you do not want exposed, do not do. No crisis team, no PR firm, no rapid-response unit can outpace a single 30-second authentic clip in today’s world. The moment the phone records, the moment the file leaves the room, the story is already viral while the denial is still being typed.
- High moral standards, integrity, and visible transparency are now non-negotiable. They are not optional extras for leaders; they are the minimum requirement for preserving public respect and political survival. As the striking African proverb declares: “When peeling groundnut for a blind man, you must keep whistling so he knows you’re not eating it.” Today there is no blind man. There is only a global audience that never sleeps, cameras always at the ready.
- When crisis hits, how you respond determines how much survives:
- Transparent communication builds credibility faster
than any polished apology.
- Evidence-backed rebuttals (metadata, forensic
analysis, timestamps, source verification) carry real weight.
- Genuine accountability when appropriate can restore
respect where denial destroys it.
- Blanket, evidence-free cries of “it’s AI-generated” in the face of mounting verification usually accelerate the damage.
- Society must also rise to the challenge. Beyond individual leaders, this
moment demands collective responsibility:
- Citizens urgently need stronger digital and media
literacy to separate real from synthetic content.
- Platforms must practice responsible moderation without
crossing into censorship.
- Governments and societies may need smarter cyber laws
that punish malicious deepfakes while fiercely protecting free expression
and press freedom.
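The "evidence-backed rebuttals" point above can be made concrete. As a minimal, hedged sketch (the file name and field names are hypothetical, not drawn from any actual fact-check workflow), a communications team receiving a disputed clip could immediately record its cryptographic fingerprint and filesystem timestamp, so that any forensic report issued later can be tied to exactly the same file:

```python
import hashlib
import os
from datetime import datetime, timezone

def fingerprint_file(path: str) -> dict:
    """Record a SHA-256 hash, size, and modification time for a file,
    so a later forensic analysis can be shown to concern the same clip."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files do not exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    stat = os.stat(path)
    return {
        "sha256": sha256.hexdigest(),
        "size_bytes": stat.st_size,
        "modified_utc": datetime.fromtimestamp(
            stat.st_mtime, tz=timezone.utc
        ).isoformat(),
    }

# Hypothetical usage:
# record = fingerprint_file("viral_clip.mp4")
```

A hash alone does not prove a video is real or fake; it only proves that the file later examined by detection tools is the unaltered file that went viral, which is the kind of verifiable chain of custody that gives a rebuttal real weight.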
Conclusion
In the end, no firewall, be it legal,
technological, or rhetorical, can protect what should never have been created.
The timeless truth, now amplified by cameras in every pocket and AI that
exposes lies faster than they can spread, is brutally clear: live so that if
every hidden moment were laid bare tomorrow, nothing could break you, because
nothing worth hiding ever happened. In 2026 and beyond, integrity is no longer
just a virtue or a nice-to-have; it has become the only currency that cannot be
deepfaked, the only shield that still holds when the viral storm arrives.
Whatever you cannot afford to see trending, simply do not do.
©Ishola N. Ayodele is a distinguished, multiple award-winning strategic communication expert who specializes in ‘Message Engineering’. He helps organizations, brands, and leaders communicate in ways that yield the desired outcome. He is the author of the seminal work ‘PR Case Studies: Mastering the Trade’ and Dean of The School of Impactful Communication (TSIC). He can be reached via ishopr2015@gmail.com or 08077932282.
References
Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C.,
Settle, J. E., & Fowler, J. H. (2012). A 61-million-person experiment in
social influence and political mobilization. Nature, 489(7415),
295–298. https://doi.org/10.1038/nature11421
Bradshaw, S., Bailey, H., & Howard, P. N. (2021). Industrialized
disinformation: 2020 global inventory of organized social media manipulation.
Oxford Internet Institute, University of Oxford. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2021/01/CyberTroop-Report-2020-v.2.pdf
Daily Post Nigeria. (2026, February 4). Fake AI footage – Oshiomhole debunks
viral private jet video. https://dailypost.ng/2026/02/04/fake-ai-footage-oshiomhole-debunks-viral-private-jet-video
McLuhan, M. (1964). Understanding media: The extensions of man.
McGraw-Hill.
Premium Times Nigeria. (2026, February). Viral private jet clip was
AI-generated, Oshiomhole says, as South African model shades Nigerians. https://www.premiumtimesng.com/entertainment/naija-fashion/854385-viral-private-jet-clip-was-ai-generated-oshiomhole-says-as-south-african-model-shades-nigerians.html
The Whistler Newspaper. (2026a). FACT-CHECK: Viral Oshiomhole foot-massage
video not AI-generated. https://thewhistler.ng/fact-check-viral-oshiomhole-foot-massage-video-not-ai-generated
The Whistler Newspaper. (2026b). Oshiomhole denies viral private jet video,
claims clip is AI-generated. https://thewhistler.ng/oshiomhole-denies-viral-private-jet-video-claims-clip-is-ai-generated/
WIRED AI Elections Project. (2024). Election deepfakes dataset & related
coverage. https://www.wired.com/story/generative-ai-global-elections/