From Principles to Practice: Internalizing IPRA and the Global Alliance's AI Guiding Principles Through the 3H Model
By Ishola N. Ayodele
As I navigate the closing days of 2025, I see a digital landscape
pulsing with unprecedented energy. Artificial intelligence has not merely
entered public relations and communication; it has woven itself into its very
fabric, reshaping how narratives are created, analysed,
amplified, and consumed. According to the Global Alliance for Public Relations and
Communication Management, a staggering 91% of organisations worldwide now permit
the use of AI in communication practice.
Yet a sobering reality shadows this innovation: only 39.4% of these organisations have
implemented any responsible framework for the use of AI. Adoption
has raced ahead; governance has limped behind.
This imbalance triggered an awakening across professional bodies, especially
among communication professionals who understand that trust is the
currency of our craft. The central question became unavoidable: how
do we embrace this powerful technology without eroding credibility, damaging
relationships, or surrendering our ethical soul?
According to Spokespersonsdigest, a defining moment came in October 2023,
when the International Public Relations Association (IPRA) unveiled a comprehensive
framework for navigating AI in professional practice: the five IPRA AI and PR Guidelines. These were not
technological prescriptions; they were ethical anchors.
1. To act with honesty and integrity, by determining in advance when AI-generated content will be used in external communication, and ensuring that such use aligns strictly with professional guidelines.
2. To be open and transparent, by clearly disclosing AI-generated content and complying with all regulatory and style guidelines relating to AI identification and attribution.
3. To honour confidential and copyrighted information, through deliberate staff training on what constitutes sensitive data, avoiding the entry of confidential information into AI tools, and refraining from using AI-generated content derived from copyrighted material.
4. To ensure truth and accuracy, by subjecting all AI-generated content to rigorous human fact-checking, correction, and bias removal by professionals with relevant expertise.
5. To avoid the dissemination of misleading information, by exercising utmost care to prevent misinformation and ensuring that any inadvertent errors are corrected promptly and transparently.
These guidelines signalled an important truth: AI may be
fast, but ethics must be deliberate.
That global momentum reached a decisive inflection point in May 2025,
when the Global
Alliance for Public Relations and Communication Management
advanced the conversation further. At the historic Venice Symposium,
the Alliance introduced the Seven
Responsible AI Guiding Principles, later ratified through the Venice Pledge
and co-signed by 24 member organisations, including Nigeria’s NIPR.
These principles elevated the discourse from how to use AI to how
to govern it responsibly.
It was against this global backdrop that the final
Mandatory Continuous Professional Development (MCPD) programme of 2025
of the Nigerian Institute of Public Relations became far more than a routine
professional gathering; it was a seismic rumble across our professional landscape. Titled "PR Power Lunch: Advancing Responsible AI Practice," the
session spotlighted the Global Alliance's Seven Responsible AI Guiding Principles
not as abstract doctrine, but as an urgent call to action.
Here are the seven principles from my perspective as an African:
1. Ethics First:
AI must adhere to unwavering ethical standards, aligning with Global Alliance codes. In Africa, this echoes the Akan proverb: "The ruin of a nation begins in the homes of its people." Ethical lapses, such as AI perpetuating colonial-era biases in media narratives, ruin trust from within. Nigerian PR professionals, through NIPR, champion this by prioritizing integrity over hasty innovation.
2. Human-Led Governance:
Human oversight must govern AI, addressing privacy, bias, and disinformation. As Africa's data scarcity breeds opaque models, glass-box transparency, open to scrutiny like a village elder's counsel, becomes essential. Governance here means communal deliberation, ensuring AI respects diverse stakeholder voices.
3. Personal and Organizational Responsibility:
Professionals own AI outputs, demanding rigorous fact-checking and education. In high-stakes environments like Kenya's vibrant media landscape, where misinformation can ignite unrest, this principle calls for diligence akin to the Maasai warrior's vigilance, owning every action to protect the community.
4. Awareness, Openness, and Transparency:
Disclose AI involvement openly, with attribution. In Africa's oral cultures, transparency mirrors the griot's truthful storytelling: "A lie may travel for a moon, but truth will overtake it." PR teams must declare AI's role in campaigns, building trust amid rising deepfake threats.
5. Education and Professional Development:
Continuous learning is a core competency. With Africa's youth bulge driving innovation, associations like the African Public Relations Association must lead upskilling, collaborating with bodies like FERPI to create curricula that blend global tools with local wisdom.
6. Active Global Voice:
PR professionals advocate for equitable AI, shaping governance. African voices, from Nigeria to South Africa, must be amplified in international forums, ensuring AI addresses continental challenges like digital divides and turning advocates into architects of inclusive futures.
7. Human-Centered AI for the Common Good:
Champion AI that promotes societal well-being and equity. In Africa, this means tools tackling unemployment, climate resilience, or health equity, augmenting intention toward ubuntu, where AI fosters prosperity for all, mindful of environmental impacts in vulnerable ecosystems.
Principles without practice are powerless.
These seven principles are like the baobab tree: vast, ancient, and
life-giving. But no one benefits from its shade by admiring it from a distance.
To gain from it, we must gather under it. Responsible AI must therefore move
beyond awareness into practice, and practice begins with internalization.
The real question before us is no longer whether AI will shape public
relations; it already has. The defining question is whether we will shape AI with intention, ethics, and responsibility,
or allow it to reshape our profession in
ways that erode trust. And this is where my 3H Model becomes essential.
AI as Augmented Intentions
The core philosophy behind my 3H Model, as articulated in my July 2025 article 'Strategic Use of AI Tools in PR Campaigns and Reputation Management', is that AI is a double-edged sword: it extends human purpose, amplifying creativity and equity when ethically directed, but diverted from its human source, it floods like a wayward river, eroding trust.
A 2025 PRWeek and Boston University survey of 719
professionals reveals that while 71% of PR pros leverage AI for innovation, ethical
lapses like bias and misinformation persist in the 55% of firms lacking policies. The
3H Model counters this by grounding AI in human essence.
It is only by grounding AI in human essence that we
can internalize the Global Alliance’s Seven Responsible AI Guiding Principles
for implementation. No matter the conviction of a driver on his driving
prowess, the car will not move without fuel. Therefore, pledge and commitment
alone will not drive implementation of these seven Responsible AI Guiding
Principles but internalization. And this is why the 3H Model is so
crucial.
Ishola's 3H Model
Here is a short explanation of the 3H Model (I recommend reading the article for an in-depth understanding).
HEAD: The Mind Before the Machine
Human before algorithm. That is, before we prompt AI,
we must prompt ourselves with intentionality. Here is how to do it:
1. Human Intelligence First
Obama's speeches didn't begin with AI; they began with empathy. His team identified collective anxieties and aspirations before digital refinement.
AI is the chisel, but you must be the sculptor, visualizing the form before the first cut.
2. AI as Compass, Not Captain
Coca-Cola uses AI to scan sentiment, but human strategists interpret why a campaign resonates in Manila but fails in Mumbai.
AI identifies trends; humans give them meaning.
3. Psychology Over Processing
Nike's "You Can't Stop Us" was emotionally architected using Maslow and identity theory; AI stitched the footage, but humans built the soul.
As neuroscientist Antonio Damasio reminds us: "We are not thinking machines that feel; we are feeling machines that think."
Thus, use AI as a draftsman, not as the architect.
4. Test, Taste, Trust
Like a chef seasoning a stew, use AI to A/B test headlines, but taste every iteration. As the African proverb warns, "You do not test the depth of a river with both feet."
Test incrementally; AI is your tool, not your truth.
5. Literacy as Armor
Your team's discernment is your greatest strategic asset.
HEART: The Soul in the System
AI processes data; humans process dignity. This is where ethics cease to be a policy and become a practice.
1. Guardrails of Grace
During the Qatar World Cup, Adidas used AI to track fan sentiment, but human reviewers ensured visuals honoured local codes.
Innovation without cultural sensitivity is arrogance.
2. Transparency as Trust
The WHO labelled its AI chatbots openly during COVID-19, a small disclosure that preserved global credibility. As Kevin Plank, founder of Under Armour, once said, "Trust is built in drops and lost in buckets."
Secrecy erodes trust; transparency rebuilds it.
3. Soulful Inputs
Duolingo's AI tutors are trained on idioms, humour, and cultural nuance, making learning human. Our elders say, "Until the lion learns to write, every story will glorify the hunter."
Feed the machine with stories only humans can tell.
HAND: The Human in the Loop
Execution without ethics is automation. Action without accountability is recklessness.
1. Co-Creation, Not Automation
LinkedIn's AI drafts messages, but wise professionals infuse warmth, humour, and cultural touchpoints.
AI operates on System 1 (fast, instinctive); PR requires System 2 (deliberate, nuanced). Our forefathers in Africa understood this well when they said, "Words are sweet, but they can't replace food."
An AI message lacks the nourishment of human presence.
2. The Human Gatekeeper
The UK's NHS uses AI for symptom checks but escalates complex cases to doctors.
Why? Because a misinterpreted symptom can cost a life.
In PR, a misworded statement can cost a reputation.
The Facebook-Cambridge Analytica scandal was not a tech failure; it was a human oversight failure.
No AI should ever be left alone in the control room.
In summary:
The HEAD plans.
The HEART guides.
The HAND executes.
In Conclusion
A principle without practice is mere rhetoric. Therefore, signing the Venice Pledge without implementing the Responsible AI principles in our daily work is an echo in an empty hall: sound without substance, promise without presence.
The 3H Model transforms that echo into action. It
ensures AI remains Augmented Intention, not artificial replacement. This
human-centered discipline of thinking with clarity, leading with empathy, and
acting with integrity is what will propel PR professionals from passive adopters
to strategic architects.
Consequently, the 3H Model is indispensable in helping
communication professionals move the Global Alliance's Seven Responsible AI
Guiding Principles from principle to practice, from pledge to performance, and
from ethics on paper to excellence in action.
I deployed AI for proofreading, grammar checks, and fact-checking in this
article, but it did not replace judgment. There were several moments when I
rejected its suggested corrections, particularly where structure and language
risked diluting meaning. I consciously retained certain expressions because
words are not neutral; they are strategic tools in communication, deliberately
positioned to convey specific meanings. This is a practical demonstration of Human-in-the-Loop
AI usage. AI can fly fast and far, but without a human holding the reins, speed
becomes directionless.
The future of PR is not written by algorithms, it is
authored by professionals who remember that the most intelligent tool is still
only as wise as the human who wields it.
Ishola N. Ayodele is a distinguished and multiple award-winning strategic communication expert who specializes in 'Message Engineering'. He helps organisations, brands, and leaders communicate in ways that yield the desired outcome. He is the author of the seminal work 'PR Case Studies: Mastering the Trade' and Dean of The School of Impactful Communication (TSIC). He can be reached via ishopr2015@gmail.com or 08077932282.
