“All of us knows, not what is expedient, not what is going to make us popular, not what the policy is, or the company policy - but in truth each of us knows what is the right thing to do. And that's how I am guided.” – Maya Angelou
Um.
I have high regard for Maya Angelou. She is for many an American hero.
But this one. Well, she would not have intended it to be misunderstood. But it can be. Because each of us does not always know the right thing to do. And self-guided can be misguided or, worse, unguarded. The harm to self and others can be a mess.
Hello AI governance.
We do not have the right to do what we believe is right in areas where we lack the information or experience to judge. That is why communicating AI governance is critical.
The question is: How do we communicate AI governance in a way that matters to employees and the company, so that the message sticks?
The Policy
Let’s start with a sample policy (generated by ChatGPT).
AI in the Workplace: Governance Policy
Effective Date: November 1, 2023
Purpose: This policy sets forth guidelines and procedures for the responsible deployment and use of Artificial Intelligence (AI). Our aim is to use AI ethically, transparently, and to the benefit of all stakeholders, while recognizing and mitigating potential risks.
Scope: This policy applies to all employees, contractors, and third-party partners.
1. Ethical Use of AI
1.1. AI shall be used to augment human capabilities and improve operational efficiencies, not to promote or support activities that are illegal, discriminatory, or harmful.
1.2. AI will not be deployed in ways that create unfair bias or perpetuate existing societal biases.
2. Transparency
2.1. Any decision-making process that utilizes AI should be transparent. Stakeholders should be able to understand the criteria on which AI-based decisions are made.
2.2. Employees affected by AI-driven decisions have the right to request human review.
3. Privacy and Data Protection
3.1. AI systems that process personal data must adhere to global data protection regulations, such as GDPR.
3.2. Employees and stakeholders should be informed when their data is being collected and processed by AI.
3.3. Strict measures should be in place to anonymize and protect data to prevent misuse.
4. Continuous Learning and Improvement
4.1. AI models should be regularly updated to adapt to new data and to avoid becoming obsolete or biased.
4.2. Feedback mechanisms will be established so that employees can report concerns or anomalies in AI outputs.
5. Accountability and Oversight
5.1. A dedicated AI Ethics Committee has been established to oversee the implementation of AI systems and ensure compliance with this policy.
5.2. Any misuse or violation of this policy will result in disciplinary action.
6. Training and Awareness
6.1. Employees working directly with AI shall receive appropriate training and resources to understand and manage the technology.
6.2. All employees will receive awareness training to understand the impact and implications of AI in their respective domains.
7. External Collaborations
7.1. Any third-party AI solutions or collaborations must adhere to the same ethical and transparency standards set forth in this policy.
7.2. Vendors and partners will be regularly audited for compliance.
8. Regular Review
8.1. This policy will be reviewed annually, or as needed based on the evolution of AI technology and its implications.
Not a bad basic policy to build on.
Also, boring for many. As an executive, you know it’s not just about the policy, it’s about the presentation.
How To Transform A Boring Policy Into A Memorable Workplace Habit
Priority 1: Connect
How you start a presentation is everything. You must be relatable, relational and real. Let’s break down each:
Relatable:
Connection is more about relatability than likability. Relatability is based on need. An audience needs to think “This person gets me.”
Need is the sum of a shared need, a common enemy and a mutual benefit.
When it comes to AI, what is a shared need you have with employees? For instance, saving time, reducing duplication, double-checking work. Shared need is what defines a desired state (work more efficiently) vs. the current state (pulling data from multiple sources). Choose a positive reality people would want you to move them to and then double down on their current pain.
Name an enemy (not within the company, but a concept or outside reality). Perhaps time-waste or redundancy or redoing work. Whatever causes you and your audience to instinctively react with “I despise that!”
Focus on a mutual benefit: time for more important matters or energy or no longer feeling overwhelmed.
Example: “What if you could save time and reduce your workload by weeks, not just hours? I despise wasting hours on a project knowing more projects are being added to my plate, don’t you? The mountain grows. But what if you could turn that mountain into a valley? What if you could have more time to do what is important and not just urgent?”
Relational:
Now relate yourself to it.
“I know exactly how you feel. I’m writing a play about Sherlock Holmes. I painstakingly researched what his living quarters looked like. I showed a friend. Instead of complimenting my research and sympathizing over the hours it took, he opened his laptop, prompted ChatGPT, and it kicked out in seconds what I had spent hours constructing. I remember what he said as my mouth hung open: ‘There is a prompt for that.’”
Real:
Now connect your example to the audience and shift to “we” language.
“We want to get away from those moments where we groan in disbelief at how easy something could have been. We want to do our best work as efficiently as possible. We want time to work for us, not us slaving under the dictatorship of time.”
Priority 2: Hook
You and your audience are now on the same side of the table. You are not across from each other, negotiating for attention. They have crossed over. Now you need to keep them engaged. That is accomplished by the Hook.
The Hook appeals to their heart. They have to feel something about the issue. Here is the rule: Tension Gets Attention. Create tension. Tension doesn’t need to be dramatic or negative or a downer (though it can be). Tension can even be created with humor; it’s tricky, but basically you come off the punch line with a dose of reality (talk to me about that later).
Tension is created through story and question. Story can be a tale, a quote, or media: anything that creates an emotional response.
Example: “We can do our best work and give ourselves the gift of time by utilizing AI. There is just one problem. 24/7, every day of the year, people are trying to ruin you. What is disaster for you is victory for them. Your tears are their shouts of success. What is for us a tool for great work is for them a tool for greed.”
Then you lead to a question. Bonus points if you can add a second half to it:
First half: “How do we use AI as a tool to save us time and give us more time”
Second half: “so that we do our best work in the least amount of time?” OR “even if others target our usage for evil?” OR “without falling prey to misuse” OR “while still learning how to use it effectively?”
Priority 3: Engage
This is where you present the information, in our case the policy. The key is to get them to think. I’m going to show you the policy and insert “engagement tools.” These are illustrative, meant to get you thinking. You can also use them if you like them.
You need to lead from the Hook to Engagement with “we” language that answers the question. For instance, “We can do our best work, even if someone is targeting our usage, by working within a governance policy that is in place to protect you and me. None of us wants to be a victim or an entry point for bad actors. So here’s what we do.”
AI in the Workplace: Governance Policy
Effective Date: November 1, 2023
Purpose: This policy sets forth guidelines and procedures for the responsible deployment and use of Artificial Intelligence (AI). Our aim is to use AI ethically, transparently, and to the benefit of all stakeholders, while recognizing and mitigating potential risks.
Scope: This policy applies to all employees, contractors, and third-party partners.
1. Ethical Use of AI
1.1. AI shall be used to augment human capabilities and improve operational efficiencies, not to promote or support activities that are illegal, discriminatory, or harmful.
1.2. AI will not be deployed in ways that create unfair bias or perpetuate existing societal biases.
Engage: A story of a time you were the target of bias. A question: “How many of you have felt bias? What was the primary emotion you felt?” A sticky quote: “Bias is BS.”
2. Transparency
2.1. Any decision-making process that utilizes AI should be transparent. Stakeholders should be able to understand the criteria on which AI-based decisions are made.
2.2. Employees affected by AI-driven decisions have the right to request human review.
Engage: Emphasis on keeping the human connection. Sticky quote: “AI doesn’t replace the human, it re-places the human to do what they do best.”
3. Privacy and Data Protection
3.1. AI systems that process personal data must adhere to global data protection regulations, such as GDPR.
3.2. Employees and stakeholders should be informed when their data is being collected and processed by AI.
3.3. Strict measures should be in place to anonymize and protect data to prevent misuse.
Engage: Horror story of privacy breach. Sticky quote: “Who let the dogs in?” Or “Keep the door locked.”
4. Continuous Learning and Improvement
4.1. AI models should be regularly updated to adapt to new data and to avoid becoming obsolete or biased.
4.2. Feedback mechanisms will be established so that employees can report concerns or anomalies in AI outputs.
Engage: Visual of a rusted car, etc. Sticky quote: “See it, say it.”
5. Accountability and Oversight
5.1. A dedicated AI Ethics Committee has been established to oversee the implementation of AI systems and ensure compliance with this policy.
5.2. Any misuse or violation of this policy will result in disciplinary action.
Engage: Introduce an animated disciplinarian character who pops up on screens, etc. Sticky quote: “Compliance builds Reliance.”
6. Training and Awareness
6.1. Employees working directly with AI shall receive appropriate training and resources to understand and manage the technology.
6.2. All employees will receive awareness training to understand the impact and implications of AI in their respective domains.
Engage: Question: “What are your biggest questions so that we ensure training is relevant to you?”
7. External Collaborations
7.1. Any third-party AI solutions or collaborations must adhere to the same ethical and transparency standards set forth in this policy.
7.2. Vendors and partners will be regularly audited for compliance.
Engage: Story about friendly fire. Question: “When was the last time someone let you down and you thought they should have known better?” Statement: “We not only protect you from bad actors, we protect you from supportive actors.”
8. Regular Review
8.1. This policy will be reviewed annually, or as needed based on the evolution of AI technology and its implications.
Engage: “You can only expect what you inspect.” “We review with your review in mind.”
Priority 4: Call To Action
End your presentation well and on a high note. The most effective way is to present a two-fold message: “If we don’t” and “When we do.”
“If we don’t work together on working within these guardrails, we will fail to use AI responsibly or effectively, we will risk our best work going to waste, and we could very well be the ones who opened the door to grave consequences.
When we bring this policy to life, we will work with confidence and with trust in each other. We will save time, overcome waste, and get back to doing what we do best.”
A governance policy is essential. And a complete waste if not memorably embraced.