In this episode, Chip and Gini review the recently released PR Council guidelines for the use of generative AI by agencies.
They explore which elements small agencies might consider adopting, as well as identifying others that may warrant more of a “wait and see” approach.
As the use of AI by communicators continues to expand, there will be many thorny issues to navigate. The PR Council guidelines provide a useful starting point for some of these discussions.
- Chip Griffin: “To sum up my guidelines, the Chip Griffin guidelines for using AI, just use some common sense and understand your own risk tolerance and the risk tolerance of your clients.”
- Gini Dietrich: “We shouldn’t let this scare us. Let’s just use the tool to make us more effective.”
- Chip Griffin: “Ethics are never going away here.”
- Gini Dietrich: “Our jobs as the protectors of reputation…AI is not going to change that. We will still have jobs.”
The following is a computer-generated transcript. Please listen to the audio to confirm accuracy.
Chip Griffin: Hello, and welcome to another episode of the Agency Leadership Podcast. I’m Chip Griffin.
Gini Dietrich: And I’m Gini Dietrich.
Chip Griffin: Today we’re gonna talk about, yeah, we’re just gonna talk about the PR Council guidelines on AI. I mean, that’s, that’s –
Gini Dietrich: you got nothing. That’s nothing.
Chip Griffin: I just, no, like not creative. It is what it is.
All right. If only I had some AI, maybe it could have helped me do a better intro. That was definitely not one of my better openings.
Gini Dietrich: Nope. But it was honest and it is what we’re talking about.
Chip Griffin: It is. Honesty, transparency, ethics, all those kinds of things come together because as we are all thinking about artificial intelligence and a particularly generative AI, we’re considering not just how we use it as an agency from a business perspective.
How does it fit into our client needs? How does it impact our business models? We also need to be thinking about how we use it ethically and appropriately. What kind of disclosures do we need to make? All of those kinds of things. And the PR Council has taken a stab at, essentially, what I view as a first draft of how you should approach generative AI as an agency. They put together a task force that took a look at it, and my understanding is that the task force had people of differing views on AI, some who thought it was great, and some who, you know, I dunno whether they thought it was evil, but at least they weren’t nearly as keen on it, at least the way that it was described to me.
And so it does bring together a lot of different ideas. It does have a bit of the feel of something written by committee, as these things do, because it was. Right. Yeah. So it’s hard to hide that fact, and everybody kind of gets their own checkbox in there, I think. But there’s still a lot of useful things in there, particularly for small agencies being able to take advantage of some of these resources.
Obviously the PR Council mostly serves larger agencies, but a lot of those things still spill over and can be useful thinking for those of us in the smaller agency world as well.
Gini Dietrich: Yeah, it’s really interesting. I had a conversation with a client probably about a week and a half ago, and we were talking about ChatGPT in particular, and he’s like, I just feel like if I use it, I’m plagiarizing.
And I was like, what? What are you plagiarizing? And he goes, well, it’s not my own work. And so we had, I mean, we had a pretty robust conversation and we debated this whole plagiarism idea. And I don’t know that I changed his mind, but you know, my point is, and I’ve said this all along, it’s not a final draft.
It’s not something that you are going to submit to a client the very, the very first draft, right? You still have to use your common sense and your creativity. You have to provide context. You have to check the facts to make sure they are accurate because we’ve talked about before that sometimes they’re not.
But it’s, it’s not plagiarizing. It’s not like you’re taking something from somebody else’s website and copying and pasting it into your document. That it’s totally different from that. So I think there’s a lot of confusion. I think there’s a lot of fear. I think there’s a lot of misunderstanding and people are just trying to get their arms around, is this going to be beneficial or is this gonna completely take me out of my career entirely?
Chip Griffin: Yeah. And, and I, I think there is, there’s a tremendous amount of fear mongering. And I, I, I don’t think that’s particularly beneficial. No, I think there, there are, there’s a lot of good. That can come out of generative AI. Yes, you need to be thoughtful about it. Yes, you need to understand what the risks are, and there does need to be a solid conversation about how these engines are acquiring all of the information, whether that’s text or imagery or other things in order to be able to, to do what they do.
But at the same time, it, if, if you only view it from a fear perspective, you’ll never be able to take full advantage of what’s really out there and, and everybody else will. So you’ll fall behind.
Gini Dietrich: Right. I saw that somebody, oh, Boston Children’s Hospital is hiring a prompt engineer to help teach their internal team how to use it.
So you know the, I think that’s the next job description, right? Is the prompt engineer. This is the next thing that you can do to, to go into an organization and help, help the organization figure out how to use AI and use it in an effective way that is ethical and honest and transparent and doesn’t take over the world in a bad way.
So that that’s what’s coming next. And we, I think that’s the challenge. Here is that we have to figure out as well, from my perspective as a communications professional, how I can use it and prompt it correctly to get what I need out of it. And so it, you know, I don’t view it as plagiarism and I don’t view it as taking over my job, but I do view it as a tool that’s going to make me more efficient and save me lots of time.
Chip Griffin: Yeah. See, I am not keen on prompt engineers as a job role. I think that it’s coming, but I think it is short-lived. Do you? I think it’s like a Clubhouse coordinator from, you know, a year and a half ago. I think it will last just about as long, you know, for several reasons.
First of all, do individuals need to be trained on how to use it? Yes. But they should be trained and then move on. Right. Right. It’s not an ongoing thing, because to me, learning how to use these tools is like learning how to use Google or Gmail or any of these other things that we use on a day-to-day basis.
You absolutely need to learn about them. You absolutely need to understand the developments and how to use new things as they come along. But I think any kind of a substantial role as an employee doing that for your organization is not likely to last very long. And in addition, I think most of these tools will become a whole lot more user friendly, sooner rather than later.
And I think part of the reason, oh, for sure, why you need quote unquote prompt engineers today is because some of these things are pretty obtuse, particularly like Midjourney on the imagery side. I mean, you gotta be kind of geeky in order to enjoy figuring out how to use that service. I mean, you pass variables in your prompts and things like that.
I mean, it is not normal for most agency-type employees to think in the way that it’s asking you to think. But it’s only a matter of time before this becomes much more, here’s a slider you can click on, here are some options you can pick from. And I think that as it becomes easier to use, that then obliterates the need for someone who is truly a prompt engineer. Now I think organizations might want to have someone who is an AI specialist but takes a broader view, mm-hmm, of the technology. Mm-hmm. And I think certainly within most agencies, someone should be your AI point person, someone who understands, you know, where things are headed, and can share that information with the rest of the organization, test out tools, and talk about that, you know, similar to the way most agencies have handled other developing technologies. You’ve usually got that key stakeholder internally who is, mm-hmm, the advocate for it and the educator about it.
Gini Dietrich: Yeah, absolutely. I think that’s actually correct. I just looked up the job description for Boston Children’s because I wanted to see, and while it says the title is prompt engineer, it is larger.
I mean, the job description is larger, more to the effect of what you have described, where it’s really about the artificial intelligence based tools and how to use them to increase efficiency and reduce burnout, and understanding how it’s continuing to evolve and change and how to implement that internally.
So they are calling it prompt engineer, but it does sound like it’s a larger role than that.
Chip Griffin: Yeah, because I mean, a, a prompt engineer in its purest form is basically like a teletype operator. Right, right. Fantastic. Right. Just bring back the teletype and they’ll be all set. You know, I mean, the tools just will become a lot easier to use and, and as people do it, they should be learning.
And if they’re not, that’s a problem already.
Gini Dietrich: Right. You know, this is, this is how I view it. You know, when I, I remember it wasn’t this long ago, but I remember, you know, Twitter became a household name and Facebook was, was available for businesses and LinkedIn launched and I was doing a lot, a lot of speaking at that time.
My book, Spin Sucks, had come out and I was on the book tour, but I was also doing a lot of speaking to CEO organizations. So this would be 10 years ago, 2013, 2014, and the pushback I got from CEOs about social media becoming part of their business lives was incredible. At every single speaking engagement I had, people were like, Nope, we don’t have to worry about it from a business perspective.
It’s for the kids. It’s never gonna hit us. And I kept saying, you are wrong. Right? But it was the same kind of thing where everybody was trying to figure out, and did we need to hire somebody to focus on this? They’re just tools. This is a tool and the strategy of building our businesses, the strategy of growing our businesses, the strategy of doing the work that we do for our clients, that doesn’t change.
The tools change in how we do it and become more efficient, but the way we do things doesn’t change. So we don’t, we shouldn’t let this scare us. Let’s just use the tool to make us more effective.
Chip Griffin: Right. And and I think a lot of the PR Council’s guidelines are… they’re applicable no matter whether you’re using AI or other things.
Right. So, yeah, I agree. As we start walking through some of the things that the council advises, the first one is protecting confidential information. Well, obviously, duh, right? I mean, we all know that. But it is a good reminder that anything that you put into these tools as a prompt can be used by the tools to further train their algorithms and build their own data sets.
So you certainly don’t want to be putting super sensitive information in there. Now, I wouldn’t go overboard and say, well, anything that’s even remotely confidential shouldn’t go in there. I think the council’s guidelines are a little bit more aggressive than I would be personally. I think you just need to use that common sense and not put in, you know, the secret formula for the Colonel’s chicken or whatever. But…
Gini Dietrich: That, that was quite the…
Chip Griffin: I just, I was grasping at something and I don’t know where that came from. I guess it’s lunchtime, so I’m hungry. Maybe. I don’t even like KFC, right? So…
Gini Dietrich: I don’t think I’ve had it since I was a kid.
Chip Griffin: And, and I don’t think anybody’s talked about the colonel’s secret recipe since I was a kid.
So, right, I’ve just dated myself, and all you young’uns who are listening are like, what the hell is he talking about?
But I, you get the idea.
Gini Dietrich: You wrote an article about this, and in your article you said it really is about risk tolerance, right? Yeah. So think about it from that perspective. Just from a client perspective, if it were confidential, like we do a lot of crisis work, if it was something we were working on from a crisis perspective, I wouldn’t put it in there.
I probably wouldn’t put product information in there until after it’s launched. So those kinds of things, just from a risk tolerance perspective, are the things that we would be thinking about. But you know, you, you may very well wanna put product information in as you’re working on the launch. That’s up to you.
So I think you’re, in your article, you, you talked a lot about risk tolerance and I think that’s exactly right.
Chip Griffin: Yeah. And, and I think, I mean, I, look, I, I think it’s, it’s different if you’re Apple or Pfizer than if you are a small business too, right? Sure. Because how many people are banging down the door to find out what your 10 person company is up to?
Not very many, so you probably don’t need to be as careful. But if I’m Apple or I’m Pfizer and I’m in a regulated industry, I’m gonna be much more careful, absolutely, about the kind of information that I’m putting in there, because, you know, then you’ve got much more of an issue. You know, one of the other things they talked about is something that you addressed earlier from the business side of it, which is not just taking something from the AI tools and passing it along and saying, okay, cool, let’s go with this.
And there are all sorts of quality reasons not to do that. But the PR Council’s guidelines point out that there are potentially some legal reasons why you might not want to do that. And obviously we’re not lawyers. They’re not lawyers. We would encourage you to talk to your own legal counsel.
But one of the things that they point out is that it is unlikely that you could obtain copyright for anything that’s created by AI as the laws stand today. So you can use it to contribute to the work that you’re doing and still get copyright, but you can’t if it’s acting on its own. And many agency contracts indicate that you essentially hold the copyright to the material that you’re creating. Mm-hmm. Mm-hmm. And you’re providing a license to it. You cannot do that if it doesn’t meet that standard. So that’s something you just need to look at and make sure that you understand. From a quality standpoint, I simply wouldn’t do it anyway, so I don’t think it makes sense. But you know, there’s also this legal and regulatory reason that you’d want to consider as well.
Gini Dietrich: Yeah. That’s super interesting. You know, I mean, if you’re thinking about it, like, if I were creating PESO model content and I was using ChatGPT to do that, I might not be able to get the copyrights as I went through that process. And having just gone through and finished that process, it’s pretty intense.
And they look at everything.
Chip Griffin: That’s a trademark, so that’s different. Oh, sorry. A lot of people conflate trademark and copyright, but they are two different things. And with trademark, I think you could certainly use the AI to help you come up with ideas for a name, you know, like PESO model or something like that, and still be able to trademark it effectively.
Because that really comes down more, as I understand it as a non-lawyer, to whether someone is already using that name in practice; sort of how you dreamt it up is irrelevant. But from a copyright perspective, if the machine generates it, computers cannot hold copyrights. So since it was the one doing the work, theoretically you can’t hold the copyright. And I suspect that the copyright laws will be adjusted to address, I would imagine, yeah, some of this sort of thing as well, because to some degree, this is, you know, to me a bit like, if you spell check the document, does that threaten its copyrightability? It doesn’t, but you gotta be careful how extreme you go on some of these things. And the document, the guidelines, do discuss some of the extreme things like deep fakes and disinformation.
Not surprisingly, they say you shouldn’t do them. Duh, right? To me, that’s a throwaway part of the guidelines. If you gotta be told that, you probably don’t care about the rest of the guidelines anyway. Right? Yeah. Fair. That’s fair, if that’s where your line is drawn. Okay. And it talks about, again, the basics that we’ve talked about, checking accuracy.
But I think, you know, one area I’d like to talk a little bit about in there: they have an extensive bit on disclosure, and I am of the mindset that their document goes overboard in what they suggest for disclosure. They do throw in that flexibility can be applied, but what the heck does that mean after they’ve spelled out, mm-hmm, some pretty detailed requirements where you’re supposed to tell your client if you use AI for any portion of the work that you’re doing, and let them know essentially exactly how you’re using it? I think that’s rubbish. I would not get into that game. Now, if a client asked me, I’ll answer honestly.
Sure. But I would no more feel the need to do that than to go tell them that I have an intern working on a project. Right. Unless my contract requires it. Right. If your contract is very specific about what you do or do not need to report, then absolutely do it, but in 99% of the cases, I don’t think there’s any need to tell ’em what tools you’re using.
Yeah, I totally agree with that. Or how you’re doing it. Yeah. If I’m gonna send them something that’s entirely generated by AI, yes, I absolutely should be disclosing that, for the reasons that we’ve talked about as well as others, right? There is a level of risk that comes with that. Mm-hmm.
And so disclosing that, I think, would be important there. But I don’t think that you need to go through the disclosure in a blow-by-blow way, as the council suggests. And I suspect that this is an area where the lawyers probably had a bit more of a hand in it than the practical people.
Gini Dietrich: Yeah, I totally agree with you. I mean, it’s, it’s the same to your point. It’s the same as saying, oh, well here’s this op-ed we, we drafted for you. The intern started it and then our AE took a look at it and, you know, improved it. And then I looked at it last and made sure that it was something that, that would be published.
Like we don’t, no one works that way. No.
Chip Griffin: Right, there’s no reason to do that. And so as long as you’re following good practices in how you’re developing the content, of whatever kind, I don’t think it matters that you’re using the AI to help you along. I mean, frankly, carried to its extreme, their disclosure requirement would require you to disclose that you’re using, you know, Otter for transcription instead of manually transcribing notes, or that you’ve used Photoshop, which now has a lot of AI tools for upscaling, mm-hmm,
and removing objects, mm-hmm, and that kind of stuff. And geez, you know, you would just be disclosing nonstop to your clients, right? And your clients will hate you. So again, this is all about common sense. To me, if I were to sum up my guidelines, the Chip Griffin guidelines for using AI: just use some common sense and understand your own risk tolerance and the risk tolerance of your clients.
If you do that, you’re probably going to be okay. And I think one of the areas where they go even more overboard is voice edits. And I understand where they’re coming from as far as using technology to edit people’s words. If you’re doing it for unethical reasons, no, you shouldn’t do that.
But if you’re using a tool like Descript, which allows you to go in and correct someone’s grammar or mispronunciation of something, you can do it pretty easily. As long as you’re maintaining the intent and your client understands that in general you’re cleaning stuff up for them, I don’t think you need to go that far. I mean, the suggestion is that you should go and get written authorization from the person whose voice you’re editing every single time you do it. I think that is bonkers. I don’t think there’s any need to go to that level. Now, they do point out that there may be issues if you’re using voiceover talent who is under a union agreement or something like that.
Right. That’s a whole separate thing. Right? Right. You absolutely need to understand what your legal agreements are with clients, with talent, with whomever, and by all means, if it says don’t do this, don’t do it. Right. But in general, I don’t think you need to go that far. As long as you are maintaining the intent of the client and you’re not literally putting words in someone’s mouth for unethical purposes,
I don’t think it matters. It just, to me it doesn’t.
Gini Dietrich: Yeah. I mean, I can see using it. Like, we have one client that does a podcast and he mispronounces names every so often, and I have to have him rerecord it. If I can use something like Descript to just fix it for him, that saves us all a lot of time.
Chip Griffin: Absolutely. And there’s no harm. No animals were harmed in the filming of that. Right, right, right. Yeah. So, I mean, you know, never let the lawyers get in the way here. You should ask lawyers how you can do something, not whether you can do something. That’s fair. Because there’s always a way to do it. That is fair. And it doesn’t always require 18 tons of paperwork, as it would if you took a literal reading of the council’s guidelines. The final area that I wanted to just touch on here is that it does caution about potential diversity and inclusion issues with AI,
and, you know, it asks for things that I think are not particularly practical. The guidelines suggest you should ask all your vendors how they train their data sets and that sort of thing. Good luck. I don’t know any vendors who are gonna give you, right, any kind of a detailed answer on that.
Right. But I do think that you need to be generally aware of the biases that are likely to be present in these tools. Anytime you’ve got training sets, the content that’s in the training set is going to dramatically impact the content that’s produced, whether that’s imagery or text or anything else. And because a lot of content is created by majority populations, they’re likely to be overrepresented in those samples. Yep, yep. You simply need to be aware of that and do what you can in order to address it, but it is not something where you’re gonna get a lot of hard answers out of, you know, pretty much any vendor. They’re just not going to share that with you.
And I think, again, this is like anything else that you’re doing. You need to have diversity and inclusion in mind. But it can’t be the only thing that you’re looking at; otherwise, you’ll never find a tool out there that meets that test, today at least.
Gini Dietrich: Yeah, I think that’s smart advice.
You know, making sure that it’s accurate, and not necessarily that it does or doesn’t include DE&I, but that you’re editing for those things and you’re adding the context in. You should absolutely be doing those kinds of things. So the tool may not do it itself, but you as a human being who is focused on creating the best piece that you can, you can do that.
Chip Griffin: Right. And they suggest you shouldn’t ask the AI, you know, what a certain population thinks about an issue or something like that. I would agree. I wouldn’t say you should use it exclusively.
I wouldn’t mind asking the AI and also asking people who are actually in that community. To me, there’s no reason not to see what the answer is from AI, just out of curiosity, and put it into the mix. But I would not use it as a replacement for, no, absolutely not, having conversations with real people.
Similarly, they suggest, you know, don’t just use automated translation from any of the AI tools. And I can back that up after many years of looking at automated translation; it is pretty tough. Frankly, I’ve worked on international projects where even native speakers can’t agree on how to translate things. Yeah. And it’s mind-numbingly frustrating for me, particularly when it’s in languages that I can’t speak. And so, you know, I’ve got two translators who are arguing over what the translation should be, and I’m like, I don’t know. I can’t tell you.
Gini Dietrich: I can’t break the tie. I dunno.
Chip Griffin: It literally doesn’t, I have no idea.
But if you can’t get humans to agree, I mean, it goes back to the same issue. You know, I’ve always said that there are real challenges in using technology to do sentiment analysis, which is something I spent a couple of decades looking at when I ran CustomScoop. Yep. But the problem is humans can’t even agree on it. Right. You know, you can give an article to two different people and ask them to gauge the sentiment, and they might look at it in entirely different ways. It happens all the time; it’s incredibly common. And so if you can’t get humans to agree, how are you gonna get the technology to do it, which by the way is created by humans?
I think this is something we all forget: these AI tools didn’t just develop themselves; humans built them. So that means they have all of the flaws that we have, all of the inherent biases, everything all baked in. Because humans created them, and humans created the content that they learn from.
We’re not gonna change that overnight.
Gini Dietrich: And humans are going to continue to train it.
Chip Griffin: Correct. And that is one of the risks, right? I mean, when we talk about disinformation, one of the things we have to look at is, you know, are these data sets going to be polluted? Is the new form of SEO going to be folks finding a way to pollute the data sets? Or, they wouldn’t view it that way, but to populate the data sets with their own viewpoints. Yep. And so, you know, companies may need to look at it from that more SEO perspective. You know, if you’re submitting prompts to it, do you submit prompts in order to try to shift how the algorithm is thinking about the data set, right, to get things into it? Yep. That’s entirely possible down the road, and we need to be thinking about it, because it has a lot of ethical considerations to it as well. But, you know, ethics are never going away here.
We’re gonna be just talking more and more about this in the years to come.
Gini Dietrich: Nope. And our jobs as the protectors, well, from a communications perspective anyway, as the protectors of reputation: AI is not gonna change that. We will still have jobs.
Chip Griffin: That’s true. We may not be the guardians of the galaxy, but we are the defenders of reputation.
Gini Dietrich: That’s right.
Chip Griffin: On that note, this will bring to an end this episode of the Agency Leadership Podcast. I’m Chip Griffin.
Gini Dietrich: I’m Gini Dietrich,
Chip Griffin: and it depends.