The rapid acceleration in the development and deployment of generative artificial intelligence tools in recent months has left agencies scrambling to figure out what it all means for them. Along with figuring out how the tools can help and where they may present threats to existing ways of doing business, agency leaders must also wrestle with the ethical implications of the technology.
The PR Council, a trade association that primarily caters to larger public relations agencies, created a task force to review generative AI and make recommendations for guidelines that agencies should consider following.
Co-chaired by Mark McClennan of C+C and Anne Green of G&S Business Communications, the group recently completed its work and shared those guidelines publicly. In a webinar unveiling the work, PR Council President Kim Sample noted that the task force itself was composed of a mix of individuals, some of whom are bullish on generative AI while others hold a much less positive view.
While the guidelines probably aren’t suited for most agencies to adopt without revision, they do provide a good starting point for an internal conversation about how to approach generative AI.
Protecting confidential information
The PR Council guidelines lead off with the need to avoid submitting client or agency information to any AI tool that doesn’t promise that it is a closed system that preserves privacy. Even tools that make that promise should be regarded cautiously, given the potential for inadvertent leakage of confidential data over time.
On the surface, this recommendation makes sense, but applied liberally it would essentially preclude the use of almost all generative AI tools. At one point in the webinar, McClennan noted that how you approach these guidelines depends largely on your own level of risk tolerance – and that is worth keeping in mind.
Using an automated transcription tool for a client interview likely exposes confidential information, but the risk of leakage with negative consequences is much lower than it is when uploading a spreadsheet of a public company’s quarterly financial data to a service like ChatGPT.
My advice: Use common sense and focus on the most sensitive confidential information, as well as anything that has specific legal or regulatory protections.
Using generative AI as final creative
Many copyright issues have been raised surrounding generative AI, but one area of particular concern for agencies is the likely inability to secure copyright protection for anything created solely by computers without human intervention.
I expect that there will be court tests in the future, and there is an argument that can be made that the generative AI tools only act at the direction of humans (in the form of the prompts submitted). In many ways, this is similar to using a tool like Adobe Illustrator to follow your commands to create circles, shadows, and other details. The question for the courts may end up being how much human intervention is required.
As the task force co-chairs noted in their webinar, it would be prudent to get your own legal advice and weigh that against your own risk tolerance.
My advice: Don’t be afraid of using generative AI to help you along the way, but apply some meaningful amount of human polish to the final written or visual product – or disclose to the client that it was entirely created by AI and allow them to determine their comfort level with that status. (More on disclosure later.)
Creating deepfakes and disinformation
I suppose that the PR Council needed to be explicit about this in its guidelines, but it really feels like it goes without saying that you shouldn’t use AI (or anything else) for nefarious purposes.
Of course, if you are so inclined, the existence of guidelines, voluntary or otherwise, is unlikely to make a difference.
My advice: Behave professionally and ethically at all times. Period.
Verifying accuracy
There are numerous examples of generative AI making mistakes. I have personally seen it make up statistics and then immediately retract them as soon as I simply asked for a link that I could cite.
Similarly, I have seen AI-generated images that have basic mistakes (like extra arms on a person or double wings on a bird) that should clearly be caught by any agency before using them.
In addition to double-checking the results of any work, the PR Council guidelines encourage agencies to ask their vendors questions about accuracy and methodology. While this is a nice idea, the likelihood of getting detailed answers is pretty slim, especially for smaller agencies.
My advice: Trust but verify. If you treat most generative AI tools as if they were interns (as I regularly suggest) then you should have the right mindset for their quality and accuracy.
Requiring extensive disclosure
The task force created by the PR Council encourages requiring pretty thorough disclosure of the use of generative AI. They advise mandating disclosure to clients and insisting upon disclosure from employees, contractors, vendors, influencers, and anyone else the agency works with.
Although the guidelines indicate that “flexibility can be applied”, that conflicts with the broader statement that “We recommend disclosure to clients if generative AI tools are used in any part of the creative process.”
On its face, that would require disclosing the use of tools like Otter.ai to transcribe an interview or using one of the many features in Photoshop that upscale or modify images based on cloud-based AI functionality.
That seems excessive. Focusing instead on the parts of the guidelines that recommend disclosure of “solely AI-generated materials” makes much more sense.
The reality is that many agencies have already been using generative AI (like the transcription and image editing tools referenced). Disclosing their use serves virtually no purpose and feeds into the general fear-mongering surrounding AI.
Think of it this way: do you disclose to your client when you have an intern perform work? Or even a contractor? Unless required by contract, you likely don’t (and shouldn’t in most cases).
My advice: Disclose if something is entirely or substantially created by AI on its own (and ask others to do the same). Answer honestly if a client asks about the use of AI in creating a specific deliverable. Encourage team members to let their managers know how they are using AI – not just for disclosure purposes but also for shared learning.
Handling voice edits
The guidelines include a section dedicated to the use of voice and music generation tools, especially for editing voiceovers and similar content.
The PR Council advises signed agreements for any edits, including corrections.
That probably goes further than necessary, unless there is a contractual requirement that would mandate it. The guidelines also note that union rules may apply in some circumstances.
Many agencies have already been using tools like Descript that will help to clean up audio for interviews and correct instances in which someone unintentionally misspeaks. Getting signed agreements for every such change would likely irritate clients more than it would solve any potential problem.
My advice: Be aware of your legal and contractual requirements before using voice or music editing tools, but if you are behaving ethically and in compliance with client wishes, there is no need to overcomplicate things.
Avoiding bias and seeking diversity
The PR Council task force appropriately highlights the risk for bias in the output of any generative AI tool. Beyond encouraging asking questions to understand the biases (and they have a good list of questions to get you started), the guidelines also suggest specific steps to promote diversity and inclusion.
For example, they recommend against the use of automated translation tools without human review and editing. This is excellent advice since even human translators may disagree with each other on the best way to convey the intended meaning to a specific audience.
The guidelines also advise agencies not to use generative AI as a replacement for talking to specific groups to learn about their experiences or as a tool to avoid working with diverse talent.
Since all humans have biases, conscious and otherwise, the tools that they create undoubtedly harbor the same. In addition, large language and visual models reflect the biases of their training data, which will always tend to favor majority populations.
My advice: Do your best to be aware of and mitigate your own biases and those of the tools that you use. Encourage vendors and others to do the same.
Establishing internal practices and policies
The final point identified in the PR Council guidelines is the need for agencies to establish internal processes to manage the use of generative AI going forward.
As this is a technology that continues to develop at an incredibly rapid pace, there will need to be a regular review of how agencies are approaching and deploying the tools on behalf of themselves and clients.
Frankly, this is something that agencies should always be doing with all of their tools and tactics, but the spotlight is shining on generative AI right now and serving as a useful reminder that the way we all operate continues to evolve – and our policies and practices need to do the same.
My advice: Encourage thoughtful conversations within your team about how you are using (or could use) generative AI. Talk with other agency leaders to get their perspectives. Focus on listening and learning as you evolve your own agency’s approach.
The PR Council has provided a useful foundation for agencies to kick off deeper and more meaningful conversations about not just the business implications of generative AI, but also the ethical and legal considerations.
Small agencies, in particular, should not fear these developments, but rather take advantage of their nimbleness to experiment with and adopt the use of these tools to produce better results for clients with improved efficiency.
Acting with intention and understanding the risks will put you in a strong position to benefit from tools that will only continue to improve.