5 Quick Thoughts About Using AI in PR

Artificial intelligence, specifically generative AI tools like ChatGPT, is clearly a hot topic, one we predicted would be the #1 trend for 2023 and for the foreseeable future.

Here are some quick thoughts about generative AI from a PR practitioner’s perspective:

  1. It’s important to test-drive AI: ChatGPT and other generative tools, including visual generators like Midjourney, DALL-E and Canva AI, are out there, and PR practitioners should try them to see where they can add value. We’ve used AI to help speed up clip coverage reports, ordinarily a time-intensive job that requires lots of cutting and pasting (see the first sketch after this list). We need to find out what AI can and can’t do, and recognize that its capabilities are changing and expanding almost daily.
  2. Keep in mind that AI is being incorporated into a range of applications, and within 18 months it may be a challenge to avoid. Already, AI is built into PR tools like Cision, Meltwater and other services that let users email reporters directly. It has been incorporated into LinkedIn messages, Gmail and more. Even the red squiggly underline that flags a misspelled word in email, Word, iMessage and elsewhere is an everyday example of AI at work. In other words, you may feel AI is a crutch you don’t need, but it’s going to be available everywhere, and avoiding it may take more work than using it.
  3. Because of copyright issues, AI isn’t a tool you can always use. Since content generated entirely by AI cannot be copyrighted, some clients forbid its use by their internal and external teams, so PR practitioners need to find out what an organization’s policy is. Another concern is using AI to draft a press release about information that hasn’t been made public. This is especially an issue for confidential information, such as material news from publicly held companies before it is announced, but it could be a problem for any organization whose information isn’t yet intended to go public.
  4. AI tools aren’t always right, so check the work. People already use the term “hallucinate” to describe the wrong or inaccurate information AI sometimes weaves into content. (We think the term originated with AI-generated photos that showed hands with six fingers and people with three legs or three arms.) It’s vital that a human review all content, check every reference and link for accuracy (a minimal link-check sketch also follows this list), and inspect all photos to catch six-fingered hands and the like. Additionally, there’s an AI grammar tool we like (we find it especially useful for catching words we’ve dropped while typing quickly), but sometimes the corrections it suggests are wrong because it misses the context, or because the language of reports and emails differs from everyday English: more stilted, less adorned. Anyway, our point: you’ve got to review the output. A good resource is “4 Steps to Take to Ensure the Accuracy of Your AI Content” by Monique Farmer, APR, published in PRSA’s Strategies & Tactics magazine.
  5. Practitioners should let clients or supervisors know when they’ve used AI. This recommendation comes from “A Conversation on Managing the Ethics of AI” by Dianne Danowski Smith, APR, also published in PRSA’s Strategies & Tactics magazine. Check out the rest of her article, but her point is that transparency matters and that it’s important to “keep track of how AI is being used to craft content. Use the tools. Just attribute your work as you do.” By the way, here is where we’ll put the disclaimer that this article was written entirely by a human; no AI was used.
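
Here is the sketch promised in point 1: a minimal example of how an LLM might speed up a clip coverage report. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, the clip format and the prompt are all illustrative, not a recipe we’re endorsing.

```python
# Minimal sketch: drafting a clip coverage summary with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. Model name, clip format and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# In practice these would be exported from a media-monitoring tool.
clips = [
    "Outlet: Example Times | Headline: Acme launches widget | Date: 2023-06-01",
    "Outlet: Example Daily | Headline: Acme widget reviewed | Date: 2023-06-03",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works here
    messages=[
        {"role": "system",
         "content": "You summarize media coverage into short PR clip reports."},
        {"role": "user",
         "content": "Draft a coverage report from these clips:\n" + "\n".join(clips)},
    ],
)

print(response.choices[0].message.content)  # a human must still review this draft
```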

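And for point 4’s advice to check references, a small script can at least confirm that every link in a draft actually resolves. This sketch uses only the Python standard library; it verifies reachability, not accuracy, so a human still has to confirm each source says what the draft claims.

```python
# Minimal sketch: checking that links cited in AI-generated copy resolve.
# Standard library only; confirms reachability, not that a source supports
# the claim it's attached to; that part still needs a human reader.
import urllib.error
import urllib.request

links = [
    "https://www.prsa.org/",
    "https://example.com/this-page-may-not-exist",
]

for url in links:
    try:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "link-checker/0.1"}
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"OK   {resp.status}  {url}")
    except urllib.error.HTTPError as err:
        print(f"BAD  {err.code}  {url}")
    except urllib.error.URLError as err:
        print(f"ERR  {err.reason}  {url}")
```
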
The AP Stylebook, available online, offers a new chapter on generative AI, along with guidelines for reporters entitled “Standards around generative AI.”

This won’t be the only time we address the ethics and implications of generative AI. We look forward to revisiting the topic in 2024.
