AI Guidelines for Campus Communicators
UC San Diego communicators are encouraged to incorporate tools powered by generative artificial intelligence (generative AI) into their communications practice with an ethical, human-centered approach. AI is useful in many contexts, such as brainstorming, capturing notes, testing ideas and refining text, but the emerging technology also has pitfalls and requires a careful, thoughtful approach. AI tools are known to exhibit bias and to hallucinate (make up facts), and there are ethical concerns about the source data many tools use.
This guidance is meant to help campus communicators understand the benefits and risks of using AI in their work, access available tools and find resources for further learning.
What is generative AI?
Generative AI encompasses a range of technologies designed to create new content by leveraging extensive training on diverse datasets, ranging from text and music to images. Notable among these tools is ChatGPT, a chatbot equipped to engage users through natural language interactions. “Chat” signifies the user-friendly interface, while “GPT” (Generative Pre-trained Transformer) indicates the underlying machine-learning architecture responsible for content generation.
Generative AI tools like ChatGPT, DALL·E and Gemini excel at generating text, music, images and even computer code. These models undergo rigorous training on comprehensive datasets, which include an assortment of text from books, websites and various other sources. By decoding complex patterns and linguistic nuances, these tools can produce content that is not only contextually appropriate but also grammatically accurate and stylistically coherent.
Learn more about AI and general campus guidance on the Blink AI webpage.
Guiding principles when using AI in campus communications
Human-centered use
The mission of University Communications is to protect and promote the UC San Diego brand by highlighting the transformative achievements of our people. Therefore, our approach to integrating AI is human-centered, and our communicators are encouraged to use AI to amplify and augment — rather than displace — human work. AI is an incredible tool, but it cannot replace or replicate the creativity, inclusivity and attention to detail that our communications community is so skilled at practicing. We encourage the use of AI as one of many tools in a communicator’s tool kit.
Transparency
Communicators should be fully transparent with their teams and editors when AI has been used to generate material so that there is adequate review for hallucinations, copyright issues or bias. Transparency is also important in building and maintaining trust with colleagues, reviewers, partners and clients.
Ethical use
We commit to educating ourselves about the different types of AI and the potential pitfalls of specific tools, and to using AI programs with the same ethical standards we apply to all University Communications material. The datasets used to train many generative AI models may include copyrighted, incomplete or biased data. We are sensitive to these issues as we integrate AI into our workflow, giving careful consideration to how AI-generated material may affect people with disabilities and marginalized members of our community.
Using generative AI today
Below is some guidance for communications teams that use AI, as well as some viable use cases.
| What AI is good for | What AI is bad for |
|---|---|
| Refining and formatting text | Anything involving protected, sensitive information |
| Drafting and cleaning up code | Crisis communications |
| Preliminary editing and proofreading | Communications on sensitive, reputational issues |
| Image manipulation and visual ideation | Bylined work |
| Brainstorming and summarizing notes | Unreviewed translations and captions |
General communications guidance
In its current form, generative AI is most useful for refining and formatting text, code generation, preliminary editing, image manipulation and visual ideation. All output from generative AI requires close and expert review and thoughtful integration into a final product. In short, AI can give you a productive boost, but it can’t do your work for you.
Avoid using generative AI for:
- Anything that involves protected, sensitive information
- Crisis communications
- Communications on sensitive, reputational issues
- Bylined work (should not be primarily generated by AI)
- Creating images that include human subjects from unlicensed training data (due to brand, quality and intellectual property issues)
- Translation (tools are not currently accurate enough to provide full, nuanced translations for our diverse university community)
- Captions that will not be proofread (tools often do not provide accurate captions and are not compliant with ADA requirements)
Ensure quality and safety when using AI:
- Review and edit any AI-generated materials.
- Monitor for bias and hallucinations.
- Educate yourself about the tools you use and monitor for intellectual property issues.
- Be transparent with team members when using AI.
- Consider attribution, especially with AI-generated images.
Common tools and use cases
It is important to note that UC San Diego does not have licensing agreements with many of these tools, and communicators should not input confidential data, early research material or other sensitive information into these tools. Communicators who would like to purchase an AI tool, service or subscription with university funds can email procurement lead Andrew Bunker to obtain a license and ensure the product meets university guidelines.
- TritonGPT is similar to ChatGPT but with more access to information about UC San Diego. TritonGPT is currently powered by Llama 2.
- Adobe Creative Cloud offers multiple tools that can help with cropping, retouching and adding and removing elements from still and motion images. In addition, Firefly and similar products and services offer image generation, text effects, generative fill and more.
- We encourage campus communicators to use Adobe if they are working with image generation, as all training data is licensed. As always, consult these guidelines if incorporating AI-generated content.
- iStock by Getty Images offers some commercially safe AI-generated images.
- As long as the use is otherwise appropriate (see guidance above and necessary attribution below), the use of appropriately licensed iStock AI images of nonhuman subjects in communications projects is permitted.
- Generative AI image tools like DALL·E and Midjourney can be useful for visual ideation and inspiration.
- Due to uncertainties around copyright, these image tools should not be used to create final published work.
- ChatGPT/Gemini/LaMDA
- Anyone can use these text processing tools to:
- Find alternative verbiage, brainstorm, research or assist with preliminary proofreading.
- Summarize and analyze meeting notes and transcripts.
- Make writing more concise, especially for limited space or word count.
- Draft or clean up web code.
- Semrush is another writing assistant tool.
- Sprout Social is useful for message tone, content scheduling, listening and hashtag management.
- Microsoft Outlook and Word employ predictive autofill features.
Attributing images created with AI
Requirements for attributing AI-generated images vary by vendor. For example, iStock requires AI-generated imagery used in an editorial context to be credited as “istock.com/iStock AI Generator”; iStock does not require a credit in a commercial context. Adobe’s terms require the credit specified in each file’s IPTC credit line metadata for editorial use. The person who generates or uses an AI image is responsible for providing the appropriate credit line.
Tools such as Midjourney and DALL·E have different requirements, but University Communications does not permit use of output from those tools in our published work.
Future possibilities
Today, the most common AI tools are processors for stand-alone text and images. In the future, AI will be more seamlessly integrated into the tools we use every day. Learn more about ITS’ efforts in this area.
Training
- Helpful information about using AI at UC San Diego (Blink)
- DeepLearning.AI: Generative AI for Everyone (Coursera course, free)
- Mastering Prompt Engineering (Enrollify course, paid)
- AI Essentials at UC San Diego (UC Learning Center, free)