Guidelines for generative AI usage

As our campus continues to evaluate the opportunities and challenges presented by artificial intelligence (AI), we want to provide some general guidelines on the use and procurement of generative AI tools, including chatbots such as OpenAI’s ChatGPT, Microsoft’s Bing Chat, and Google’s Gemini, and image generation tools such as DALL-E, Midjourney, and Adobe’s Firefly.

NCSSM supports responsible experimentation with generative AI tools, but there are important considerations to keep in mind when using them, including ensuring data privacy and security, exercising caution about the quality and legality of content, mitigating bias, and maintaining academic integrity.


  • Data privacy and security:

    • Protect sensitive data by not entering internal data (classified as Level 1 and above) into publicly available generative AI tools. Information shared with these tools is typically not private and could expose sensitive information to unauthorized parties.

  • Exercise caution about content:

    • AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called “hallucinations”), and it may contain copyrighted material. Review AI-generated content carefully before sharing it.

    • If you come across any AI-related concerns or issues, whether related to privacy, bias, or ethical considerations, please report them through the appropriate channels, such as ITS, Institutional Effectiveness, or Academic Programs.

    • Text and images generated by AI may be used in phishing and other social engineering attacks, so be extra vigilant about verifying the identity of anyone asking for information, and follow recommended best practices to report anything that appears suspicious.

  • Mitigating bias:

    • Be mindful of potential biases in AI systems and work to mitigate them, especially when developing or using AI applications. Bias can be a significant issue with generative AI, as these tools may inadvertently produce content that reflects societal biases. To reduce this risk, it is valuable to know the source of a model’s training data, how humans participated in the training process, what steps were taken to measure and reduce bias, and how the model is updated. To help review this information for many generative AI models, Stanford researchers have created a comprehensive transparency index (the Foundation Model Transparency Index).

  • Academic use:

    • NCSSM will be developing and updating policies related to generative AI tools. Faculty are encouraged to clearly communicate their policies regarding the permitted use of generative AI in their courses through their syllabi and communications with students.


In summary, we encourage all employees to approach new generative AI tools with a thoughtful and responsible mindset. These tools have the potential to offer substantial benefits to our campus community, but they also require careful consideration in how they are used.

Should you have any questions or concerns about software or services utilizing generative AI, whether on an institutional device, on the school network, or in any other aspect of your work at NCSSM, please feel free to contact ITS at ithelp.ncssm.edu or via email at ithelp@ncssm.edu. Should you consider using a generative AI-based tool in conjunction with institutional data, please submit a data request here or by email at datarequest@ncssm.edu.