Before using any tool with artificial intelligence capabilities, it is important to learn more about the technology that powers these innovative tools and about the university’s policies that govern and guide the use of our tools and data.

Understanding Models and University Data 

Always consider data protection and liability risk before using data within AI tools or services. An important step in assessing that risk is determining the type of AI model a tool uses, especially for AI-powered tools leveraging large language models (LLMs). AI models are typically either public or private: 

  • A public model is a tool or service that uses data input by individual users to help train the model and to provide more comprehensive outputs to other users in the future.
  • A private model keeps data used as an input to the tool within the organization or institution utilizing those models.

Determining the sensitivity of the University data or information you will use is an equally important step in considering AI tool use. Is the University data to be used considered confidential, highly confidential, or moderate/high impact? Knowing this will guide appropriate use; for example, sensitive data or protected information (confidential, highly confidential, or moderate/high impact) should never be uploaded to or used in a public AI model. If you need assistance determining the sensitivity of your data, please see the University of Colorado data classification levels section below for guidance on the University’s data classification and impact standards.

It is important to note: 

  • Only private models approved by the university are vetted for use with sensitive or protected data sets.
  • Most free and widely available AI tools operate using a public model and therefore should never be used with sensitive data as an input. As a general rule for what data can be used in a public model AI tool: if a data set can and should be published to a public-facing web page, it meets the minimum security standards for Public Information and can be used in a public model.

Finally, keep in mind that with AI tools, third parties’ terms of use may include the right to reuse your data to further develop their services and tools, so please review product terms of use carefully before proceeding.

Sensitive university data must be protected from compromise, such as unauthorized or accidental access, use, modification, destruction, or disclosure. The University of Colorado system classifies data as:  

  • Highly Confidential Information 
  • Confidential Information
  • Public Information 

Please refer to the University of Colorado system’s data classification webpage for specific criteria on how data is classified.

The use of Confidential or Highly Confidential information with an AI-based service (or any service, for that matter) requires that the service be reviewed for compliance with CU’s data security standard. Submit a digital technology compliance review request form to initiate a compliance review for your use case.

AI Limitations & Considerations

Although AI tools have made significant strides in text and image generation, data analysis, and personal productivity, it is important to recognize their limitations. Many tools are still prone to mistakes or inaccuracies. Generative AI, even with recent advancements, remains susceptible to producing false outputs known as “hallucinations,” in which an output provided to a user contains incorrect facts or citations. Additionally, AI algorithms and tools were created by humans and trained on human-generated data, and can therefore encode biased human decisions or reflect historical or social inequities. Any output received from an AI-powered tool should be thoroughly reviewed by a human for inaccuracy and bias. 

Because copyright law and legal precedent around artificial intelligence are evolving globally, CU Boulder recommends caution when using AI-powered tools that may put faculty, staff, or students at risk of copyright infringement. Please review CU Boulder’s copyright resources for additional details on the use of academic materials, or contact CU Boulder’s copyright team with further questions. 

The development of increasingly capable artificial intelligence has the potential for major impacts within the classroom. Faculty and instructors should consider the implications of generative AI within their existing syllabi, whether to intentionally prevent unintended AI use by students or to intentionally integrate the new possibilities that AI allows. Instructors are encouraged to review the resources made available by the Center for Teaching and Learning: Teaching & Learning in the Age of AI.

For students, the acceptability of AI use within a curriculum may be unclear and can differ from course to course. If it is ever unclear what level of AI use is acceptable, students should contact the course instructor for clarification. The unapproved use of AI is considered academic misconduct and could violate the Honor Code. Please review AI and the Honor Code for more clarification.

Campus Access

All CU Boulder information technology purchases and adoptions must undergo the Information Technology (IT) Accessibility and Security Review Process, regardless of cost. For any substantive change to an existing IT offering, such as new AI functionality, the technology service manager should contact the ICT team to review the product changes before they are implemented.

AI is rapidly evolving with new AI-powered tools being made available every day. Contact OIT about adding an AI tool that is not already offered in our list of AI-capable tools.