Guidelines for Use of Artificial Intelligence (AI) in Research

At Arizona State University, we recognize the rapidly evolving landscape of Artificial Intelligence (AI) and its potential to advance knowledge, research, and scholarly work. We support the responsible and ethical use of AI tools to ensure the safety and integrity of research. As the use cases of AI tools become better understood and federal agencies release guidance, we will provide regular updates, guidance, and resources to keep the ASU Research Community up to date.

Before starting any research project that involves AI, it is strongly recommended that you discuss the appropriateness of using the technology with your co-investigators, collaborators, and field experts. If you decide to use generative AI in your research, keep in mind the following items:

  • Many federal agencies have tools to detect AI-generated content. Be aware of these tools and their potential impact on your research.
  • Content generated by AI often paraphrases other sources, which could raise concerns regarding plagiarism and intellectual property rights.
  • Content generated by AI tools may be inaccurate or biased. It is important to validate such content against other reliable sources.
  • Do not rely solely on generative AI for decision-making. Use the results to inform your research while basing decisions on additional factors and evidence.
  • Do not place federal, state, or ASU data into an externally sourced generative AI tool. Once data is placed into an AI tool available outside your local network, it may become publicly available and open source. This occurs, for example, with ChatGPT, Bard, Bing, or GPT, as well as with prompts to generative image tools such as DALL-E. Additionally, the data may be subject to other terms and conditions.
    • Note: There are Large Language Models (LLMs) that are HIPAA-compliant and support PHI. For questions on LLMs, see https://getprotected.asu.edu/
  • For meetings that will involve discussions of a sensitive nature (e.g., personal, confidential, financial, IP, proprietary, or personnel matters), do not use automated AI meeting tools to record and capture discussions, measure attendee engagement, etc., as the data generated by these tools may be considered public records. Be cognizant of virtual meetings where AI meeting tools may be used, ask the meeting host about the use of these tools if unsure, and decline participation in the meeting if the host insists on using them.
  • When working with vendors or subcontractors, inquire about their practices for using AI. Additional terms and conditions may need to be included in any resulting agreement to ensure responsible and ethical use of AI tools by collaborating organizations.

By following these guidelines, researchers can leverage the benefits of generative AI in their research while ensuring the safe, responsible, and ethical use of this technology.

Resources:

https://ai.asu.edu

https://provost.asu.edu/generative-ai

https://getprotected.asu.edu/

https://www.whitehouse.gov/ostp/ai-bill-of-rights/

https://www.nsf.gov/cise/ai.jsp


Questions?

Questions on a generative AI tool should be directed to your local IT support or https://getprotected.asu.edu/

Questions on using generative AI in research should be directed to export.control@asu.edu

Inquiries from external funding agencies related to ASU's use of AI in research should be directed to the Research Operations Assistant Vice President, Heather Clark, at Heather.Christina.Clark@asu.edu.