
Generative AI and Research

This guide is designed to support faculty and instructors as they navigate research and information literacy concerns raised by the rise of generative AI technology.

Because generative AI is an emerging and evolving technology, we need to critically evaluate and assess how its content is created.

AI is a human-created tool that reflects the business goals of its creators. Biases present in AI's training data can result in the output of flawed content and misinformation.

Additionally, the development of AI involves environmental and labor costs. There are also equity and access challenges, such as subscription fees and the need for reliable internet, which can widen the digital divide.

Despite these issues with how AI content is created, generative AI also offers applications that could benefit humanity.

Understanding the limitations, costs, and advantages of content created through AI technology will guide future information use and practices. 

Issues with AI's Information Creation Process

  • AI is not neutral - Large language models are trained on human-created content that may contain biases. 
  • “Garbage in, garbage out” - Algorithms will reproduce bias from their training data. 
  • Large language models like ChatGPT can inadvertently spread misinformation by hallucinating false information.
  • Hallucinated AI output risks being interpreted as authoritative or “scholarly” and could be cited by students (e.g., ChatGPT citing a nonexistent paper).
  • While misinformation is not a new concept, AI is creating new ways to disseminate it. Generative AI tools allow users to create fake content quickly and easily. 
  • Programs like ChatGPT and Dall-E can be used by bad actors to create false text or images that may look factual or authentic. 

[Screenshot: a user asks ChatGPT to find 10 scholarly articles, and ChatGPT makes them up.]

  • Large AI platforms like ChatGPT rely on outsourcing human labor to screen training data for potentially harmful content.
  • Much of this labor is outsourced to the Global South, where labor costs are lower.
  • Many AI tools with research applications are behind paywalls and require stable internet access.
  • Students without the financial means to subscribe to AI tools may be at a disadvantage compared to peers who can subscribe.

Humanitarian Benefits of AI

Addressing Humanitarian Concerns About AI in Your Classroom

Use these questions to reflect on the intersection of ethics, AI, and your discipline. 

  • What factors do I consider when deciding to use a new technology or tool?  

  • Does my discipline provide frameworks or guidance for assessing bias within data, tools, research practices, or final reports and articles? How might those apply to AI within my classroom? 

  • How can I guide students in evaluating the role and impact of AI within my discipline? How can I guide students to make informed decisions that assess the function, benefits, and costs of AI creation processes? 

  • What knowledge or behaviors will students need to use AI ethically in their assignments? (Consult the resource below, Ethical AI for Teaching and Learning)
  • If I want students to use AI in my classroom, what barriers might I need to remove to make the technology more accessible?

Use these questions with your students for class discussion, reflection short-writes, or other assignments. 

  • How does knowing more about the creation processes behind AI, and their costs, change your perception of AI tools and content?
  • How can you identify misinformation generated by AI? How can you verify sources cited by AI? 
  • While there are many lighthearted examples of fake AI-generated content like Pope Francis's puffer jacket, what are some situations in which AI-generated misinformation could be harmful?
  • Is it possible for AI to be an instrument for positive impact? If so, in what ways? 

Support and Resources