Generative AI and Research

This guide is designed to support faculty and instructors as they navigate research and information literacy concerns caused by the rise of generative AI technology.

Generative AI is challenging our ideas about authorship and authority. 

"Authority is constructed and contextual" is a concept that students, as novice researchers, are beginning to master, and generative AI content adds a layer of nuance and complexity. Additionally, we know that information has value: as a commodity, as a means of education, as a tool of influence, and as a way of understanding the world.

Students are novice researchers and need clear guidelines regarding AI in the classroom as well as space to explore and think critically about generative AI.  

How do we define Authorship and Authority?

We often assume information is created by a person making intentional choices about that content. The value we assign to information is determined by the creator's experience with an issue and their ability to demonstrate that understanding.

The lack of transparency around AI training data requires us to rethink or redefine authorship and authority. Although humans still have a role in crafting AI prompts, we do not know the full scope of training materials that many of these tools use to generate responses. 

This makes it difficult to determine where AI-generated information is derived from and how authoritative generated responses are. Authority will depend on the situation, the purpose of the tool, and the content generated. There may be situations where AI content is acceptable or even preferable.

What is apparent is that all AI content requires human intervention, whether that is designing an effective prompt or evaluating and revising the output.

Copyright, Intellectual Property, and Privacy

LLMs and AI tools ingest large amounts of data, or "training materials." Educators and journalists have raised questions about how these materials are gathered and about the copyright, intellectual property, and fair use implications of using them.

Many AI companies have been opaque about the sources of their training data. In addition to scraping data from the web, many tools ingest materials and information submitted by prior users. Court cases addressing these issues are ongoing.

In the meantime, we have a responsibility to protect students' work, our own work, and the work of others. We need to consider how AI technology is developed and what information is retained and reused later.

Related to intellectual property concerns, we need to assess what private or personal information may be shared with AI technology when we use it.

We often make decisions for ourselves about what data or information we are willing to share to use a particular tool or website. However, we need to consider what we will ask of students in order to protect their identities, privacy, and personal information. 

Plagiarism

AI and Research Assignments

Because information has value, we expect that academic work properly credits the work of others. We value and recognize their work and we demonstrate our own "researcher credibility" by documenting where our ideas originated and how we are in "conversation" with past scholars. 

Our Research Assignments page discusses how to use information literacy concepts to rethink assignments with AI in mind. Strategies like scaffolding assignments, being transparent about course and assignment goals, and creating a learning environment where students are comfortable asking questions about technology are recommended for student success.

AI Citations

For additional support regarding specific citation styles, please see our Generative AI Citation guide.

AI Plagiarism Checkers

AI plagiarism checkers tend to have accuracy issues similar to those found in AI chatbots, and they tend to produce false positives for non-native English speakers. Additionally, there are questions regarding student privacy when student work is uploaded into these tools. It is best to keep a sample of each student's writing to use as a point of comparison for their style and voice.
 

Reflection Questions about Information Ethics and Plagiarism

Use these questions to further reflect on AI and information ethics in your discipline. 

1. What are acceptable uses of AI in my classroom? What do I think the role of AI is in student work? How will I communicate classroom policies or expectations to students?

2. What does ethical research look like to me in my discipline? How do I typically define that for students? How might AI "disrupt" those practices? 

3. What benefits or issues could arise if my published work were part of AI training materials? (See Further Reading for an article about Taylor & Francis' agreement with Microsoft.)

4. How do other researchers in my discipline use AI? Are AI tools currently being used to create or curate material in my discipline? 

Use these questions with your students for class discussions, reflection short-writes, or other assignments.

1. There are AI tools that summarize scholarly articles, which can be a great service for students who are beginning to master comprehension and analysis skills. However, did the original authors and publishers allow these materials to be uploaded into this kind of analysis tool? What are the ownership rights for those materials? Does the potential benefit for students outweigh the rights of authors?

2. What does being an "author" or "creator" mean to you? Why might some people feel that AI content does or does not belong in academic work? What are acceptable uses of AI content? 

3. Do you have privacy concerns about using generative AI tools? Are they similar or different than privacy concerns about using Google or other web tools? 

4. What do you consider plagiarism? Why do you think instructors and librarians focus on citations, aside from "it's what we do"? Why is it a scholarly standard?
 

Resources