
Generative Artificial Intelligence: Generative AI and Academic Inquiry

Generative AI as a tool for academic inquiry

This guide suggests ways of using generative AI tools like ChatGPT in academic projects, such as research papers, that require developing a topic and searching for scholarly literature. In particular, it:

  • Offers guidance on the limitations and pitfalls of AI tools in academic work.
  • Suggests some potential applications of AI tools.
  • Reviews some of the major AI tools, including their varying capabilities (in the second tab).

In summary, when developing and researching an academic paper or other academic project, AI chatbots can be usefully employed in the following ways:

  • Developing a basic understanding of concepts or general background information through a conversational interface 
  • Brainstorming topic ideas for a research project
  • Coming up with search phrases to use in search engines or library research databases

AI chatbots are less able to:

  • Provide in-depth information on specific scholarly topics as they are generally not trained on scholarly literature or other content that is behind paywalls
  • Cite accurate sources to back up assertions 
  • Reference recent events (as they are not trained on text that is up-to-date)

Image generated by Bing Chat using prompt: "College students studying in a library with lots of cute dogs and cats around them." Generated 29 November 2023.


About Generative AI

What is Generative AI?

Generative AI refers to a category of artificial intelligence (AI) algorithms that generate new outputs based on the data they have been trained on. Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more. (World Economic Forum)

The Accuracy of Generative AI

Large language models (LLMs) behind generative AI are able to generate text that can be indistinguishable from that written by a human. Moreover, since the models are trained on data that encompass all areas of knowledge, they can plausibly discuss and educate on a wide range of topics. These models are also able to hold discussions, improving on and clarifying earlier responses based on user input.

While it may seem like these models are growing and learning as you interact with them, they are static and pre-trained. They probabilistically generate plausible sequences of words based on their training and your input. 

These models are trained in a way that reinforces outputs that are convincing and plausible. They are not trained in a way that reinforces the use of factual, verifiable information. When used for academic writing and research, these models will often produce outputs that, to someone without deep knowledge of the subject, will seem correct but ultimately contain significant inaccuracies or falsehoods. This phenomenon is often called "hallucination," where the model references knowledge that seems real but has no basis in fact.

When asked to find academic sources, LLMs will often invent non-existent sources with titles tailored to your request. These sources will often be attributed to real authors and journals. It is therefore necessary to vet all sources suggested by LLMs. Rather than searching for specific literature, you may have better luck asking the model to name influential authors in a field.

Recently, more LLMs have gained the ability to access the Internet, combining the function of a web search engine with the conversational interface of a chatbot. Currently, Microsoft Bing and Perplexity.ai are examples of this. In this way, they can use current information to inform their otherwise purely probabilistic responses. While this does not eliminate the risk of hallucinations in their output, it at least makes it more likely that they will point you toward sources for the information in the response.

Ethical Issues and AI

Generative AI poses a number of ethical issues including those around security, privacy, misinformation, bias, labor, and the environment. 

LLMs are trained entirely on data generated by humans. Like many other models, LLMs are bound to reproduce, and even amplify, biases represented in the data they are trained on. Therefore, these models, while not human, should not be considered objective in their responses. Even so, it can be difficult to notice when bias is present in the output of an LLM. It is important to use your own background knowledge and judgment, along with results from other sources, to counteract possible biases in the outputs of LLMs.

Using Generative AI

Prompts

Taking the time to develop well-constructed prompts can greatly improve the effectiveness of generative AI chatbots. As a general rule, be as specific as possible about the information you seek and how you would like to receive it. One of the most basic formulas for prompting is called role, task, format: the role is the character that you'd like the AI to take on (e.g., a chef, a diplomat, a teacher); the task is what you'd like the AI to accomplish (write a story, generate a piece of code, etc.); and the format is the form in which you'd like the output to appear (web page, list, image, etc.). There are numerous videos and tutorials on the web on prompt engineering for generative AI (example).
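The role-task-format formula above can be sketched programmatically. The following is a minimal illustration in Python; the `build_prompt` helper and its wording are hypothetical, not part of any chatbot's API, and simply show how the three pieces combine into one prompt string that could then be pasted into (or sent to) a chatbot.

```python
def build_prompt(role: str, task: str, fmt: str) -> str:
    """Assemble a role-task-format prompt (hypothetical helper for illustration)."""
    return f"Act as {role}. {task} Present the result as {fmt}."

# Example: a prompt for brainstorming research paper topics
prompt = build_prompt(
    role="a reference librarian",
    task="Suggest five research paper topics on democracy movements in the Middle East.",
    fmt="a numbered list",
)
print(prompt)
# Act as a reference librarian. Suggest five research paper topics on
# democracy movements in the Middle East. Present the result as a numbered list.
```

The same skeleton works for any role, task, or output format; the value lies in forcing you to state all three explicitly rather than leaving the chatbot to guess.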

Summarizing texts

Often, when given a paper, book passage, or transcript, a large language model will be able to summarize it intelligibly and accurately. However, generative AI models are always vulnerable to some level of hallucination and bias, and AI summaries are no substitute for completing course readings. If understanding a text is especially important to your work, a summary generated by AI may give you a useful overview, but it cannot be relied upon to confer deeper understanding or consistent interpretation.

Exploring research topics

Generative AI chatbots like ChatGPT, Bard, and Bing can be useful as you brainstorm topics for a research project. For example, you could tell an AI chatbot:

I am a college student. I'm interested in topics related to democracy movements in the Middle East. Please suggest some topics for a research paper for a sociology class.

The current generation of generative AI chatbots tends to offer more reliable answers to general questions, at a level of depth that often does not exceed what's available on Wikipedia. As one delves into more specific topics and seeks to incorporate scholarly perspectives, the AI chatbot may provide responses that are inaccurate or misleading. For example, the current ChatGPT competently summarizes the facts around the 1878 Congress of Berlin but lacks a nuanced perspective on the historical scholarship around that event.

Developing search terms and phrases

AI chatbots are also good at suggesting particular search phrases for library databases. For example, one could use the prompt:

Act as a scholar or librarian. Suggest search phrases to use in Google Scholar or library databases to find articles on the topic of gentrification and race in urban areas in the United States.

This guide from University of Arizona Libraries offers some suggestions on using generative AI to brainstorm topics for a research paper and to generate search phrases for use in library databases. However, there's no substitute for human assistance during the research process. Please don't hesitate to reach out to a librarian at Watzek Library if you have a quick question or would like a more in-depth research consultation.

Other Uses

There are many other applications of generative AI in educational settings, including practicing a foreign language, training for an interview, and developing and debugging computer code.

A Cautionary Note

Using AI in your studies can take away opportunities for learning and growth. Generative AI cannot produce the experience of reading a book, wrestling with a thesis statement, or writing something that is distinctively one's own.

More specifically, passing off writing that has been produced by AI as your own is a violation of academic integrity.

Some instructors consider any use of generative AI to be a violation of academic integrity. Always consult with your instructor before using these tools in your coursework.

In addition, one should be cautious about exposing sensitive data to generative AI applications. See Lewis & Clark's policy on Generative AI and Use of College Data.

If you are looking for assistance in developing and executing a research project, the library has staff dedicated to guiding you through that process. For research help, you can schedule a consultation with one of our research librarians. For help with writing, please visit the Writing Center.