
AI in Research

Evaluation

Use the ROBOT tool to assess the AI technology

 

Reliability

How reliable is the information about the AI technology?
If it’s not produced by the party responsible for the AI, what are the author’s credentials? Is there author bias?
If it is produced by the party responsible for the AI, how much information are they making available? Is information only partially available due to trade secrets? How biased is the information they produce?

Objective

What is the goal or objective of the use of AI?
What is the goal of sharing information about it? To inform? To convince? To find financial support?

Bias

What could create bias in the AI technology?
Are there ethical issues associated with this?
Are biases or ethical issues acknowledged? By the source of information? By the party responsible for the AI? By its users?

Ownership

Who is the owner or developer of the AI technology?
Who is responsible for it? Is it a private company? The government? A think tank or research group?
Who has access to it? Who can use it?
Type

Which subtype of AI is it?
Is the technology theoretical or applied?
What kind of information system does it rely on?
Does it rely on human intervention?

This information is taken from Wheatley, A., & Hervieux, S. (2022). “Separating artificial intelligence from science fiction: Creating an academic library workshop series on AI literacy.” In S. Hervieux & A. Wheatley (Eds.), The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries. Chicago, IL: Association of College and Research Libraries.

Generative AI can reproduce or generate false information (known as hallucination). Always evaluate both the sources and the information that generative AI provides.

CIAP offers a free learning module on Appraise, Apply, and Assess the Evidence. The time taken to complete the module varies between individuals, but the average estimate is 2-3 hours (including additional reading and videos).

Verify AI-Generated Claims and Citations
Generative AI tools can produce convincing but inaccurate or fabricated information, known as "hallucinations". It is important to corroborate the AI's claims and validate any citations it provides. Here's how:


Cross-Reference with Reliable Sources
Compare the AI's output with other authoritative and trustworthy sources covering the same topic. Seek out alternative perspectives or the original context from which a claim may have originated. 


Validate Citations
While you can prompt an AI tool to cite sources, be aware that it may generate convincing but entirely fictitious citations, including fabricated author names, journal titles, or article details. To verify a citation's authenticity, search for the specific article or book referenced and confirm that it exists and that its details are accurate.
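If you check many citations, you can also corroborate them programmatically against a bibliographic database. The sketch below is a minimal example in Python that queries the public Crossref REST API by citation string; it assumes the third-party requests library is installed, and the helper name lookup_citation is our own illustration, not part of any standard tool. A citation that matches none of the returned records deserves closer scrutiny.

    import requests

    def lookup_citation(bibliographic, rows=5):
        """Search the public Crossref REST API for works matching a citation string."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": bibliographic, "rows": rows},
            timeout=10,
        )
        resp.raise_for_status()
        results = []
        for item in resp.json()["message"]["items"]:
            results.append({
                # Crossref returns the title as a list and the DOI under the key "DOI".
                "title": (item.get("title") or ["(untitled)"])[0],
                "doi": item.get("DOI"),
                "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
            })
        return results

    # Example: look up a citation produced by an AI tool.
    query = "Separating artificial intelligence from science fiction Wheatley Hervieux"
    for match in lookup_citation(query):
        print(match["year"], match["doi"], match["title"])

Note that a match here only confirms that a work with similar details exists; it does not confirm that the AI represented its content accurately (see Confirm Source Content below).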


Confirm Source Content
Even if a cited source exists, the AI's interpretation or representation of its content may be inaccurate. Locate the original source material and cross-check whether it aligns with the AI's claims about the information it purportedly contains.


Consider Currency
The currency (creation, update, or revision date) of the information is crucial, especially for time-sensitive topics such as current events or rapidly evolving research areas. Generative AI models are trained on data up to a specific cutoff date, which may not include the latest developments. Some tools, such as Perplexity, Google Gemini, and Microsoft Copilot, can ground their responses in live web search, which reduces (but does not eliminate) this limitation.
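When a citation includes a DOI, the same Crossref API can report when the work was published, which helps you judge whether a cited source predates recent developments. Another minimal sketch under the same assumptions (Python with the requests library); the DOI in the last line is a placeholder to substitute with the one you are checking.

    import requests

    def publication_year(doi):
        """Fetch a work's Crossref metadata and return its publication year, if recorded."""
        resp = requests.get("https://api.crossref.org/works/" + doi, timeout=10)
        resp.raise_for_status()
        date_parts = resp.json()["message"].get("issued", {}).get("date-parts")
        return date_parts[0][0] if date_parts else None

    # Placeholder DOI; substitute the DOI from the citation you are checking.
    print(publication_year("10.1234/placeholder"))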

By following these steps, you can critically evaluate output from generative AI tools, separate factual information from hallucinations and inaccuracies, and confirm the reliability and currency of any sources cited.