Artificial Intelligence (AI) is a broad term that covers many different things, but it generally refers to computer programs that engage in machine learning or deep learning: they train on massive datasets to establish statistical patterns in information, then apply those patterns to perform specific tasks. The phrase “artificial intelligence” is misleading, as these programs do not truly “understand” information or engage in reasoning or discernment: they are, rather, tools that perform statistical analyses and provide outputs based on those analyses.
For an overview of the different types of AI, see IBM's post: Understanding the different types of artificial intelligence (ibm.com).
AI tools can be used to great effect in healthcare. By leveraging massive amounts of training data, AI can identify patterns and statistical anomalies, assisting in the diagnosis of occult conditions or streamlining the development of new drugs.
For instance, a detailed summary of how Johnson & Johnson plans to use AI can be found on their website: Artificial intelligence is helping revolutionize healthcare as we know it (jnj.com).
Misconceptions about how AI works can lead users to apply the wrong tools to the tasks they’re completing. A chatbot like Chat-GPT, for instance, is a tool for creating convincing, human-like speech; its purpose is not to answer questions correctly or to diagnose conditions. While it often relays correct information, because its training data contains patterns associating those answers with related questions, it also frequently produces information that has no basis in reality, so-called “hallucinations.” As natural-language chatbots become integrated into systems like search engines and personal assistant software, the line between tools that provide trustworthy information and those that do not may blur. Such tools may also be used for intentionally deceptive purposes, like cheating on tests or passing off generated text as one’s own work. For more issues specific to chatbots and generative AI, see the section that follows.
Even outside of chatbots and generative AI, more bespoke diagnostic tools carry certain risks. A common concern is that inadequate training data, or data containing explicit or latent biases, will see those biases reflected in the output of the diagnostic program. If a dermatological tool, for instance, is trained on images that are mostly of white patients, it may have insufficient data to accurately diagnose skin conditions in persons of color. For an even-handed discussion of bias in machine learning, see a recent review by Vrudhula et al. (2024): https://doi.org/10.1161/circimaging.123.015495
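To make this concern concrete, below is a minimal, hypothetical sketch in Python (using synthetic data and scikit-learn; the features, groups, and numbers are invented for illustration, not drawn from any real diagnostic system). It shows how a classifier trained on data where one subgroup is underrepresented tends to perform worse on that subgroup:

```python
# Hypothetical sketch: a classifier trained mostly on one subgroup ("group A")
# performs worse on an underrepresented subgroup ("group B"). All data here is
# synthetic; it stands in for, e.g., a dermatology model trained mostly on
# images of white patients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each subgroup's features follow a different distribution, so the
    # "correct" decision boundary differs between groups.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 5).astype(int)
    return X, y

# Training set: 95% group A, 5% group B -- a latent bias in the data.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on balanced held-out sets, accuracy is markedly lower for group B.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```

Running this typically shows high accuracy for group A and close-to-chance accuracy for group B: the model never saw enough of group B's data to learn its patterns.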
A tremendous amount of literature is being published on AI and its use in healthcare, covering different implementations and specialties. Yang et al. (2023) provide a comprehensive overview of its use in general healthcare, including risks, in an open-access article published in the journal Health Care Science, which can be accessed here: https://doi.org/10.1002/hcs2.61
What is generative AI?
The sort of AI that dominates news cycles is generative AI that uses natural-language processing. These programs take natural-language inputs (that is, ordinary human language) and provide outputs such as text or images. The most prominent of these is Chat-GPT, and for many people the name “Chat-GPT” is synonymous with “AI.”
Other examples of generative AI include the image generator Midjourney and Google’s Bard (which has since been succeeded by Gemini).
How does Chat-GPT work?
Chat-GPT’s function is to create human-sounding speech, and it accomplishes this by conducting statistical analysis of its training data to produce responses that are likely to fit the query, with some intentional randomness added so it does not always produce the same result. In essence, it works like the autocomplete function on your mobile device, but with a vastly larger and costlier dataset. Seen this way, it should come as no surprise that Chat-GPT can perform feats like answering practice questions from a standardized test: its training data contains those questions and their answers, they appear together often enough for the model to assign a high likelihood that one follows the other, and it returns that answer when a user’s prompt includes the question.
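As a rough illustration of that autocomplete analogy, here is a minimal sketch in Python. The tiny corpus and the bigram (word-pair) counting are invented for illustration; Chat-GPT itself uses large neural networks rather than bigram tables, but the basic loop of scoring likely continuations and sampling with some randomness is conceptually similar:

```python
# Minimal sketch of next-word prediction with temperature sampling. The
# corpus is a toy stand-in for Chat-GPT's vastly larger training data.
import random
from collections import Counter, defaultdict

corpus = ("the patient reports chest pain . the patient reports fatigue . "
          "the doctor orders an ecg .").split()

# Count which word follows which (the simplest statistical "pattern").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    # Low temperature favors the most frequent follower; higher temperature
    # adds the "intentional randomness" described above.
    candidates = follows[prev]
    weights = [count ** (1.0 / temperature) for count in candidates.values()]
    return random.choices(list(candidates), weights=weights)[0]

# Generate a short continuation, one likely next word at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word, temperature=0.8)
    output.append(word)
print(" ".join(output))
```

The output is fluent-sounding because each word frequently follows the one before it in the training text; at no point does the program consult any facts.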
“Hallucinations” and factual inaccuracies generated by AI
Chat-GPT does not always yield accurate or factual results: its massive training dataset encompasses webpages and forums where many people post inaccurate and counterfactual claims. When Chat-GPT produces information that is not factually accurate, this is called a “hallucination.”
The term “hallucination” is misleading: it implies a problem of perception. Chat-GPT, however, has no qualia or subjective experience; it simply outputs words that often appear in close proximity to one another in order to imitate human speech.
Providing factually accurate information is not the purpose of the tool, and hallucinations are not faults: they are the tool working as the developers intended, creating human-sounding speech.
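A small, hypothetical extension of the bigram sketch above shows why: a model that only tracks which words tend to follow which can recombine two true sentences into a fluent but false one.

```python
# Toy demonstration: trained only on two true sentences, a bigram model
# can emit a grammatical, statistically plausible, and false statement.
from collections import Counter, defaultdict

corpus = ("paris is the capital of france . "
          "rome is the capital of italy .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continuations(word, path):
    # Enumerate every sentence the model can produce from a starting word.
    if word == ".":
        yield " ".join(path)
        return
    for nxt in follows[word]:
        yield from continuations(nxt, path + [nxt])

for sentence in continuations("paris", ["paris"]):
    print(sentence)
# Output includes "paris is the capital of italy ." -- fluent, likely
# under the model's statistics, and factually wrong.
```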
Because providing accurate, real-world information is not the purpose of Chat-GPT and other generative AI programs, they should not be used for research, diagnosis, or anything else that requires accurate and factual material. Indeed, research has already been retracted for including inaccurate AI-produced content; see, for example, this retraction notice: Retraction: Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway (frontiersin.org)
Artificial intelligence is a swiftly changing field that holds great promise and great risk. It is important to stay educated about what specific AI tools are designed to do and to evaluate whether a given tool is appropriate for the task at hand.
See the RSS feed on the right side of this page for a selection of recent publications indexed by PubMed on the topic of artificial intelligence.
Additionally, on February 22, 2024, a review of the RAISE (Responsible AI for Social and Ethical Healthcare) conference's discussion of AI in healthcare was published in NEJM AI, available here: https://ai.nejm.org/doi/full/10.1056/AIp2400036