Dr. Isaac Kohane makes important points in his Jan. 6 op-ed, “Your AI doctor may be working for someone else.” As a doctor myself who has had a number of patients come in having “done their research” on the web, I would also like to point out a disturbing fact. Large language AI models, like the ones we commonly use for internet searches, assume that anything on the web is true. AI programs cannot distinguish a good research article from a bad one. Most published articles (including my own) are potentially flawed or limited in some way.
A careful reader needs to examine the article’s purpose, hypothesis, study design, methods, ethical review and conflicts of interest, statistical design and analysis, presentation of data, discussion of the results relative to the hypothesis, identified limitations of the study, and suggested areas needing more research.
In addition to Kohane’s recommendations, I would add this: Approach a published medical research article as if you were a journal editor or peer reviewer (both of which I have been). Do not accept the AI program’s assessment and summary, which may have been influenced by uncritical inclusion of poor research. Look up the original articles if you can, examine them carefully, and form your own opinion. Then bring those articles to your physician and go over them together. If you need help, find educational resources designed to teach analysis of medical research, which is becoming an ever more essential skill in modern society.
Dr. David Coulter
Natick