The Ethical Use of AI Tools in Global Health Research
March/April 2026 | Volume 25 Number 2
CREEi participant Andrea Kanneh gives a presentation on AI research ethics. Photo courtesy of Cheryl Macpherson.
Artificial intelligence (AI)—the genius offspring of mathematics, statistics, cognitive science, and computer science—is a set of technologies that simulates learning, reasoning, problem-solving, and other human cognitive functions. Developers design AI models to synthesize massive amounts of information and perform tasks that typically require human intelligence. These models often work faster and more accurately than any mortal could, and their successes frequently inspire awe.
Across the globe, the fields of medicine and biomedical research are integrating AI tools into practices, procedures, experiments, and analysis. In these pages, four Fogarty International Bioethics Research Training Program grantees discuss the value of, and potential ethical concerns raised by, AI in medicine and research in low- and middle-income countries (LMICs).
What makes the use of AI tools in global health research ethical?
The accepted framework for evaluating the ethics of clinical research studies consists of seven requirements: social or scientific value of the research; scientific validity (rigor of a study’s design); fair subject selection; a favorable risk-benefit ratio; independent review; informed consent; and respect for enrolled participants. “Fulfilling all seven requirements is necessary and sufficient to make clinical research ethical,” write the NIH-affiliated authors in a landmark paper published in 2000 in the Journal of the American Medical Association. A few years later in The Journal of Infectious Diseases, the same authors declare that, within developing countries, an additional “collaborative partnership” requirement is needed alongside the original seven obligations. Partnership with LMIC researchers, policy makers, and communities “helps to minimize” the possibility of misuse by ensuring that they determine for themselves whether a proposed study is “acceptable and responsive to the community's health problems.”
Fogarty ethics program principal investigators agree that the use of AI tools in global health research must align with these existing standards, yet each acknowledges that AI is an exceptional technology that raises unique ethical considerations.
For instance, ethical use would require that an AI model be accurate and produce reliable results when applied to LMIC study participants and patients, says Icahn School of Medicine at Mount Sinai’s Rosamond Rhodes, PhD. Otherwise, the scientific validity of a proposed study and its risk-benefit ratio might not meet existing standards. AI models trained on data from higher-income country populations do not necessarily generalize to LMIC populations because of differences in the genetics and medical histories of the two groups, explains Rhodes. “All the vaccines you've had in your life make you biologically very different from people who are naïve [never had a vaccine]. And, once you've been treated with many different antibiotics, you're a different kind of person than someone found in a country where antibiotics aren’t used.”
Vina Vaswani, MD, agrees that the application of an AI tool within a global health research context is only ethical if accuracy and appropriate use have been verified, since “AI is only as good as its algorithms and its data.” She questions whether a “one-size-fits-all AI model” would ever work effectively in research conducted globally, or in India specifically, with its many diverse populations. Henry Silverman, MD, asks, “Is an AI model operating on a robust dataset that includes contributions from LMICs, or is the dataset predominantly biased?” The answer to that question will usually determine whether the use of AI within a particular research context is ethical.
Cheryl Macpherson, PhD, says she’s uncomfortable with the possibility of AI “hallucinations,” in which a large language model produces nonsensical or inaccurate outputs. “Misinformation is a major problem during AI use but also at the development stage.” She asks whether researchers can be certain that “hallucinatory” information has not been baked into an AI system during its formative phase, an error that might invalidate some or all of the research outputs derived from its use.
Another ethical point for consideration is whether end users, including researchers, fully understand how to operate and deploy AI, says Rhodes. “When I get some new software, I just want to use it and I don't bother reading all the instructions,” she says. If someone’s life is on the line, end users certainly need to be trained and tested. A researcher’s comprehension of AI tools is equally important, since the faulty deployment of an AI model within a research context could lead to inaccurate results and false conclusions.
Is it ever possible for a researcher to attain a thorough understanding—or thorough-enough understanding—of an AI system to be certain of its ethical use within a study? Vaswani observes that end users often “don't know how a particular AI program was trained.” Given “the opacity of AI systems” operating as “black boxes,” she adds that doctors and investigators may find it difficult or even impossible to trace or explain to patients and research participants the rationale behind AI outputs. Can research participants truly provide informed consent?
Dr. Henry Silverman teaches research ethics to students in Morocco. Photo courtesy of Henry Silverman.
Common uses of AI in LMICs
Writing assistance is possibly the most common application of generative AI among researchers and students. Macpherson believes authorship, and the possibility of plagiarism, are central ethical concerns. “How do you stop students from inappropriately using AI while encouraging them to use it wisely and for the right tasks?” Silverman agrees, yet believes authorship problems existed long before AI. Unusual or unfair requirements for publications and grants lead people to plagiarize and use AI improperly. As Silverman notes, “the research integrity climate of the university enhances or diminishes the prospect for research misconduct.”
Silverman asks his students to state how they used AI in their research, and most respond, “I had AI help organize my thoughts. I used AI to polish my writing.” These uses of AI are fair, yet he wonders, “Do you list AI as an author? I think the short answer is no. But if the whole paper is generated by AI, maybe the short answer is yes.” Despite finding AI “helpful” as a writing assistant, he cautions, “If students depend too much on AI, they’re not developing their skills in critical thinking and in writing.”
Another ethical talking point is AI note-taking, says Silverman. This practice, which is increasingly common in clinical settings worldwide, has implications for both patients and researchers. “Are the notes AI-generated? Are they accurate?” Patients “can live or die by medical records, plus insurance companies may not reimburse based on an inaccurate record.” Imprecise notes might also falsely influence research outcomes and analysis.
Finally, AI is reading X-rays and other scanned images across the globe, while many hospital systems, especially intensive care departments, depend on AI-driven algorithms to direct care. These systems provide real benefits, even while raising thorny issues of responsibility and accountability, says Silverman. What happens when things go wrong? Vaswani writes in a recent paper, “Identifying who bears responsibility, whether the developers, users, or the AI itself, remains a contentious ethical dilemma.” Or, as Rhodes says, “You can't hold a computer program responsible.” Meanwhile, Macpherson wonders, “Where do you center accountability when multiple agents, both human and artificial, are at work?”
Applied ethics
The field of research ethics, and Institutional Review Boards (IRBs) in particular, plays an important role in research oversight. In LMICs, research ethics education helps local scientists contribute to discussions of global studies from a position of knowledge, says Macpherson. Former trainees of her program are now IRB members who examine study design, analyze the risk-benefit ratio, and consider the potential for harm to participants, among other tasks. Rhodes says IRBs need to question whether there are unusual risks when AI is introduced into study design and implementation. She asks, “Can an AI model cause harm if it’s applied to a lot of people all at once… or if it’s not used in the right way?” Others suggest that the environmental impacts of AI data centers be weighed in IRB risk-benefit analyses.
IRB members not only oversee how researchers are using AI in their studies, they are also using AI to execute their own duties. (Consider: AI programs assisting in the ethical review of AI-enabled research.) Silverman is currently working with colleagues in Cairo to develop a study demonstrating the efficiency of using AI to review research protocols.
Macpherson explains her concerns: “Usually a clinical trial is sponsored by either a commercial interest or a government with biosecurity interests or those kinds of things. Once you feed that information into the AI system, it's no longer confidential even if you tell the AI to keep it confidential—it’s this black box of algorithms and we don't really know what's going to happen.” Such fears are not unfounded; many users have received incorrect responses from an AI program that has clearly strayed beyond the data specified. “The confidentiality of individual study subjects’ data may be lost, so that's a potential harm to them as well as to the study sponsors and their own interests, whatever they may be,” says Macpherson.
Silverman understands that “there are no firewalls for data security” when it comes to “downloading to ChatGPT.” Still, he believes the review of research protocols, which do not include patient data or confidential information, is “a different ballgame than uploading publishable papers.” His apprehensions align with Macpherson’s. He wonders, “If peer reviewers use AI, are they putting the data out there for anyone to grab?”
Guard rails
AI is evolving within an uncertain regulatory ecosystem despite well-known pain points, such as AI model drift, in which performance deteriorates over time due, in part, to changes in data (a phenomenon described by IBM). Are oversight mechanisms needed to mitigate ethical risks when AI is deployed in global health research?
“Every research ethicist would say we need to regulate this—even those who’re strong proponents of AI. But if you look at the world today, there's a real unwillingness to regulate,” says Macpherson. She adds that this regulatory environment “opens us up to a lot of potential challenges and threats” to research ethics as well as human health.
When discussing legal parameters, some of the larger ethical issues include “who's going to participate in the regulation and governance of AI,” says Silverman. He worries the AI revolution will only widen the digital divide between higher- and lower-income countries. “The main complaint I get from people in LMICs is that they can't afford the monthly cost.” He also notes that “chip manufacturing requires a large supply of water.”
Concluding thoughts
In the end, Fogarty’s ethical thought leaders share similar concerns regarding the use of AI in global health research, while maintaining their idiosyncratic perspectives.
Rhodes believes risk and the potential harm to research subjects, accountability, and responsibility are the most important ethical issues in relation to AI use in research, “whereas concerns about confidentiality, to me, are less important elements.”
Vaswani fears using AI within a clinical or research setting without “a human in the loop,” because too much reliance on AI is causing a “trust deficit” between doctors and patients. “Patients come for healing, which is through touch, through dialogue. Many patients already feel that nobody examined them thoroughly.”
Silverman says, “AI is not inventing new issues, it's reinventing old ones. I tell my students I use AI as my assistant, but I'm the final agent who is accountable. And that's all the difference in the world.”
All nations face similar ethical issues regarding the use of AI in research, says Macpherson. “But LMICs are more vulnerable to its potential misuse than high income countries because they're perhaps more eager to benefit from it… they may be more willing to take that leap.”