Understanding the Changing Landscape of Human Subjects Research in the Age of AI
If you’re a human, there’s a very good chance you’ve been involved in human subjects research. Maybe you’ve participated in a clinical trial, completed a survey about your health habits, or taken part in a graduate student’s experiment for $20 when you were in college. Or maybe you’ve conducted research yourself as a student or professional.
- AI is changing the way people conduct research on humans, but our regulatory frameworks to protect human subjects haven’t kept pace.
- AI has the potential to improve health care and make research more efficient, but only if it’s built responsibly with appropriate oversight.
- Our data is being used in ways we may not know about or consent to, and underrepresented populations bear the greatest burden of risk.
What is Human Subjects Research?
As the name suggests, human subjects research (HSR) is research on human subjects. Federal regulations define it as research involving a living person that requires interacting with them to obtain information or biological samples. It also encompasses research that “obtains, uses, studies, analyzes, or generates” private information or biospecimens that could be used to identify the subject. It falls into two major categories: social-behavioral-educational and biomedical.
If you want to conduct human subjects research, you have to seek Institutional Review Board (IRB) approval. IRBs are committees designed to protect human subjects, and any institution conducting federally funded research must have them.
The Historical Context
We didn’t always have protections for human subjects in research. The 20th century was rife with horrific research abuses. Public backlash to the exposure of the Tuskegee Syphilis Study in 1972 led, in part, to the publication of the Belmont Report in 1979. This report established key ethical principles to govern HSR: respect for people’s autonomy, minimizing potential harms and maximizing benefits, and distributing the risks and rewards of research fairly. It became the foundation for the federal policy known as the Common Rule, which regulates IRBs.
Men included in a syphilis study stand for a photo in Alabama. For 40 years starting in 1932, medical workers in the segregated South withheld treatment for Black men who were unaware they had syphilis, so doctors could track the ravages of the illness and dissect their bodies afterward. National Archives
The Role of AI in HSR
It’s not 1979 anymore. Now, AI is changing how we conduct research on humans, but our ethical and regulatory frameworks have not kept up.
Tamiko Eto, a certified IRB professional and expert in the field of HSR protection and AI governance, is working to address this gap. Eto founded TechInHSR, a consultancy that supports IRBs reviewing research involving AI. I recently spoke with Eto about how AI has transformed the field and the associated benefits and risks of using AI in HSR. Our conversation below has been slightly edited for length and clarity.
The Shift in Research Paradigms
You have over two decades of experience in human subjects research protection. How has the widespread adoption of AI changed the field?
AI has flipped the old research model on its head entirely. We used to study individual people to learn something about the general population. Now, AI pulls huge patterns from population-level data and uses that to make decisions about individuals. This shift is exposing the gaps in our IRB world, especially since much of what we do is based on the Belmont Report.
The report was developed almost half a century ago and did not anticipate what would later be termed “human data subjects.” It focused on actual physical beings rather than their data. AI research is increasingly about human data subjects: it’s their information being used, often without their knowledge.
Examples of AI in Human Subjects Research
Could you give me an example of human subjects research that heavily involves AI?
In social-behavioral-educational research, there are instances where researchers train models on student-level data to identify ways to improve teaching or learning. In healthcare, we use medical records to train models that help predict certain diseases or conditions. However, the way we understand identifiable and re-identifiable data has also evolved with AI.
Currently, data can be used without oversight on the presumption that it is de-identified, a presumption grounded in outdated definitions of identifiability.
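To make that presumption concrete, here is a minimal sketch of a classic linkage attack: records stripped of names can still be re-identified by matching quasi-identifiers (such as ZIP code, birth year, and sex) against a public dataset. All names, values, and datasets below are invented for illustration.

```python
# Hypothetical "de-identified" health records: no names, but quasi-identifiers remain.
deidentified_records = [
    {"zip": "94110", "birth_year": 1984, "sex": "F", "diagnosis": "sepsis"},
    {"zip": "60614", "birth_year": 1990, "sex": "M", "diagnosis": "melanoma"},
]

# A hypothetical public dataset (e.g., a voter roll) that does include names.
public_records = [
    {"name": "A. Rivera", "zip": "94110", "birth_year": 1984, "sex": "F"},
    {"name": "B. Chen", "zip": "60614", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")


def link(record, public):
    """Return names of public records whose quasi-identifiers all match."""
    key = tuple(record[q] for q in QUASI_IDENTIFIERS)
    return [
        p["name"]
        for p in public
        if tuple(p[q] for q in QUASI_IDENTIFIERS) == key
    ]


for r in deidentified_records:
    matches = link(r, public_records)
    if len(matches) == 1:
        # A unique match means the "de-identified" record is re-identified.
        print(f"{matches[0]} re-identified with diagnosis: {r['diagnosis']}")
```

The point is not that this exact attack is common, but that definitions of “identifiable” written before large-scale data linkage understate how little it takes to tie a record back to a person.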
AI’s Potential and Risks
What’s something that AI can improve in the research process? Most people aren’t necessarily familiar with the argument for using AI.
AI has real potential to improve healthcare, patient care, and research — if built responsibly. Well-designed tools can catch problems earlier, such as detecting sepsis or spotting signs of certain cancers through imaging. However, many of these tools aren’t designed well, leading to possible harm.
I’ve focused on how we can leverage AI to enhance our operations. For example, AI can help us manage large amounts of data, making research more efficient. However, whether these advantages are realized depends entirely on responsible implementation.
What do you see as the greatest near-term risks posed by using AI in human subjects research?
The immediate risks include black-box decisions, where we don’t know how AI reaches its conclusions, making it difficult to make informed choices about its use. Beyond that, privacy is a significant concern. The U.S. lacks comprehensive privacy rights, meaning individuals often have little control over how their data is collected and used.
Addressing Long-Term Risks
What about some of the long-term risks?
Currently, IRBs are technically prohibited from evaluating long-term societal impacts. Thus, ongoing discussions usually center around individual risks rather than broader consequences such as discrimination or misuse of data. These concerns are vital, especially as marginalized groups often bear the brunt of data being used to train AI tools, typically without consent.
This results in a situation where these communities do not benefit from advanced tools but are nonetheless subjected to the negative consequences of their deployment.
The Importance of Regulation and Awareness
How can IRB professionals become more AI literate?
Understanding AI literacy isn’t just about grasping the technology; it also involves knowing which questions to ask. I have created a three-stage framework for IRB review of AI research to help in assessing risks during specific development phases. This framework aims to support IRBs in adjusting their approach to reviewing cyclical projects.
What steps can we take to avoid a worst-case scenario involving AI in research?
An oversight void exists in the research phase, particularly where unconsented human data is used without adequate IRB review. That gap allows AI to shape decisions about vital aspects of people’s lives, such as healthcare and finance, with consequences that disproportionately affect marginalized populations.
To avoid this, we need stringent regulations and transparent practices in data collection and usage. Ultimately, fostering ethical data sourcing is essential as we navigate this complex landscape.
Conclusion
As we enter a new era marked by AI advancements, the regulation of human subjects research must evolve to ensure the protection of individual rights and overall societal equity. The dialogue initiated by professionals like Tamiko Eto serves as a critical step toward establishing these necessary guidelines.
For further insights, read the full conversation here.
Image Credit: www.vox.com