Where Tech Meets Bio

AI Meets Clinical Trials: Recent Policy Shifts & Companies to Watch

Today, clinical trials are longer and more data-heavy than a decade ago. At a time of regulatory rethinking, we trace how AI is beginning to orchestrate trial workflows.

BiopharmaTrend ∙ May 20, 2025


Today, on May 20, we celebrate Clinical Trials Day, a nod to the anniversary of James Lind’s first controlled clinical trial back in 1747. It's a day to recognize how far we've come, from Lind's early experiment aboard a naval ship to today's high-tech clinical trials.


To know whether a medicine works, we need to compare what happens when someone takes it with what happens when they don’t. That’s the core idea behind clinical trials, and it is an old one: texts as early as the biblical Book of Daniel (around 500 BCE) describe comparing diets across groups to assess health outcomes. But the best-known early attempt at something resembling modern testing came in 1747.

Aboard a British naval ship, surgeon James Lind tried giving sailors with scurvy different remedies and found that citrus made a difference. His experiment lacked many safeguards we now consider essential, but it introduced the core principle: isolate a variable, track the response.

James Lind: Conqueror of Scurvy, from “The History of Medicine” (by Robert Thom, ca. 1952)

The principle held, and trials became more formal, with increasing layers of statistical rigor and regulatory scrutiny. Now, the same forces that made trials more accurate also made them harder to run—today’s protocols are still manually drafted, recruitment is slow, data is fragmented, and monitoring participants outside clinical sites adds layers of coordination.

In this article: Encoding Judgment — Clinical Trial Design and Patient Recruitment — Synthetic Control Arms and Simulation — Digital Twins — Decentralized Clinical Trials — A System Under Revision

A 2024 Nature Scientific Reports analysis of over 16,000 industry-sponsored trials quantified this shift using a composite “complexity score” based on features like the number of endpoints, eligibility criteria, study arms, and trial locations.

Over the past decade, that score has risen by more than 10 percentage points on average, with trial designs becoming broader and increasingly multi-site. As a result, studies now take much longer to complete: each 10-point increase in complexity is linked to a 33–36% increase in trial duration.

Complexity score fluctuation over the years. Data from the report by Markley et al.
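
To make the idea of a composite score concrete, here is a minimal sketch of how such a measure could be assembled from design features. The features mirror those named above, but the caps and equal weighting are hypothetical placeholders, not the actual methodology from Markley et al.

```python
from dataclasses import dataclass

@dataclass
class TrialDesign:
    n_endpoints: int     # primary + secondary endpoints
    n_eligibility: int   # inclusion/exclusion criteria
    n_arms: int          # study arms
    n_sites: int         # trial locations

# Hypothetical caps used to scale each raw feature into [0, 1].
CAPS = {"n_endpoints": 20, "n_eligibility": 60, "n_arms": 6, "n_sites": 300}

def complexity_score(trial: TrialDesign) -> float:
    """Equal-weight composite of capped, normalized design features, on a 0-100 scale."""
    normalized = [min(getattr(trial, name), cap) / cap for name, cap in CAPS.items()]
    return 100 * sum(normalized) / len(normalized)

# A multi-site phase 3 design scores far higher than a small single-site study.
print(f"{complexity_score(TrialDesign(12, 45, 3, 180)):.1f}")  # ~61.3
print(f"{complexity_score(TrialDesign(3, 15, 2, 1)):.1f}")     # ~18.4
```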

As complexity mounted, so did interest in applying AI. With automation transforming other data-heavy fields, clinical research was an obvious candidate for computational assistance.


Encoding Judgment

The conceptual foundation stretches back to the 1970s at Stanford University. MYCIN, an early expert system, used a rule-based inference engine to diagnose bacterial infections. It guided physicians through a decision tree, ranked likely pathogens by probability, and proposed antibiotic regimens complete with justifications.

Although MYCIN was never deployed clinically, it demonstrated how domain-specific reasoning could be encoded into machine logic. Its successor, ONCOCIN, later applied similar principles to chemotherapy management, serving as an early example of AI supporting the enforcement and adjustment of treatment protocols.
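
For a flavor of how that machine logic worked, here is a minimal sketch of MYCIN-style inference with certainty factors (CFs). The rules, findings, and CF values below are invented for illustration; the real system encoded roughly 600 expert-written rules in Lisp.

```python
# Minimal sketch of MYCIN-style rule-based inference with certainty factors.
RULES = [
    # ({premise findings}, conclusion, rule certainty factor)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.7),
    ({"gram_positive", "coccus", "grows_in_clusters"}, "staphylococcus", 0.8),
]

def combine_cf(cf_old: float, cf_new: float) -> float:
    """MYCIN's rule for merging two positive certainty factors for one conclusion."""
    return cf_old + cf_new * (1 - cf_old)

def infer(findings: dict[str, float]) -> dict[str, float]:
    """Fire every rule whose premises are all present; a conclusion's CF is
    min(premise CFs) * rule CF, merged across rules with combine_cf."""
    conclusions: dict[str, float] = {}
    for premises, conclusion, rule_cf in RULES:
        if premises <= findings.keys():
            cf = min(findings[p] for p in premises) * rule_cf
            conclusions[conclusion] = combine_cf(conclusions.get(conclusion, 0.0), cf)
    return conclusions

# Clinician-supplied evidence, each item carrying its own certainty factor.
print(infer({"gram_negative": 1.0, "rod_shaped": 0.9, "anaerobic": 0.8}))
# -> {'bacteroides': 0.56}
```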

Edward H. Shortliffe, principal developer of MYCIN (photo courtesy of North-Holland)

These early prototypes foreshadowed what is now a full-fledged transformation. By the 2010s, AI began moving beyond academic settings and into operational roles. IBM’s Watson for Clinical Trial Matching, an early example, analyzed patient records to identify eligibility for oncology studies.

Within half a decade, AI-native companies like Tempus and Deep 6 AI (since acquired by Tempus) were applying natural language processing, machine learning, and predictive modeling to clinical trial workflows, from protocol generation to patient recruitment.

Some regulatory agencies are now actively restructuring around the rise of AI. In 2025, the U.S. FDA announced plans to phase out animal testing requirements for certain drug categories, introducing a framework of New Approach Methods (NAMs) including AI-based toxicity models, organoids, and organ-on-a-chip systems.

🔹 The freshly launched Axiom Bio is one of the companies well-aligned with the FDA’s shift, raising $15 million to build AI models that replace animal testing, starting with drug-induced liver injury. Its team has created a large dataset using human liver cells and high-content imaging to predict how compounds might affect liver function.

The FDA’s policy shift builds on a broader reorientation: in January, the agency released draft guidance for AI use in regulatory decisions, and by May it had committed to rolling out generative AI tools across all centers by mid-2025 to accelerate drug review workflows. Parallel efforts at the NIH include a new office tasked with scaling non-animal research technologies.

In Europe, the newly adopted EU Artificial Intelligence Act introduces binding oversight for AI systems deemed “high-risk” (e.g. those used in patient recruitment, endpoint modeling, or synthetic control arms).

However, AI tools developed and used exclusively for scientific research (early-stage trial design and modeling) are explicitly excluded under Article 2(6). This carve-out aims to protect research flexibility, but it raises concerns about regulatory blind spots as tools migrate into operational use. Either way, systems handling patient data must still comply with GDPR.

As Europe’s example shows, clinical adoption remains cautious, particularly where opaque algorithms intersect with high-stakes decisions. Concerns center mainly on explainability, liability, and trust.

Given these concerns, the field appears to be moving toward more auditable, context-aware implementations of AI, especially in functions where human oversight can remain embedded. Several core areas have emerged in which AI acts as a structured partner in trial decision-making.

To make this navigable, we break the current landscape into four distinct (by no means exhaustive) focus areas, each seeing tangible activity from emerging platforms and companies operating at different stages of trial development:

  1. Trial design and patient recruitment

  2. Synthetic control arms and simulation

  3. Digital twins for personalized modeling

  4. Operational support for decentralized trials


📋 Clinical Trial Design & Patient Recruitment

Designing a clinical trial is one of the trickiest parts of drug development. It involves figuring out dosing, picking the right endpoints, estimating how many participants are needed, and making sure the whole protocol meets both scientific goals and regulatory standards. More and more, AI is being brought in at this early stage to support and extend expert judgment…
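
One concrete piece of that early-stage arithmetic is estimating how many participants a trial needs. Below is a minimal sketch of the standard normal-approximation formula for a two-arm superiority trial with a continuous endpoint; the numbers are illustrative, and this is generic statistics rather than any particular platform’s method.

```python
import math
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants per arm to detect a standardized effect (Cohen's d)
    in a two-sided, two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to target power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# A modest effect (d = 0.3) at 80% power already needs ~175 participants per arm.
print(n_per_arm(0.3))
```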
