Weekly Tech+Bio Highlights #79: A Pharma Factory in an Egg
Lilly's $2.75B generative chemistry deal and why AI research might be locked into 'hypernormal' science
This week pharma keeps signing billion-dollar AI drug discovery deals, but the upfront commitments are a fraction of the headline numbers, with the rest contingent on the technology actually delivering. Meanwhile, the most surprising entry comes from a direction nobody was watching, and it involves poultry.
Hi! This is BiopharmaTrend’s weekly newsletter, Where Tech Meets Bio, where we explore technologies, breakthroughs, and cutting-edge companies.
If this newsletter is in your inbox, it’s because you subscribed, or someone thought you might enjoy it. In either case, you can subscribe directly by clicking this button:
There’s a quick format poll at the bottom — we would appreciate your vote.
🤖 AI x Bio
(AI applications in drug discovery, biotech, and healthcare)
Foundation models and atlases keep spreading:
Xaira’s virtual cell model, which we covered last week, got a longer treatment in a Decoding Bio interview where Bo Wang and the team talked through their bet on interventional Perturb-seq data over observational atlases.
Bioptimus launched STELA, a spatial biology atlas profiling 100,000 patient tissue specimens across three continents, built with 10x Genomics and Broad Clinical Labs. At roughly 20x the scale of existing spatial biology datasets, the atlas feeds into the company’s biology world model.
Meta open-sourced TRIBE v2, a brain encoding model trained on 700+ subjects that predicts whole-brain fMRI responses to video, audio, and text at 70x the resolution of its predecessor, functioning as a kind of digital twin for in-silico neuroscience experiments.
Not strictly a techbio item, but relevant to the atlases and the neurotech thread we covered in our Neurotech Review: a UNC-led team published the first comprehensive atlas of human brain functional connectivity across the full lifespan, drawing on fMRI scans from 3,556 people aged 16 days to 100 years (Nature).
Pushing back on the general mood of enthusiasm, a recent Nature Biotechnology review co-authored by Eric Topol defines “generalist biological AI” (GBAI)—unified systems that interpret, synthesize and scale across several biological domains—and maps both the progress and the persistent gaps.
Accomplishments:
Language models for nucleotides, proteins, cellular data, and metabolomics (scGPT, Evo 2, ESM-2, DreaMS)
Structure prediction and design (AlphaFold 3, RoseTTAFold All-Atom, RFdiffusion, Boltz-2, ATOMICA)
Microscopy and histology image analysis (CellPose, Virchow2, UNI)
Spatial transcriptomics integration toward virtual cells (Nicheformer, scGPT-spatial, CORAL)
Early agentic frameworks for autonomous discovery (Virtual Lab, Biomni, SpatialAgent)
Challenges:
Foundation models are expensive to train, difficult to interpret, prone to hallucinations, and “likely less effective than they appear” given team-selected benchmarks and proof-of-concept evaluations
Simpler specialized models consistently match or beat foundation models on tasks like gene perturbation prediction, raising the question of whether foundation model development is necessary for marginal improvements
Context length limits prevent capturing long-range genomic dependencies such as enhancers and epigenetic features
Joint encoding spaces across modalities remain underdeveloped—incorporating gene expression alongside nucleotides and amino acids is “not nearly as intuitive”
Biological complexity itself remains hard to capture in any single model
Data scarcity for RNA structures, eukaryotic genomes and perturbation response datasets hampers generalization
Experimental validation is shallow: most models lack wet-lab validation, the gap spans the in silico, in vitro, and in vivo levels, and organoids are proposed as a bridge
Path forward: rather than treating foundation models as the default solution, the authors argue for selectively deploying specialized models where they outperform, integrating them within agentic workflows that can call domain-specific tools as needed. They also call for combining multimodal datasets (Human Cell Atlas, HuBMAP, HTAN), validating predictions experimentally at scale (including through organoid-based closed-loop systems), and building toward virtual cells that can simulate perturbation responses across biological layers.
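The "specialized tools inside agentic workflows" recommendation boils down to a routing pattern. A minimal sketch of that pattern, with hypothetical stand-in functions in place of real models (none of these tool names come from the review):

```python
from typing import Callable, Dict

# Stand-ins for domain-specific models (hypothetical, for illustration only).
def predict_structure(seq: str) -> str:
    return f"structure({seq})"          # e.g. a dedicated folding model

def predict_perturbation(gene: str) -> str:
    return f"perturbation({gene})"      # e.g. a simple task-specific predictor

# Stand-in for a generalist foundation model used as the fallback.
def generalist(task: str, payload: str) -> str:
    return f"generalist({task}, {payload})"

# Registry of specialized tools keyed by task type.
TOOLS: Dict[str, Callable[[str], str]] = {
    "structure": predict_structure,
    "perturbation": predict_perturbation,
}

def route(task: str, payload: str) -> str:
    """Prefer a specialized tool where one exists; fall back to the generalist."""
    tool = TOOLS.get(task)
    return tool(payload) if tool else generalist(task, payload)
```

The point of the pattern is that the foundation model is demoted from default answer-generator to fallback, which is exactly the inversion the review argues for.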
📝 An essay in Asimov Press by Alvin Djajadikerta argues that current AI training architectures are structurally locked into what he calls ‘hypernormal science’—not as a side effect of scale, but because systems trained to minimize prediction error against existing datasets cannot derive concepts outside the variables those datasets encode. He points to early empirical evidence: a study of 41 million papers found AI-augmented research covers ~5% less topical ground despite higher output. Basically, AI scientist pipelines face a built-in evaluation problem in that the only available proxy for idea quality is consistency with the current paradigm, which is exactly what paradigm-shifting work violates.
⚡ In brief
🔹 insitro and Bristol Myers Squibb expanded their ALS collaboration with two additional AI-identified targets (ALS-2 and ALS-3), joining the first target BMS nominated in December 2024, all found through insitro’s Virtual Human platform.
🔹 Tempus and Daiichi Sankyo partnered on AI-driven biomarker discovery for an undisclosed ADC program in oncology, using Tempus’ multimodal foundation model for patient stratification and response mapping.
🔹 Iambic Therapeutics disclosed the chemical structure of its brain-penetrant HER2 inhibitor, now in Phase 1b (the program went from inception to IND in two years). Iambic also received a £4.5M compute grant to train the next generation of their protein-ligand structure prediction model.
🔹 Philips got FDA 510(k) clearance for an AI copilot that tracks mitral valve repair devices in real time during minimally invasive heart procedures by fusing echo and fluoroscopy imaging.
🔹 Ataraxis AI launched Breast CTX, a test that uses causal inference to estimate individualized chemotherapy benefit in breast cancer — separating baseline prognosis from treatment effect rather than relying on average group response. The test is said to be validated on 10k+ patients and is already adopted at NCI-designated centers.
🔹 Purple Biotech partnered with Converge Bio to apply generative AI to its CAPTN-3 tri-specific antibody platform for solid tumors.
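The "separate prognosis from treatment effect" idea behind the Ataraxis test maps onto standard uplift modeling. A toy sketch of a T-learner, using invented numbers and stratum means in place of real outcome models (Ataraxis's actual method is not public):

```python
from statistics import mean

# (risk_stratum, treated, outcome) -- hypothetical cohort, for illustration only.
cohort = [
    ("high", 1, 0.60), ("high", 1, 0.55), ("high", 0, 0.30), ("high", 0, 0.35),
    ("low", 1, 0.50), ("low", 1, 0.52), ("low", 0, 0.48), ("low", 0, 0.47),
]

def fit(treated_flag: int) -> dict:
    """Stand-in 'outcome model': mean outcome per stratum within one arm."""
    strata: dict = {}
    for stratum, treated, outcome in cohort:
        if treated == treated_flag:
            strata.setdefault(stratum, []).append(outcome)
    return {s: mean(ys) for s, ys in strata.items()}

treated_model = fit(1)   # outcome model fitted on the treated arm
control_model = fit(0)   # baseline prognosis, fitted on controls

def predicted_benefit(stratum: str) -> float:
    """Individual benefit = treated prediction minus control (prognosis) prediction."""
    return treated_model[stratum] - control_model[stratum]
```

In this made-up cohort the high-risk stratum shows a large predicted benefit and the low-risk stratum almost none — the kind of patient-level separation an average group response would hide.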
Brian Buntz at R&D World profiles the Wiley-OpenEvidence deal as a case study in why curated content is becoming a bottleneck for medical AI. Josh Jarrett (Wiley’s SVP of AI growth) argues the value split between AI technology and trusted source content is closer to 50-50 than most people assume, and that off-the-shelf ‘deep research’ tools search broadly but fail to go deep into proprietary or domain-specific datasets. He also flags a novice-expert gap: generalist AI is most dangerous for users in the middle of the expertise spectrum, who lack the domain knowledge to catch errors but trust the output more than true novices do. Note: since 2025, Wiley has been collaborating with Anthropic on principles for how AI agents should interpret scientific literature: distinguishing preprints from versions of record, handling retractions, and treating published evidence as evolving dialectic rather than settled fact.
A small open-source moment: PhytoVenomics released Blatant-Why, an autonomous antibody design agent for Claude Code that wires together BoltzGen, Tamarind Bio, and Adaptyv Bio into a single target-to-lab-ready-VHH pipeline. Their point is that the agentic orchestration layer is not the hard part—models and APIs are open and available.
💰 Money Flows
(Funding rounds, IPOs, and M&A for startups and smaller companies)
🔹 Biggest deal of the week: Insilico Medicine and Eli Lilly signed a $2.75 billion agreement covering AI-driven discovery of oral small-molecule therapeutics across multiple programs, with Lilly getting exclusive worldwide commercialization rights. Insilico gets $115 million upfront, with the rest in development, regulatory, and commercial milestones plus tiered royalties. Lilly is essentially plugging its own target selection into Insilico’s generative chemistry engine: Lilly picks the biology, Insilico generates the molecules.
The two have been working together since 2023, first through a software licensing arrangement, then a ~$100 million compound-design collaboration in 2025, so this deal is the third iteration.

Insilico listed on the Hong Kong Stock Exchange on December 30 in a ~$293 million IPO, where Lilly itself was a cornerstone investor. The company now has 28 preclinical candidates nominated since 2021 across over 40 programs, with 12 IND approvals and its lead idiopathic pulmonary fibrosis asset having reported Phase 2a data in Nature Medicine last June. STAT says Insilico’s pipeline page was briefly updated to show a GLP-1 candidate out-licensed to an undisclosed partner, though no annotation is visible there yet. Insilico’s CEO has also mentioned he wants to “develop the next” Mounjaro.
🔹 Merck put $20 million upfront into an AI-enabled target discovery deal with Quotient Therapeutics (Flagship-backed, up to $2.2 billion in milestones) for inflammatory bowel disease. Quotient’s angle is somatic genomics: mapping mutations that accumulate in individual tissues over a lifetime rather than relying on inherited variants. Merck already has a major IBD position after spending nearly $11 billion on Prometheus in 2023.
🔹 Broadly, IQVIA’s recent 2025 annual numbers painted a split picture: overall biopharma funding down 20% to $82 billion, IPOs at a 10-year low ($3 billion), but mega-deals above $2 billion surging from 27 to 68, worth a combined $360 billion.
🚀 A New Kid on the Block
(Emerging startups with a focus on technology)
🔹 Neion Bio hatched out of stealth this week with a somewhat odd (but not entirely new) premise: genetically engineer chickens so their eggs produce pharmaceutical proteins—the same complex biologics (Humira, Keytruda) that currently require billion-dollar Chinese hamster ovary cell facilities to manufacture. Co-founder Sam Levin says the approach could cut production costs by 10-100x, and that 3,900 hens could cover global Humira demand. They’ve announced a deal to develop three compounds with a major pharma partner they won’t name yet.
The company has 50 engineered roosters so far, and the next generation of hens is where we see whether the birds reliably produce the target proteins at useful concentrations. People have been trying to make pharma-chickens work for 30 years with little to show for it (one FDA-approved egg-produced drug from 2016, Kanuma, is still priced at $310K/year per patient), but Neion says newer primordial germ cell techniques finally make the engineering practical. The more ambitious play is to skip embryo work entirely and use viral gene delivery in adult hens, which Neion’s CSO describes as “gene therapy for chickens.”
🔹 Triangle Health raised $4M in pre-seed for an AI health navigation platform pitched around the agentic AI angle — the idea being that a patient’s full medical record becomes the context layer for AI systems that can research treatments on their behalf. Similar to the Australian dog-cancer story we covered a few issues back, but productized. Origin story: co-founder Arun Verma got a surprise Grade 2 glioma diagnosis from a Prenuvo scan in 2023, had surgery, then spent a year using ChatGPT to research his condition and found two experimental therapies his oncologist agreed to try. He’s now doing a Master’s in AI at Johns Hopkins and built Triangle to replicate that research process for patients who can’t do it themselves.
Read also:
Three Big Ideas in Aging Research That Could Shift the Therapeutic Landscape