
CoDEx 2026 Research Talks

The following Research Talks will be presented at CoDEx 2026.

Details on presentation times and room locations will be provided as soon as they become available.

Automatic Detection of Ultrasonic Vocalizations Using Deep Learning Methods: Exploring the Effects of Neonatal Hypoxia on Vocal Behavior in Rats

Yilan Wei, PhD Student, School of Communication

Rats can emit ultrasonic vocalizations (USVs) beyond the range of human hearing (>20 kHz), which are widely studied in research on communication, emotional changes, and language development. However, the short duration and high density of USV events make traditional manual annotation inefficient, limiting their application in large-scale behavioral studies. 

We are developing a deep learning-based automated detection tool that combines convolutional neural networks with Transformer architecture to automatically detect USVs. We applied this method to behavioral analysis in a neonatal rat hypoxia model to explore the impact of hypoxia on early vocalization behavior. Preliminary results show that rats in the hypoxia group produced fewer USVs than those in the control group, and USV counts decreased further with increasing hypoxia severity.
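As a rough illustration of what automated USV detection involves, the sketch below flags candidate ultrasonic events by thresholding band energy in a precomputed spectrogram. This is a simplified energy-based baseline with made-up parameters, not the CNN+Transformer detector described in the talk.

```python
import numpy as np

def detect_usv_events(spectrogram, freqs, times, f_min=20_000.0, k=3.0):
    """Flag time frames whose energy above f_min exceeds a threshold.

    spectrogram: 2D array (n_freqs, n_times) of power values.
    freqs, times: axis values for the rows and columns.
    A simplified energy-threshold baseline, not the talk's deep model.
    """
    band = spectrogram[freqs >= f_min, :]          # ultrasonic band only
    energy = band.sum(axis=0)                      # per-frame band energy
    thresh = np.median(energy) + k * energy.std()  # crude noise threshold
    active = energy > thresh
    # Merge consecutive active frames into (start_time, end_time) events.
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = times[i]
        elif not a and start is not None:
            events.append((start, times[i]))
            start = None
    if start is not None:
        events.append((start, times[-1]))
    return events
```

A learned detector replaces the fixed threshold with features that generalize across recording conditions, which is where the reported cross-species robustness comes from.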

These findings suggest that neonatal hypoxia may negatively affect early vocal behavior and social communication. This method improves data processing efficiency and shows robust cross-species generalization, demonstrating the potential of deep learning methods for USV research.

Ultrasound-Mediated BBB Disruption Promotes Microglial Reprogramming and Enhanced Endothelial Crosstalk in the Human Brain

Víctor Andrés Arrieta, Clinical Research Associate, Health and Biomedical Informatics (HBMI), Feinberg School of Medicine

The brain has a natural “protective wall” called the blood–brain barrier (BBB). It keeps harmful things out, but it also blocks many medicines from reaching brain tumors like glioblastoma (GBM). In a clinical trial, we used gentle ultrasound plus tiny bubbles in the bloodstream to briefly open this barrier in patients with recurrent GBM. A key advantage of our study is that we collected two matched tissue samples from each patient, only 45 minutes after treatment: one from the ultrasound-treated area and one from a nearby untreated area.

In this talk, I’ll show how we used computation to combine several types of data (single-cell sequencing, gene-regulation data, spatial maps of where cells are in the tissue, and multiplex imaging) to build a clear picture of what changes right after BBB opening. We find a rapid immune response, especially in microglia, and stronger communication between immune cells and blood vessels. This “BBB-opening response signature” could help doctors better time drug delivery and design smarter combination therapies.

Learning as Graph Traversal

Pipob Puthipiroj, PhD Student, School of Education and Social Policy

Recent research shows that retrieval is not a passive process but actively strengthens memory and deepens understanding. This paper explores quizbowl—a competitive academic trivia game—as a unique environment for studying retrieval-based learning, where the knowledge to be mastered is both extensive and well-defined.

By assembling and analyzing over 300,000 questions from 600 tournaments, we construct knowledge graphs for 36 subjects, using text clustering and preprocessing to map connections between concepts. These graphs model learning as a graph traversal problem, reflecting how players revisit topics at increasing levels of complexity. Traversing these graphs enables the generation of high-quality flashcards, and when combined with spaced repetition algorithms, these tools systematically reinforce retention.
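The graph-traversal framing can be illustrated with a minimal sketch: a breadth-first walk over a toy concept graph yields an order in which a learner might revisit linked topics, broadening outward from a starting concept. The graph and concept names below are hypothetical; the actual pipeline builds its graphs from clustered tournament questions.

```python
from collections import deque

def study_order(graph, start):
    """Breadth-first traversal of a concept graph.

    graph: dict mapping a concept to the concepts it links to.
    Returns concepts in the order a learner would revisit them.
    A toy illustration of 'learning as graph traversal', not the
    paper's flashcard-generation pipeline.
    """
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```

The resulting order can then seed flashcard decks, with a spaced-repetition scheduler deciding when each concept resurfaces.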

Together, this approach demonstrates how computational methods can transform raw educational data into effective study resources.

Human vs. Generative AI: A Comparison of Creative Strategies

Yulin Yu, Postdoc, Kellogg School of Management

The rapid advancement of generative AI has renewed optimism that creativity—central to innovation across domains from science to art—can be substantially expanded through human–machine collaboration. While AI systems can recombine vast bodies of existing knowledge in novel ways, prior work suggests that human creativity retains distinctive advantages, particularly in flexibility and contextual adaptation. To enable effective coordination between human and machine creativity, we ask a fundamental question: How do generative AI systems achieve creativity compared to humans, and what processes underlie these differences?

We address this question using the Divergent Association Task, a widely used and replicable measure of creative ideation. Analyzing over 5,000 human and machine DAT responses collected worldwide, we apply text and network analyses to compare creative strategies at both individual and collective levels.

First, drawing on theories of cognitive simplicity, we examine step-by-step vocabulary selection by operationalizing word complexity along five dimensions: age of acquisition, word frequency, word length, spoken-language tendency, and written-language tendency. We track the first through seventh words in each response. Second, we analyze categorical jumps between successive words, as larger semantic shifts are theorized to signal greater creative potential. Using WordNet, a tree-structured lexical network, we quantify per-response category counts as well as the average, variance, and maximum semantic jump distances. Finally, we examine collective behavior by comparing the total semantic space covered by humans and machines.
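A minimal sketch of the per-response jump statistics, assuming toy root-to-word hypernym paths in place of real WordNet chains: the distance between successive words is their path length through the lowest common ancestor, and the average, variance, and maximum are taken over those jumps.

```python
def tree_distance(path_u, path_v):
    """Path length between two nodes in a tree, given each node's
    root-to-node path (a stand-in for a WordNet hypernym chain)."""
    common = 0
    for a, b in zip(path_u, path_v):
        if a != b:
            break
        common += 1
    # Steps up from u to the lowest common ancestor, then down to v.
    return (len(path_u) - common) + (len(path_v) - common)

def jump_stats(paths):
    """Average, variance, and max semantic jump between successive words.

    paths: root-to-word hypernym paths, one per word, in the order the
    words were produced. A toy stand-in for the WordNet-based analysis.
    """
    jumps = [tree_distance(u, v) for u, v in zip(paths, paths[1:])]
    mean = sum(jumps) / len(jumps)
    var = sum((j - mean) ** 2 for j in jumps) / len(jumps)
    return mean, var, max(jumps)
```

The category names below are invented for illustration; real hypernym chains come from WordNet synsets.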

Our findings reveal three key insights. First, at the individual level, humans tend to use simpler and more common words than machines, yet collectively generate substantially greater lexical diversity (4,048 vs. 749 unique words). Both humans and machines progress from simpler to more complex words over time; however, machines exhibit a steeper late-stage increase in complexity. Second, at the categorical level, humans explore fewer broad categories per response but traverse more fine-grained categories and collectively cover a much wider semantic space.

Finally, while human word-to-word semantic distances are smaller on average, they exhibit higher variance, consistent with Lévy-flight–like search patterns. Together, these results suggest that the advantage of human creativity lies less in individual outputs and more in collective variation, highlighting a key complementarity between human and machine creative processes.

Comparative genomics of herpes simplex virus 2 isolated from a maternal-neonatal dyad reveals high consensus sequence homology as well as minor variant diversity

Reem Abu Rass, Postdoc, Feinberg School of Medicine

Neonatal herpes simplex virus (nHSV) disease most often occurs following mother-to-neonate transmission during childbirth and can be severe and even fatal. However, little is known about the maternal viral population, how it differs from the neonatal viral population, what viral genomic features may contribute to successful transmission, and what genomic features arise de novo in the infant. We performed deep whole-genome sequencing of HSV-2 clinical samples obtained from a maternal-neonatal dyad to investigate viral population dynamics at both the consensus and minor variant levels.

While consensus-level analysis revealed nearly identical viral genomes, analysis of minor variants uncovered a more complex pattern: the maternal HSV-2 population harbored minor variants that were not transmitted, while the neonatal population exhibited new, de novo minor variants. These findings are particularly interesting given the short-term adaptations that weaken the maternal immune response to allow fetal tolerance, and the underdeveloped state of the neonatal immune response immediately after birth, both of which may allow diverse viral populations to emerge. This work provides a first look at the genomic variation that arises during HSV-2 transmission and a framework for future analysis of additional maternal-neonatal pairs.

iVox: Deep Interpretable Survival Prediction for Personalized Radiotherapy Dose Optimization

Sagnik Sarkar, Senior Research Technologist, Feinberg School of Medicine

Stereotactic body radiation therapy (SBRT) is a standard treatment for early-stage lung cancer, yet predicting local tumor recurrence remains challenging. Current clinical practice lacks quantitative tools to assess how patient-specific dose distributions influence individual outcomes, limiting opportunities for personalized treatment optimization.

Methods: A deep learning framework integrating multimodal imaging with survival analysis was developed to predict local failure risk following lung SBRT. The model was trained on a multi-institutional cohort of 822 patients, processing over 3,200 three-dimensional volumetric images (CT, dose, contours, fractionation) as four-channel 64³-voxel tensors. A 3D ResNet backbone extracts spatial features, with dual prediction heads providing radiomic auxiliary supervision (256 prognostically selected biomarkers) and patient-specific log-hazard outputs. Training employed extensive data augmentation and GPU acceleration. Cox proportional hazards calibration converts predictions into absolute survival probabilities. Voxel-wise attribution maps are computed via Integrated Gradients (50 integration steps) and gradient backpropagation with SmoothGrad noise tunneling (32 samples), followed by spatial filtering to produce interpretable risk localization.
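The calibration step can be sketched in one line. Under a Cox proportional hazards model, a patient-specific log-hazard η scales a baseline survival curve as S(t | x) = S0(t)^exp(η). The function below assumes a baseline survival value already estimated from the training cohort; it is a minimal sketch of the idea, not the study's fitted model.

```python
import math

def survival_probability(baseline_survival, log_hazard):
    """Convert a model's log-hazard into an absolute survival probability.

    Cox proportional hazards: S(t | x) = S0(t) ** exp(eta), where
    S0(t) is the baseline survival at time t (here a hypothetical
    cohort-estimated value) and eta is the patient-specific log-hazard.
    """
    return baseline_survival ** math.exp(log_hazard)
```

A positive log-hazard pushes the curve below baseline (higher risk), a negative one pushes it above, which is what lets the framework translate dose-driven hazard changes into projected local failure probabilities.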

Results: The model achieved a concordance index of 0.73 on internal validation and 0.71 on an independent external test set, outperforming classical Cox baseline (C-index ~0.64). Attribution analysis revealed elevated hazard ratios in peritumoral regions for delivered dose, suggesting microscopic disease extension requiring dose escalation, while intratumoral CT features associated with recurrence risk reflect intrinsic biological susceptibility. The calibrated Cox framework establishes a quantitative relationship between prescription dose, geometric dose patterns, and projected local failure probability, enabling calculation of recommended dose modifications for target tumor control rates.

Conclusion: This framework bridges deep learning with interpretable survival analysis to provide personalized, spatially resolved risk assessment for lung SBRT. By identifying high-risk regions and quantifying the dose-response relationship, the approach offers a pathway toward individualized radiotherapy planning guided by predicted treatment outcomes.

Probabilistic Phylodynamic Models for Viral Evolution Across Biological Scales

Seth Borrowman, Research Technologist, Feinberg School of Medicine

Our talk shows how probabilistic models can uncover hidden patterns in how viruses evolve and spread using large genomic datasets. By applying a flexible phylodynamic framework across scales – from infections within a host to international transmission – we improve understanding of disease processes and strengthen the evidence base for public health decisions.

Neutron Star Drag Race: Simulating Neutron Stars in a Common Envelope

Nicole Flors, PhD Student, Weinberg College of Arts and Sciences

Most stars in the universe exist in binary systems, and when the stars reach the end of their lives, they can become binary systems containing compact objects: either black holes or neutron stars, the latter being highly magnetized, rapidly spinning condensed objects. These compact object binary systems power exciting, high-energy phenomena like X-ray binaries, millisecond pulsars, and gravitational waves. Common envelope episodes, in which a neutron star is engulfed in the envelope of its companion star, may explain how these systems form. However, the physics of this critical evolutionary phase is not well understood. The outcome (merger versus survival as a close binary system) depends on the energy and angular momentum exchanged between the neutron star and the envelope.

Because common envelope evolution spans extreme scales in both space and time, we adopt a “wind-tunnel” approximation, modeling the envelope as a magnetized flow past the neutron star to reduce computational expense. Existing work has studied common envelope evolution with this approach but has mainly used pure hydrodynamics; a realistic neutron star’s strong magnetic fields and rapid rotation make it necessary to include magnetism and general relativity. We perform the first high-resolution, long-duration 3D simulations of neutron star dynamics in a wind tunnel including all relevant physics, using the GPU-accelerated, massively parallel code H-AMR. Our simulations, comprising over 50 TB of data, explore how the neutron star dipole strength, orientation, and wind magnetization affect the drag force on the neutron star and the power of outflows.

Our results elucidate the microphysics of magnetized wind accretion and will inform future binary evolution models and predictions. Future work will include more realistic envelope density models to better connect with observations of compact object binaries relevant to the upgraded LIGO-Virgo-KAGRA gravitational wave detectors and upcoming LISA gravitational wave mission.

Gait Analysis Through the Deployment of Markerless Motion Capture in Routine Clinical Practice

Irina Djuraskovic, PhD Student, Interdepartmental Neuroscience Program (NUIN), The Graduate School

A person’s gait pattern holds critical information for establishing a diagnosis, studying disease progression, and predicting recovery. However, current clinical outcome measures, such as the 10-meter walk test, which assesses only gait speed, tend to overlook the complexities underlying motor quality. Recent advances in markerless motion capture (MMC) have enabled the capture of detailed spatiotemporal and kinematic features, and the extraction of relevant metrics relating to coordination, asymmetry, and compensation. This work demonstrates the novel integration of MMC into the clinical workflow of a rehabilitation hospital as patients undergo treatment and shows the rich longitudinal gait analysis it enables.

However, to trust these systems in clinical practice, reliable confidence intervals showing output accuracy for an individual are critical. We recently developed a probabilistic model using variational inference to estimate posterior distributions of joint angles. Using data from 68 participants across two institutions, we validated the model against an instrumented walkway and standard marker-based motion capture, demonstrating statistically sound, reliable and well-calibrated bounds on kinematic estimates across clinical populations. This work also highlights the potential use of this probabilistic method to identify unreliable outputs without the need for ground-truth instrumentation.
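One way to check such bounds, sketched here with hypothetical inputs: compute the fraction of ground-truth joint angles that fall inside the model's predicted credible intervals. For well-calibrated intervals this empirical coverage should match the nominal level (e.g. near 0.95 for 95% intervals).

```python
def interval_coverage(lowers, uppers, truths):
    """Empirical coverage of predicted credible intervals.

    lowers/uppers: per-sample interval bounds (e.g. joint angles, deg).
    truths: ground-truth values from marker-based capture.
    A minimal sketch of a calibration check; the inputs here are
    hypothetical, not data from the study.
    """
    inside = sum(1 for lo, hi, t in zip(lowers, uppers, truths)
                 if lo <= t <= hi)
    return inside / len(truths)
```

Coverage far below the nominal level flags overconfident outputs, which is the property that lets unreliable estimates be identified without ground-truth instrumentation.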

Video data collected from 30 participants with varying etiologies were processed, and the kinematics extracted during rehabilitation were used to calculate relevant spatiotemporal measures, range of motion, and the Gait Deviation Index (GDI). The GDI, a composite measure of how gait kinematics deviate from those of able-bodied controls, captures features of gait quality that current standard clinical outcome measures do not detect. Our results show that implementing this workflow in the clinic provides a significantly more holistic and accurate understanding of an individual’s gait, while also demonstrating the feasibility and efficacy of MMC integration by overcoming common barriers such as specialized laboratories, equipment, and time-intensive procedures.

Contempt

Nathan Reitiner, Postdoctoral Fellow, Pritzker School of Law

Contempt of court is a medieval doctrine born in an era when bodily mutilation was considered a reasonable penalty. Surviving the journey from England to America, contempt evolved into what scholars have called the “Proteus of the legal world”: an amorphous doctrine that permits abuse more readily than it secures order. Commentators catalogue a litany of troubling examples—eye rolls, uttering “how come,” even taking an audible breath—each allegedly sufficient to earn a trip to jail. The conventional conclusion follows naturally: contempt has no place in the twenty-first century.

But what if these examples reflect bad apples rather than a rotten orchard? What if the prevailing account rests on a systematically skewed sample? Like much of legal scholarship, the literature on contempt relies heavily on published opinions housed in databases such as Westlaw and Lexis. Yet published opinions represent a thin and highly selective slice of judicial behavior. If so, the existing narrative may tell us more about what is controversial enough to be published than about how contempt actually functions in everyday trial courts.

This Article offers a different account. Drawing on the largest empirical dataset on contempt assembled to date—over 400,000 documents spanning more than 12,000 cases—we combine qualitative coding with quantitative analysis to examine how contempt operates on the ground. The results are surprising. The contempt that dominates headlines and casebooks is not the contempt that predominates in practice. Far from a shapeshifting Proteus, contempt appears largely routine and administrative—quotidian rather than chaotic—and continues to serve a functional role in contemporary courts. 

CalibrAI: Evaluating and Tuning Safety-Usability Tradeoffs in LLM-Based Systems

Dheeptha Rai, Graduate Student, McCormick School of Engineering and Applied Science

Safety mechanisms in large language model-based systems are often evaluated using aggregate metrics that obscure how systems fail in practice. CalibrAI introduces a calibration-based evaluation framework that separates user behavior generation from outcome assessment, making false positive and false negative tradeoffs explicit across safety thresholds. The talk examines how evaluation design choices shape our understanding of risk in deployed AI systems, and why aggregate safety scores can conceal meaningful shifts in system behavior.
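The tradeoff the framework makes explicit can be sketched as a threshold sweep: for each safety threshold, compute the false positive rate (benign requests blocked) and false negative rate (unsafe requests allowed). The scores and labels below are hypothetical, and this is only the outcome-assessment half of the idea, not CalibrAI's full pipeline.

```python
def error_tradeoff(scores, labels, thresholds):
    """False-positive and false-negative rates across safety thresholds.

    scores: model risk scores in [0, 1]; labels: 1 = genuinely unsafe.
    A request is blocked when its score meets the threshold.
    Returns {threshold: (fpr, fnr)}.
    """
    negatives = labels.count(0)
    positives = labels.count(1)
    out = {}
    for t in thresholds:
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        out[t] = (fp / negatives, fn / positives)
    return out
```

A single aggregate score collapses this curve to one point, which is exactly how threshold-dependent shifts in behavior get concealed.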

Designing Gen AI Interaction Paradigms to Support User Creativity in Music Production

Katherine O'Toole, PhD Student, School of Communication

Generative AI systems have introduced new paradigms for music creation by enabling users to produce audio through simple text-based prompting, without requiring prior musical training or technical expertise. While these tools lower barriers to entry and broaden participation in creative production, they risk reducing opportunities for learning, skill development, and creative agency. This project investigates how insights from creativity research can be translated into computational interventions that support more meaningful human–AI co-creativity in music generation.

We present a scalable experimental framework for evaluating whether targeted interventions can improve users’ ability to explore musical possibility spaces and generate more diverse creative outputs when working with text-to-music AI tools. Participants interact with a custom web platform in which they are asked to listen to an initial audio track and generate three maximally distinct variations using a generative AI system. To operationalize creative divergence, we employ computational measures based on music information retrieval (MIR) and natural language processing. Text embeddings are used to represent prompts, and MIR-based audio embeddings are used to represent generated tracks in order to quantify the average amount of musical variation between the original stimulus and the generated artifacts using cosine similarity.
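A minimal sketch of the divergence measure, assuming hypothetical embedding vectors: the cosine distance between the original track's embedding and each variant's embedding, averaged over the generated variants. Higher values indicate more divergent output.

```python
import math

def mean_divergence(original, variants):
    """Mean cosine distance between an original track's embedding and
    each generated variant's embedding.

    Vectors here are hypothetical stand-ins for MIR audio embeddings;
    the same measure applies to text embeddings of prompts.
    """
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    return sum(1.0 - cos(original, v) for v in variants) / len(variants)
```

Comparing this score before and after an intervention gives the pre-post divergence contrast the experiment is built around.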

Participants are then randomly assigned to one of three interventions designed to support either implicit or explicit learning: (1) unstructured exploration of diverse musical examples, (2) guided active listening with an AI coach, or (3) guided compare-and-contrast of two audio tracks with an AI coach. Participants then repeat the creative task, allowing for pre–post comparisons of divergence. We hypothesize that AI-coach interventions will produce greater improvements than example-based exposure alone, particularly for non-musicians.

This work contributes new computational methods for large-scale analysis of human–AI co-creative behavior and offers evidence-based design strategies for AI systems that foster learning, creative thinking, and user agency rather than replacing them.