Poster Session
10:10 a.m. to 12:10 p.m. - Wildcat Room A and B
Effects of Chemical Short-Range Order on the Percolation Model of Passivation for Binary Alloys
Abhinav Roy, PhD Student, McCormick School of Engineering and Applied Science
Abstract: Metallic alloys exhibit corrosion resistance due to the formation of a passive oxide film. To better understand passive film formation, we develop a percolation model that accounts for chemical ordering/clustering in nominally random solid solutions and use it to investigate the effects of chemical short-range order (SRO) on the aqueous passivation of face-centered cubic (FCC) binary alloys. We employ a lattice generation algorithm that directly utilizes the first nearest neighbor Warren-Cowley SRO parameter to generate the computational lattice. We quantify the effects of SRO on the first nearest neighbor three-dimensional (3D) site percolation threshold using the large cell Monte Carlo renormalization group method and find that the 3D site percolation threshold is a function of the SRO parameter. We analyze the effects of SRO on the distribution of the total number of distinct clusters in the percolated structures and find that short-ranged clustering promotes the formation of a dominant spanning cluster. We also examine the effects of SRO on the 2D–3D percolation crossover and find that the thickness of the thin film for percolation crossover is a function of the SRO parameter. Combining these results, we obtain a percolation crossover model that shows how SRO can be used as a processing parameter to tune corrosion resistance in metallic alloys. Finally, we develop SRO maps over a range of compositions and temperatures from Monte Carlo simulations using a Cluster Expansion model trained on the mixing energy of the Cu-Rh FCC alloy calculated from first principles using Density Functional Theory (DFT). When coupled with the percolation crossover model, such SRO maps demarcating regions of positive and negative Warren-Cowley SRO parameters can directly inform the passive film formation behavior of binary alloys.
Computational component: This research is entirely computational: we employ different statistical and numerical algorithms to solve each component of the project. The computational lattice is generated using a custom algorithm that accounts for short-range ordering in practical alloys. Thereafter, we develop a modified Hoshen-Kopelman cluster labeling algorithm for the FCC lattice that sweeps through the lattice one site at a time and determines the appropriate cluster label for that site. The large cell Monte Carlo renormalization group is a statistical method that utilizes Monte Carlo-type simulations to extrapolate the percolation threshold of the infinite lattice from the distribution of finite-sized computer-generated lattices. For this, we had to run thousands of cluster labeling calculations in parallel on Quest, a task that is only possible on a large HPC system. The last part of the project uses high-throughput Density Functional Theory calculations to compute mixing energies for thousands of small structures of the Cu-Rh alloy system.
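As an illustration of the cluster-labeling step, the sketch below runs a 3D site-percolation labeling and spanning-cluster check. It is a minimal stand-in, not the study's implementation: it uses a simple cubic lattice with 6-neighbor connectivity and scipy's connected-component labeling in place of the modified Hoshen-Kopelman algorithm on the FCC first-nearest-neighbor shell, and the lattice size and occupation probability are assumed values.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
L, p = 32, 0.32                          # lattice size and site occupation probability (assumed)
lattice = rng.random((L, L, L)) < p      # random site occupation

# scipy's connected-component labeling plays the Hoshen-Kopelman role here
labels, n_clusters = label(lattice)

# The configuration "percolates" if one cluster touches both z = 0 and z = L-1
spanning = (set(np.unique(labels[:, :, 0])) & set(np.unique(labels[:, :, -1]))) - {0}
print(f"{n_clusters} clusters; spanning cluster present: {bool(spanning)}")
```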
Characterization of a Novel Computation Approach to Detect Domain Architecture in Hi-C Data using a Block Diagonal Matrix Model
Adam O'Regan, Master's Student, McCormick School of Engineering and Applied Science
Abstract: Chromatin conformation capture techniques have revolutionized the study of three-dimensional genome organization, with Hi-C providing an unbiased, high-resolution view of all DNA-DNA interactions within a biological sample by incorporating massively parallel sequencing. Hi-C has been instrumental to characterizing topologically associating domains (TADs), defined as regions of contiguous chromatin that display enhanced probability of interaction. TADs present as blocks along the symmetrical contact matrix’s diagonal, delimit boundaries of off-diagonal topological features, and associate strongly with transcriptional regulation; these properties render them an excellent focal point of Hi-C analysis. The present study introduces a novel computational approach to TAD detection using a greedy algorithm that iteratively identifies genomic loci where the on-diagonal and off-diagonal sample sets are statistically maximal. It capitalizes on simple statistical testing to detect increasingly precise TAD clusters until threshold significance cannot be reached, extracting a hierarchical organization of the diagonal as well as off-diagonal features in the process. Building on observations from previous studies of chromatin architecture, the study validates the algorithm by modeling Hi-C as a block diagonal matrix spanning the parameters of domain size, contact intensity, and noise. Results from these simulations indicate that the present greedy methodology detects TAD boundaries with high accuracy, as measured by its ability to recover one hundred percent of simulated domains even in matrices of increasing noise, size, and complexity. Further analysis reveals the constraints of this method as determined by parameters under which false discovery rises to unexpected levels; these data suggest insufficient statistical power in smaller matrices. Comparative analysis with existing methods like the Arrowhead Transformation positions the current technique as a possible advancement for analysis of a widely implemented genomic tool—Hi-C—with implications for unresolved questions in chromatin topology and genetic regulation.
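The block-diagonal validation model described above can be illustrated in a few lines of numpy. The sketch below builds a symmetric Hi-C-like matrix from enhanced-contact blocks plus Gaussian noise; the domain sizes, intensities, and noise level are invented for illustration, not the study's actual simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [40, 25, 60, 35]                      # simulated TAD sizes in bins (assumed)
background, intensity, noise = 1.0, 5.0, 0.5  # contact levels and noise scale (assumed)

n = sum(sizes)
mat = np.full((n, n), background)
start = 0
for s in sizes:                               # enhanced within-domain contact blocks
    mat[start:start + s, start:start + s] = intensity
    start += s
mat += rng.normal(0, noise, (n, n))           # add noise
mat = np.triu(mat) + np.triu(mat, 1).T        # enforce the symmetry of a contact matrix
```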
Using a Natural Experiment to Quantify the Impact of Geopolitical Shocks on the Trajectory of a Nation’s Scientific Enterprise
Huaxia Zhou, PhD Student, McCormick School of Engineering and Applied Science
Abstract: The scientific enterprise can be profoundly impacted by external geopolitical factors that reshape its developmental pathways. To investigate such effects, we focus on a natural experiment in Germany, where the national science system was partitioned after World War II and later reunified in 1990. We exploit the availability of digital repositories capturing scholarly metadata and regional economic indicators to investigate the trajectories of research activities across East and West German regions. By identifying two comparable countries, Austria and Switzerland, along with their universities, as references, we center the analysis on the impacts of partition and reunification in Germany. Our findings reveal that while a persistent East–West gap in overall scientific output remains, with the West always leading, the growth rate of scientific productivity in the East has accelerated substantially since reunification, surpassing that of the West. Our further examination of co-authorship networks shows that East German institutions have engaged in both domestic and international collaborations since reunification. We also find that increased efforts in soft sciences research are emerging. These observed transformations provide insights into how scientific enterprises adapt and evolve in response to geopolitical shocks, offering reference points for other regions experiencing similar disruptions.
Heterostructure Morphology Stability Maps in Multiphase Nanoparticles
Elodie Sandraz, PhD Student, McCormick School of Engineering and Applied Science
Abstract: In the rapidly growing field of nanocombinatorics, the advent of megalibraries of multiphase nanoparticles (NPs) has created a need for predictive computational approaches. A notable characteristic of multiphase NPs is the internal interface architecture that arises from the joining of constituent phases at phase boundaries. We propose a strategy to predict the ground-state interface architecture from the DFT-calculated surface and interfacial energies of all constituent phases as input and use this strategy to create stability maps to predict multiphase NP morphologies. A Potts model, a q-state Ising model in which spins represent discretized regions of a constituent phase, provides a flexible, discretized representation of a multiphase nanoparticle with arbitrary interfacial join morphology. Interactions between spins represent interactions between phases, or interfacial energies. To model the surface of a nanoparticle, an additional spin represents the surrounding environment, and interaction energies with this spin are surface energies. A Monte Carlo algorithm is applied to search for the lowest-energy spin configuration. The trade-off between surface and interfacial energy minimization drives the prediction of a ground-state categorical interface morphology. We calculate a biphase nanoparticle interface morphology stability map to study the relation between the three biphase morphologies we found. Next, we use this approach to search for triphase nanoparticle interface morphologies and discover 11 triphase morphologies, nine more than previously reported. With this new information, we propose a revised interpretation of experimentally reported triphase NP morphologies. Finally, we select simple systems of interest and predict their bi- and tri-phase morphologies.
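For readers unfamiliar with the method, the following is a heavily simplified sketch of a Potts-model energy with Metropolis updates. The 2D geometry, the number of phases, the energy matrix J (one surface or interfacial energy per unlike-neighbor pair), and the inverse temperature are illustrative assumptions, not the DFT-derived inputs or 3D lattice used in the study; label 0 plays the role of the environment spin.

```python
import numpy as np

rng = np.random.default_rng(2)
# J[a][b]: energy per a-b neighbor pair; 0 = environment, 1 and 2 = phases (assumed values)
J = np.array([[0.0, 1.0, 1.2],
              [1.0, 0.0, 0.6],
              [1.2, 0.6, 0.0]])
N = 24
spins = rng.integers(1, 3, (N, N))                           # particle interior: phases 1 and 2
spins[0, :] = spins[-1, :] = spins[:, 0] = spins[:, -1] = 0  # fixed environment rim

def site_energy(s, i, j):
    """Sum of pair energies between site (i, j) and its four neighbors."""
    nbrs = (s[i + 1, j], s[i - 1, j], s[i, j + 1], s[i, j - 1])
    return sum(J[s[i, j], nb] for nb in nbrs)

beta = 2.0                                   # inverse temperature (assumed)
for _ in range(50_000):                      # Metropolis updates on interior sites
    i, j = rng.integers(1, N - 1, 2)
    old = spins[i, j]
    e_old = site_energy(spins, i, j)
    spins[i, j] = rng.integers(1, 3)         # propose a new phase label
    if rng.random() > np.exp(-beta * (site_energy(spins, i, j) - e_old)):
        spins[i, j] = old                    # reject: restore the old label
```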
Evaluating Elements of Empathic Communication with Experts, Crowds, and Large Language Models
Fai Poungpeth, Undergraduate Student, Weinberg College of Arts and Sciences
Abstract: Empathic communication is an interpersonal skill that can transform a conversation from a mere information exchange into a meaningful human connection. Recent empirical studies have demonstrated that large language models (LLMs) can identify instances of empathic communication and even express empathy, but their performance depends on the conversational context. We investigate the inter-rater reliability of experts, crowds, and LLMs for empathic communication by comparing annotations of 21 empathy-related features in 200 conversations across four datasets where one partner is sharing a problem and the other is offering empathetic support. We find relatively high reliability between experts across most features, but reliability varies with the diversity, complexity, and subjectivity of the feature. Furthermore, we find that LLMs and experts have higher inter-rater reliability than crowds and experts. These results reveal the importance of contextualizing the reliability of LLM and crowd annotations with the limits of agreement between experts. Moreover, these results show that many but not all proposed features of empathic communication can be annotated reliably by both LLMs and crowds.
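As a minimal illustration of an inter-rater reliability computation of the kind reported above, the snippet below scores agreement between two raters on a single binary feature with Cohen's kappa (via scikit-learn). The labels are fabricated, and the study's exact features and reliability statistic may differ.

```python
from sklearn.metrics import cohen_kappa_score

# Fabricated annotations of one empathy feature over ten conversation turns
expert = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
llm    = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(f"Cohen's kappa: {cohen_kappa_score(expert, llm):.2f}")  # 1.0 = perfect agreement
```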
Open-Source Large Language Models Can Outperform Medical Experts in Clinical Text Summarization of Pathological Reports
Jacob John, Master's Student, McCormick School of Engineering and Applied Science
Abstract: With the ever-growing volume of electronic health records (EHRs), clinicians face the challenge of efficiently processing and extracting critical information. Summarization has emerged as an effective method for distilling key insights from pathology reports, enhancing clinical decision-making and patient outcomes. However, manually summarizing these reports imposes a significant burden on medical practitioners, affecting how they allocate their time. Large Language Models (LLMs) offer a promising solution by generating concise yet comprehensive summaries, reducing documentation workload while preserving high medical fidelity. Recent advancements in LLMs have demonstrated their ability to adapt to the evolving medical landscape, with continued improvements driven by research and hardware capabilities. In this study, we systematically evaluate multiple state-of-the-art LLMs—LLaMA 3.0, 3.1, 3.2, DeepSeek, Mistral, and Gemma—for their ability to summarize pathology reports while retaining vital clinical insights. Our approach prioritizes completeness and correctness to align with physician standards for medical documentation. To rigorously assess summarization quality, we introduce a hybrid evaluation framework that integrates objective and subjective measures. We employ traditional linguistic metrics, including ROUGE, METEOR, BLEU, and BERT-based embeddings, while incorporating a novel LLaMA-based embedding model specifically designed to handle the complexities of long-form medical summaries. We also incorporate expert-driven subjective evaluations to assess the clinical usability and potential harm of the generated summaries. Our findings reveal trade-offs among models, with some excelling in fluency and readability while others demonstrate superior content retention. By combining computational metrics with domain-expert assessments, we robustly evaluate LLM effectiveness in clinical summarization. This research thereby underscores the transformative potential of AI-driven solutions in radiation oncology, alleviating documentation burdens and improving access to critical medical information.
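To make the linguistic-metric evaluation concrete, here is a small example computing ROUGE and a smoothed BLEU score with the rouge-score and nltk packages. The reference and candidate texts are invented placeholders, not actual pathology reports or model outputs, and the study's exact metric configurations may differ.

```python
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "biopsy shows invasive ductal carcinoma grade 2 with negative margins"
candidate = "invasive ductal carcinoma grade 2 margins negative"

# ROUGE-1 and ROUGE-L between the candidate summary and the reference
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score(reference, candidate))

# Sentence-level BLEU with smoothing (short texts need it to avoid zero scores)
smooth = SmoothingFunction().method1
print(sentence_bleu([reference.split()], candidate.split(), smoothing_function=smooth))
```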
Clustering Challenges in Mental Health NLP: Evaluating Domain-Specific vs. General-Purpose Embeddings
Juwon Park, Undergraduate Student, Weinberg College of Arts and Sciences
Abstract: Background: Increasing use of large language models (LLMs) in medicine necessitates responsible implementation, particularly when leveraging specialized clinical text embedders. This study evaluates embeddings generated by a domain-specific model (MentalBERT) and a general-purpose model (all-MiniLM-L6-v2) on mental health-related text from Reddit. Using dimensionality reduction and clustering, we assess which model better captures nuances indicated by subreddit labels, while exploring the implications for clinical application integration. Methods: A Hugging Face dataset containing 54,412 Reddit posts labeled with mental health-related subreddits (e.g., r/SuicideWatch, r/Depression, r/Anxiety) was utilized in this study. Fixed-length embeddings were generated using MentalBERT (a domain-specific model) and all-MiniLM-L6-v2 (a general-purpose model) and visualized via t-SNE for clustering. K-Means clustering (k=5) was applied to the t-SNE embeddings, and performance was evaluated using metrics such as Silhouette Score, Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and Purity Score. Results: The dataset exhibits significant variability in post lengths, with an average of 178 words (SD = 237) and a median of 108. Clustering performance metrics and visualizations show that all-MiniLM-L6-v2 outperformed MentalBERT across all measures, though both models struggled to align clusters with true subreddit labels as evidenced by low evaluation metrics such as Silhouette Score, NMI, ARI, and Purity Score. While all-MiniLM-L6-v2 demonstrated better generalization, both models faced challenges in representing the nuances of informal, mental health-related text for unsupervised clustering tasks. Conclusion: Although all-MiniLM-L6-v2 marginally outperformed MentalBERT, both models struggled to capture the nuanced and overlapping expressions of mental health conditions in the informal and diverse language of online forums. Label imbalance, variability in post lengths, and the models' focus on short-sentence tasks likely contributed to the underperformance, raising concerns about the use of specialized embedders in fine-tuning LLMs for clinical applications.
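A compact sketch of this evaluation pipeline is shown below, with random vectors standing in for the MentalBERT and all-MiniLM-L6-v2 embeddings; purity is implemented by hand since scikit-learn does not provide it. The dimensions, sample sizes, and stand-in labels are assumptions for illustration.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, normalized_mutual_info_score,
                             adjusted_rand_score)

rng = np.random.default_rng(3)
embeddings = rng.normal(size=(500, 384))   # stand-in for sentence embeddings
true_labels = rng.integers(0, 5, 500)      # stand-in for subreddit labels

coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
pred = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

def purity(y_true, y_pred):
    """Fraction of points whose cluster's majority label matches their own."""
    return sum(np.bincount(y_true[y_pred == k]).max()
               for k in np.unique(y_pred)) / len(y_true)

print("Silhouette:", silhouette_score(coords, pred))
print("NMI:", normalized_mutual_info_score(true_labels, pred))
print("ARI:", adjusted_rand_score(true_labels, pred))
print("Purity:", purity(true_labels, pred))
```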
Impact of Block Size on Mechanical Properties of Block Copolymer Spider Silk
Gwen Liu, PhD Student, McCormick School of Engineering and Applied Science
Abstract: Spider silk demonstrates exceptional mechanical properties, such as high strength, toughness, and biodegradability, making it a model material for bioinspired designs. Understanding the structural features behind the properties of spider silk is important for developing synthetic alternatives. Prior atomistic simulations have explored many design parameters influencing silk performance, including protein length and processing forces.
In this study, we use molecular dynamics simulations in Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) to investigate how the block size of co-polymer sequences affects the mechanical properties of silk-like fibers. The protein system is coarse-grained into a co-block polymer consisting of repeating hydrophobic (A) and hydrophilic (B) segments. Silk processing is modeled using dissipative particle dynamics, incorporating solvent effects crucial to silk formation. We maintain a constant stoichiometric A/B ratio and protein chain length while varying the block length to examine its impact on fiber properties. Our analysis focuses on hydrogen bonding networks, structural order, and mechanical response under tensile testing.
Results, visualized using Visual Molecular Dynamics (VMD), indicate that as block size increases, large-scale aggregation of A beads occurs, forming fewer interconnected networks and reducing toughness. Optimal toughness performance is observed at an intermediate block size, where a balanced network of crystalline and amorphous regions enables efficient load distribution and strain accommodation.
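As a point of reference for the toughness comparisons above, toughness in such tensile tests is commonly taken as the area under the stress-strain curve. The sketch below integrates a toy curve with scipy's trapezoidal rule; the functional form and strain range are placeholders, not LAMMPS output.

```python
import numpy as np
from scipy.integrate import trapezoid

strain = np.linspace(0.0, 1.5, 200)          # engineering strain (assumed range)
stress = 50 * strain * np.exp(-2 * strain)   # toy stress response with yield and softening
toughness = trapezoid(stress, strain)        # energy absorbed per unit volume
print(f"toughness ~ {toughness:.2f} (model units)")
```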
Future work aims to expand network analysis using graph-based approaches to understand the trade-off between crystalline cluster sizes and polymer chain interconnectivity. These insights can inform the design of advanced bioinspired fibers with optimized mechanical properties.
Augmenting Radiation Oncology Datasets with Synthetic Question Generation Using Large Language Models
Phillip Dawny, PhD Student, McCormick School of Engineering and Applied Science
Abstract: LLM-based chat assistants increasingly handle surface-level patient queries without physician interaction. These chatbots use Retrieval-Augmented Generation (RAG), where the LLM enhances responses with retrieved documents. RAG database quality impacts response accuracy and robustness, but compiling diverse question-answer pairs requires extensive human effort. This project explores LLM-driven question generation from scraped radiation oncology healthcare information.
Methods: A pipeline was implemented to generate questions from radiation oncology-related websites, including healthcare agencies and professional societies, using the Scrapy framework to extract structured content. The extracted text was segmented into coherent units and preprocessed using NLP techniques, including sentence tokenization and lemmatization. The LLaMA 3.1 LLM was employed to generate synthetic questions from this information using directed and structured prompts. An evaluation framework combining LLM-based judgment, statistical answerability measures, and clinician feedback was implemented to assess the generated questions. The LLM self-evaluated each generated question on a scale of 1 to 5 for fluency and answerability. Human evaluators, including domain experts, independently assessed the same criteria using the same 1–5 scale. To refine the dataset, we analyzed the correlation between LLM and human evaluations and applied a filtering threshold to discard low-quality questions.
Results: A total of 250 questions were generated across eight anatomical sites and five treatment modalities. The LLM’s self-evaluated scores for answerability averaged 4.1±0.6, while fluency scored 4.4±0.25. Human evaluators assigned an average answerability score of 4.0±0.5 and a fluency score of 4.5±0.5. By discarding questions with human-assigned scores below a threshold of 2, the correlation between LLM and human evaluations improved significantly. The answerability correlation increased from 0.48 to 0.82, while the fluency correlation rose from 0.35 to 0.75.
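The filtering step can be illustrated as follows: compute the correlation between LLM self-scores and human scores, discard questions the humans rated below the threshold, and recompute. The scores below are fabricated stand-ins, and Pearson correlation is one reasonable choice; the study's exact statistic is not specified here.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
human = rng.integers(1, 6, 250).astype(float)          # fabricated 1-5 human ratings
llm = np.clip(human + rng.normal(0, 1.0, 250), 1, 5)   # fabricated noisy LLM self-ratings

print("correlation before filtering:", pearsonr(llm, human).statistic)

keep = human >= 2                                      # drop questions rated below 2
print("correlation after filtering:", pearsonr(llm[keep], human[keep]).statistic)
```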
Conclusions: LLM-generated questions aligned closely with human expectations after filtering low-quality outputs, demonstrating the model’s ability to produce high-quality, domain-relevant questions for radiation oncology. This method reduces manual effort in curating question-answer pairs and improves AI-driven patient communication in oncology.
Generating the Quizbowl Curriculum
Pipob Puthipiroj, PhD Student, School of Education and Social Policy
Abstract: In this talk, we introduce a novel, data-driven approach to preparing for quizbowl, a fast-paced, buzzer-based competition that tests players' knowledge across a broad range of academic subjects. In order to perform well, players must develop familiarity with facts surrounding important figures, events, works, and concepts, and thus efficient studying becomes paramount.
Historically, top quizbowl players have trained using a personalized combination of reviewing lists of what topics come up frequently, reading source material, and creating and reviewing flashcards. Instead, we scrape an online database of 300,000 quizbowl questions across 600 tournaments, and construct a large knowledge graph to display the interconnections between important topics. By revealing how high-yield topics overlap and cluster together, this visualization will quickly help beginners develop a broad foundation. Experienced players likewise benefit from identifying subtle gaps in their knowledge.
Second, we develop a tool that, given a topic, will find relevant question-answer pairs and cluster similar pairs together. This functionality is built back into the knowledge graph, so that one can hover over a node to see a rapidly distilled breakdown of pivotal clues. Alternatively, the output of clustered pairs can be automatically formatted per a user's preferences via a customGPT as notes for further review.
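A minimal sketch of the knowledge-graph construction is given below, using networkx: answers become nodes, and edges link an answer to the entities clued alongside it, weighted by co-occurrence counts. The sample question data and the co-occurrence rule are illustrative assumptions, not the scraped database.

```python
import networkx as nx

# Placeholder (answer, clued-entities) pairs standing in for parsed questions
questions = [
    ("Crime and Punishment", ["Raskolnikov", "Dostoevsky", "St. Petersburg"]),
    ("Dostoevsky", ["The Brothers Karamazov", "St. Petersburg"]),
]

G = nx.Graph()
for answer, clues in questions:
    for clue in clues:
        # edge weight counts how often two topics are clued together
        w = G.get_edge_data(answer, clue, default={"weight": 0})["weight"]
        G.add_edge(answer, clue, weight=w + 1)

# High-degree nodes are the "high-yield" topics worth studying first
print(sorted(G.degree, key=lambda kv: -kv[1])[:3])
```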
We fully acknowledge that learning is a heavily personal process, tied inseparably to identity; consequently, we do not prescribe a one-size-fits-all pedagogy, but rather a set of data-driven tools and practices that quizbowl players can use to refine their own modes of learning. We note also that this method works particularly well for subjects like literature, social science, and the humanities, and less so for the visual and auditory fine arts and the hard sciences (although this criticism extends more generally to quizbowl). In the future, further integration with online resources, such as excerpts of sound, scientific diagrams, or public-domain images, will increase accessibility to other modes of learning.
Identification of Mysterious Infrasound Signal Using Template Matching
Cobin Diaz, Undergraduate Student, Weinberg College of Arts and Sciences
Abstract: Infrasound signals—sound waves with frequencies below the hearing threshold of humans—can be detected from various sources including meteor strikes, earthquakes, and atmospheric disturbances. A better understanding of different types of infrasound signals can help us better understand disturbances in the atmosphere and the world around us. By understanding the waveform signature of these atmospheric and geophysical events, we can better identify other events and motivate improvements to algorithms for infrasound event detection and classification. Identifying these signals can pose a challenge due to the presence of background noise and other confounding signals. Template matching allows us to exploit data from known atmospheric and geophysical events by matching our signal of interest to previously identified signals. The technique measures the waveform similarity between our original signal and other data to find events that match well, allowing us to better understand the probable cause of our original signal. This study focuses on identifying an unexplained 30-minute infrasound signal recorded at a station in Riverwoods, Illinois. Community members reported an unusual event, and we were interested in whether similar infrasound events occurred before or after this one and why they might be happening. We used infrasound data from the station to create a template and searched continuous data from 2022 to 2024 for similar signals. The detections were then filtered for high cross-correlation values to obtain the signals that best matched the signal of interest, with similarity confirmed by visual analysis. Detected events were then further investigated for their probable causes by reviewing news articles, weather reports, and first-hand accounts from those who may have witnessed the signal. We will present our template matching workflow and preliminary findings. We will also discuss the utility and challenges of template matching for investigating infrasound signals of unknown origin.
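The core of the template-matching step can be written in a few lines: slide the template along the continuous record and compute a correlation coefficient at each lag. The sketch below uses synthetic waveforms in place of the Riverwoods infrasound data, and a plain numpy loop in place of the optimized routines one would use on two years of data.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 20.0                                     # sample rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
template = np.sin(2 * np.pi * 0.5 * t[:200])  # 10 s toy template waveform

data = rng.normal(0, 0.5, t.size)             # noisy continuous record
data[400:600] += template                     # bury one "event" in the noise

def normalized_xcorr(data, template):
    """Correlation coefficient of the template against every window of data."""
    n = template.size
    out = np.empty(data.size - n + 1)
    for i in range(out.size):
        out[i] = np.corrcoef(data[i:i + n], template)[0, 1]
    return out

cc = normalized_xcorr(data, template)
print("best match at sample", cc.argmax(), "with CC =", round(cc.max(), 2))
```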
Big Data and Big Rigs: Sentiment Analysis of EPA Greenhouse Gas Regulations in the Trucking Industry
Deirdre Edward, PhD Student, McCormick School of Engineering and Applied Science
Abstract: Commercial trucking fleets are significant contributors to climate change and also emit pollutants that directly harm human health. The transition to electric trucking fleets is a key component of greenhouse gas reduction strategies, yet stakeholder perspectives on this shift remain deeply divided. This study leverages computational sentiment analysis to examine public comments submitted to the Environmental Protection Agency (EPA) on recent greenhouse gas regulations for heavy-duty vehicles. Using FinBERT, a transformer-based natural language processing model trained on financial texts, we applied sentiment analysis to assess how different stakeholder groups—trucking companies, public interest groups, and industrial entities—frame their concerns across key themes, including cost, infrastructure, climate change, and public health.
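A minimal version of this scoring step is shown below using the publicly available ProsusAI/finbert checkpoint through the transformers pipeline API; whether this matches the study's exact checkpoint and preprocessing is an assumption, and the comments are invented examples.

```python
from transformers import pipeline

# FinBERT returns positive / negative / neutral labels with confidence scores
classifier = pipeline("text-classification", model="ProsusAI/finbert")

comments = [
    "The compliance costs of this rule would be ruinous for small fleets.",
    "Stronger emission standards will protect communities near freight corridors.",
]
for comment in comments:
    print(classifier(comment)[0])   # e.g. {'label': 'negative', 'score': 0.9...}
```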
Our findings reveal significant disparities: trucking companies consistently express negative sentiment, particularly regarding financial and logistical topics, while public interest organizations were largely positive, advocating for stricter regulations. Non-trucking businesses show mixed sentiment, reflecting diverse industrial priorities. Interestingly, smaller sentiment gaps exist for themes like climate change and environmental health, suggesting some alignment across stakeholders despite financial conflicts.
By leveraging computational methods to quantify these differences, this research highlights key areas of conflict and consensus regarding fleet electrification. Furthermore, by analyzing regulatory comment data at scale, this research demonstrates the value of public comment records as a rich, underexplored data source for policy analysis and stakeholder engagement. These insights can inform policy discussions, particularly in urban trucking hotspots, where balancing industry needs with community health and environmental protection remains a pressing challenge.
Molecular Insights into the Mechanical Behavior of Polymer-Grafted Nanoparticles: The Role of Fabrication and Relaxation
Sri Maddukuri, PhD Student, McCormick School of Engineering and Applied Science
Abstract: The integration of inorganic nanoparticles into polymers has significantly transformed the paradigm of material design. By grafting polymer chains directly onto the nanoparticle surface, polymer-grafted nanoparticles (PGNs) exhibit enhanced mechanical, thermal, and electrical properties compared to polymer melts. The presence of rigid nanoparticles induces structural order and mitigates the risk of agglomeration. Recent experimental studies suggest that mechanical enhancement originates from molecular-scale interactions; therefore, understanding the microscale behavior of PGNs is crucial for predicting the macroscale mechanical properties of the resulting nanocomposites.
The fabrication method, solution casting versus spin casting, plays a critical role in determining the mechanical properties of PGNs. To investigate this effect, we conduct molecular simulations by subjecting PGNs to varying relaxation times before performing uniaxial deformation tests. We employ a chemistry-specific coarse-grained model of poly(methyl methacrylate) (PMMA), which is grafted onto silica nanoparticles to construct the PGN system. The simulation environment is initialized by arranging four PGNs at the face-centered cubic (FCC) lattice points. For equilibration and deformation, we utilize the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), an open-source molecular dynamics software.
Our initial findings indicate that the modulus remains unchanged across different relaxation times. However, toughness increases significantly with extended relaxation. This effect arises because prolonged relaxation allows polymers from adjacent PGNs to interpenetrate and form entanglements, which bear stress beyond the initial yield point and contribute to strain hardening. Our analysis will elucidate the impact of different casting methods on the physical formation of PGNs and their subsequent influence on the overall mechanical response of the nanocomposite.
Quantitative Modeling to Characterize Maternal Inflammatory Response in the Placental Membranes
Teresa Chou, PhD Student, McCormick School of Engineering and Applied Science
Abstract: The placenta is a vital organ that supports the development of the fetus during pregnancy, and the placental membranes, which surround the amnionic cavity, serve as a key barrier to fetal and uterine infection. Inflammation of the membranes, diagnosed as maternal inflammatory response (MIR) or alternatively as acute chorioamnionitis, is associated with adverse maternal-fetal outcomes. MIR is staged 1-3, with higher stages indicating more hazardous inflammation. However, the diagnosis relies upon subjective evaluation and has not been deeply characterized.
The goal of this work is to develop a cell classifier for 8 placental membrane cell types and quantitatively characterize MIR1-2.
Method of Study: H&E-stained placental membrane slides were digitized. A convolutional neural network was trained on a dataset of hand-annotated and machine learning-identified cells labeled in 8 classes (amniocyte, decidual cell, endothelial cell, trophoblast, stroma, lymphocyte, macrophage, and neutrophil). Overall cell class-level metrics were calculated. The model was applied to 20 control, 20 MIR1, and 23 MIR2 placental membrane cases. MIR cell composition and neutrophil distribution were assessed via density and Ripley’s cross K-function. Clinical data were compared to neutrophil density and distribution.
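For context, the sketch below shows a naive (un-edge-corrected) estimator of Ripley's cross K-function, the colocalization statistic named above: the average number of type-b cells within radius r of a type-a cell, normalized by the intensity of type b. The coordinates, field size, and radii are toy values, not the study's data.

```python
import numpy as np
from scipy.spatial import cKDTree

def cross_k(pts_a, pts_b, radii, area):
    """Naive cross K: b-points within r of each a-point, normalized by b's intensity."""
    tree = cKDTree(pts_b)
    counts = [sum(len(tree.query_ball_point(p, r)) for p in pts_a) for r in radii]
    lam_b = len(pts_b) / area
    return np.array(counts) / (len(pts_a) * lam_b)

rng = np.random.default_rng(6)
neutrophils = rng.uniform(0, 1000, (200, 2))   # toy coordinates in a 1000 x 1000 field
decidua = rng.uniform(0, 1000, (300, 2))

# Under complete spatial randomness, K(r) is approximately pi * r**2
print(cross_k(neutrophils, decidua, radii=[25, 50, 100], area=1000.0 ** 2))
```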
Results: The classification model achieved a test-set accuracy of 0.845, with high precision and recall for amniocytes, decidual cells, endothelial cells, and trophoblasts. Using this model to classify 53,073 cells from healthy and MIR1-2 placental membranes, we found that 1) MIR1-2 have higher neutrophil density and fewer decidual cells and trophoblasts, 2) Neutrophils colocalize heavily around decidual cells in healthy placental membranes and around trophoblasts in MIR1, 3) Neutrophil density impacts distribution in MIR, and 4) Neutrophil metrics correlate with features of clinical chorioamnionitis.
Conclusions: This work introduces cell classification to the placental membranes and quantifies cell composition and neutrophil spatial distributions in MIR.
A Retrieval Augmented Generation (RAG) LLM with Self-Evaluation for Accurate and Personalized Radiation Oncology Patient Communication
Yuanrui Zhu, Master's Student, McCormick School of Engineering and Applied Science
Abstract: Radiation oncology patients often face challenges accessing accurate and personalized treatment information. To address this, we introduce CancerRAG—a Retrieval-Augmented Generation pipeline that leverages LLaMA 3.1 8B to dynamically retrieve, adapt, and generate tailored responses. The system alleviates clinician burden while ensuring factual accuracy, relevance, and patient-centered engagement. Its robust and highly scalable dual-vector retrieval chain employs FAISS for efficient similarity search.
A knowledge chain was implemented to integrate patient queries, demographics (age, gender, education), and LLM responses, while a retrieval chain dynamically selected relevant information. User interactions and LLM gradings were stored in PostgreSQL for evaluation. A grader chain was used as a self-evaluation approach to assess relevance, correctness, conciseness, and coherence, with results compared to clinician assessments. Cosine similarity scores measured alignment between LLM responses and database answers, while readability scores evaluated clarity. Robustness was tested using adversarial prompts and input perturbations. The pipeline was built with FastAPI and Dockerized microservices.
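The FAISS similarity-search step at the heart of the retrieval chain reduces to a few calls, sketched below with random vectors standing in for the embedded question-answer database; the embedding dimension and the flat L2 index are assumptions, since the pipeline's actual embedding model and index type are not detailed here.

```python
import numpy as np
import faiss

d = 384                                      # embedding dimension (assumed)
rng = np.random.default_rng(7)
database = rng.normal(size=(10_000, d)).astype("float32")  # stand-in Q&A embeddings

index = faiss.IndexFlatL2(d)                 # exact L2 nearest-neighbor search
index.add(database)

query = rng.normal(size=(1, d)).astype("float32")
distances, ids = index.search(query, 3)      # retrieve the top-3 database entries
print(ids[0], distances[0])
```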
This work demonstrates the use of RAG pipelines to dynamically reason, adapt retrieval strategies, and refine LLM outputs—paving the way for more trustworthy, patient-centered AI applications in healthcare. By integrating automated evaluation, clinician feedback, and real-time updates, our approach sets a new standard for LLM-assisted patient communication.
Understanding Wikipedia’s Governance in Practice
Sari Eisen, Undergraduate Student, Weinberg College of Arts and Sciences
Abstract: Rules are essential for structuring behavior and facilitating coordination in any community. Wikipedia offers a compelling glimpse into human interaction in a decentralized, collaborative environment, operating as an open-edit platform with minimal formal structure. The platform allows users to engage in discussions, collectively create and refine articles, and ensure content remains neutral, verifiable, and relevant, all while maintaining its role as an online encyclopedia. In this paper, we examine the effectiveness and redundancy of Wikipedia’s rules in shaping user engagement by analyzing 10⁷ distinct discussion threads in 20,000 “talk pages,” the discussion forums associated with individual articles. We combine these discussion datasets with rules (policies and guidelines), which we organize by hierarchy and intended use. Rules, in the form of policies and guidelines, help structure these discussions, shaping both editorial decision-making and community governance. We identify five distinct categories of rules, analyze how they are cited in discussions, and examine their dynamics over time. Our findings show that the most centralized rules—those most fundamental to Wikipedia’s core principles—are referenced most frequently. Additionally, discussions citing more rules tend to be associated with articles that have higher user participation or greater controversy, implying an increased need for regulation in more tense environments. Finally, we observe that rule usage is highly correlated with administrator activity, suggesting that those most familiar with Wikipedia’s governance rely on rules the most, raising questions about the accessibility of regulation to general users. Since rule creation carries coordination costs, understanding which rules are most frequently used and how they influence interactions can inform decisions on rule development, enforcement, and governance strategies in online communities. In the future, we will explore how rules function in real-world discussions and their impact on user behavior within online communities.
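As a toy illustration of the citation-extraction step underlying this analysis, the snippet below counts shortcut-style policy references (e.g., WP:NPOV) in talk-page text with a regular expression; the shortcut pattern and sample text are simplifications of how rules are actually cited.

```python
import re
from collections import Counter

talk_text = """
This fails [[WP:V]] and [[WP:NPOV]]. Per WP:RS we need better sourcing,
and WP:NPOV applies to the lead as well.
"""
# Count each shortcut-style rule citation in the text
citations = Counter(re.findall(r"WP:[A-Z]+", talk_text))
print(citations.most_common())   # [('WP:NPOV', 2), ('WP:V', 1), ('WP:RS', 1)]
```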
HostSub_GP: A Bayesian Toolkit for Precise Host Galaxy Subtraction in Supernova Spectroscopy
Chang Liu, PhD Student, Weinberg College of Arts and Sciences
Abstract: Spectroscopy deciphers the explosion physics of supernovae (SNe), the splendid deaths of stars. The emission/absorption features in SN spectra encode the chemical composition and kinematics of the ejecta, which help to place constraints on SN progenitors and explosion mechanisms. In most spectroscopic observations, unfortunately, the SN emission can be heavily contaminated by the light from the galaxies that host them. Precise measurements of spectroscopic features require careful removal of the galaxy background (i.e., the light blended with the SN). The classic background subtraction technique in longslit spectroscopy estimates the background by interpolating the local flux beside the trace of the SN, similar to that of aperture photometry. In many cases, the local host environments are complicated (e.g., near clumpy star-forming regions), and the classic aperture method inevitably leads to a biased background estimation. Here, I present our open-source software HostSub_GP, a toolkit for precise and robust galaxy background subtraction in longslit spectroscopy in a Bayesian way. We use archival images (e.g., from Pan-STARRS) of the galaxy as the prior of the host light distribution along the slit. The host light in both the spatial and spectral directions is then modeled as a two-dimensional (2D) Gaussian process (GP), with which we naturally incorporate the known instrumental resolution as scale lengths of the kernel. The model has been tested on mock longslit observations of nearby galaxies, which are synthesized from the 3D spectra of the Multi Unit Spectroscopic Explorer (MUSE) by aligning the 1D spectra extracted from a row of pixels. On the synthetic dataset, our GP model provides a much less biased estimate of the galaxy flux within the SN aperture and consistently outperforms the classic technique.
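The sketch below is a heavily simplified one-dimensional analogue of this approach: fit a Gaussian process to the host flux on either side of the SN trace and predict the blended background under it. HostSub_GP works in 2D with an imaging prior; the scikit-learn GP, RBF kernel, and toy host profile here are stand-ins chosen for brevity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)
x = np.arange(60, dtype=float)                     # spatial pixels along the slit
galaxy = 10 * np.exp(-0.5 * ((x - 35) / 12) ** 2)  # toy host light profile
flux = galaxy + rng.normal(0, 0.3, x.size)         # observed flux with noise

sn_aperture = (x > 20) & (x < 30)                  # pixels contaminated by the SN
gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), alpha=0.3 ** 2)
gp.fit(x[~sn_aperture, None], flux[~sn_aperture])  # fit only uncontaminated pixels

background = gp.predict(x[sn_aperture, None])      # host flux under the SN trace
print(background.round(2))
```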
Whole Exome Sequencing Analysis of Variants in Polycystic Ovary Syndrome (PCOS)
Chloe Parker, PhD Student, Feinberg School of Medicine
Abstract: Polycystic ovary syndrome (PCOS) is a common multisystem endocrine disorder affecting 6-15% of women of reproductive age and is a major cause of anovulatory infertility. PCOS is a heterogeneous disorder with multiple sub-phenotypes modulated by both environmental and genetic factors.
Objective: Our goal is to identify the genes and pathways that directly impact the etiology of PCOS with the intent to expedite the development of targeted treatment.
Methodology: We analyzed whole exome sequencing (WES) data from 718 phenotyped individuals with PCOS and 297 age-matched phenotyped controls. We generated a comprehensive catalog of protein-coding variants identified in our cohort and used in silico prediction tools to prioritize likely-to-be pathogenic missense and protein-truncating variants (PTV). Additionally, we will perform gene-based burden testing using (1) our 297 phenotyped controls and (2) 9,440 public database controls from the platform SVD-based Control Repository (SCoRe). Lastly, we will perform pathway analyses on genes with an enrichment of the prioritized variants. We expect the variants identified by WES to map to multiple genes/pathways, including novel genes/pathways as well as genes/pathways previously predicted to play a major role in the etiology of PCOS.
Results: We have completed quality control, annotation, and filtration of our dataset and identified 9,103 missense variants and 22,382 PTVs that are likely-to-be pathogenic protein-coding variants. We have replicated our lab’s previous findings of variants in genes known to be associated with PCOS.
Conclusion: We have replicated the identification of protein-altering genetic variants in pathways predicted to be impaired in women with PCOS supporting a critical role for genetics in the etiology of PCOS. We are confident that our gene-based burden testing and pathway analyses will allow us to continue replicating known genes and pathways and identify novel genes and pathways implicated in PCOS pathology, leading to improved personalized phenotyping and treatment of PCOS.
Far-Right Rhetoric Across Borders: A Computational Analysis of Political Speeches
Jack McGovern, PhD Student, Weinberg College of Arts and Sciences
Abstract: The global far right has achieved political success in a strikingly diverse set of contexts, from Western democracies to post-authoritarian states. This variation presents a puzzle: how can movements with ostensibly similar ideological roots thrive under such different political, economic, and cultural conditions? Existing explanations often focus on national contexts, but this project takes a different approach by investigating whether a shared political strategy or rhetorical style underpins far-right success across countries.
To explore this, we conduct an inductive analysis of the speeches of key right-wing leaders, including Donald Trump (U.S.), Giorgia Meloni (Italy), Viktor Orbán (Hungary), Javier Milei (Argentina), and Jair Bolsonaro (Brazil). Using a multilingual, dynamic BERTopic model, we identify and categorize the key themes in their rhetoric over time. This approach allows us to assess not only the topics they emphasize but also the ways in which these topics evolve and are framed differently across contexts.
Beyond identifying thematic commonalities, we analyze how far-right rhetoric spreads and influences leaders transnationally. By employing vector autoregression (VAR) models, we examine whether changes in the salience of certain themes in one leader’s discourse predict similar shifts in others’ rhetoric. This enables us to assess the degree to which these political figures are engaging in parallel rhetorical strategies versus actively shaping each other’s discourse.
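A minimal version of this VAR analysis, using statsmodels on synthetic monthly theme-salience series (differenced for stationarity), might look like the following; the series, lag-order search, and Granger-causality test configuration are illustrative assumptions, not the study's BERTopic output.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
n = 120
leader_a = np.cumsum(rng.normal(0, 1, n))                     # toy theme-salience series
leader_b = 0.6 * np.roll(leader_a, 1) + rng.normal(0, 1, n)   # lagged follower series
data = pd.DataFrame({"leader_a": np.diff(leader_a),           # difference for stationarity
                     "leader_b": np.diff(leader_b)})

results = VAR(data).fit(maxlags=4, ic="aic")                  # lag order chosen by AIC
# Does leader_a's theme salience Granger-cause leader_b's?
print(results.test_causality("leader_b", ["leader_a"], kind="f").summary())
```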
Our findings contribute to the broader understanding of far-right mobilization by identifying which rhetorical elements are shared and which remain context-specific. This helps refine explanations of far-right success by ruling out context-dependent factors while highlighting transnational dynamics of influence. Ultimately, our study sheds light on the extent to which contemporary far-right leaders employ a common style, and whether this rhetorical cohesion contributes to their global political effectiveness.
Whole-Brain Functional Connectivity Analysis of Cortical, Subcortical, and Brainstem Sensory Regions During Tactile Stimulation
Keira Thompson, Undergraduate Student, Weinberg College of Arts and Sciences
Abstract: Functional connectivity (FC) analysis of correlated blood oxygen-level dependent functional magnetic resonance imaging (BOLD fMRI) signals can be used to determine networks of functionally connected brain regions. Furthermore, comparisons of FC at rest and during task demonstrate small but meaningful task-associated changes in coupling of spontaneous brain activity. While both resting-state and task-based analyses have been conducted relevant to the tactile sensory system, whole-brain FC differences associated with limb-specific tactile sensation have yet to be studied in depth. Here, we conducted the first whole-brain FC analysis of cortical, subcortical, and brainstem sensory regions during non-painful tactile stimulation of the left hand, right hand, and right foot. We found that interhemispheric S1 coupling during stimulation of the left or right hand was lower than that during the task-free condition. The same trend was not seen in interhemispheric S2 or thalamus coupling. Meanwhile, cerebellar/S1 regions corresponding to the same limb showed greater coupling than those corresponding to different limbs during task but not in the task-free condition. Finally, significant within-brainstem coupling was found between bilateral cuneate nuclei during both the task and task-free states. Our findings demonstrate that FC differences do exist between task and task-free states. Whole-brain mapping of these network reconfigurations is essential to better understand the sensory system in healthy individuals and clinical cohorts with disrupted sensory processing.
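At its core, the FC computation described above correlates region-averaged BOLD time series across regions. The sketch below illustrates this with synthetic time series standing in for the fMRI data; the number of regions, timepoints, and shared-signal strength are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
n_regions, n_timepoints = 6, 300           # e.g. bilateral S1, S2, thalamus (toy setup)
shared = rng.normal(size=n_timepoints)     # common fluctuation inducing coupling
bold = rng.normal(size=(n_regions, n_timepoints)) + 0.5 * shared

fc = np.corrcoef(bold)                     # region-by-region correlation matrix
print(fc.round(2))
```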