
CoDEx Sponsor Demonstrations

Thank you to our CoDEx 2025 corporate sponsors for providing the following demonstrations, which will be held in the Rock Room.

10:10 to 10:35 a.m. - Wolfram Research

Scientific Literature Assistants: Embedding Spaces and Retrieval Augmented Generation for AI Applications

John McNally, Principal Academic Solutions Developer, Wolfram Research

This talk gives a brief overview of Large Language Models (LLMs) and retrieval augmented generation (RAG), along with a practical application to scientific literature review. It begins with an introduction to LLMs and how they generate human-like text, followed by an introduction to embedding spaces and how they are used to construct vector databases. Some comments and computational experiments on the geometry of embedding spaces are presented. Finally, these concepts are demonstrated in an application to scientific literature review.
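
For readers unfamiliar with RAG, the sketch below shows the core retrieval step: paper abstracts are mapped into an embedding space, and a query retrieves its nearest neighbors to serve as context for an LLM. It is illustrative only; the model name, the sentence-transformers package, and the toy abstracts are assumptions, and the talk itself presumably uses Wolfram Language tooling rather than this code.

```python
# Minimal sketch of embedding-based retrieval for literature review (RAG).
# Assumes the sentence-transformers package is installed; everything here is
# illustrative rather than the speaker's actual implementation.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Toy "corpus" of paper abstracts standing in for a real literature database.
abstracts = [
    "We measure the thermal conductivity of graphene at low temperatures.",
    "A transformer-based model for protein structure prediction is presented.",
    "We survey retrieval augmented generation methods for scientific text.",
]
doc_vectors = model.encode(abstracts, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k abstracts whose embeddings lie closest to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q            # cosine similarity (vectors are unit length)
    top = np.argsort(scores)[::-1][:k]  # indices of the highest-scoring abstracts
    return [abstracts[i] for i in top]

# The retrieved passages are then placed into the LLM prompt as grounding context.
context = "\n".join(retrieve("How does RAG help with literature review?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```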

10:40 to 11:05 a.m. - Amazon Web Services (AWS)

Navigating Uncertainty: Tangible Ways AWS Can Empower Researchers in Current Times

Carolyn Przybylinski, Account Manager, AWS
Shashank Tanksali, Senior Solutions Architect, AWS

AWS can help accelerate your research and reduce your time to science. Join us to learn how AWS can support you throughout the research lifecycle, from initial grant proposal to research publication and impact. We will introduce the local AWS team supporting Northwestern and explore the resources available, including letters of support, proofs of concept, access to subject matter experts, and AWS cloud credits to strengthen your proposals. Learn how some of your NU peers are leveraging AWS services and the Amazon Partner Network to fast-track scientific discovery. Our AWS team can guide you in drawing on the breadth of our 200+ services to advance knowledge and develop innovative solutions.

11:15 to 11:40 a.m. - NVIDIA

Accelerating Science

Angus Forbes, Strategic Researcher Engagement, NVIDIA

Researchers increasingly find it valuable to integrate LLMs, physics-informed neural networks (PINNs), reasoning models, intelligent agents, and other AI tools into their practice. This talk explores challenges and opportunities in applying contemporary AI innovations to scientific work. We will look at a range of examples showing how AI tools can accelerate research in different ways, and in particular we will present use cases of NVIDIA platforms across different scientific domains.

11:45 a.m. to 12:10 p.m. - Lenovo

Optimizing LLMs: Sizing and Fine-Tuning Strategies

David Ellison, Chief Data Scientist and Director of AI Engineering, Lenovo

Deploying large language models requires careful planning to balance performance, cost, and infrastructure efficiency. In this session, David will share insights from the Lenovo LLM Sizing Guide, outlining how to select the right compute resources, optimize GPU and memory usage, and scale models cost-effectively. He will also discuss fine-tuning GPT models for retrieval-augmented generation, explaining how targeted adjustments improve accuracy, reduce hallucination, and enhance retrieval efficiency for business applications. Join this session to gain practical strategies for optimizing LLM performance.
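
As a rough illustration of what "sizing" involves, the sketch below estimates the GPU memory needed to hold model weights for inference from the parameter count and numeric precision. This is a common back-of-the-envelope rule, not the methodology of the Lenovo LLM Sizing Guide; the overhead factor and example model sizes are assumptions.

```python
# Rough LLM memory sizing; a widely used rule of thumb, not the Lenovo LLM
# Sizing Guide's actual methodology. All numbers are illustrative assumptions.
def inference_memory_gb(params_billion: float, bytes_per_param: float = 2.0,
                        overhead: float = 1.2) -> float:
    """Estimate GPU memory (GB) needed to serve a model for inference.

    bytes_per_param: 2.0 for FP16/BF16, 1.0 for 8-bit, 0.5 for 4-bit quantization.
    overhead: multiplier covering KV cache, activations, and framework buffers
              (workload dependent; 1.2 is a placeholder assumption).
    """
    return params_billion * bytes_per_param * overhead

for size in (7, 13, 70):  # typical open-model sizes, in billions of parameters
    print(f"{size}B model @ FP16: ~{inference_memory_gb(size):.0f} GB")
```

At FP16 a 70B-parameter model already exceeds a single 80 GB GPU, which is why quantization and multi-GPU scaling figure prominently in sizing discussions.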