CoDEx Sponsor Demonstrations
Thank you to our CoDEx 2025 corporate sponsors for providing the following demonstrations, which will be held in the Rock Room.
10:10 to 10:35 a.m. - Wolfram Research
Scientific Literature Assistants: Embedding Spaces and Retrieval Augmented Generation for AI Applications
John McNally, Principal Academic Solutions Developer, Wolfram Research
This talk gives a brief overview of Large Language Models (LLMs) and retrieval-augmented generation (RAG), along with a practical application to scientific literature review. It begins with an introduction to LLMs and how they generate human-like text, followed by an introduction to embedding spaces and how they are used to construct vector databases. Some comments and computational experiments on the geometry of embedding spaces are presented. Finally, these concepts are demonstrated in an application to scientific literature review.
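The retrieval step the abstract describes (embedding documents into a vector space and ranking them by similarity to a query) can be sketched in a few lines. This is a toy illustration, not the presenter's actual implementation: the hand-made 3-dimensional vectors stand in for embeddings that a real pipeline would obtain from an embedding model, and the paper names are hypothetical.

```python
import math

# Toy "embeddings": in practice these come from an embedding model;
# here they are hand-made 3-d vectors purely for illustration.
papers = {
    "Paper A (protein folding)":  [0.9, 0.1, 0.0],
    "Paper B (galaxy surveys)":   [0.1, 0.9, 0.1],
    "Paper C (folding kinetics)": [0.8, 0.2, 0.1],
}

def cosine(u, v):
    # Cosine similarity: the standard distance used in vector databases.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, k=2):
    # Rank documents by similarity to the query embedding -- the core
    # lookup a RAG pipeline performs before passing context to the LLM.
    ranked = sorted(papers, key=lambda p: cosine(papers[p], query_vec),
                    reverse=True)
    return ranked[:k]

query = [0.85, 0.15, 0.05]  # embedding of e.g. "protein folding methods"
print(retrieve(query))      # the two folding papers rank highest
```

In a full RAG application, the retrieved passages are inserted into the LLM's prompt so its answer is grounded in the source literature.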
10:40 to 11:05 a.m. - Amazon Web Services (AWS)
Abstract and presenter information coming soon.
11:15 to 11:40 a.m. - NVIDIA
Accelerating Science
Angus Forbes, Strategic Researcher Engagement, NVIDIA
Researchers increasingly find it valuable to integrate LLMs, PINNs, reasoning models, intelligent agents, and other AI tools into their practice. This talk explores challenges and opportunities in using contemporary AI innovations for scientific endeavors. We will look at a range of examples showcasing how research can be accelerated in different ways using AI tools, and in particular we will present use cases in which NVIDIA platforms are used across different scientific domains.
11:45 a.m. to 12:10 p.m. - Lenovo
Optimizing LLMs: Sizing and Fine-Tuning Strategies
David Ellison, Chief Data Scientist and Director of AI Engineering, Lenovo
Deploying large language models requires careful planning to balance performance, cost, and infrastructure efficiency. In this session, David will share insights from the Lenovo LLM Sizing Guide, outlining how to select the right compute resources, optimize GPU and memory usage, and scale models cost-effectively. He will also discuss fine-tuning GPT models for retrieval-augmented generation, explaining how targeted adjustments improve accuracy, reduce hallucination, and enhance retrieval efficiency for business applications. Join this session to gain practical strategies for optimizing LLM performance.
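The sizing question the session addresses comes down to simple arithmetic in its first approximation: model weights dominate inference memory. The sketch below uses a common rule of thumb (parameters × bytes per parameter, plus an assumed ~20% overhead for KV cache and activations); it is an illustration, not the methodology of the Lenovo LLM Sizing Guide.

```python
def inference_memory_gb(params_billion, bytes_per_param=2.0, overhead=1.2):
    """Rough inference-memory estimate in GB.

    bytes_per_param: 2.0 for FP16/BF16, 1.0 for INT8, 0.5 for 4-bit.
    overhead: assumed multiplier for KV cache and activations
    (a simplification for illustration).
    """
    return params_billion * bytes_per_param * overhead

# A 70B-parameter model in FP16 needs roughly:
print(f"{inference_memory_gb(70):.0f} GB")       # prints "168 GB"
# The same model quantized to 4 bits per parameter:
print(f"{inference_memory_gb(70, 0.5):.0f} GB")  # prints "42 GB"
```

The two results show why quantization matters for sizing: at FP16 a 70B model exceeds a single 80 GB GPU and must be sharded, while at 4-bit precision it fits on one.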