How do I get LLMs up and running quickly?
About
With the rise of large language models (LLMs), new experimental methods are emerging across disciplines — especially in the social sciences. Researchers are increasingly interested in using LLMs as reasoning tools or even as synthetic study participants. But how do you actually work with these tools in practice? In this workshop, we will walk through how to get an LLM up and running, whether through an API or locally on your own machine. We will focus on the most accessible options — inexpensive yet high-quality — so you can begin experimenting right away.
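
To give a flavor of the API route, here is a minimal sketch in Python (assuming the openai package and an OPENAI_API_KEY environment variable; the workshop itself may use different providers or tools):

    # pip install openai
    from openai import OpenAI

    # Hosted API: the client reads OPENAI_API_KEY from the environment.
    # For a local model, you can instead point base_url at an
    # OpenAI-compatible server (e.g. Ollama at http://localhost:11434/v1).
    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
        messages=[{"role": "user", "content": "Explain the trolley problem in one sentence."}],
    )
    print(response.choices[0].message.content)

Running a model locally follows the same pattern once a local server such as Ollama or llama.cpp is installed on your machine.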
Speaker

Yan (Stella) Si
Stella is a PhD student in Computing and Data Sciences at Boston University, where she works at the intersection of cognitive science and AI.
Her research centers on modeling human decision making, combining neural networks with traditional cognitive models to uncover the psychological principles behind how we choose. She is also building large-scale, high-quality datasets to drive this work forward.
Outside the lab, she is building a startup. Check out her work on GitHub: github.com/sbel2