# Building Enterprise Large Language Model Applications
November 11 @ 9:30 am - 5:30 pm
**In this full-day in-person workshop, we will cover the foundations of large language models and how to build applications with custom datasets**
### Modules 1-2: Fundamentals of Language Models
### Modules 3-4: Deployment of LLM applications
855 Maude Ave,
Mountain View, CA 94043
9:30 AM: Doors open, morning coffee, course intro
10:00 AM: Welcome message – SFBay ACM
10:10 AM – 11:30 AM: Module 1
11:30 AM – 12:00 PM: Cluster access & hands-on exercise
12:00 PM – 1:00 PM: Lunch
1:00 PM: Module 2 – Fine-tuning LLaMa v2 with a custom dataset (RAG)
2:00 PM – 2:30 PM: Building your own app (hands-on lab)
2:30 PM – 4:00 PM: App deployment (Vercel/Streamlit)
4:30 PM – 5:00 PM: Wrap-up
This workshop provides a comprehensive introduction to building AI applications with large language models. Attendees will learn the foundations of models such as GPT-3.5/GPT-4 and LLaMa v2, including how they work, how to access them, and best practices for fine-tuning and prompting. A key part of the day will involve hands-on work with custom datasets to train models on specific tasks and document types. We’ll cover gathering quality data, cleaning and labeling, choosing model architectures, prompting techniques, and evaluating performance. The workshop wraps up with deployment strategies, including hosting models locally, leveraging APIs, monitoring, and maintaining production systems. Participants of all backgrounds are welcome: the material caters to beginners while still diving deep on topics critical to real-world language model adoption.
Key recent topics include chain-of-thought prompting, a technique that elicits step-by-step reasoning from a model; reinforcement learning from human feedback (RLHF) for improving answers over time; and cross-domain transfer learning, which leverages models trained in one domain for new domains with limited data.
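To make the first of these concrete, here is a minimal illustrative sketch of chain-of-thought prompting: appending a reasoning cue to the question encourages the model to show intermediate steps. The helper name and example question are placeholders, not part of the workshop materials.

```python
# Illustrative chain-of-thought prompt builder (hypothetical helper).
# Appending "Let's think step by step" is a common cue that nudges a
# model to produce intermediate reasoning before its final answer.
def cot_prompt(question: str) -> str:
    return f"{question}\nLet's think step by step."

prompt = cot_prompt(
    "A train leaves at 9:30 AM and the trip takes 8 hours. When does it arrive?"
)
print(prompt)
```

The resulting string is what you would send as the user message to any chat-style LLM API.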
Large language models like OpenAI’s GPT-3.5/GPT-4, Meta (Facebook)’s LLaMa v2, Google’s PaLM, and many more have sparked a new wave of AI capabilities, enabling more natural language processing, text generation, and code writing than previously possible. However, effectively leveraging these models requires specialized knowledge of model architectures, training approaches, prompting techniques, and infrastructure. At the same time, access to foundation models is expanding through APIs from companies like Anthropic, Cohere, and Hugging Face. This democratization opens up AI augmentation to a much broader audience.
There is a major need to equip developers, data scientists, and other practitioners with the capabilities to build impactful AI solutions powered by language models. This workshop aims to make large language model adoption more accessible by providing both a 101-level introduction and a deep dive into topics critical to real-world application development. Participants will gain hands-on experience while learning best practices around datasets, training, evaluation, prompting strategies, and deployment of AI systems. Our goal is to empower attendees to leave ready to utilize these transformative models within their own organizations and domains.
### Event description
This full-day workshop teaches you how open-source models like LLaMa and closed-source models such as OpenAI’s GPT-3.5 Turbo and GPT-4 can be used to build applications.
During the morning session you will focus on LLM fundamentals. Through **hands-on exercises and notebooks**, you will explore open-source and closed-source LLM APIs, writing Python scripts that interact with the models programmatically.
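As a taste of that programmatic access, here is a minimal sketch of a chat-style API call. It assumes the `openai` Python package (version 1.x) and an `OPENAI_API_KEY` environment variable; the model name, prompts, and `build_messages` helper are illustrative placeholders, not the workshop's actual exercises.

```python
# Sketch of programmatic LLM access, assuming the `openai` package
# (pip install openai) and an OPENAI_API_KEY environment variable.
# Model name and prompts below are illustrative placeholders.
import os

def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble the chat-format message list most LLM APIs expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a helpful assistant for enterprise document Q&A.",
    "Summarize the key risks in our Q3 vendor contracts.",
)

# Only call the API when a key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # requires openai>=1.0
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    )
    print(resp.choices[0].message.content)
```

The same message structure works with most hosted chat APIs, which is why the notebooks can switch between open- and closed-source backends.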
In the afternoon session we will begin building chatbots with custom datasets. You will also learn about approaches to debugging, prompt engineering, and retrieval-augmented generation (RAG) for grounding models in your own data.
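The core RAG idea can be sketched in a few lines: retrieve the document most relevant to a question, then splice it into the prompt. This toy version ranks documents by word overlap purely for illustration; real pipelines (including the ones in the workshop) use vector embeddings and a proper vector store, and the example documents below are invented.

```python
# Toy sketch of retrieval-augmented generation (RAG). Word-overlap
# scoring stands in for embedding similarity; documents are invented.
def tokenize(text: str) -> set:
    return set(text.lower().split())

def retrieve(question: str, documents: list, top_k: int = 1) -> list:
    """Rank documents by word overlap with the question, highest first."""
    q = tokenize(question)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:top_k]

def build_rag_prompt(question: str, documents: list) -> str:
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our headquarters are located in Mountain View, California.",
]
print(build_rag_prompt("How long do refunds take?", docs))
```

The assembled prompt, with the retrieved passage inlined as context, is then sent to the LLM exactly like any other chat message.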
**NOTE**: Attendees will have access to the full deep learning infrastructure for training AI models and deploying at scale. There is a nominal charge for the full day of compute and API access to language models hosted by Hugging Face or OpenAI. **Registration also includes a 1-year SFBay ACM membership ($20 value).**
**Interactive notebooks, hands-on exercises, slides, and Q&A sessions** will help you understand relevant concepts, APIs, and best practices.
### Access to the training materials
You will have access to the dedicated GitHub repository with all training resources.
You will be provided with a dedicated Anyscale compute cluster for the duration of the training. After the event, you can continue to run Ray on your laptop using the training material in the GitHub repo.
SFBay ACM Prof Dev Chair: Yashesh Shroff
Lunch, snacks, coffee, and community camaraderie included.
For more information on Registration, contact yshroff at g | m | a i l