
TL;DR: On Thursday (3/17) at 5pm in room 3-333, Jared Kaplan will give a talk on scaling laws for neural networks. This is the 3rd talk in a series on AI Alignment and the Long-Term Future (Agathon Discussion Series) that AI Safety Club is hosting in a hybrid format.

Future AGI models may be scaled up by multiple orders of magnitude beyond current models. Can we extrapolate the behavior of large models from smaller ones? First, Jared will briefly review scaling laws and introduce Anthropic’s pragmatic definitions of, and approach to, the alignment problem, along with some very simple baselines and their associated scaling trends. Next, he will focus on preference models and the details of their scaling laws. Finally, he will present results from using preference modeling to train helpful and harmless language models.

Kaplan is a professor of physics at Johns Hopkins University, currently on leave as a co-founder of Anthropic. He is a graduate of Stanford University and earned his Ph.D. in high energy physics from Harvard University.

Here are the detailed schedule (https://www.harvardea.org/agathon) and the RSVP form (https://airtable.com/shrHcggOEIevXv3St). Dinner will be provided at the in-person venues.

Location: MIT Room 3-333
