
The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

Some notable candidates include Goodhart's Law: which specification-learning algorithms are guaranteed to converge to a good specification? How do these errors depend on how much optimisation pressure we exert, and on other relevant parameters? Are there any distinct failure modes that could be individuated and characterised?


Other Papers About the Theory of Reward Learning

The AI Alignment Forum

Goodhart's Law in Reinforcement Learning. As you probably know, "Goodhart's Law" is an informal principle which says that "if a proxy is used as a target, it will cease to be a good proxy". This paper is also discussed in this post (Paper 4). For details, see the full paper.
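As a rough illustration (not taken from the paper itself): one simple form of Goodhart's Law, sometimes called "regressional Goodhart", can be shown with a toy simulation. Below, a candidate's proxy score is its true quality plus independent heavy-tailed noise; when we select the candidate with the highest proxy score, increasing the optimisation pressure (pool size) stops buying us true quality, because the winner is increasingly the candidate with the largest noise draw. All names and the noise distribution here are illustrative assumptions.

```python
import random

random.seed(0)

def candidate():
    # Illustrative assumption: bounded true quality, heavy-tailed proxy error.
    true = random.uniform(0.0, 1.0)
    proxy = true + random.paretovariate(1.5)  # proxy = true quality + noise
    return true, proxy

def select_best(n, key):
    # Optimisation pressure = size of the candidate pool we select from.
    pool = [candidate() for _ in range(n)]
    return max(pool, key=key)

def avg_true_of_proxy_winner(n, trials=300):
    # Average true quality of the candidate that maximises the *proxy*.
    total = 0.0
    for _ in range(trials):
        best_true, _ = select_best(n, key=lambda c: c[1])
        total += best_true
    return total / trials

for n in (10, 100, 1000):
    print(n, round(avg_true_of_proxy_winner(n), 3))
```

With heavy-tailed noise, the averages stay near the unoptimised baseline (about 0.5) even as the pool grows, whereas selecting directly on true quality would approach 1.0: the proxy ceases to be a good target precisely when it is optimised hard.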



Google at NeurIPS 2022

Google Research AI blog

A Workshop for Algorithmic Efficiency in Practical Neural Network Training. Organizers include: Zachary Nado, George Dahl, Naman Agarwal, Aakanksha Chowdhery. Invited Speakers include: Aakanksha Chowdhery, Priya Goyal.

Human in the Loop Learning (HiLL) Workshop. Organizers include: Fisher Yu, Vittorio Ferrari. Invited Speakers include: Dorsa (..)


AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

Daniel Filan (00:28:50): If people remember my singular learning theory episodes, they'll get mad at you for saying that quadratics are all there is, but it's a decent approximation. (00:28:56): Or is that

Daniel Filan (01:40:07): Well, if we knew the Chinchilla scaling law, we would know this.

Jason Gross (01:40:16): Probably.
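For context on the exchange above: the Chinchilla scaling law refers to the parametric loss fit from Hoffmann et al. (2022), which models pretraining loss as a function of parameter count N and token count D. A minimal sketch, using the published fitted coefficients (approximate values, quoted from the paper, not derived here):

```python
def chinchilla_loss(n_params, n_tokens):
    # L(N, D) = E + A / N^alpha + B / D^beta
    # Coefficients are the approximate fitted values reported by
    # Hoffmann et al. (2022).
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params ** alpha + B / n_tokens ** beta

# e.g. roughly Chinchilla's own setting: 70B parameters, 1.4T tokens
print(chinchilla_loss(70e9, 1.4e12))
```

The irreducible term E dominates at scale, which is why knowing the fitted law would pin down how much loss remains explainable by model and data size.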
