
The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

A naïve answer might be to measure their L2-distance. For example, a complete answer to question (2) would be a set of necessary and sufficient conditions on two reward functions R1, R2 that characterise when it would be acceptable (as measured by R1) to maximise R2 instead of R1.
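A minimal sketch of why the L2-distance is a naïve comparison between reward functions. Everything here is an illustrative assumption, not from the post: the toy state-action shapes, the arrays `R1`, `R2`, `R3`, and the shift by a constant (which leaves optimal behaviour unchanged while inflating the L2-distance arbitrarily).

```python
import numpy as np

# Hypothetical toy setup: reward functions on a small finite
# state-action space, represented as (state, action) arrays.
rng = np.random.default_rng(0)
R1 = rng.normal(size=(4, 2))              # "true" reward R1
R2 = R1 + 0.1 * rng.normal(size=(4, 2))   # nearby proxy reward R2

# The naive comparison from the excerpt: L2-distance between them.
l2_distance = np.sqrt(np.sum((R1 - R2) ** 2))
print(f"L2 distance R1 vs R2: {l2_distance:.4f}")

# Why this is naive: adding a constant to every reward leaves the
# optimal policy unchanged, yet makes the L2-distance huge.
R3 = R1 + 100.0                           # behaviourally equivalent to R1
print(f"L2 distance R1 vs R3: {np.sqrt(np.sum((R1 - R3) ** 2)):.4f}")
```

This is why the question calls for conditions on the pair (R1, R2) that track acceptability of maximisation, rather than raw distance in function space.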


Other Papers About the Theory of Reward Learning

The AI Alignment Forum

Goodhart's Law in Reinforcement Learning. As you probably know, "Goodhart's Law" is an informal principle which says that "if a proxy is used as a target, it will cease to be a good proxy". This paper is discussed in more detail in this post (Paper 4); for details, see the full paper.
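A toy numerical illustration of the principle, not taken from the paper: all names, the noise model, and the selection-of-top-states setup are assumptions. A proxy that correlates with the true reward across typical states can still score poorly exactly where it is optimised hardest.

```python
import numpy as np

# Assumed toy model: proxy = true reward + noise, over many states.
rng = np.random.default_rng(1)
n_states = 1000
true_reward = rng.normal(size=n_states)
proxy_reward = true_reward + 2.0 * rng.normal(size=n_states)

# Across all states the proxy looks like a decent target...
corr = np.corrcoef(true_reward, proxy_reward)[0, 1]
print(f"correlation over all states: {corr:.3f}")

# ...but "optimising" it (selecting the top-proxy states) largely
# selects states where the noise, not the true reward, is large.
top = np.argsort(proxy_reward)[-10:]
print(f"mean true reward at proxy-optimal states: {true_reward[top].mean():.3f}")
print(f"best achievable true reward: {true_reward.max():.3f}")
```

The gap between the last two numbers is the Goodhart effect: the proxy ceases to be a good guide precisely under optimisation pressure.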



Google at NeurIPS 2022

Google Research AI blog

A Workshop for Algorithmic Efficiency in Practical Neural Network Training
Workshop Organizers include: Zachary Nado, George Dahl, Naman Agarwal, Aakanksha Chowdhery
Invited Speakers include: Aakanksha Chowdhery, Priya Goyal

Human in the Loop Learning (HiLL) Workshop
Organizers include: Fisher Yu, Vittorio Ferrari
Invited Speakers include: Dorsa (..)


AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

And the takeaway from this paper is that you can use proofs to measure how much compression you get: you prove the statement, and the measure of compression is how long your proof is. And a technical note: the proof needs to be in some first-order system, or alternatively you need to measure proof-checking time rather than proof length.
