
Six Books About Skills You Need To Succeed in A Networked World

Beth's Blog: How Nonprofits Can Use Social Media

This book is filled with great tips on designing engaging learning experiences that help your participants connect, get inspired, and engage. The model balances content, learning design, and participants. The ideas, tips, and tricks are grounded in adult learning theory, but the book is very practical.


The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

Some relevant criteria for evaluating a specification language include: How expressive is the language? Similarly, a complete answer to (3) would be a (pseudo)metric d on the space of all reward functions which quantifies their similarity. We should only trust a reward learning method that is at least reasonably robust to such errors.
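For context on the second criterion, a pseudometric behaves like a distance except that two distinct reward functions may sit at distance zero; the standard axioms (general mathematics, not specific to this post) are:

\[
\begin{aligned}
& d(R_1, R_1) = 0, \\
& d(R_1, R_2) = d(R_2, R_1), \\
& d(R_1, R_3) \le d(R_1, R_2) + d(R_2, R_3),
\end{aligned}
\qquad \text{for all reward functions } R_1, R_2, R_3,
\]

where \(d(R_1, R_2) = 0\) is permitted for distinct rewards, which is natural here: two reward functions that induce the same ordering over policies can reasonably be treated as equivalent.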


Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

Kochenderfer. Contact: philhc@stanford.edu. Links: Paper. Keywords: deep learning or neural networks, sparsity and feature selection, variational inference, (application) natural language and text processing. Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. Authors: Jeff Z.


Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

We think this adversarial style of evaluation and iteration is necessary to ensure an AI system has a low probability of catastrophic failure. We'd like to support more such evaluations, especially on scalable oversight protocols like AI debate, and on questions such as "Which rules are LLM agents happy to break, and which are they more committed to?"


Other Papers About the Theory of Reward Learning

The AI Alignment Forum

We also managed to leverage these results to produce a new method for conservative optimisation, which tells you how much (and in what way) you can optimise a proxy reward, based on the quality of that proxy (as measured by a STARC metric), in order to be guaranteed that the true reward doesn't decrease (thereby preventing the Goodhart drop).
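As a rough illustration of the kind of quantity involved, here is a minimal sketch, not the exact STARC construction (which also canonicalises rewards, e.g. to factor out shaping, before comparing them): it centres and scale-normalises two reward vectors over a small finite state-action space and measures how far apart they are. The function name and example values are invented for illustration.

```python
# Illustrative sketch only: a simplified, STARC-style distance between two
# reward functions on a small finite state-action space. The real STARC
# construction also canonicalises rewards before normalising; that step is
# omitted here for brevity.
import numpy as np

def starc_like_distance(r1: np.ndarray, r2: np.ndarray) -> float:
    """Compare two reward vectors after centring and scale-normalising them.

    r1, r2: arrays of shape (n_states * n_actions,) giving R(s, a).
    Returns a value in [0, 2]; 0 means the normalised rewards coincide.
    """
    def normalise(r):
        r = r - r.mean()                     # remove a constant offset
        norm = np.linalg.norm(r)
        return r / norm if norm > 0 else r   # rescale to unit length
    return float(np.linalg.norm(normalise(r1) - normalise(r2)))

# The conservative-optimisation idea from the excerpt, informally: the smaller
# this distance between proxy and true reward, the more the proxy can be
# optimised while guaranteeing the true reward does not decrease.
true_reward = np.array([1.0, 0.0, 0.5, 0.2])    # hypothetical values
proxy_reward = np.array([0.9, 0.1, 0.5, 0.3])   # hypothetical values
print(starc_like_distance(true_reward, proxy_reward))
```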


Google at NeurIPS 2022

Google Research AI blog

Derrick Xin, Behrooz Ghorbani, Ankush Garg, Orhan Firat, Justin Gilmer. Associating Objects and Their Effects in Video Through Coordination Games: Erika Lu, Forrester Cole, Weidi Xie, Tali Dekel, William Freeman, Andrew Zisserman, Michael Rubinstein. Increasing Confidence in Adversarial Robustness Evaluations: Roland S.


AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

And the way you said it just then, it sounded more like the first one: here's a nice new metric of how good your mechanistic explanation is. (00:26:47): And so what this gives us is an interaction metric where we can measure how bad this hypothesis is. But I don't know, it feels kind of surprising for that to be the explanation.
