
Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

We’re excited to share all the work from SAIL being presented at the main conference, the Datasets and Benchmarks track, and the various workshops. You’ll find links to papers, videos, and blogs below.


Stanford AI Lab Papers and Talks at ICLR 2022

Stanford AI Lab Blog

Feel free to reach out to the contact authors directly to learn more about the work that’s happening at Stanford!


Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

Interpretability benchmarks: We’d like to support more benchmarks for interpretability research. A benchmark should consist of a set of tasks that good interpretability methods should be able to solve. An example of related work is Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark by Pan et al.
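
To make the "set of tasks" framing concrete, here is a minimal sketch of how such a benchmark harness could be organized in code. All names here (Task, evaluate, score) are hypothetical illustrations for this post, not the API of any existing interpretability benchmark.

```python
# Hypothetical sketch of an interpretability benchmark as a set of tasks.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    name: str
    model: Callable                 # the model whose behavior should be explained
    ground_truth: object            # e.g. the known circuit or feature to be recovered
    score: Callable[[object, object], float]  # compares a method's output to ground truth


def evaluate(method: Callable[[Callable], object], tasks: List[Task]) -> Dict[str, float]:
    """Run an interpretability method on every task and report per-task scores."""
    return {t.name: t.score(method(t.model), t.ground_truth) for t in tasks}
```

A method that "solves" the benchmark would then be one whose per-task scores clear some agreed threshold across the whole task set.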


Google at NeurIPS 2022

Google Research AI blog

Ruoxi Sun, Hanjun Dai, Adams Yu
Drawing Out of Distribution with Neuro-Symbolic Generative Models
Yichao Liang, Joshua B.


AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

In this episode, I speak with Jason Gross about his agenda to benchmark interpretability in this way, and his exploration of the intersection of proofs and modern machine learning.

"And you do some neat little tricks, but it's like-"
Jason Gross (00:04:34): Interval propagation and case analysis.
Daniel Filan (00:04:36): Yeah.
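
For readers unfamiliar with the first of those tricks, below is a minimal sketch of interval propagation through a tiny linear-plus-ReLU network, bounding its output over a box of inputs. The weights and bounds are illustrative placeholders; this is not the actual proof pipeline discussed in the episode.

```python
# Minimal illustration of interval propagation (interval bound propagation).
import numpy as np


def interval_linear(lo, hi, W, b):
    """Propagate elementwise input bounds [lo, hi] through y = W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b  # lower bound: lo where weights are positive
    out_hi = W_pos @ hi + W_neg @ lo + b  # upper bound: hi where weights are positive
    return out_lo, out_hi


def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0), np.maximum(hi, 0)


# Example: bound the output of a tiny two-layer network over inputs in [-1, 1]^3.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
lo, hi = -np.ones(3), np.ones(3)
lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)
print(f"output guaranteed within [{lo[0]:.3f}, {hi[0]:.3f}]")
```

Case analysis then tightens such bounds by splitting the input region into pieces and propagating intervals through each piece separately.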
