Moving from Red AI to Green AI, Part 1: How to Save the Environment and Reduce Your Hardware Costs

DataRobot

They are used for different applications, but they nonetheless suggest that developments in infrastructure (access to GPUs and TPUs for computing) and in deep learning theory have led to very large models. The natural follow-up question is whether this increase in computing requirements has led to a corresponding increase in accuracy.
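
To make the question concrete, here is a minimal sketch (in Python, with purely illustrative model names, accuracies, and FLOP counts that are not from the article) of how one might compare models by accuracy gained per unit of training compute, in the spirit of the efficiency reporting that Green AI advocates:

```python
# Hypothetical sketch: comparing models by accuracy per unit of training compute.
# The model names, accuracies, and FLOP counts are illustrative placeholders,
# not figures from the article.

models = {
    # name: (test accuracy, estimated training FLOPs)
    "small_model": (0.85, 1e18),
    "large_model": (0.88, 1e21),
}

def efficiency(accuracy: float, flops: float) -> float:
    """Toy 'green' efficiency score: accuracy per exaFLOP of training compute."""
    return accuracy / (flops / 1e18)

for name, (acc, flops) in models.items():
    print(f"{name}: accuracy={acc:.2f}, efficiency={efficiency(acc, flops):.4f} acc/exaFLOP")
```

In this toy example the larger model buys a few points of accuracy at a thousandfold increase in compute, which is exactly the trade-off the article asks readers to weigh.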

Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

We’re excited to share all the work from SAIL that’s being presented at the main conference, at the Datasets and Benchmarks track, and at the various workshops; you’ll find links to papers, videos, and blogs below.

Stanford AI Lab Papers and Talks at ICLR 2022

Stanford AI Lab Blog

Feel free to reach out to the contact authors directly to learn more about the work that’s happening at Stanford!

Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

Interpretability benchmarks: We’d like to support more benchmarks for interpretability research. A benchmark should consist of a set of tasks that good interpretability methods should be able to solve. (Cited: Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark by Pan et al.)
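
As a purely hypothetical illustration of what "a set of tasks that good interpretability methods should be able to solve" could look like in code, here is a minimal sketch of a benchmark task interface; the names (InterpretabilityTask, evaluate, score) are placeholders of ours and not anything specified in the Open Phil post:

```python
# Hypothetical sketch of an interpretability benchmark task; all names are illustrative.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class InterpretabilityTask:
    name: str
    model: Any                          # the model to be explained
    question: str                       # e.g. "which attention heads implement this behavior?"
    ground_truth: Any                   # known answer the task was constructed around
    score: Callable[[Any, Any], float]  # compares a method's answer to the ground truth

def evaluate(method: Callable[[Any, str], Any],
             tasks: list[InterpretabilityTask]) -> float:
    """Average score of an interpretability method across benchmark tasks."""
    return sum(t.score(method(t.model, t.question), t.ground_truth) for t in tasks) / len(tasks)
```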

Google at NeurIPS 2022

Google Research AI blog

Ruoxi Sun, Hanjun Dai, Adams Yu
Drawing Out of Distribution with Neuro-Symbolic Generative Models
Yichao Liang, Joshua B.

AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

In this episode, I speak with Jason Gross about his agenda to benchmark interpretability in this way, and about his exploration of the intersection of proofs and modern machine learning. Or, according to our singular learning theory friends, the local learning coefficients should be small.
