
Moving from Red AI to Green AI, Part 1: How to Save the Environment and Reduce Your Hardware Costs

DataRobot

This increase in accuracy is important for making AI applications good enough for production, but it has come with an explosion in model size. In the graph below, borrowed from the same article, you can see how some of the most cutting-edge deep learning models have grown over time.


The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

The answer to this question should be something like a metric over some type of task specification (such as reward functions), according to which two task specifications have a small distance if and only if they are similar (in some relevant and informative sense).
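The excerpt asks for a metric over task specifications under which two specifications are close exactly when they are similar in a relevant sense. As a toy illustration only (not the actual STARC construction, which also canonicalises away potential shaping before comparing), one could standardise reward functions, represented here as vectors over state-action pairs, and compare them in L2; the function name and the simplifications are assumptions of this sketch.

```python
import numpy as np

def reward_distance(r1, r2, eps=1e-12):
    """Toy pseudometric over reward functions given as vectors.

    Standardising each reward (centre, then scale-normalise) makes the
    distance invariant to constant shifts and positive rescaling, so
    rewards that specify the same task up to those transformations get
    distance ~0, while genuinely different tasks get a positive distance.
    """
    def standardise(r):
        r = np.asarray(r, dtype=float)
        r = r - r.mean()                 # invariant to constant shifts
        n = np.linalg.norm(r)
        return r / n if n > eps else r   # invariant to positive rescaling
    return float(np.linalg.norm(standardise(r1) - standardise(r2)))

r = [1.0, 2.0, 3.0]
print(reward_distance(r, [2.0, 4.0, 6.0]))  # ~0: same task up to rescaling
print(reward_distance(r, [3.0, 2.0, 1.0]))  # > 0: a genuinely different task
```

A real metric of this kind would be defined over full MDP reward functions and would need the "small distance iff similar" property to hold with respect to induced optimal policies, which this sketch does not attempt to show.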



Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

Kochenderfer. Contact: philhc@stanford.edu. Links: Paper. Keywords: deep learning or neural networks, sparsity and feature selection, variational inference, (application) natural language and text processing.

Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. Authors: Jeff Z.


Other Papers About the Theory of Reward Learning

The AI Alignment Forum

We also managed to leverage these results to produce a new method for conservative optimisation that tells you how much (and in what way) you can optimise a proxy reward, based on the quality of that proxy (as measured by a STARC metric), in order to be guaranteed that the true reward doesn't decrease (and thereby prevent the Goodhart drop).


Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

Applications (here) start with a simple 300-word expression of interest and are open until April 15, 2025. We plan to fund $40M in grants and have available funding for substantially more, depending on application quality. Inter-model messages: some applications of LLMs involve multiple models working together.


Google at NeurIPS 2022

Google Research AI blog

A Workshop for Algorithmic Efficiency in Practical Neural Network Training. Workshop Organizers include: Zachary Nado, George Dahl, Naman Agarwal, Aakanksha Chowdhery. Invited Speakers include: Aakanksha Chowdhery, Priya Goyal.

Human in the Loop Learning (HiLL) Workshop. Organizers include: Fisher Yu, Vittorio Ferrari. Invited Speakers include: Dorsa (..)


AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

And the way you said it just then, it sounded more like the first one: here's a nice new metric of how good your mechanistic explanation is. (00:26:47): And so what this gives us is an interaction metric where we can measure how bad this hypothesis is. But I don't know, it feels kind of surprising for that to be the explanation.
