The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

Finally, in the last post, I will also provide resources for anyone who wants to contribute to this (or similar) research, in the form of both open problems and some thoughts on how those problems could be approached. Note that reinforcement learning refers both to a problem setting and to a set of algorithms.
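The distinction between RL as a problem setting and RL as a set of algorithms can be made concrete in code. The sketch below is illustrative only and not from the post: the toy two-state MDP and the tabular Q-learning loop are my own minimal examples, with the MDP tuple standing in for the "problem setting" and Q-learning standing in for one "algorithm" that solves it.

```python
import random

# The *problem setting*: a toy MDP, i.e. states, actions, dynamics,
# rewards, and a discount factor. (Hypothetical example, not from the post.)
states = [0, 1]
actions = [0, 1]
gamma = 0.9

def step(s, a):
    # Deterministic toy dynamics: action 1 flips the state, action 0 stays.
    s_next = 1 - s if a == 1 else s
    reward = 1.0 if s_next == 1 else 0.0  # being in state 1 is rewarded
    return s_next, reward

# One *algorithm* for this problem: tabular Q-learning.
Q = {(s, a): 0.0 for s in states for a in actions}
alpha = 0.5  # learning rate
random.seed(0)
s = 0
for _ in range(200):
    a = random.choice(actions)  # uniform exploration policy
    s_next, r = step(s, a)
    # Standard Q-learning update toward the bootstrapped target.
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    s = s_next

# The greedy policy in state 0 should prefer moving to state 1.
best_action = max(actions, key=lambda a: Q[(0, a)])
```

The same MDP could instead be handed to value iteration or a policy-gradient method; the problem stays fixed while the algorithm varies, which is the distinction the note is drawing.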

How To Think Like An Instructional Designer for Your Nonprofit Trainings

Beth's Blog: How Nonprofits Can Use Social Media

Designing and delivering a training to a nonprofit audience is not just about content delivery, or about putting together a PowerPoint and answering questions. If you want to get results, you need to think about instructional design and learning theory, and there is no shortage of learning theories and research to draw on.

Other Papers About the Theory of Reward Learning

The AI Alignment Forum

To me, the main takeaway from this paper is that we should be careful with the assumption that the basic RL setting really captures everything we intuitively consider part of the problem domain of sequential decision-making. For the full argument, see the main paper; it is also discussed in more detail in this post.

Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

This guide provides an opinionated overview of recent work and open problems across areas like adversarial testing, model transparency, and theoretical approaches to AI alignment.

Alternative approaches to mitigating AI risks

These research areas lie outside the scope of the clusters above. Kumar et al.,