
How To Think Like An Instructional Designer for Your Nonprofit Trainings

Beth's Blog: How Nonprofits Can Use Social Media

So, expect to see regular reflections on good instructional design and delivery for any topic, but especially those related to digital technology and social media. As someone who has been designing and delivering training for nonprofits over the past twenty years, I find the most exciting part is applying theory to practice.


Six Tips for Evaluating Your Nonprofit Training Session

Beth's Blog: How Nonprofits Can Use Social Media

While a participant survey is an important piece of your evaluation, it is critical to incorporate a holistic reflection on your workshop. This includes documenting your session, reviewing your decks and exercises, analyzing your instructional design, and figuring out how to improve it. Use Learning Theory.



The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

Concretely, this research agenda involves answering questions such as: What is the right method for expressing goals and instructions to AI systems? Which specification learning algorithms are guaranteed to converge to a good specification? Should it just be maximised naïvely, or are there better methods?
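To make the naïve-maximisation question concrete, here is a toy Python sketch (my own illustration, not from the post): when a learned reward estimate carries heterogeneous error, taking its argmax systematically favours actions whose estimates happen to err upward (the optimiser's curse), which is one reason alternatives such as pessimistic lower-confidence-bound selection get discussed. The reward values, noise model, and the factor of 2 in the bound are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "true" reward over ten candidate actions, plus a learned
# estimate of it with heterogeneous noise (standing in for an imperfectly
# learned specification). All values here are illustrative assumptions.
true_reward = rng.normal(0.0, 1.0, size=10)
noise_std = rng.uniform(0.1, 1.0, size=10)
learned_reward = true_reward + rng.normal(0.0, 1.0, size=10) * noise_std

# Naive maximisation: act on the learned estimate directly. The argmax
# tends to land where the estimation error happens to be largest.
naive = int(np.argmax(learned_reward))

# One discussed alternative: pessimistic selection, maximising a lower
# confidence bound so that high-uncertainty actions are penalised.
pessimistic = int(np.argmax(learned_reward - 2.0 * noise_std))

print("true reward, naive choice:      ", true_reward[naive])
print("true reward, pessimistic choice:", true_reward[pessimistic])
```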


Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

Kochenderfer. Contact: philhc@stanford.edu. Links: Paper. Keywords: deep learning or neural networks, sparsity and feature selection, variational inference, (application) natural language and text processing.

Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. Authors: Jeff Z.
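For readers curious about the loss named in that last entry: as published by HaoChen et al. (NeurIPS 2021), the spectral contrastive loss has the form −2·E[f(x)ᵀf(x⁺)] + E[(f(x)ᵀf(x′))²], attracting embeddings of augmented views of the same input while repelling independent pairs. Below is a minimal PyTorch sketch of that form; it is my own rendering, not the authors' code, and the batch-level estimation and diagonal masking are assumptions.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Spectral contrastive loss for a batch of positive pairs.

    z1, z2: (batch, dim) embeddings of two augmentations of the same inputs.
    Loss = -2 * E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2], with the second
    expectation estimated over off-diagonal (independent) pairs in the batch.
    """
    batch = z1.shape[0]
    # Positive term: agreement between augmented views of the same input.
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Negative term: squared inner products between cross pairs, excluding
    # the diagonal, which holds the positive pairs.
    cross = z1 @ z2.T  # (batch, batch)
    mask = ~torch.eye(batch, dtype=torch.bool, device=z1.device)
    neg = (cross[mask] ** 2).mean()
    return pos + neg

# Example with random embeddings standing in for an encoder's outputs.
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
print(spectral_contrastive_loss(z1, z2).item())
```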


Other Papers About the Theory of Reward Learning

The AI Alignment Forum

The third and final class of tasks I look at in this paper is a new category of objectives that I refer to as modal objectives, where the agent is given an instruction expressed not just in terms of what does happen along a given trajectory, but also in terms of what could happen.
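To illustrate the distinction with a toy sketch of my own (not from the paper): an ordinary objective scores only the states a trajectory actually visits, while a modal objective can also score what the agent could have reached at each step. The graph, goal state, and scoring rules below are hypothetical.

```python
# A tiny deterministic transition graph: state -> set of successor states.
successors = {
    "a": {"b", "c"},
    "b": {"d"},
    "c": {"d"},
    "d": set(),
}

def reachable(state: str) -> set[str]:
    """All states reachable from `state`, i.e. what *could* happen."""
    seen, frontier = set(), {state}
    while frontier:
        s = frontier.pop()
        if s not in seen:
            seen.add(s)
            frontier |= successors[s]
    return seen

def ordinary_return(trajectory: list[str]) -> int:
    # Scores only the realised trajectory: +1 per visit to the goal "d".
    return sum(1 for s in trajectory if s == "d")

def modal_return(trajectory: list[str]) -> int:
    # Also scores possibility: +1 for every step at which "d" *could*
    # still be reached, whether or not the agent ends up going there.
    return sum(1 for s in trajectory if "d" in reachable(s))

traj = ["a", "b", "d"]
print(ordinary_return(traj), modal_return(traj))  # 1 vs 3
```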


Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

We're interested in more research on this, and other stress tests of today's state-of-the-art alignment methods. We want to fund research that identifies the conditions under which these failure modes occur, and makes progress toward robust methods of mitigating or avoiding them.