How To Think Like An Instructional Designer for Your Nonprofit Trainings

Beth's Blog: How Nonprofits Can Use Social Media

So, expect to see regular reflections on good instructional design and delivery for any topic, but especially topics related to digital technology and social media. As someone who has been designing and delivering training for nonprofits over the past twenty years, the most exciting part is applying theory to your practice.


Why Movement Is the Killer Learning App for Nonprofits

Beth's Blog: How Nonprofits Can Use Social Media

As a trainer and facilitator who works with nonprofit organizations and staffers, you have to be obsessed with learning theory to design and deliver effective instruction, have productive meetings, or embark on your own self-directed learning path.


The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

Concretely, this research agenda involves answering questions such as: What is the right method for expressing goals and instructions to AI systems? Some relevant criteria for evaluating a specification language include: How expressive is the language? Are there things it cannot express?


Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

Kochenderfer. Contact: philhc@stanford.edu. Links: Paper. Keywords: deep learning or neural networks, sparsity and feature selection, variational inference, (application) natural language and text processing. Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. Authors: Jeff Z.


Other Papers About the Theory of Reward Learning

The AI Alignment Forum

The third and final class of tasks I look at in this paper is a new category of objectives that I refer to as modal objectives, where the agent is given an instruction expressed not just in terms of what does happen along a given trajectory, but also in terms of what could happen. This paper is also discussed in this post (Paper 4).


Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

In either of these settings, there's a chance that the LLMs will write messages that encode meaning beyond the natural-language definitions of the words used. Externalizing reasoning: it could be safer to have much smaller language models that put more reasoning into natural language. Kumar et al., OpenAI, Yuan et al.