Remove Attention Remove Instructional Remove Learning Theory
article thumbnail

How To Think Like An Instructional Designer for Your Nonprofit Trainings

Beth's Blog: How Nonprofits Can Use Social Media

So, expect to see regular reflections on good instructional design and delivery for any topic, but especially digital technology and social media related. As someone who has been designing and delivering training for nonprofits over the past twenty years, the most exciting part is apply theory to your practice.

article thumbnail

Stanford AI Lab Papers and Talks at NeurIPS 2021

Stanford AI Lab Blog

Kochenderfer Contact : philhc@stanford.edu Links: Paper Keywords : deep learning or neural networks, sparsity and feature selection, variational inference, (application) natural language and text processing Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss Authors : Jeff Z.

Contact 40
professionals

Sign Up for our Newsletter

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.

article thumbnail

The Theoretical Reward Learning Research Agenda: Introduction and Motivation

The AI Alignment Forum

Concretely, this research agenda involves answering questions such as: What is the right method for expressing goals and instructions to AI systems? Which specification learning algorithms are guaranteed to converge to a good specification? However, this is not the only option, and it is not self-evident that it is the right choice.

article thumbnail

The Future of Social: Gen Z

NonProfit Hub

Beth is an expert in facilitating online and offline peer learning, curriculum development based on traditional adult learning theory and other instructional approaches. She has trained thousands of nonprofits around the world. Gen Z by the Numbers. It’s the reason we’re not in the Dark Ages anymore.

Social 28
article thumbnail

Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

display inputs on which LLMs take undesirable/misaligned actions without being instructed to do so. 3.6) ) than we do inputs that include instructions to do some harmful task (as in Andriushchenko et al. Kumar et al. , OpenAI , Yuan et al. Jrviniemi and Hubinger ( 4) , and Meinke et al. Jrviniemi and Hubinger ( 4) , Meinke et al.