Why Movement Is the Killer Learning App for Nonprofits

Beth's Blog: How Nonprofits Can Use Social Media

As a trainer and facilitator who works with nonprofit organizations and staffers, you have to be obsessed with learning theory to design and deliver effective instruction, run productive meetings, or embark on your own self-directed learning path. These theories take into account our minds and bodies.

Google at ICLR 2023

Google Research AI blog

Posted by Catherine Armato, Program Manager, Google. The Eleventh International Conference on Learning Representations (ICLR 2023) is being held this week as a hybrid event in Kigali, Rwanda. We are proud to be a Diamond Sponsor of ICLR 2023, a premier conference on deep learning, where Google researchers contribute at all levels.

How To Think Like An Instructional Designer for Your Nonprofit Trainings

Beth's Blog: How Nonprofits Can Use Social Media

In addition, I’m doing a lot of training of other trainers and am now an Adjunct Professor at the Monterey Institute of International Studies (a graduate school of Middlebury College). It is really important to pay attention to body language (discomfort, confusion, boredom, etc.).

Timaeus in 2024

The AI Alignment Forum

Published on February 20, 2025, 11:54 PM GMT. TL;DR: We made substantial progress in 2024: we published a series of papers that verify key predictions of Singular Learning Theory (SLT) [1, 2, 3, 4, 5, 6]. The S4 correspondence in small language models.

Stanford AI Lab Papers and Talks at ICLR 2022

Stanford AI Lab Blog

The International Conference on Learning Representations (ICLR) 2022 is being hosted virtually from April 25th through April 29th. Feel free to reach out to the contact authors directly to learn more about the work that’s happening at Stanford!

Research directions Open Phil wants to fund in technical AI safety

The AI Alignment Forum

In either of these settings, there’s a chance that the LLMs will write messages that encode meaning beyond the natural-language definitions of the words used. Activation monitoring: Probes on a model’s internal activations are one strategy for catching AIs taking subtly harmful or misaligned actions.
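
A minimal sketch of what such a probe could look like, assuming you already have per-example activation vectors with benign/harmful labels; the data, dimensions, and flagging threshold below are placeholders rather than anything from the post.

```python
# Minimal sketch of activation monitoring with a linear probe.
# The activations here are random placeholders; in practice they would be
# hidden states captured from a chosen layer of the model being monitored.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

d_model = 512
acts = rng.normal(size=(2000, d_model))    # per-example activation vectors
labels = rng.integers(0, 2, size=2000)     # 0 = benign, 1 = harmful (hypothetical labels)

X_train, X_test, y_train, y_test = train_test_split(
    acts, labels, test_size=0.2, random_state=0
)

# A linear probe is just a logistic regression on the activation vectors.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

# At deployment time, flag any sample whose predicted probability of the
# "harmful" class exceeds a threshold chosen on held-out data.
scores = probe.predict_proba(X_test)[:, 1]
flagged = scores > 0.9
print(f"held-out accuracy: {probe.score(X_test, y_test):.3f}, flagged: {flagged.sum()}")
```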

AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability

The AI Alignment Forum

Daniel Filan (00:28:50): If people remember my singular learning theory episodes, they’ll get mad at you for saying that quadratics are all there is, but it’s a decent approximation. (00:28:56): Whereas in the crosscoder paper, language modeling doesn’t seem like the kind of thing that is going to be very symmetric.
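
The quadratic remark refers to approximating the loss near a minimum by its second-order Taylor expansion; singular learning theory is concerned with degenerate minima where that approximation breaks down. A rough numerical illustration of the contrast (not from the episode; the example losses are invented):

```python
# Sketch (not from the episode): second-order Taylor approximation of a loss
# around its minimum, and a degenerate case where the quadratic term vanishes,
# i.e. the kind of singular minimum that singular learning theory studies.
import numpy as np

def quadratic_approx(loss, theta_star, theta, eps=1e-4):
    """Approximate loss(theta) by loss(theta*) + 0.5 * H * (theta - theta*)^2,
    with the second derivative H estimated by finite differences."""
    h = (loss(theta_star + eps) - 2 * loss(theta_star) + loss(theta_star - eps)) / eps**2
    return loss(theta_star) + 0.5 * h * (theta - theta_star) ** 2

thetas = np.linspace(-1.0, 1.0, 5)

# Regular minimum: L(theta) = theta^2, the quadratic approximation is exact.
regular = lambda t: t**2
print([round(quadratic_approx(regular, 0.0, t), 3) for t in thetas])   # matches t**2

# Degenerate minimum: L(theta) = theta^4, the second derivative at 0 is zero,
# so the quadratic approximation is flat and misses the shape of the basin.
singular = lambda t: t**4
print([round(quadratic_approx(singular, 0.0, t), 3) for t in thetas])  # ~all zeros
```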
