What would you teach students in <1 min to prepare them to change the world?

Hello everyone, I'm Hailey and I manage the social content strategy for Khan Academy, one of the world's largest nonprofit EdTech platforms, providing free Pre-K through college curricula in more than 50 languages to over 140 million users worldwide. You may know us from our original YouTube channel, but we're now a global organization partnering with school districts across the US, Brazil, and India to improve learning outcomes in critical areas. In particular, we are now trying to move the needle on STEM education by accelerating learning in historically under-resourced communities.

I'm tasked with launching our social content strategy to drive meaningful learning to as many students as possible. I'm conceptualizing a TikTok/YouTube Shorts series with advice, tips, lessons, etc. that will get young learners excited about EA topics. I would love to curate a list of topics from the EA community that you think would give students the best chance of creating a better future.

I think there is a unique opportunity through Khan Academy to drive millions of learners toward the topics that could build a better future, so I'd love to make the most of our social platforms with your insights! If you'd like to connect, feel free to shoot me an email at hailey@khanacademy.org

Comments

Hello, Hailey. As a big fan and long-time user of Khan Academy, I'm thrilled to see KA express interest in creating EA-related content! I'll send you an email shortly with some thoughts and suggestions.

I’d recommend concepts from https://conceptually.org/. Should be short enough for TikTok videos!

Thanks! I'm the author of most of the concepts on Conceptually, and also the founder of Non-trivial. I'll send you an email. :)

Going very broad, I'd recommend going through the EA Forum Topics Wiki and considering the concepts included there. Similarly, you may look at the posts that make up the EA Handbook and look for suitable concepts there.

Techniques for reasoning under uncertainty, like Fermi estimates!

Yeah, I support this. A free tool like https://www.getguesstimate.com/scratchpad could help late-highschoolers (a) understand that you can make guesses under uncertainty and see how uncertain the result is, and (b) make decisions about their careers using such tools. That tool could be demonstrated in a TikTok, I reckon.
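To give a flavor of what a Fermi estimate with propagated uncertainty looks like under the hood, here's a minimal Monte Carlo sketch of the classic "piano tuners in Chicago" question. All the input ranges are made-up guesses; the point is the technique, not the numbers.

```python
import math
import random

# Classic Fermi estimate: how many piano tuners work in Chicago?
# Each input is a guessed (low, high) range; we sample repeatedly and
# propagate the uncertainty through the calculation.

def log_uniform(low, high):
    """Sample between low and high, uniform on a log scale (good for wide ranges)."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

def one_sample():
    population = log_uniform(2e6, 4e6)             # people in Chicago (guess)
    people_per_piano = log_uniform(50, 200)        # pianos are fairly rare (guess)
    tunings_per_piano_per_year = 1                 # assume annual tuning
    tunings_per_tuner_per_year = log_uniform(500, 1500)  # guess
    pianos = population / people_per_piano
    return pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year

samples = sorted(one_sample() for _ in range(10_000))
median = samples[len(samples) // 2]
low, high = samples[len(samples) // 20], samples[-(len(samples) // 20)]
print(f"Median ~{median:.0f} tuners, 90% interval roughly {low:.0f}-{high:.0f}")
```

The answer comes out as a range rather than a single number, which is exactly the lesson: you can reason usefully even when every input is a rough guess.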

[anonymous]

As a fellow ed-tech traveller and big fan of Khan Academy, I'm very excited to see this initiative!

If the goal is to get young learners excited, compelling stories and/or thought experiments can be a good entry point for the format you're thinking of.

None of this is mutually exclusive:


- The drowning child, definitely: showing that you don't need to literally save a drowning child to save lives.


- Related - maybe an overall framing about heroism, how much easier it is to be a hero than it seems. This can be a good narrative thread to get the core points of EA thinking across.


- Something from the point of view of an idealistic young student who is undecided about her career choices, giving her different frames to think through her problem: the 80,000 hours concept, how much of your life it represents, how a good career can meaningfully contribute those 80,000 hours toward positive outcomes, and possibly the earning-to-give concept as well (but presented as a complement rather than a substitute).


- On longtermism, I find the thought experiment at the beginning of What We Owe the Future about living every sentient life quite moving, and motivating for thinking about the significance of living a moral life. Also, the hypothesis that we could be standing at the beginning of history is easiest to get across with a visual format such as video.


- Existential risk: I'm less sure how best to approach it. I have some experience running online trainings on climate change (the target audience was adult professionals). Even though they were based on solid data, stakeholders found them catastrophist... and yet, at the end, learners loved them, and from their verbatim feedback it seemed like they were really fired up and motivated to do something. I'm somewhat confused by this finding; my guess is that you shouldn't wait too long in the video before introducing some kind of hope plus an actionable plan. It doesn't have to be too specific, but it should be something that doesn't just leave the learner depressed and anxious at the end.

I love these recommendations; I think a strong storytelling approach like you mentioned will be very powerful! Bookmarking your ideas to keep in mind for future content :)

The EA Austin group brainstormed some more ideas. In the order they came up:

  • How many people could exist in the future?
  • Top 10 Ways Humanity Fails To Reach A Star Trek Future
  • Anthropogenic (Existential/Extinction) Risks are Much Greater than Natural Risks
  • How Do We Know Bed Nets Work?
  • Human Challenge Trials Explained
  • What It's Like To Donate Your Kidney
  • What It's Like To Donate Your Bone Marrow
  • AI Risk Explained in One Minute
  • Hack Your Happiness Scientifically (approaching meditation from a trial-and-error perspective)
  • 80,000 Hours in your career -- worth spending 1% of that time choosing a career to do good
  • Impact matters more than Overhead (when it comes to choosing charities)
  • Cultured Meat Could Prevent A Lot of Animal Suffering
  • Why worry about "Suffering-Risks" -- It'd be very bad if in the future humanity spread some of the bad stuff that happens on Earth today (e.g. extreme animal suffering) across the universe.
  • Explainer videos on e.g. Vitamin A supplementation, deworming, bed nets as malaria prevention, and other GiveWell-recommended charities
  • Steelman arguments you disagree with (Steelmanning explained)
  • 5 Things More Dangerous Than Donating A Kidney
  • We are in triage every second of every day
  • Opportunity cost - explained (one of the most important concepts in economics)

Woah, love a bunch of these! Especially "How Do We Know Bed Nets Work?"

Neglectedness.

That is, a prior toward deprioritizing the topics they keep hearing about on social media.

For example:

Replacing "is this doing good?" (a yes/no question which invites answers like "med tech? yeah, that's good!")

with "how much good is this doing?" or "could we do 100x as much good with the same effort?" (which invites comparing different directions).

Ideas which I assume you're not interested in, but I'll post them anyway; tell me if I'm wrong:

  1. Advertising programs like the Atlas Fellowship.
  2. Nudging them away from studying at university as a default don't-think-about-it action
  3. Other generic advice which isn't related to important problems in the world, for example
    1. Productivity
    2. Career
    3. Decision making
    4. Mental health

"There is more than one thing that could destroy the world, so before picking one to work on, let's compare them"

Because lots of people are already worried about the world ending, I think, and this is the nudge I'd add.

Read hpmor (full pitch).

How I thought of this (i.e. what I'm actually trying to solve): I expect the main value people will get is from joining a community, far more than from a course of a few hours. Perhaps you have some other idea on how to do this.

General information about people in low-HDI countries to humanize them in the eyes of the viewer.

Similar for animals (except not “humanizing” per se!). Spreading awareness that e.g. pigs act like dogs may be a strong catalyst for caring about animal welfare. Would need to consult an animal welfare activism expert.

My premise here: it is valuable for EAs to viscerally care about others (in addition to cleverly working toward a future that sounds neat).

I like the low-HDI country idea. I've been really taken with a tool I can't find anymore which shows you a random person and facts about that person [kids, religion, etc.], weighted by actual probabilities.
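A minimal sketch of the probability-weighted sampling such a tool might use. The attributes and weights below are made up for illustration (real data would also need correlations between attributes, which this ignores):

```python
import random

# Hypothetical attributes with made-up weights (NOT real demographic data):
# each attribute is sampled independently, weighted by an assumed frequency.
attributes = {
    "region":   (["Asia", "Africa", "Americas", "Europe", "Oceania"],
                 [59, 18, 13, 9, 1]),
    "religion": (["Christian", "Muslim", "Unaffiliated", "Hindu", "Other"],
                 [31, 25, 16, 15, 13]),
    "children": (["0", "1-2", "3+"],
                 [40, 40, 20]),
}

def random_person(rng=random):
    """Draw one 'random person' profile, each attribute weighted independently."""
    return {name: rng.choices(values, weights=weights)[0]
            for name, (values, weights) in attributes.items()}

print(random_person())
```

Even this toy version makes the point: the "average" person a viewer meets this way looks quite different from the average person in their own social media feed.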

Three related ideas:

The long view -- looking at history from the perspective of someone who has lived for millions or billions of years rather than decades.

This Can't Go On / Limits to Growth -- The economy can't keep growing at the rate of the last several decades for more than 10,000 years. Total compute can't keep growing at its recent rate for more than roughly 350 years, since by then it would hit the physical limit of the maximum-size computer in the observable universe.
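The arithmetic behind this kind of claim is simple enough to show on screen. Here's a back-of-the-envelope sketch; the growth rate and ceiling below are illustrative assumptions, not the figures from the original analysis:

```python
import math

# Back-of-the-envelope growth-limit check (all numbers are illustrative).
# If a quantity grows by a factor g per year, it reaches a ceiling L
# (relative to today's level) after log(L) / log(g) years.
current = 1.0         # today's level, in arbitrary units
growth = 1.35         # assumed ~35%/year growth rate
limit = 10 ** 60      # assumed physical ceiling, in the same units

years = math.log(limit / current) / math.log(growth)
print(f"Years until the ceiling at {growth - 1:.0%}/yr growth: ~{years:.0f}")
```

The punchline for a short video: exponential growth eats even astronomically large ceilings in a historically tiny number of years.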

This is the Dream Time -- Billions of years from now, if civilization is still around, people will look back on this era of only a few centuries that we are in now as being special and unique.

I'd be happy to help explain how building capacity to respond to abrupt food catastrophes (nuclear winter, volcanic winter, collapse of electricity/industry, etc.) by rapidly increasing food production could save lives and reduce the chance of civilizational collapse (see ALLFED - Alliance to Feed the Earth in Disasters).
