I've written a piece for Asterisk about the learning crisis in developing country schools (and what we do and do not know about the value of education).

This piece was based on my research on education for Open Philanthropy.

Comments

Do you think there's an opportunity for LLMs to enable a lot of translation of primary school books into local languages / help develop lesson plans? Is there a charity idea here?

There's definite possibility here, even potentially in marking and monitoring lessons. How much to "automate" learning in general is an open question. To put it crudely, many LMIC primary school education systems are based on rote learning. So one big question (and debate) in education circles is: should we make that rote learning as effective as possible? That's what organisations like the Gates Foundation-funded Bridge academies have tried to do.

https://www.bridgeinternationalacademies.com/

Or should we try to transform learning environments and teaching styles, so that classrooms become the kind of interactive and exploratory spaces we have in higher-income countries?

I don't have a strong opinion on this, but I lean towards "improve the rote learning" in places like Uganda where I live, especially if the government isn't putting in a huge effort to transform the education system.

Even right now, LLMs could easily play a big role in improving rote learning, but I'm not sure they're yet at the stage to play much of a role in transforming classroom spaces, though that could come in the near future.

Yes, this also came out on top when we ran a discussion on potential use cases for AI in education in LMICs (https://ai-for-education.org/working-group-discussion-ai-use-cases/).

 

There are a few people trying this. My concern though, and something we just got a grant to think about, is how we make sure the content is good quality. So we will start the year thinking about benchmarks for AI in education.
 

Lots of charity ideas here, and something we're fortunate to have funding from BMGF and others to explore.

Great piece!

I've long thought society overestimates the value of schooling (particularly secondary school). 

One reason is negative spillovers (i.e., some of the benefit to individuals from education probably comes from winning zero-sum games around jobs). Do you know if education RCTs have tried to take this into account (e.g. via two-step randomisation)?

Another reason I've been thinking about recently is the fact that most people forget most of the knowledge they learned in school, very soon after finishing school. I don't think there's a plausible mechanism by which this forgotten knowledge generates benefits for the individual or wider society.

I think it's likely that the optimal age to finish school isn't 17/18, as is the norm in many countries. The amount of time we spend in school seems to have been selected fairly arbitrarily (e.g., why not extend secondary school by five more years and have everyone continue a broad education, if education is so beneficial?).

I also feel that the opportunity cost of more schooling isn't discussed nearly enough; people could instead do more on-the-job training relevant to their actual jobs.

Another reason I've been thinking about recently is the fact that most people forget most of the knowledge they learned in school, very soon after finishing school. I don't think there's a plausible mechanism by which this forgotten knowledge generates benefits for the individual or wider society.

It's quite likely, in my opinion, that the primary intellectual benefit of school is not knowledge (easily forgotten) but the learned cognitive endurance that makes it easier to do cognitively demanding jobs later in life. Those jobs are also better paid, and they have larger benefits to society in terms of helping a country grow. If this is the main benefit, then negative spillovers likely won't be large, because more educated people are taking jobs that less educated people couldn't do. Plus, the noncognitive benefits of school, in terms of better socialization, are real.

I'm not aware of direct evidence on negative spillovers, but it's important to point out that positive spillovers are also very plausible. More educated people can help their peers learn, not just in the classroom but also on the job. And if more educated people are better able to create successful businesses, then they create jobs for others in a positive-sum way.

Thanks for sharing, Lauren! I appreciate the humility about the difficult epistemic situation we are in with respect to education in developing country schools.

So, really, we just don’t know what you get from education in a developing country. We don’t know how much more you are likely to earn if you stick through high school. We don’t know if you’ll even make any more money. If you do make more money from going to more school, we don’t really know why. It could be that you just came from a family that was always going to use their connections for you. Or it could be because you’ve learned to read and write. Or it could be because you’ve learned to manage your own time and cooperate with others.

All are plausible, but we don’t know which one is true — or if all of them are a little bit true. We can’t know if we’re succeeding unless we understand what success means.

This is a great piece. A really good summary of the current state of education in LMICs.

This piece has so many choice bangers! Cheers for writing it! Will be checking heaps of quotes for future use.
