TL;DR: Teaching love as a core school subject (or even a basic one) could be a highly effective path to transforming the world and addressing many of the core issues EA aims to tackle.

@Yassin Alaya, in her post Rational Education as a Cause Priority?, advocates teaching Philosophy, Psychology, Microeconomics, and Statistics as core school subjects.

@TimSpreeuwers discusses How to bring EA into the classroom?, and I believe he may be onto something with his assignment of having students find a worthwhile charity for him to donate to.

My hypothesis is that what the world most needs is... more love. Oxford Languages defines brotherly love as "feelings of humanity and compassion towards one's fellow humans."

While philosophy and rational thinking are core EA values, they tend to be drier subjects that not everyone is suited for. Love, on the other hand, is a value generally shared and desired across countries and peoples, and it could be taught from an early age. I get the feeling that EAs tend to frown on fuzzies, but I would argue that since fuzzies are what currently drive most people, appealing to them is likely to have a much greater impact.

The exact curriculum can be debated and researched (I, and I suspect others, would likely be willing to fund research and development). Teaching all humans the basics of humanity and compassion (with possible advanced classes on EA for those keen on the subject) could arguably have an enormous impact on almost all of the causes discussed here, and many more.

If people were taught to listen to each other, to care for one another, and to disagree with or dislike actions and opinions while nonetheless respecting, valuing, and being compassionate towards the PEOPLE behind them, then most issues we face in the world and as individuals would, I argue, become MUCH easier to address (or not arise in the first place). Research could be done here if necessary.

I am new here (I love the place; my brain's in heaven, though I'll make some suggestions in a separate post), so perhaps I am missing something. But it feels like the world is AWARE that more love is needed and that we should "do something" about it, yet doesn't know how to go about it.

It feels like (am I allowed to say stuff like that here? ;) ) if we could come up with a healthy, inclusive (but respectful of differing opinions) curriculum that encourages and rewards love without imposing it, governments the world over would be hard-pressed to argue that "more love" is unnecessary or undesirable. Since it would likely make policing easier and less resource-intensive (at least in theory; it may also spur more peaceful movements, which I have read are actually more effective), even authoritarian regimes might be encouraged to adopt it.

Arguably, it ought to set billions on a path to altruism, with the more rational among them likely to join EA. Considering the almost limitless impact this could have over time, I would argue that even a small chance of success is worth investing in. Furthermore, any research into or development of tools and curricula to help humanity learn to be more loving could have very high value, even if not adopted by all, or any, schools.


In conclusion, I would love to discuss this, and perhaps gather a group or work with Effective Philanthropy to fund or help with research on the topic.

Comments (5)



I suspect there would be many challenges with this, such as how to measure and assess the impact, and how to actually implement the project or intervention. The details matter a lot. But I also think that at its core this idea has merit.

I was just recently thinking about how reading two pieces of writing in my early 20s had a very beneficial effect on my long-term happiness. If I were building a curriculum on something like how to have a happy life or practical wisdom, I'd include these.

I welcome this idea! More love would be a good thing, and the earlier in the life course we can make this change, the better.

I think implementation is hard. This is a big "if":

> if we could come up with a healthy, inclusive (but respectful of differing opinions), curriculum that -encourages/rewards- love but does not impose it

As Joseph said, it is difficult to assess educational interventions. When it comes to knowledge transfer, we can generally be confident that education is helping: calculus classes increase students' aptitude for calculus. But love?

Recent research suggests that mindfulness interventions in schools have been much less impactful than hoped. I suspect that something like mindfulness works well if you opt in, and is much less useful if you didn't ask for it.

To explore this idea further, I recommend looking for comparable values-based educational experiments that have been tried in the past (maybe something about attitudes to sexuality, or religious tolerance, or positive thinking, or even campaigns to instil hate). Did they succeed in changing values? If they failed, why did they fail? If they succeeded, what can we learn from them?

Extremist indoctrination campaigns clearly have an impact, to the point of getting members to self-sacrifice. Not a path I would want to explore.

While I'd like to -encourage- love, and perhaps other related, near-universally accepted positive values (don't steal, don't murder or hurt others, that type of thing), I believe everyone should retain self-agency.

Basically, people -should- be free to -choose- to hurt others (or at least not to benefit them) if they so decide, possibly incurring society's wrath and punishment depending on the degree of harm, as is currently done (reform of prisons and legal punishments is another topic).

Let's steer clear of 1984. Most people find helping others fulfilling to some degree. We ought to encourage that and make it an easy and early realization. Those who aren't interested can drop out after the 101 course if they so choose.

I posit that the cost of researching, developing, and testing a curriculum to that effect would be minimal compared to the possible impact.

And yes, it's a big "if", but we don't have to get it perfectly right from the get-go.

I'll be back in the Philippines in a few weeks, where I will be launching a sort of 360 housing/living/community program aimed at the poorest of the poor and most excluded, providing safety, counseling, nutrition, and health care, as well as education, life skills, and professional skills. I'd love to have -something- like this to try out with those who join.

My experience is that the culture there is -already- very altruism-oriented, if often inefficiently so. I'd very much like for them NOT to lose that aspect of the culture if and when they join the capitalist bandwagon.

I think indoctrination (at least among adults) is actually surprisingly difficult. The psychologist Hugo Mercier was recently on the 80,000 Hours podcast to discuss why.

> And the other thing which has had much more dramatic consequences is the idea of brainwashing: the idea that if you take prisoners of war and you submit them to really harsh treatment — you give them no food, you stop them from sleeping, you’re beating them up — so you make them, as you are describing, extremely tired and very foggy, and then you get them to read Mao for hours and hours on end. Are they going to become communists? Well, we know the answer, because unfortunately, the Koreans and the Chinese have tried during the Korean War, and it just doesn’t work at all. They’ve managed to kill a lot of POWs, and they managed to get I think two of them to go back to China and to claim that they had converted to communism. But in fact, after the fact, it was revealed that these people had just converted because they wanted to stop being beaten and starved to death, and that as soon as they could revert back to go back to the US, they did so.

I'd also echo others' comments that I think testing a curriculum will be relatively hard. Even education programs with clear measurables (e.g. financial literacy programs, work-skills programs for former convicts, second language programs) often end up unsuccessful. It would be even more difficult to teach "love." How do you measure how loving someone is and reliably teach it to others?

That specific method of indoctrination doesn't seem effective. However, we do see cases where indoctrination occurs successfully under certain conditions—such as French prisons reportedly being hubs for Islamist radicalization among inmates, or children in parts of Africa being forcibly recruited into militant groups and later made complicit in horrific acts, sometimes even against their own families. Similarly, vulnerable individuals are sometimes "guided" into becoming human weapons, as seen in cases of suicide bombings.

As for fostering better values, both evaluating and reliably teaching them are indeed significant challenges. But I believe they are worth overcoming.
Shouldn't our "end" goal be to cultivate a world where the majority of people grow up as loving, caring, and fulfilled individuals?

Starting this process as early as possible, during the formative years, seems more promising than attempting to "convert" those whose worldviews and habits have already solidified through life experience. Early interventions might lay a stronger foundation for lasting change.

What do you think?
