We at Intentional Insights, the nonprofit devoted to promoting effective altruism and rational thinking to a broad audience, are finalizing our Theory of Change (a ToC lays out our goals, assumptions, methods, and metrics).

Here's the Executive Summary:

  • The goal of Intentional Insights is to create a world in which everyone relies on research-based strategies to make wise decisions that lead to mutual flourishing.

  • To achieve this goal, we believe people need to be motivated to learn about such research-based strategies, need broadly accessible information about them, and need to integrate these strategies into their daily lives through regular practice.

  • We assume that:

    • some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.

    • problematic decision making undermines mutual flourishing in a number of life areas.

    • these flawed thinking, feeling, and behavior patterns can be improved through effective interventions.

    • we can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.

  • Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.

  • Our outputs, what we actually produce, take the form of online content such as blog entries and videos, published on our own channels and in external publications, as well as collaborations with other organizations.

  • Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

 

Here is the full version. I'd appreciate any feedback on the full version from fellow EAs, on things like content, structure, style, grammar, etc. Thanks in advance!

Comments

Gleb, I'm going to pick on you a bit, but I'm just using you as an example of a broader trend:

for christ's sake there are too many redundant meta-organizations

ok I finally got that off my chest.

Off the top of my head here are some EA meta-organizations:

-Centre for Effective Altruism: including EA Global, EA Outreach, EA Ventures

-Giving What We Can

-The Life You Can Save

-The A-Factor (if you don't know, don't ask, we're not going down that road again)

-some site that was probably a scam but I don't even know anymore because I'm really not surprised when I see another Wordpress EA "organization" pop up

Here are some rationality/life-hacking/soylent/standing-desk/pomodoro organizations:

-LessWrong (not exactly an organization, but still they were the OGs in the game...)

-CFAR

-SelfSpark

-whatever is on the top of Malcolm Ocean's linkedin profile right now

I am not saying all of these organizations are completely redundant or useless or bad. (Only some are...) But we pretty much have our bases covered now as far as EA and rationality go.

Especially with "rationality", agh. All these organizations rely almost exclusively on Kahneman's theories, which are certainly useful, but it's so naive (and frankly cultish) to act like you're going to save the world with them. Human behavior and society are complex, and, believe it or not, there are other theories of rationality (such as the revealed preferences theory). If you want to make people more "rational" for EA purposes, you should have a very specific goal in mind. Who do you want to make more rational? In what contexts? How will you change incentives to make that happen? And most importantly, rational with what values?

I think there's currently a perceived glamour to the Silicon Valley startup culture, and it's pushing a lot of people to do startups with thin ideas. I'm not pretending to be immune to this: I still would love to be an entrepreneur. But there's this sense that if we just soylent and standing desk and pomodoro enough, a good startup will just HAPPEN. But successful startups (ignoring the current bubble) typically come from someone spending a fair amount of time in a field, gaining technical expertise, and finally finding a specific problem that hasn't been solved or isn't being addressed effectively.

Sorry to be a jerk. It's not a terrible idea, just a thin one in an already saturated market.

[This comment is no longer endorsed by its author]

I still agree with most of this comment as a general trend I've noticed in EA... but I don't think this was the right context for it. It feels too much like punching down, since Gleb is a relatively new EA and clearly means well, he was just in the wrong place at the wrong time.

Gleb, please continue with EA and don't get discouraged. Lord knows I was an idiot as a new EA.

Lila, would you mind if I asked a mod to get rid of your post above? Feel free to make a similar point on a GWWC blog post. But as you seem to agree, this seems like it could be pretty off-putting for new people to the forum, given that it's directed at someone quite new to the community and who clearly means very well.

As an aside - it seems like a shame that any kind of metaphorical punching would be happening. We should be holding each other to account and continuously challenging each other to be more effective, but surely we shouldn't be ridiculing or fighting one another?

Thanks for your skepticism, and your encouragement!

The Theory of Change does lay out quite clearly who we want to help become more rational - the mass audience. LW, CFAR, etc. don't aim at the mass audience. Here's an example of how we're aiming at the mass audience in political contexts.

Here's some information about our EA work and its impact. Hope this is helpful, and I'm curious about your feedback :-) Always trying to improve.

Yeh, your comment was correct and needed, but where it's truly needed is at punching up (which here obviously means calling out MIRI, CFAR and CEA). That's what I try to do. Otherwise newer and smaller "orgs" like Gleb's get criticized for being redundant while CEA gets a free pass for being one of the first movers and then claiming the EA movement that sprang up as its fiefdom and its pass to limitless funding. Leave Gleb alone and fight the real battles.

Oh and good on you for being less of an insensitive (but truth-telling!) ahole than you often are. ;-)

I appreciate your perspective, but I think there's a lot of space for charity entrepreneurship. See my response to Lila above, and let me know your thoughts :-)

I'd enjoy reading your reasons for this in a top-level forum post. I expect others would too, and there are certainly plenty of people who think like you do who could join the comment thread discussion such a post could trigger.

I thought that a lot of this stuff was already covered in this post and the links there: http://effective-altruism.com/ea/q6/new_project_announcement_charity_entrepreneurship/ It seemed to have been positively accepted without much commentary, so I'm not sure others would have a lot to say.
