Introduction

Based on the advice of some long-time members of the EA Forum, I, as President of Intentional Insights (InIn), wanted to share InIn's background and goals and where we see ourselves fitting within the EA movement. I also wanted to give all of you a chance to share your opinions about the benefits and drawbacks of what InIn is doing, raise any reservations, concerns, or risks, and offer suggestions for optimization.

 

Background

InIn began in January 2014, when my wife and I decided to create an organization dedicated to marketing rational, evidence-based thinking in all areas of life, especially charitable giving, to a broad audience. We decided to do so because we had looked for organizations that would provide marketing resources for our own local activism in Columbus, OH, which aimed to convey these ideas to a broad public, and found none. So we decided – if not us, then who? If not now, then when? My wife would use her experience in nonprofits to run the organization, while I would use my experience as a professor to work on content and research.

 

We gathered together a group of local aspiring rationalists and Effective Altruists interested in the project, and launched the organization publicly in September 2014. We obtained our 501(c)(3) nonprofit status, began running various content marketing experiments, and established our internal infrastructure. We also built up a solid audience in the secular and skeptical market, whom we saw as the easiest audience to reach with messages promoting effective giving and rational thinking. By the early fall of 2015, we had established some connections and reputation, built a solid social media following, and our articles had begun to be accepted in prominent venues that reach a broad audience, such as The Huffington Post and Lifehack. At that point, we felt comfortable enough to begin our active engagement with the EA movement, as we felt we could provide added value.

 

Fit in EA Movement

As an Effective Altruist, I have long seen opportunities for optimizing the marketing of EA ideas using research-based, modern content marketing strategies. I did not feel comfortable speaking out about that until the early fall of 2015, by which point I had built up InIn enough to speak from a position of some expertise, and to demonstrate right away the benefit we could bring by publishing widely-shared articles that promoted EA messages.

 

Looking back, I wish I had started engaging with the EA Forum sooner. That was a big mistake on my part, and it caused some EAs to treat InIn as an outsider that suddenly burst onto the scene. Also, our early posts were perceived as too self-promotional. I guess this is not surprising, looking back – although the goal was simply to demonstrate our value, the content marketing nature of our work does show through. Ah well, lessons learned and something to update on for the future.

 

As InIn has become more engaged in various projects within the EA movement, we have begun to settle on how to add value to the EA community and have formulated our plans for future work.

 

1) We are promoting EA-themed effective giving ideas to a broad audience through publishing shareable articles in prominent venues.

 

1A) Note: we focus on spreading ideas like effective giving without associating them overtly with the Effective Altruism movement, though we leave buried hooks to EA in the articles. This approach has the benefit of minimizing the risk of diluting the movement with less value-aligned members, while leaving opportunities for those who are more value-aligned to find the EA movement. Likewise, we don’t emphasize EA because we believe that overt use of labels can lead some people to perceive our messages as ideological, which would undermine our ability to build rapport with them.

 

2) We are specifically promoting effective giving to the secular and skeptic community, as we see this audience as more likely to be value-aligned, and we also have strong existing connections with it.

 

3) We are providing content and social media marketing consulting to the EA movement, both EA meta-charities and prominent direct-action charities.

 

4) We are collaborating with EA meta-charities to boost the marketing capacities of the EA movement as a whole.

 

5) We are helping build EA capacity around effective decision-making and goal achievement through providing foundational rationality knowledge.

 

6) By using content marketing to promote rationality to a broad audience, we are aiming to help people be more clear-thinking, long-term oriented, empathetic, and utilitarian. This not only increases their own flourishing, but also expands their circles of caring beyond biases based on geographical location (drowning child problem), species (non-human animals), and temporal distance (existential risk).

 

Conclusion

InIn is engaged in both EA capacity-building and movement-building, but movement-building of a new type: oriented not toward directing people into the EA movement, but toward getting EA habits of thinking out into the broader world. I specifically chose not to include our achievements in doing so in this post, as I had previously fallen into the trap of including too much and being perceived as self-promotional as a result. However, if you wish, you can learn more about the organization and its activities at this link.


What are your impressions of InIn's fit within the EA movement and of our plans, including their advantages and disadvantages? Do you have suggestions for improvement? We are always eager to learn and improve based on feedback from the community.

Comments



A list of ethical and practical concerns the EA movement has with Intentional Insights: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/.

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

This is simply false. See what I actually said here.

I like the fact that there's an active, strategic effort to engage in outreach, and I'm impressed with the media reach achieved to date. I'm also impressed with how accessible and palatable it is, bypassing issues of tribal affiliation to get to the core principles and how to implement them.

I plan to get involved myself (both to contribute and to learn).

UPDATE, Oct 2017: I ended up not being involved for long. While I still appreciate InIn's intentions and a number of aspects of their work, I didn't feel completely comfortable and didn't have the personal resources (especially time) to dedicate.

One of the reasons I like Intentional Insights is that it has the potential to spark interest in EA among people who probably wouldn't become interested in EA otherwise - the counterfactual argument here is stronger than for GWWC or other meta-charities because Gleb is reaching out to a group that's less close to EA.

I also think that we should really encourage and incentivize projects like this - we need more people doing EA outreach. There is absolutely no guarantee that this project will succeed, but Gleb has shown evidence of success and the expected value seems fairly large.

Scott, indeed, whether we get interest in EA or simply effective giving with more money channeled toward effective charities, the impact could be large, especially from a counterfactual perspective. I'm not necessarily trying to get people to grow into EAs, but simply to change their habits of giving somewhat, prioritizing effective charities and expanding their circle of compassion. Getting people to give to effective charities without self-identifying as EAs would be quite a fine outcome :-)

I can see a lot of value in having EA concepts promoted separately from the discussion of Effective Altruism. Not everyone is going to become an EA; in fact, a surprising number of people seem to be turned off by the movement, so EA material is unlikely to reach them effectively. Having non-EA materials promoting the same ideas means that 1) they may still develop the attributes that EA wants to instil, and 2) some people may become more inclined towards EA after they have accepted some of its values.

Yup, there are a lot of people who are turned off by the movement itself due to its research-based, data-driven, and philosophically sophisticated nature. And I don't think it's a bad thing that they are turned off - we don't want people who are unable to engage well with the core concepts of EA to shape the direction of the movement.

However, we can still get them to develop beneficial habits of thought and behavior. In this Huffington Post piece, I encourage specific behaviors that would get people to give effectively, with a clear and pragmatic behavior described at the very end. Imagine the impact if everyone took up that behavior pattern. How much money would go to effective charities?

Regarding some people becoming more inclined towards EA, I think it would be valuable to make sure those people are a good fit for the movement itself. Someone who gets into effective giving may eventually become value-aligned, but that's not necessarily the case, and it's fine if they don't.

I agree that quality is more important than quantity. We need to find people who are dedicated and actually do things.

My impression was that the purpose of InIn was to promote rationality, and that EA was a natural aspect of that. This sounds like EA and the values of EA are much more central than that. Could you clarify this, Gleb?

Sure, happy to clarify, and thanks for asking!

InIn's goal is to advance human flourishing by improving the way we think and make decisions. To do so, we promote rationality with a particularly heavy emphasis on EA, which is essentially rationality as applied to altruism. The reason for the emphasis on EA is goal factoring: we see a bigger opportunity to improve the world by getting people to be more rational about their altruism. If people are more rational in their altruism, that not only improves their own lives but also the lives of others. This is why it makes sense to emphasize EA-themed content in the work of Intentional Insights.

An additional reason is that many InIn participants, such as myself, identify strongly as Effective Altruists, which makes us more motivated to advance EA content :-)

That being said, we have plenty of content that is not directly EA-related, but advances long-term thinking and rational decision making in other areas of life. Doing so advances human flourishing, and also has positive downstream impacts on issues of importance to EAs, such as existential risk, etc. Likewise, our audience would not be eager to engage with InIn if it was only about effective giving, so we make sure to have a variety of content.

Hope that clarifies things!

I'm not yet convinced that making people more rational is likely to make them more empathetic towards animals. What reasons are there to believe there is such a connection?

Good question!

One clear reason is that people who are more rational are more likely to be convinced by well-reasoned arguments and update on their beliefs. For example, more rational people are likelier to be convinced by this article.
