Or, you could change your name to Wise Julia. This will also allow you to signify your intellectual superiority.
Tail risk: if EA ends up voting for a top leader, and you get elected, this could sound pretty culty. If that risk seems significant to you, I would advise avoiding the obvious choice here - Julia the Wise - which is even worse.
I don't have a strong opinion about this in the context of fellowships, but I can speak to setting a high entry bar when recruiting community members and volunteers in general, and specifically by asking them to invest time in reading content. I hope this helps and isn't completely off-topic.
Though EA is a complex set of ideas and we want people to have a good understanding of what it's all about, demanding a lot from new people can be fairly off-putting and counterproductive.
From my experience, people who are of high potential to be both highly-engaged and of...
Thanks for posting this! It can be pretty helpful for figuring out from which angle to approach broader audiences and people who are more skeptical about our ability to make a change.
Hey David, I enjoyed reading this post, so thank you for investing your time in putting it together.
One thing that isn't clear to me is whether the goal of these communities is to bring together people who are interested in, say, animal welfare, and then try to expose them to more EA content?
Or are they aimed at bringing together people who are already interested in EA, but are more focused on or interested in one area than other areas?
Also, this made me think of an idea - building teams of EAs who are professionals from the same field (finance, law, marketing,...
Thank you for writing this, Edo. It's really interesting to read about these topics as someone who's not really knowledgeable about research and academia.
"it's not clear to me how much productivity loss is there when scientists are working on stuff they are less intrinsically interested in. The situation seems to be fine in commercial companies..."
I would assume there's a major difference between why most researchers in academia do what they do (interest and sheer curiosity, along with prestige) and why most professionals in the private sector do what they do...
Thank you so much for sharing this with us and investing time in writing this.
I found this really insightful and helpful, and I can empathize with a lot of what you've felt throughout this journey.
"I’m sad that I’m not better or smarter than I grew up hoping I might be."
I feel like this is a thinking pattern that many people from our generation have, which is problematic because it's a fact that not everybody can be the most X person in the world, be it most impactful, most beautiful, most talented, or most wealthy. I feel it's also not true on an individu...
I fully agree with you on that, and from my humble experience, it's rare for people in EA to be interested in doing good purely from a cold and calculated point of view. A lot of us probably had the will to do good much earlier in life and long before we got to EA, and for us Effective Altruism is just the way in which we follow our ever-existing passion to do good.
I also think we should make sure people who stumble upon us don't get the idea that we're not doing this because we're passionate about it. That can and does alienate a pretty substantial ...
Effective Altruism Israel and LessWrong Israel present a new talk - Introduction to existential risk from Artificial Intelligence with Vanessa Kosoy.
In this talk, which assumes no prior knowledge in artificial intelligence, Vanessa will explain the problem in question, and how researchers in the field are trying to solve it. Vanessa is a Research Associate with the Machine Intelligence Research Institute (MIRI) studying the mathematical formalization of general intelligence and value alignment.
The talk will be in English, non-technical, and very accessible ...
Exciting project!
I really love how it enables a lot of different things: producing content, allowing a "trial period" to examine the potential of prospects, acquiring highly-engaged and highly-informed community members, and building the local community.
I'm waiting to hear about the longer-term effects, but it already seems quite worthwhile.
Hi Prabhat!
First things first, I'm also relatively new to EA (approximately 8 months), and I think there's great value in taking into consideration the ideas of new community members who still have a kind of 'outsider view' on things.
By and large, I agree, and I've actually started working on strategies to target people who are involved in relevant cause areas or who might be more open to EA's concept of expanding the moral circle.
There are a few assumptions that can be the basis for building this strategy:
Although I'm all for variance in opinions within the community, in the case of outreach and marketing I'm kind of happy that we do (:
First of all, I want to make clear that entering the broader market of charities can simply mean a different website design. I don't know exactly how this should play out, and I believe we need to be very careful about how we spend budgets, but I do think there could be a way for organizations to be appealing to both EAs and non-EAs without investing too much in marketing. It doesn't necessarily mean competing with big, well-funded charities that spend enormous amounts of money on marketing; it could simply mean learning what they do well a...
Curious to know why you think Bill Gates meeting the Israeli prime minister would be extraordinarily beneficial (:
I agree with the main premise of this post, and I have been thinking about this a lot for the last few months. Having said that, I think these marketing strategies should be utilized mostly within EA-aligned charities, and not within EA itself.
A very strong case for producing more emotional content is that there is already some fixed amount of money being donated by people, and it's better that this money goes to effective charities than to ineffective ones. I think it's also very important to do this in "saturated markets" that get a lot o...
That's pretty good for personal outreach, and I would agree that these assumptions can be helpful when trying to reach people who will be positively disposed towards EA.
Having said that, it's pretty unclear to me how you would translate that into ad targeting, considering:
1. It's difficult to clearly target "rational and logical" people when you're trying not to approach a specific audience. I can obviously target engineers, mathematicians, and philosophy students, but that excludes everybody else who is logical and ...
I'm up for the challenge and already sharpening my knife.