I agree that externalities should be taken into account in analyses of EA projects, and as aogara's comment shows, they may be non-negligible (though they didn't change the order of magnitude in that calculation). I think it's important to raise this point.
However:
I don't share your view about what a downvote means. However, regardless of what I think, it doesn't actually have any fixed meaning beyond that which people assign to it - so it'd be interesting to have some stats on how people on the forum interpret it.
But that's where the users' identities and relationship comes into play — I'd feel somewhat differently had Max said the same thing to a new poster.
Most(?) readers won't know who either of them is, not to mention their relationship.
I like those Polis polls you keep posting. Maybe you should now have one to vote on that :)
Upvoted even though I disagree with important parts, because I think this kind of post is a good idea.
I'm curious about your idea of the relationship between the community and funders/managers. On the one hand, you say (without much explanation) that funding decisions ought not to be, and never will be, made democratically. On the other hand, you think the community should inspect and check decisions by funders.
This leads me to ask: what do you envision should happen, if the community finds funding decisions to be bad, or points to a new appointment being ...
I think the problem isn't with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).
The problem is the actual reason:
I think you're pointing to some important issues... However, I worry that you're conflating a few pretty different dimensions, so I downvoted this post.
The message that, for me, stands out from this is "If you have an important idea but can't present it perfectly - it's better not to write at all." Which I think most of us would not endorse.
As you noted, it's not you who "has money" as a grantmaker. On the other hand, it is you who knows what parameters make projects valuable in the eyes of EA funders. Which is exactly the needed expertise.
I'm not implying how this should compare to any individual grantmaker's other priorities at a conference. But it seems wrong to me to strike it down as not being a valuable use of conference time.
Conference time is valuable precisely because it allows people to do things like "get feedback from an EA experienced in the thing they're trying to do". If "insiders" think their time is too valuable for "outsiders", that's a bad sign.
Getting feedback from someone because they have expertise feels structurally different to me than getting feedback from someone because they have money.
It might make sense to have a central List of Lists and a head List Librarian
...
(After jotting this post down, I'm really sick of the word 'list'.)
Maybe we can even aspire to one day rival Wikipedia's legendary list of lists of lists.
This post was more interesting than I expected. Thanks!
It's zero on the event "three sixes are rolled at some point" and infinity on the event that they're never rolled. The probability of that second event is zero, though. So the expected value is zero.
Nevertheless, expected value is the best tool we have for analyzing moral outcomes
Expected value is only one parameter of the (consequentialist) evaluation of an action. There are more, e.g. risk minimisation.
It would be a massive understatement to say that not all philosophical or ethical theories so far boil down to "maximise the expected value of your actions".
The expected value of this strategy is undefined
It looks to me like there's some confusion in the other comments regarding this. The expected value is, in fact, defined, and it is zero. The problem is that if you look at a sequence of n bets and take n to infinity, the expected value after n bets does go to positive infinity. So thinking in terms of adding one bet each time is actually misleading.
In general, a sequence of pointwise converging random variables does not converge in expected value to the expected value of the limit variable. That requires uniform convergence.
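To make that concrete, here is a minimal toy version of the phenomenon (with made-up numbers, not the exact bet from the post). Consider a double-or-nothing bet with win probability $p = 0.6$, starting from a stake of 1, where a single loss wipes you out, and let $X_n$ be your wealth after $n$ rounds:

$$X_n = \begin{cases} 2^n & \text{with probability } 0.6^n \\ 0 & \text{otherwise} \end{cases} \qquad \Rightarrow \qquad \mathbb{E}[X_n] = 1.2^n \to \infty.$$

But $X_n \to 0$ almost surely, since you eventually lose a round, so $\mathbb{E}[\lim_n X_n] = 0$ even though $\lim_n \mathbb{E}[X_n] = \infty$.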
Infinities sometimes break our intuitions. Luckily, our lives and the universe's "life" are both finite.
Forgive me for only skimming this and making a rather off-topic comment, but:
[A]dhering to good cultural norms/existing EA axioms is a good heuristic for generating impact
From an outside perspective, how sure are we of this actually? E.g. have organizations and people that generated large positive impact so far adhered to EA-style culture or axioms?
paint a picture that does not very accurately describe what most effective altruists are up to in a practical sense.
And also what they do in their daily lives, outside the time or resources they allot to "effectiveness".
My very short summary of the post:
I agree w...
Any idea if the next cohorts will allow applying later?
Nitpicking: there's a copying error in the summary, in the party affiliation section regarding independents:
We also found sizable differences between the percentage of Republicans (4.3% permissive, 1.5% stringent) estimated to have heard of EA, compared to Democrats (7.2% permissive, 2.9% stringent) and Independents (4.3% permissive, 1.5% stringent).
grow cautiously (maybe around 30%/year)
Are there estimates about current or previous growth rates?
I think "large groups that reason together on how to achieve some shared values" is something so common that we ignore it. Examples include democratic countries, cities, and communities.
Not that this means reasoning about being effective can attract as large a group. But one can hope.
They currently work for me.
Thanks for the post. I agree with most of it.
I think on the one hand, the impact of someone participating through donations only may still be huge, as we all know what direct impact GiveWell charities can have for relatively small amounts of money. Human lives saved are not to be taken lightly.
On the other hand, I think it's important to deemphasize donations as a basis for the movement. If we seek to cause greater impact through non-marginal change, relying on philanthropy can only be a first step.
Lastly, I don't think Elon Musk is someone we should associate ourselves with...
What is WWOTF?
"What We Owe the Future", Will MacAskill's new book.
I think there are two ways to frame an expansion of the group of people who are engaged with EA through more than donations.
The first, which sits well with your disagreements: we're doing extremely important things which we got into by careful reasoning about our values and impact. More people may cause value drift or dilute the more impactful efforts to make headway on the most important problems.
But I think a second one is much more plausible: we're almost surely wrong about some important things. We have biases that stem from who the typical EAs are, where ...
I don't know that I agree with this, but it did make me think.
Thank you. This brings together nicely some vague concerns I had, that I didn't really know how to formulate beyond "Why are people going around with Will MacAskill's face on their shirts?".
Some things I used to think about when I was active in fair trade advocacy, and I wonder if they're discussed:
I basically agree with your comment, but wanted to emphasize the part I disagree with:
EA says to follow the importance-tractability-crowdedness framework, and allocate funding to the most effective causes.
EA is about prioritising in order to (try to) do the most good. The ITN framework is just a heuristic for that, which may very well be wrong in many places; and funding is just one of the resources we can use.
Thanks for this post!
I think it's really important to look at the underlying assumptions of any long-term EA project, and the movement might not be doing this enough. We take as way too obvious that the social and political climate we're currently operating in will stay the same. But in reality, everything could change significantly due to things like climate change (in one direction) or economic growth (in the other).
That's not what I meant. What I tried to say is that the universe is full of beautiful things, like galaxies, plants, hills, dogs... More generally, complex systems with so many interesting things happening on so many scales. When I imagine a utopia, I picture a thriving human society in "harmony", or at least at peace, with nature. Converting all of it into simulated brains sounds like a dystopian nightmare to me.
Since I first thought about my intrinsic values, I knew there's some divergence between e.g. valuing beauty and valuing happiness singularly. Bu...
I would mostly like to protest your notion of utopia. A universe where every gram of matter is used for making brains sounds terrible. A "good" life involves interaction with other brains as well as a living environment.
I remember hearing in 2018 about orders for millions of potentially autonomous cars from some companies (Intel?), intended for autonomous use in 2021, and we're not even close to that now. Fusion in the near term on some scale seems plausible, but the fact that a company is claiming a very close timeline isn't very indicative of the actual timeline, I think.
Another response could be that abundant energy means more destructive power for humanity, and so even more risks.
Though in reality I do tend towards the "sounds good but there's nothing we in particular should do about it" side.
A couple thoughts so far, written at 3am so hopefully at least somewhat clear:
I know this is just a small detail and not what you wrote about, but: much of your comment on the recommender systems post hinged on news articles being uncorrelated with the truth. Do you have data to back that up?
I'm replying here because it's a strong claim that's relevant to many things beyond that specific post.
I'm not very confident in my argument, but the particular scenario you describe sounds plausible to me.
Trying to imagine it in a simpler, global health setting - you could ask which of many problems to try to solve (e.g. malaria, snake bites, cancer), some of which may cause several orders of magnitude more suffering than others every year. If the solutions require things that are relatively straightforward - funding, scaling up production of something, etc. - it could be obvious which one to pick. But if the solutions require more difficult things, like r...
This isn't a well thought-out argument, but something is bugging me in your claim. The ex post impact of your work may have some distribution, but I think the expected impact given career choices can be distributed very differently. Maybe, for example, the higher you aim, the more uncertainty you have, so your expectation doesn't grow as fast.
I find it hard to believe that in real life you face choices that are reflected much better by your graph than Eric's.
I share some of that intuition as well, but I have trouble conveying it numerically. Suppose that among realistic options that we might consider, we think ex post impact varies by 9 OOMs (as Thomas' graph implies). Wouldn't it be surprising if we have so little information that we only have <10^-9 confidence that our best choice is better than our second best choice?
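As a rough sanity check on that, here's a toy simulation (the 9-OOM spread, the Gaussian noise model, and all numbers are my own assumptions, purely illustrative):

```python
import numpy as np

# Toy model (all assumptions mine): true log10-impacts of candidate options
# span ~9 orders of magnitude, and we estimate them through noise of
# `sigma` OOMs.
rng = np.random.default_rng(0)
n_options, n_trials = 20, 100_000

for sigma in [0.5, 1.0, 3.0, 10.0]:
    true_log = rng.uniform(0, 9, size=(n_trials, n_options))
    estimates = true_log + rng.normal(0, sigma, size=(n_trials, n_options))
    order = np.argsort(estimates, axis=1)      # options sorted by estimate
    best, second = order[:, -1], order[:, -2]  # our top two picks
    rows = np.arange(n_trials)
    p = (true_log[rows, best] > true_log[rows, second]).mean()
    print(f"sigma={sigma:>4} OOMs: P(top pick truly beats runner-up) ~ {p:.3f}")
```

In this toy model, even when the estimation noise dwarfs the 9-OOM spread itself, the confidence that the top pick truly beats the runner-up only decays towards 1/2, nowhere near 10^-9.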
I don't personally think that's a good reason to not use one's name, but I'll concede my phrasing was indeed a bit too dramatic. It's probably because, in my experience on the forum, it's really frustrating not being able to connect other commenters to a human identity.
Beyond asking about projects in a vague, general sense, it could also be interesting to compare the probabilities of success grantmakers in EA assign to their grantees' projects, to the fraction of them that actually succeed.
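A minimal sketch of what that comparison could look like, with entirely hypothetical numbers (real data would have to come from grantmakers' records):

```python
import numpy as np

# Hypothetical data (made up for illustration): grantmakers' predicted
# success probabilities and the projects' actual outcomes (1 = succeeded).
predicted = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.9, 0.3, 0.8, 0.6, 0.4])
succeeded = np.array([1,   1,   0,   1,   0,   1,   0,   0,   1,   0])

# Bucket the predictions and compare each bucket's mean prediction to the
# realised success rate; for a calibrated grantmaker the two should match.
bins = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
which = np.digitize(predicted, bins) - 1
for b in range(len(bins) - 1):
    mask = which == b
    if mask.any():
        print(f"predicted {bins[b]:.2f}-{bins[b+1]:.2f}: "
              f"mean prediction {predicted[mask].mean():.2f}, "
              f"actual success rate {succeeded[mask].mean():.2f}, n={mask.sum()}")
```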
I'd add that having people use their real names adds to the forum looking like a platform for professional discussion, and adds transparency - both of which are important because of the impact and reach we wish to eventually achieve as a movement.
While pseudonyms have some use cases - the main one I can think of, is when one may fear retaliation for reporting bad behaviour of another EA or organisation - they should indeed be otherwise extremely discouraged.
Edit: ok, this paragraph was in hindsight somewhat exaggerated, and I can think of a few use cases that may be more common. But I still think anyone using a pseudonym should at least have a good reason in mind.
"Extremely discouraged" seems a bit dramatic. Some of us would rather not have our heavy EA involvement be the first thing that shows up when people Google us.
Thanks for writing this. I didn't attend this EAG, but I came out of the previous one completely exhausted. Every day of the conference I ended up in pain and barely able to move. Maybe it's because of not knowing my limits well enough, maybe it's because of accessibility issues[1].
But also maybe there's a culture that encourages us to stretch beyond our limits? I guess we can see whether many people experience this, and if so, start looking for a general problem.
Not enough chairs. I wrote about this in the feedback form, so maybe something has changed.
Hi, thanks for your comment.
While it's reasonable not to be able to provide an impact estimate for every specific small grant, I think there are some other things that could increase transparency and accountability, for example:
There are also a lot of externalities that affect humans at least as strongly, like carbon emissions, promotion of ethnic violence, or erosion of privacy. Those are all examples off the top of my head for Facebook specifically.
I upvoted Larks' comment, but like you I think this particular argument, "people buy from these firms", is weak.
I strongly agree we need transparency. In lieu of democracy in funding, orgs need to be accountable to the movement in some way.
Also, what's a BOTEC?
Not long enough for the formatting to matter in my opinion. We can, and should, encourage people to post some low-effort posts, as long as they express an original thought.
given that I would like to have multiple comments-summary comments to choose from
That sounds cool, though I think it's a bit too optimistic.
But I basically agree with the rest.
I would be interested. Consider making it a new post instead.
I mean, physics solves the divergence/unboundedness problem, since the universe eventually achieves heat death. So one can assume some distribution on the time bound, at the very least. Whether that makes having no time discount reasonable in practice, I highly doubt.
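As a sketch of that idea, under my reading of the comment: if $T$ is the uncertain time horizon with survival function $P(T \ge t)$, then the undiscounted expected total value is

$$\mathbb{E}\left[\sum_{t=0}^{T} u_t\right] = \sum_{t=0}^{\infty} u_t \, P(T \ge t),$$

which is finite for bounded per-period value $u_t$ whenever $\mathbb{E}[T] < \infty$. So uncertainty about the horizon acts like an effective discount factor even with no pure time preference.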
No individual, including Elon Musk or Jeff Bezos has more than a quarter of that amount of money
But governments do. Which, while being about a hypothetical, does demonstrate a good reason for EA to try to transition away from relying on individuals and towards governments.
GiveDirectly is great and is strongly supported by the EA community.
Theoretically - but GiveWell seems to prefer to keep money rather than give it directly. There may or may not be good reasons for that, but it's not a strong message for direct empowerment of marginalised communities.
Oh wow this is great! Subscribed.