Hey, thanks for the reply, it looks like there is a lot of interesting / useful information there. Also, it looks like you replied twice to me based on notifications, but I can only find this comment, so sorry if I missed something else.
With all due respect, I think there is a bit of a misunderstanding on your part (and others voting you up and me down).
If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative...
This point is reasonable, and I fully acknowledge that the EA Hotel cannot have much measurable data yet in its ~1 year of existence. However, I don't find it a particularly satisfying counter-response.
If the nature of the EA Hotel's work is fundamentally immeasurable, how can one objectively quantify that it is in fact effectively altruistic? If it is not fundamentally immeasurable, but could have been measured and was not, then that is likely simple incompetence. Is it not? Either way, it would be impossible to evidentia...
The fact that others are not interested in such due diligence is itself a separate concern: that support for the EA Hotel is perhaps not as rigorous as it should be. This is not a concern about you or the EA Hotel, but rather about your supporters. Such diligence seems to me a basic requirement, particularly in the EA movement.
I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel, it is to help them help other people (and animals). AI Safety research, and other X-risk research in general, is ultima...
The 3rd update was reviewed; that was what led me to search for Part 2, which is expected in update 10. Frankly, I am personally not interested in the 3rd update's calculations, because, simply based on how I allocate my time, I would prefer a concise estimate to tediously working through the calculations myself.
Please understand that this is not an insult; in fact, I think it is a reasonable point--for example, with other [in some ways competing] efforts (e.g. startups), it would not be acceptable to most venture capitalists to present...
That is understandable; however, in my opinion it is unsatisfactory--I cannot commit to funding on such indefinite and vague terms. You (and others) clearly think otherwise, but hopefully you can understand this contrary perspective.
Even if "the EA Hotel is a meta level project," which to be clear I can certainly understand, there should still be some understanding or estimate of what the anticipated multiplier should be, i.e. a range with a target within a +/- margin. From what I can see upon reviewing current and past guests' proj...
Thanks for asking, that's a good question.
It basically comes down to yield, or return on investment (ROI). It seems quite common for utilitarianism and effective altruism to be related, and for the former to involve some quantification of value and a ratio of output per unit of input; one might say that the most ethical stance is to find the min/max optimization that produces the highest return. Whether EA demands or requires such an optimal maximum, or whether suboptimal effectiveness is still ethical, is an interesting but separate discussion.
So in a lot...
I think there's a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see for instance investing in a startup vs. buying stocks in an established business) - The very fact that opportunities are easy to measure and obvious makes them less likely to be neglected.
The proper way to evaluate new and emerging projects is to understand the landscape, and do a systems level analysis of the product, process, and team to see if you think the ROI will be high compared to othe...
Noted; however upon lightly reviewing said information, it seems to be lacking. Hence the request for further information.
It does not seem like this is expected until update 10, as I noted previously. That it is not a higher priority for an EA organization is, in my opinion, a shortcoming.
This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data. That's why Givewell spun out Givewell Labs which eventually became the Open Philanthropy Project. It's why CEA started the EA Fun
...I would appreciate it if you could review the information a bit more thoroughly. Perhaps you could generate your own estimate using the framework developed in Fundraiser 3 and the outputs listed here. Fundraiser 10 was listed last because I want to try and do a thorough job of it (but also have other competing urgent priorities with respect to the hotel). There are also many considerations as to why any such estimates will be somewhat fuzzy and perhaps not ideal to rely on too heavily for decision making (hoping to go into detail on this in the post).
That is understandable; however, when presenting information (i.e. linking to it on your homepage), there is an implicit endorsement of said information; otherwise it should not be presented. This is irrespective of whether the source is formally affiliated or not--its mere presence is already an informal affiliation. The simple fact that the EA Hotel does not have a better presentation of information is itself meta-information on the state of the organization and project.
However, this was not really the main point; it was only a "little" concern as I...
The inconsistency is itself a little concerning.
I am one of the contributors to the Donations List Website (DLW), the site you link to. DLW is not affiliated with the EA Hotel in any way (although Vipul, the maintainer of DLW, made a donation to the EA Hotel). Some reasons for the discrepancy in this case:
The fact that there are only 18 total donations, totaling less than $10k, is concerning.
If you are well-funded, they'll say: "You don't need my money. You're already well-funded." If you aren't well-funded, they'll say: "You aren't well-funded. That seems concerning."
(I think the tone of this comment is the reason it is being downvoted. Since we all presumably believe that EA should be evidential, rational and objective, stating it again reads as a strong attack, as if you were trying to point out that no assessment of impact had been done, even though the original post links to some.)
What are the actual calculations you used?
For the wild animal welfare lower bound: 0.99 * 0.99 * 0.75 * 0.99 * 0.95 * 0.9 * 0.8 * 0.9 * 0.8 * 0.9 * 0.9 * 0.8 * 0.95 * 0.95 = 21% ?
How do you determine whether something is 0.90, 0.95, 0.99, or some other number?
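As a quick sanity check on the arithmetic, the product can be reproduced directly (assuming the third factor is meant to be 0.75, since that is the reading that yields the stated ~21%):

```python
# Reproduce the wild-animal-welfare lower-bound product quoted above.
# Assumption: the third factor is 0.75 (the reading that gives ~21%).
factors = [0.99, 0.99, 0.75, 0.99, 0.95, 0.9, 0.8,
           0.9, 0.8, 0.9, 0.9, 0.8, 0.95, 0.95]

product = 1.0
for f in factors:
    product *= f

print(f"{product:.1%}")  # prints 21.0%
```

So the stated 21% checks out under that reading; the open question is still where each individual factor comes from.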
In your summary, you state that animal causes have a combined 7/12 chance of being the top priority, whereas human causes have a combined 5/12 chance. However, the error margins are huge, with the original wild animals priority having "wide margins" of 25-90%.
It does not seem to me that the...
In my opinion, it is simply too reductive to look only at a flat, first-order count of neurons, or at some other gauge of experience (i.e. suffering and pleasure), while ignoring potential higher-order effects. Perhaps I disagree with many on how utility should be defined.
(There are a couple reposts of this to Reddit's EA subreddit.)
Maybe I simply missed it, but where do the personal probability estimates come from? If they are simply pulled out of the air, then any mathematical conclusions in the summary are likely invalid; a different result could be obtained just by playing with the numbers, even if the same arguments are maintained.
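To make that concrete, here is a hypothetical sensitivity sketch using the fourteen factors from the wild-animal-welfare lower bound quoted earlier: nudging every factor by just ±0.05 (clamped to [0, 1]) swings the final figure dramatically, without changing any of the underlying arguments:

```python
# Hypothetical sensitivity sketch: shift every subjective factor by +/-0.05
# (clamped to [0, 1]) and see how much the final probability moves.
factors = [0.99, 0.99, 0.75, 0.99, 0.95, 0.9, 0.8,
           0.9, 0.8, 0.9, 0.9, 0.8, 0.95, 0.95]

def product(xs):
    result = 1.0
    for x in xs:
        result *= x
    return result

baseline = product(factors)
low = product([max(f - 0.05, 0.0) for f in factors])
high = product([min(f + 0.05, 1.0) for f in factors])

print(f"baseline {baseline:.0%}, shifted down {low:.0%}, shifted up {high:.0%}")
```

Under these assumptions the ~21% baseline ranges from roughly 9% to roughly 40%, so the ranking of causes in the summary could plausibly flip purely from the choice of numbers.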
Point taken, but look at the OP's title--it is a definitive claim, one not supported at all by the accompanying text. Describing the uncertainty in fact does get us somewhere: it allows one to throw out the claim. "Animal charities could easily be better than the OP suggests" indeed, but they could also be far worse.
Unless someone submits new data one way or the other, though, the point is moot--which is to say, "back to the drawing board." That is better than being led down a false path; again, an improvement over what was originally presented.
This is absurd. Not because human lives are necessarily inherently more valuable than other animal lives, but rather because the calculation is ridiculously unrefined and cannot be used to support the conclusion.
The idea of basing the calculation on a simple neuronal count is flat-out wrong, because humans aren't even at the top in an even, 1:1 weighting in that regard; see: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons . If it were that easy, the point could much more easily be made by just looking at elephant charities rather th...
I think that when someone puts a number on an unknown value, the only good response is to say whether it's too high or too low. Merely describing the uncertainty doesn't get us anywhere closer to knowing where to donate. Animal charities could easily be better than the OP suggests.
Agree with the gist of the previous comments. This is just basic semantic confusion: people or agents who do not exist exist only as theoretical exercises in mindspace such as this one; by definition they do not exist in any other space, and so cannot have real rights to which ethics should be applied.
So focusing on [not just some but] all nonexistent agents is not controversial; it is just wrong on a basic level. Ironically, I do not think this is being closed-minded; it follows simply by definition.
What would be more productive to discuss are possible agents who could be realized, along with their potential rights--a case fundamentally different from, although not mutually exclusive with, that of nonexistent agents.
Oh, found your 2nd reply to me.
This is an astute point, I fully acknowledge and recognize the validity of what you are saying.
However, it is not that simple; it depends on the expected yield curve of the specific effort and its specific context. In some cases that are already "well-funded" there is high value generation that is still below full potential and should be funded further, e.g. due to economies of scale; in other cases there are diminishing returns, and they should not be funded further.
Similarly, the same is true for &... (read more)