All of Open_Thinker's Comments + Replies

Oh, found your 2nd reply to me.

This is an astute point; I fully acknowledge the validity of what you are saying.

However, it is not that simple; it depends on the expected yield curve of the specific effort and its specific context. In some cases that are already "well-funded", value generation is high but still below full potential, and funding should be increased further, e.g. due to economies of scale; in other cases there are diminishing returns, and funding should not be increased.

Similarly, the same is true for &... (read more)

Hey, thanks for the reply; it looks like there is a lot of interesting / useful information there. Also, based on notifications it looks like you replied to me twice, but I can only find this comment, so sorry if I missed something else.

With all due respect, I think there is a bit of a misunderstanding on your part (and on the part of others voting you up and me down).

If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative
... (read more)
2 · Greg_Colbourn · 4y
Do you think this is true even in terms of impact/$ (given they are spending ~1,000-10,000x what we are)? We now have ~3 months' worth of runway. It's a good start to this fundraiser, but is hardly conducive to sustainability (as mentioned below, we would like to get to 6 months of runway to be able to start a formal hiring process for our Community & Projects Manager; the industry standard for non-profits is 18 months of runway).

This point is reasonable, and I fully acknowledge that the EA Hotel cannot have much measurable data yet in its ~1 year of existence. However, I don't think it is a particularly satisfying counterargument.

If the nature of the EA Hotel's work is fundamentally immeasurable, how can one objectively quantify that it is in fact being effectively altruistic? If it is not fundamentally immeasurable but was not measured when it could have been, then that is likely simply incompetence. Is it not? Either way, it would be impossible to evidentia... (read more)

2 · Greg_Colbourn · 4y
We are far too small to be on Bill Gates' radar. It's not worth his time looking at grants of less than millions of $. (Who knows though, maybe we'll get there eventually?)

The fact that others are not interested in such due diligence is itself a separate concern, namely that support for the EA Hotel is perhaps not as rigorous as it should be; however, this is not a concern against you or the EA Hotel, but rather against your supporters. This seems to me like a basic requirement, particularly in the EA movement.

I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel, it is to help them help other people (and animals). AI Safety research, and other X-risk research in general, is ultima
... (read more)

The 3rd update was reviewed; that was what led me to search for Part 2, which is expected in update 10. Frankly, I am personally not interested in the 3rd update's calculations, because given my personal time constraints I would prefer a concise estimate to tediously going through the calculations myself.

Please understand that this is not an insult; in fact, I think it is a reasonable point--for example, with other [in some ways competing] efforts (e.g. startups), it would not be acceptable to most venture capitalists to present... (read more)

2 · Greg_Colbourn · 4y
See reply below.

That is understandable; however, it is unsatisfactory in my personal opinion--I cannot commit to funding on such indefinite and vague terms. You (and others) clearly think otherwise, but hopefully you can understand this contrary perspective.

Even if "the EA Hotel is a meta level project," which to be clear I can certainly understand, there should still be some understanding or estimate of what the anticipated multiplier should be, i.e. a range with a target within a +/- margin. From what I can see upon reviewing current and past guests' proj... (read more)

3 · Greg_Colbourn · 4y
[Replying to above thread] One reason I asked you to plug some numbers in is that these estimates will depend a lot on what your priors are for various parameters. We will hopefully provide some of our own numerical estimates soon, but I don't think that too much weight should be put on them (Halffull makes a good point about measurability above). Also consider that our priors may be biased relative to yours. I'll also say that a reason for Part 2 of the EV estimate being put on the back burner for so long was that Part 1 didn't get a very good reception (i.e. people didn't see much value in it). You are the first person to ask about Part 2!

[Replying to this thread] I think this is missing the point. The point of the EA Hotel is not to help the residents of the hotel, it is to help them help other people (and animals). AI Safety research, and other X-risk research in general, is ultimately about preventing the extinction of humanity (and other life). This is clearly a valuable thing to be aiming for. However, as I said before, it's hard to directly compare this kind of thing (and meta level work) with shovel-ready object level interventions like distributing mosquito nets in the developing world.
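To make the dependence on priors concrete, here is a minimal sketch (every number and the function name are placeholders for illustration, not actual EA Hotel figures):

```python
# Toy sketch: how an expected-value ("multiplier") estimate moves with priors.
# All figures below are invented placeholders.

def ev_per_dollar(p_success: float, value_if_success: float, cost: float) -> float:
    """Expected object-level value generated per dollar spent on hosting."""
    return p_success * value_if_success / cost

# Three different priors on a hosted project paying off:
for p in (0.01, 0.05, 0.20):
    print(f"prior {p:.0%}: {ev_per_dollar(p, value_if_success=100_000, cost=6_000):.2f}x")
# -> 0.17x, 0.83x and 3.33x: the same project looks net-negative or strongly
#    net-positive depending on the prior alone.
```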

Thanks for asking, that's a good question.

It basically comes down to yield, or return on investment (ROI). Effective altruism is commonly related to utilitarianism, which quantifies value as a ratio of output per unit of input; one might say that the most ethical stance is to find the min/max optimization that produces the highest return. Whether EA demands or requires such an optimal maximum, or whether suboptimal effectiveness is still ethical, is an interesting but separate discussion.
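As a toy illustration of that ratio, a minimal sketch (the causes and all figures below are hypothetical):

```python
# Sketch: rank hypothetical causes by estimated value produced per dollar.
causes = {
    "cause A": {"value": 5_000, "cost": 1_000},   # 5.0 units per $
    "cause B": {"value": 12_000, "cost": 4_000},  # 3.0 units per $
}

ranked = sorted(causes.items(),
                key=lambda kv: kv[1]["value"] / kv[1]["cost"],
                reverse=True)
for name, c in ranked:
    print(f"{name}: {c['value'] / c['cost']:.1f} units of value per $")
```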

So in a lot... (read more)

I think there's a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see for instance investing in a startup vs. buying stocks in an established business). The very fact that opportunities are easy to measure and obvious makes them less likely to be neglected.

The proper way to evaluate new and emerging projects is to understand the landscape, and do a systems level analysis of the product, process, and team to see if you think the ROI will be high compared to othe... (read more)

5 · Greg_Colbourn · 4y
The EA Hotel is a meta level project, as opposed to the other more object level efforts you refer to, so it's hard to do a direct comparison. Perhaps it's best to think of the Hotel as a multiplier for efforts in the EA space in general. We are enabling people to study and research topics relevant to EA, and also to start new projects and collaborations. Ultimately we hope that this will lead to significant pay-offs in terms of object level value down the line (although in many cases this could take a few years, considering that most of the people we host are in the early stages of their careers).

Noted; however, upon lightly reviewing said information, it seems to be lacking, hence the request for further information.

It does not seem like this is expected until update 10, as I noted previously. The fact that it is not a higher priority for an EA organization is, in my opinion, a shortcoming.

This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable but we think could have higher expected value, or at the very least, might help the EA movement gather valuable experimental data. That's why Givewell spun out Givewell Labs which eventually became the Open Philanthropy Project. It's why CEA started the EA Fun

... (read more)

I would appreciate it if you could review the information a bit more thoroughly. Perhaps you could generate your own estimate using the framework developed in Fundraiser 3 and the outputs listed here. Fundraiser 10 was listed last because I want to try and do a thorough job of it (but also have other competing urgent priorities with respect to the hotel). There are also many considerations as to why any such estimates will be somewhat fuzzy and perhaps not ideal to rely on too heavily for decision making (hoping to go into detail on this in the post).

That is understandable; however, when presenting information (i.e. linking to it on your homepage) there is an implicit endorsement of said information; otherwise it should not be presented. This is irrespective of whether the source is formally affiliated or not--its simple presence is already an informal affiliation. The simple fact that the EA Hotel does not have a better presentation of information is itself meta-information on the state of the organization and project.

However, this was not really the main point; it was only a "little" concern as I... (read more)

3 · riceissa · 4y
Can you give some examples of EA organizations that have done things the "right way" (in your view)?

The inconsistency is itself a little concerning.

I am one of the contributors to the Donations List Website (DLW), the site you link to. DLW is not affiliated with the EA Hotel in any way (although Vipul, the maintainer of DLW, made a donation to the EA Hotel). Some reasons for the discrepancy in this case:

  • As stated in bold letters at the top of the page, "Current data is preliminary and has not been completely vetted and normalized". I don't think this is the main reason in this case.
  • Pulling data into DLW is not automatic, so there is a lag between wh
... (read more)

The fact that there are only 18 total donations totaling less than $10k is concerning.

If you are well-funded, they'll say: "You don't need my money. You're already well-funded." If you aren't well-funded, they'll say: "You aren't well-funded. That seems concerning."

(I think the tone of this comment is the reason it is being downvoted. Since we all presumably believe that EA should be evidential, rational and objective, stating it again reads as a strong attack, as if you were trying to point out that no assessment of impact had been done, even though the original post links to some.)

5 · CEEALAR · 5y
Please read the posts linked to on eahotel.org/fundraiser (and as stated in the OP, we have more in the pipeline). See also the totaliser on that page (it will be updated soon): total donations (in addition to those made by founder Greg Colbourn) are currently ~£36k from >50 individuals, and they have come through various means (the PayPal MoneyPool, GoFundMe, Patreon, and privately). UPDATE 6th Nov 2019: the fundraiser page has now been updated, and a histogram of donations added: https://eahotel.org/fundraiser/

What are the actual calculations you used?

For the wild animal welfare lower bound: 0.99 * 0.99 * 0.75 * 0.99 * 0.95 * 0.9 * 0.8 * 0.9 * 0.8 * 0.9 * 0.9 * 0.8 * 0.95 * 0.95 = 21% ?
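(For reference, a quick check of that product, assuming the third factor is meant to be 0.75:)

```python
from math import prod

# Multiply the quoted lower-bound factors to check the ~21% figure.
factors = [0.99, 0.99, 0.75, 0.99, 0.95, 0.9, 0.8,
           0.9, 0.8, 0.9, 0.9, 0.8, 0.95, 0.95]
print(f"{prod(factors):.2%}")  # -> 20.96%, i.e. roughly 21%
```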

How do you determine whether something is 0.90, 0.95, 0.99, or some other number?

In your summary, you state that animal causes have a combined 7/12 chance of being the top priority, whereas human causes have a combined 5/12 chance. However, the error margins are huge, with the original wild animals priority having "wide-margins" of 25-90%.

It does not seem to me that the... (read more)

2 · Stijn · 5y
As mentioned, those percentages were my own subjective estimates, and they were determined based on the considerations that I mentioned ("This estimate is based on"). When I clearly state that these are my personal, subjective estimates, I don't think it is misleading: it does not give a veneer of objectivity. The clarifying part is that you can now decide whether you agree or disagree with the probability estimates. Breaking the estimate into factors helps you to clarify the relevant considerations and improves your accuracy. It is better than simply guessing the overall estimate of the probability that wild animal suffering is the priority. If you don't like the wide margins, perhaps you can improve the estimates? But knowing that we often have an overconfidence bias (our error estimates are often too narrow), we should a priori not expect narrow error margins, and we should correct this bias by taking wider margins.
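For example, a minimal sketch of that factor decomposition, with invented per-factor ranges (not the numbers from the post):

```python
from math import prod

# Break an overall probability into factors, each with a subjective
# low/high range, and propagate the ranges by multiplying the endpoints.
# The factors and ranges below are invented for illustration only.
factor_ranges = [
    (0.80, 0.99),  # e.g. "wild animals are sentient"
    (0.60, 0.95),  # e.g. "their aggregate suffering dominates"
    (0.50, 0.90),  # e.g. "tractable interventions exist"
]

low = prod(lo for lo, _ in factor_ranges)
high = prod(hi for _, hi in factor_ranges)
print(f"overall estimate: {low:.0%} to {high:.0%}")  # -> 24% to 85%: wide margins
```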

In my opinion, it is really too simple to look only at either a flat, first-order count of neurons or some other gauge of experience (i.e. suffering and pleasure) while ignoring potential higher order effects. Perhaps I disagree with many on how utility should be defined.

(There are a couple of reposts of this to Reddit's EA subreddit.)

Maybe I simply missed it, but where do the personal probability estimates come from? If they are simply pulled out of the air, then any mathematical conclusions in the summary are likely invalid; a different result could be obtained just by playing with the numbers, even if the same arguments are maintained.

2 · Stijn · 5y
The personal probability estimates are pulled out of my 'air' of intuitive judgments. You are allowed to play with the numbers according to your own intuitive judgments. Breaking down the total estimate into factors allows you to make more accurate estimates, because you reflect better on all of the beliefs that are relevant to the estimate.

Point taken, but look at the OP's title--it is a definitive claim, one which is not supported at all by the accompanying text. Describing the uncertainty in fact does get us somewhere: it allows one to throw out the claim. "Animal charities could easily be better than the OP suggests" indeed, but they could also be far worse.

Unless someone submits new data one way or the other, though, the point is moot; which is to say, "back to the drawing board"--and that is better than being led down a false path, i.e. it is, again, an improvement over what was originally presented.

4 · kbog · 5y
You can interpret "much more effective" as a claim about the expected value of a charity given current information. Personally, that's what I think when I see such statements.

This is absurd. Not because human lives are necessarily inherently more valuable than other animal lives, but rather because the calculation is ridiculously unrefined and cannot be used to support the conclusion.

The idea of basing the calculation on a simple neuronal count is flat-out wrong, because humans aren't even at the top under an even, 1:1 weighting in that regard; see: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons . If it were that easy, the point could much more easily be made by just looking at elephant charities rather th... (read more)

2 · MichaelStJules · 5y
Although I suspect this is more likely to be false than true, it is not inconceivable that less intelligent animals of a given species could matter more than humans, individually. For example, their experiences, good or bad, could be more intense than ours, or they could experience life more quickly*. They don't need to have more neurons for this to be true, either, and I am skeptical of the importance of neuron count, too, in part because of this.

*If the rate didn't matter, you'd run into problems with the theory of relativity: if you are moving very fast compared to another person, you each will see the other as aging more slowly, all else equal. If the rate didn't matter, then you'd each see the other as mattering more, all else equal, because the other would live longer. Then ethics would have to depend on the frame of reference, which is pretty weird, but perhaps not fatal.
5 · Avi Norowitz · 5y
Since there are less than 1 million elephants alive today, even if each elephant has modestly more moral value than each human, elephant welfare is still very unlikely to meet the importance criteria.
12 · kbog · 5y

I think that when someone puts a number on an unknown value, the only good response is to say whether it's too high or too low. Merely describing the uncertainty doesn't get us anywhere closer to knowing where to donate. Animal charities could easily be better than the OP suggests.

Agree with the gist of the previous comments. This is just basic semantic confusion: people or agents who do not exist only exist as theoretical exercises in mindspace such as this one; they do not exist in any other space by definition, and so cannot have real rights to which ethics should be applied.

So focusing on [not just some but] all nonexistent agents is not controversial; it is just wrong on a basic level. Ironically, I do not think that this is being close-minded; it is simply so by definition.

What would be more productive to discuss are possible agents who could be realized, and their potential rights; that is fundamentally different from, although not mutually exclusive with, nonexistent agents.