Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7950 karma · Joined · Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
999

Topic contributions
1

I'm not trying to imply it; I'm trying to state it clearly. You dismiss the arguments made in the book as not being empirical. In case you haven't reread your post, here are some quotes where you do this explicitly:
"the chapter presents effectively zero empirical research" 
"one might expect Y&S to substantiate their case with empirical evidence"
"lack of empirical evidence"

The insistence that only a certain form of evidence - in this case, empirical evidence - would count for an argument is something Eliezer anticipated in the sequences, unsurprisingly.

https://www.lesswrong.com/w/logical-rudeness

I will ask that we keep the discussion in one place rather than duplicating comments. I've responded to the original post; readers can click the link below to see the reply.

Strongly disagree.

Edit to add: Good to know that when people express discomfort with jokes about the mass murder of their family, others respond with downvotes.

That's a really good point, thanks! Though if they don't have short timelines, it seems like they are being quite irresponsible as board members not preventing Sam from making increasingly large bets on scaling. Of course, they might not be willing to cross him; the current board presumably learned the lesson from Ilya's ill-fated decision.

Also, you would need what are currently considered almost implausibly long timelines to conclude that their spending more quickly doesn't make sense.

  1. This is year two, not year one.
  2. See the new Q&A item addressing the need to build capacity; they could give to GiveWell, GiveDirectly, or via Coeff's funds specific to their goals. They could also give via the Gates Foundation, etc. They can do this while building up their internal capacity, so they really don't need to delay additional years.
  3. They have incredibly short AGI timelines, so per their own views, they can't afford to move slowly. If they are giving less than 5% of assets after they already claim AGI, that's a huge failure. So in my view, your proposed 2028 target for giving so little - while they are more than doubling assets yearly - is insanely conservative, not at all the "aggressive, public ramp-up targets" you describe.
  4. That said, yes, I already agreed that actually ambitious public ramp-up commitments could be sufficient; as I said in the post "if it’s done in the next two years, I will admit they are doing their jobs" - but they didn't announce any such plans, and as noted in the post, the total giving commitment is a cash total certainly worth less than 1/6th of their (current rapidly growing) funds; that's insanely low given that it is their total eventual commitment!

No, it would not. Per the frame that makes the argument more compelling, as I said; "Secondly, they may be even more successful in building significantly more powerful AI, transforming the world. Obviously, the nonprofit would become far wealthier, but given OpenAI’s mandate, it also becomes irrelevant."

But within the first option: if they are actually more than doubling their value yearly (as implied by 100x growth in 6 years, which matches their current revenue growth continuing at its current rate), then giving away $20 billion per year - about 13% of their current $150 billion valuation - amounts to only a small fraction of their eventual endowment. And given that it's already hard to spend 13% of $150 billion effectively, it will be far harder to spend any large percentage of a $15 trillion endowment in later years!
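A quick back-of-the-envelope sketch of the figures above (all numbers are the ones stated in the comment - the $150B valuation, $20B/year giving, and 100x growth over 6 years - not new data):

```python
# Back-of-the-envelope check of the growth and giving figures.

# 100x growth over 6 years implies this annual multiplier:
annual_multiplier = 100 ** (1 / 6)
print(f"Implied annual growth: {annual_multiplier:.2f}x")  # ~2.15x, i.e. more than doubling yearly

# $20B/year in giving, against the current $150B valuation:
giving_fraction_now = 20 / 150
print(f"Giving as share of current valuation: {giving_fraction_now:.1%}")  # ~13.3%

# The same $20B against an eventual $15T endowment:
giving_fraction_later = 20 / 15_000
print(f"Giving as share of eventual endowment: {giving_fraction_later:.2%}")  # ~0.13%
```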

To forestall an obvious objection, I do not endorse the decision of OpenAI to use this structure, and there are many other problems. However, the above arguments should apply according to the views they profess, which seems important.

To be clear about my views, I do support spending on local community orgs. As I wrote: "local organizations or those where I have personal affiliations or feel responsibilities towards are also important to me - but... this is conceptually separate from giving charity effectively, and as I mentioned, I donate separately from the 10% dedicated to charity."

I am not saying everyone is malicious, nor that no one cares. Belief fixation can happen when a moderate, non-majority proportion of a population is incentivized to believe something is true, and this is not incompatible with good motivations. And once people claim it's true, refuting the claim is about as hard as establishing it would have been in the first place.
