The goal of this short-form post: to outline what I see as the key common ground in the recent “big tent” versus “small and weird” discussions, and to identify one candidate point of disagreement.
Tl;dr:
There have been a few posts recently arguing that there should be more EA failures: since we're trying a bunch of high-risk, high-reward projects, some of them should fail, or we're not being ambitious enough.
I think this misunderstands what high-EV bets look like. Most projects don't produce either wild success or abject failure; there's usually a continuum of outcomes in between, and that's where you usually land. This doesn't look like "failure", it looks like moderate success.
For example, consider the MineRL BASALT competition that I organized. The low-... (read more)
https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support
To me, at this point, the expected impact of the EA phenomenon as a whole is negative. I hope we can right this ship, but things really seem off the rails.
Eliezer's tweet is about the founding of OpenAI, whereas Agrippa's comment is about a 2017 grant to OpenAI (OpenAI was founded in 2015, so this was not a founding grant). To argue that Open Phil's grant was net negative (and so strongly net negative as to swamp other EA movement efforts), one would have to compare OpenAI's work in a counterfactual world where it never received the extra $30 million in 2017 (and Holden never joined the board) with the actual world in which those things happened. That seems a lot harder to argue for than what Elieze... (read more)
What is the definition of longtermism, if it now includes traditional global health interventions like reducing lead exposure?
Will MacAskill says (bold added):
Well, it’s because there’s more of a rational market now, or something like an efficient market of giving — where the marginal stuff that could or could not be funded in AI safety is like, the best stuff’s been funded, and so the marginal stuff is much less clear. Whereas something in this broad longtermist area — like reducing people’s exposure to lead, improving brain and other health development — ... (read more)
As far as I can tell, liberal nonviolence is a very popular norm in EA. At the same time, I really cannot think of anything more mortally violent I could do than to build a doomsday machine. Even if my doomsday machine is actually a 10%-chance-of-doomsday machine, or 1%, etcetera (nobody even thinks it's lower than that). How come this norm isn't kicking in? How close to completion does the 10%-chance-of-doomsday machine have to be before gentle kindness is not the prescribed reaction?
My favorite thing about EA has always been the norm that in order to get cred for being altruistic, you actually are supposed to have helped people. This is a great property: it aligns incentives. But now, re: OpenAI, I so often hear people say that gentle kindness is the only way, that if you are openly adversarial they will just do the opposite of what you want even more. So much for aligning incentives.
My knowledge of Christians and stem cell research in the US is very limited, but my understanding is that they accomplished a real slowdown.
Has anyone looked to that movement for lessons about AI?
Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach?
Carrick Flynn lost the nomination, and over $10 million from EA-aligned individuals went to support his nomination.
So these questions may sound pointed:
There was surely a lot of expected value in having an EA-aligned thinker in Congress supporting pandemic preparedness, but there were a lot of bottlenecks he would have had to get through to make a change.
He would have been one of hundreds of congresspeople. He would have had to get bills passed. He would have had to win enough votes to make it past the primary. He would have had to have his pol... (read more)
I think seeing the attacks claiming he was captured by crypto interests was useful: future EA political forays will know that attack is coming and be better able to fend it off. Was that worth $11 million by itself? Probably not, but the expected value was already pretty high (a decent probability of having someone in Congress who could champion bills that no one disagrees with but no one wants to spend time and effort on), so this information gain is helpful and might make future campaigns more successful, or alternatively dissuade future spending in this area. It was definitely good to try once; we'll see how it plays out in the long run. We didn't know he'd lose until he lost!
https://www.nytimes.com/2022/05/14/opinion/sunday/rich-happiness-big-data.html
This article from Seth Stephens-Davidowitz describes a paper (here) that examines who the people in the top 0.1% of earners in the US are (those making at least $1.58 million per year). It was interesting to me that many of them do not have high-status jobs, but rather own unsexy businesses such as a car dealership or a beverage distribution operation. Obviously, this has implications for how we structure society, but it could also be a good thing to keep in mind for th... (read more)
An interesting thought, but I think this overlooks the fact that wealth is heavy-tailed. So it is (probably) higher EV to have one person with a 10% shot at their tech startup getting huge than one person with a 100% chance of running a successful plumbing company.
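To make the comparison concrete, here is a toy calculation; the payoffs are entirely made-up assumptions of mine, chosen only to illustrate how a small chance of a huge outcome can dominate a certain modest one.

```python
# Toy expected-value comparison with made-up numbers (my own assumptions,
# purely to illustrate the heavy-tail point; none of these figures come from
# the article or the comment above).

p_success = 0.10               # hypothetical chance the startup "gets huge"
startup_payoff = 100_000_000   # hypothetical payoff in that tail outcome
plumbing_payoff = 1_000_000    # hypothetical payoff of the reliable business

ev_startup = p_success * startup_payoff   # 0.10 * $100M = $10M
ev_plumbing = 1.0 * plumbing_payoff       # 1.00 * $1M   = $1M

print(f"EV(startup):  ${ev_startup:,.0f}")
print(f"EV(plumbing): ${ev_plumbing:,.0f}")
```

Under these (invented) numbers the risky bet has ten times the expected value, even though it fails nine times out of ten.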
"Write a Philosophical Argument That Convinces Research Participants to Donate to Charity"
Has this ever been followed up on? Is their data public?
I recently experienced a jarring update on my beliefs about Transformative AI. Basically, I thought we had more time (decades) than I now believe we will (years) before TAI causes an existential catastrophe. This has had an interesting effect on my sensibilities about cause prioritization. While I applaud wealthy donors directing funds to AI-related Existential Risk mitigation, I don't assign high probability to the success of any of their funded projects. Moreover, it appears to me that there is essentially no room for additional funds in kinds of denomin... (read more)
Consider s-risk:
From your comment, I understand that you believe the funding situation for TAI is strong and not a limiting factor, and also that the likely outcomes of current interventions are not promising.
(Not necessarily personally agreeing with the above) Given your view, I think one area that could still interest you is "s-risk". This is also relevant to your interest in alleviating massive suffering.
I think talking with CLR, or people such as Chi there might be valuable (they might be happy to speak if you are a personal donor).
Leadership de... (read more)
Question for anyone who has interest/means/time to look into it: which topics on the EA forum are overrepresented/underrepresented? I would be interested in comparisons of (posts/views/karma/comments) per (person/dollar/survey interest) in various cause areas. Mostly interested in the situation now, but viewing changes over time would be great!
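To illustrate the kind of ratio I have in mind, here is a minimal sketch; the topic names and all numbers are invented placeholders, not real forum or survey data.

```python
# Hypothetical sketch of the comparison described above. All topic names and
# figures are placeholders, not actual EA Forum or EA Survey data.

karma_share = {"global health": 0.20, "animal welfare": 0.10, "longtermism": 0.50}
survey_interest_share = {"global health": 0.40, "animal welfare": 0.20, "longtermism": 0.25}

for cause in karma_share:
    # Ratio > 1 means the cause is overrepresented on the forum relative to
    # survey interest; ratio < 1 means it is underrepresented.
    ratio = karma_share[cause] / survey_interest_share[cause]
    print(f"{cause}: {ratio:.1f}x")
```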
My hypothesis [DO NOT VIEW IF YOU INTEND TO INVESTIGATE]:
I expect longtermism to be WILDLY, like 20x, overrepresented. If this is the case I think it may be responsible for a lot of the recent angst about the relationship between longtermism and EA more broadly, and would point to some concrete actions to take.
There was a post on this recently.
(Disclaimer: The argument I make in this short-form feels a little sophistic to me. I’m not sure I endorse it.)
Discussions of AI risk, particularly risks from “inner misalignment,” sometimes heavily emphasize the following observation:
Humans don’t just care about their genes: Genes determine, to a large extent, how people behave. Some genes are preserved from generation to generation and some are pushed out of the gene pool. Genes that cause certain human behaviours (e.g. not setting yourself on fire) are more likely to be preserved. But people don’t care ... (read more)
I suppose my point is narrower, really just questioning whether the observation "humans care about things besides their genes" gives us any additional reason for concern.
I mostly go ¯\_(ツ)_/¯ ; it doesn't feel like much evidence of anything once you've updated on the abstract argument. The actual situation we face will be so different (primarily, we're actually trying to deal with the alignment problem, unlike evolution).
I do agree that in saying " ¯\_(ツ)_/¯ " I am disagreeing with a bunch of claims that say "evolution example implies misa... (read more)
I think some of us really need to create op-eds, videos, etc. for a mainstream audience defending longtermism. The Phil Torres pieces have spread a lot (people outside the EA community have shared them in a Discord server I moderate, and Timnit Gebru has picked them up) and thus far I haven't seen an adequate response.
First posted on nunosempere.com/blog/2022/05/20/infinite-ethics-101 , and written after one too many encounters with someone who didn't know what to do with infinite expected values.
In Exceeding expectations: stochastic dominance as a general decision theory, Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He wants to argue that one decision is only ratio... (read more)
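For readers who haven't seen it, here is a standard textbook statement of (first-order) stochastic dominance; Tarsney's formal setup is richer, but this is the core idea being traded on.

```latex
% Standard statement of first-order stochastic dominance (a sketch of the
% core idea, not Tarsney's exact formalism). Option A dominates option B iff
% A is at least as likely as B to deliver at least x for every threshold x,
% and strictly more likely for some x:
\[
  A \succ_{\mathrm{SD}} B
  \iff
  \Pr(A \ge x) \ge \Pr(B \ge x) \ \text{for all } x,
  \ \text{with strict inequality for some } x .
\]
```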
Monotonic transformations can indeed solve the infinity issue. For example, the sum of 1/n doesn't converge, but the sum of 1/n^2 does, even though x -> x^2 is monotonic.
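Spelling the example out (these are standard results about the two series):

```latex
% The harmonic series diverges, but applying the monotonic map x -> x^2 to
% each term yields a convergent series (the Basel problem).
\[
  \sum_{n=1}^{\infty} \frac{1}{n} = \infty ,
  \qquad
  \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6} < \infty .
\]
```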
The existential risk community’s relative level of concern about different existential risks is correlated with how hard these risks are to analyze. For example, here is The Precipice’s ranking of the top five most concerning existential risks:
This isn’t surprising.
For a number of risks, when you first hear about them, it’s reasonable to have the reaction “Oh, hm, maybe that could be a ... (read more)
Related:
The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, high-consequence risks—especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A ... (read more)
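The structure of that argument can be written out roughly as follows; this gloss is mine, not part of the quote.

```latex
% Rough formalization of the quoted argument (my own gloss, not from the
% source text): the all-things-considered probability of catastrophe X mixes
% the analysis's own estimate with the chance that the analysis is flawed.
\[
  P(X) \;=\; P(X \mid A \text{ sound})\,P(A \text{ sound})
        \;+\; P(X \mid A \text{ flawed})\,P(A \text{ flawed}) .
\]
% If P(X | A sound) is tiny (say 10^{-9}) while P(A flawed) is merely small
% (say 10^{-3}), the second term dominates, so the final estimate is governed
% by how error-prone the analysis is rather than by its stated output.
```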
A way of reframing the idea that "we are no longer funding-constrained" is "we are bottlenecked by people who can find new cost-effective opportunities to spend money". If this is true, we should plausibly stop donating to funds that can't give out money fast enough anyway, and instead spend money on orgs/people/causes we personally estimate need more money now. Maybe we should adjust upward how relevant we think personal information is to our altruistic spending decisions.
Is this right? And are there any good public summaries of the collective wisdom fun... (read more)
FWIW, I think personal information is very relevant to giving decisions, but I also think the meme "EA is no longer funding-constrained" perhaps lacks nuance that's especially relevant for people with values or perspectives that differ substantially from major funders.
Relevant: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps
How to make the long-term future go well: get every generation to follow the rule "leave the world better off than it was under the previous generation".
I recently read a post which:
Normally, I would just ask if they wanted to get a comment from this account. Or just downvote it and explain my reasons for doing so. Or just tear it apart. But today I am low on energy, and I can't help but feel: what's the point? Sure, if I were more tactful, more charismatic, and more glib, I might both be able to explain ... (read more)
I have a few drafts which could use that; send me a message if you feel like doing it.
FTX Future Fund says they support "ambitious projects to improve humanity's long-term prospects". Does it seem weird that they're unanimously funding neartermist global health interventions like lead elimination?
Will MacAskill: ... (read more)
LEEP is led by a very talented team of strong "neartermist" EAs.
In the real world and in real EA, a lot of interest and grantmaking depends on the team and on execution (especially given the funding situation). Very good work and good leaders are always valuable.
Casting everything into some longtermist/neartermist thing online seems unhealthy.
This particular comment seems poorly worded (what does "unanimously" mean?) and seems to be pulling at some issue, but it mostly reads as: everyone likes MacAskill, everyone likes LEEP, and so they decided to make a move.