"To see the world as it is, rather than as I wish it to be."
I'm a Senior Researcher on the General Longtermism team at Rethink Priorities. I also volunteer as a fund manager for EA Funds' Long-Term Future Fund.
So I do understand this is not your actual position and you're trying to explain someone else's position. Nonetheless I'm going to try to argue against it directly:
Because of 1, this strategy is (I suspect) actually deliberately used in many cases as a form of racist dismissal...for example, if someone has been racistly dismissed in this way many times before, it will be more hurtful to them to face this sort of dismissal again
As a non-native speaker, I think I have literally never been dismissed in this way[1]. So I suspect you're setting up an imaginary problem. But I only have anecdotes to go off of rather than data; if someone has survey data I'm willing to update quickly.
I think this does roughly fit the second definition of racism that you point to (or, at least, the more complete version of this...[emphasis mine]
Here's where I'm coming from: I think you are bending over backwards to support what I view to be a blatantly false claim. In modern left-leaning American culture, "racist" is one of the worst things you can call someone. I'm surprised so many people would stand for me being called that based on such scant evidence, and I'm currently reflecting on whether it makes sense for me to continue to devote so much of my life to being in a community (see EDIT [2]) that finds this type of thing permissible.
For myself, I buy at least some of the above, and think it might mean it was worth commenting on the way that your commenting could be upsetting to some.
I'm not surprised my comment is upsetting to the intended target (what I perceive as poor reading comprehension by people who know better), and/or to people who might choose to take offense on others' behalf. If anybody genuinely feels unwelcome for object-level race or ethnic-related reasons based on my comment, I'm deeply sorry and I'm happy to apologize further publicly if someone in that position messages me about it and/or one of the forum mods to relay the message to me.
And as I'm confident your intentions here were good, I personally would avoid this description.
Thank you. To be clear, I do appreciate both your confidence and your reticence.
And to be clear, I've experienced very blatant (though ultimately harmless) racism not too infrequently, as an argumentative, visibly nonwhite person on the 'net.
EDIT: I'm afraid that came off too dramatically. I do view being involved in EA as a community pretty differently from being involved professionally. I still intend to work on EA projects etc even if I dramatically reduce e.g. EA Forum commenting or going to social events, and personal unpleasantness is not going to stop me from working in EA unless it's like >10-100x this comment thread daily. (And even if I stop having jobs in EA organizations for other reasons, I'd likely still intend to be effectively altruistic with my time and other resources.)
Ok, in that case your claim is that my sentence is part of a "policy, system of government, etc...that favors members of the dominant racial or ethnic group"?
Can you explain what prompted it?
I do not view my actions as racist, at least in this instance. If the claim is accurate, I need to reflect more on how to be less racist. If the claim is inaccurate, then, well, I also have some other reflection to do about life choices.
I will probably refrain from engaging further.
Here's the dictionary definition of racism:
1. a belief or doctrine that inherent differences among the various human racial groups determine cultural or individual achievement, usually involving the idea that one's own race is superior and has the right to dominate others or that a particular racial group is inferior to the others.
2. Also called institutional racism, structural racism, systemic racism. a policy, system of government, etc., that is associated with or originated in such a doctrine, and that favors members of the dominant racial or ethnic group, or has a neutral effect on their life experiences, while discriminating against or harming members of other groups, ultimately serving to preserve the social status, economic advantage, or political power of the dominant group.
Can you clarify whether you think my comment fits into (1) or (2), or both? Alternatively, if you think dictionary.com's definition is not the one you were using, can you pull up which alternative definition of racism you and/or Akhil were invoking when you made your comments?
(My own opinions only; other LTFF fund managers etc. might have different views.)
Hmm I want to split the funding landscape into the following groups:
LTFF
At LTFF, our two biggest constraints are funding and strategic vision. Historically it was some combination of grantmaking capacity and good applications, but I think that's much less true these days. Right now we have enough new donations to fund what we currently view as our best applications for some months, so our biggest priority is finding a new LTFF chair to help (among other things) address our strategic vision bottlenecks.
Going forward, I don't really want to speak for other fund managers (especially given that the future chair should feel extremely empowered to shepherd their own vision as they see fit). But I think we'll make a bid to fundraise a bunch more to help address the funding bottlenecks in x-safety. Still, even if we roughly double our current fundraising numbers[1], my guess is that we're likely to prioritize funding more independent researchers etc. below our current bar[2], as well as supporting our existing grantees, over funding most new organizations.
(Note that in $ terms LTFF isn't a particularly large fraction of the longtermist or AI x-safety funding landscape; I'm talking about it the most because it's the group I'm most familiar with.)
Open Phil
I'm not sure what the biggest constraints are at Open Phil. My two biggest guesses are grantmaking capacity and strategic vision. As evidence for the former, my impression is that they only have one person doing grantmaking in technical AI Safety (Ajeya Cotra). But it's not obvious that grantmaking capacity is their true bottleneck, as a) I'm not sure they're trying very hard to hire, and b) people at OP who presumably could do a good job at AI safety grantmaking (eg Holden) have moved on to other projects. It's possible OP would prefer to conserve their AIS funds for other reasons, eg to wait on better strategic vision, or to have a sudden influx of spending right before the end of history.
SFF
I know less about SFF. My impression is that their problems are a combination of a) structural difficulties preventing them from hiring great grantmakers, and b) funder uncertainty.
Other EA/Longtermist funders
My impression is that other institutional funders in longtermism either don't really have the technical capacity or don't have the gumption to fund projects that OP isn't funding, especially in technical AI safety (where the tradeoffs are arguably more subtle and technical than in eg climate change or preventing nuclear proliferation). So they do a combination of saving money, taking cues from OP, and funding "obviously safe" projects.
Exceptions include new groups like Lightspeed (which I think is more likely than not to be a one-off thing) and Manifund (which has a regranting model).
Earning-to-givers
I don't have a good sense of how much latent money there is in the hands of earning-to-givers who are at least in theory willing to give a bunch to x-safety projects if there's a sufficiently large need for funding. My current guess is that it's fairly substantial. I think there are roughly three reasonable routes for earning-to-givers who are interested in donating:
If they go with (1), LTFF is probably one of the most obvious choices. But LTFF does have a number of dysfunctions, so I wouldn't be surprised if either Manifund or some newer group ends up being the Schelling donation source instead.
Non-EA institutional funders
I think as AI Safety becomes mainstream, getting funding from governments and non-EA philanthropic foundations becomes an increasingly viable option for AI Safety organizations. Note that direct-work AI Safety organizations have a comparative advantage in seeking such funds. In comparison, it's much harder for both individuals and grantmakers like LTFF to seek institutional funding[3].
I know FAR has attempted some of this already.
Everybody else
As worries about AI risk become increasingly mainstream, we might see people at all levels of wealth become more excited to donate to promising AI safety organizations and individuals. It's harder to predict what either non-Moskovitz billionaires or members of the general public will want to give to in the coming years, but plausibly the plurality of future funding for AI Safety will come from individuals who aren't culturally EA or longtermist or whatever.
Which will also be harder after OP's matching expires.
If the rest of the funding landscape doesn't change, the tier which I previously called our $5M tier (as in $5M/6 months, or $10M/year) can probably absorb on the order of $6-9M over 6 months, or $12-18M over 12 months. This is in large part because the lack of other funders means more projects are applying to us.
Regranting is pretty odd outside of EA; I think it'd be a lot easier for e.g. FAR or ARC Evals to ask random foundations or the US government for money directly for their programs than for LTFF to ask for money to regrant according to our own best judgment. My understanding is that foundations and the US government also often have long forms and application processes which would be a burden for individuals to fill out; it makes more sense for institutions to pay that cost.
Sorry by "best" I was locally thinking of what's locally best given present limitations, not globally best (which is separately an interesting but less directly relevant discussion). I agree that if there are good actions to do right now, it will be wrong for me to say that all of them are bad because one should wait for (eg) a "systematic, well-run, whistleblower organisation."
For example, if I said "GiveDirectly is a bad charity for animal-welfare focused EAs to donate to," I meant that there are better charities on the margin for animal-welfare focused EAs to donate to. I did not mean that in the abstract we should not donate to charities because a well-run international government should be handling public goods provision and animal welfare restrictions instead. I agree that I should not in most cases be comparing real possibilities against an impossible (or at least heavily impractical) ideal.
Similarly, if I said "X is a bad idea for Bob to do," I meant there are better things for Bob to do given Bob's existing limitations etc., not that Bob should magically overcome all of his present limitations and do Herculean, impossible tasks. And in fact I was making a claim that there are practical and real possibilities that in my lights are probably better.
I.e. 'bad idea' connotes much more than just 'sub-optimal, all things considered'.
Well, clearly my choice of words in a quick take fired off at 1AM was sub-optimal, all things considered. Especially ex post. But I think it'd be helpful if people actually argued about the merits of different strategies instead of making inferences about my racism or lack thereof, or my rudeness or lack thereof. I feel like I'm putting a lot of work into defending fairly anodyne (denotatively) opinions, even if I made a few bad word choices.
After this conversation, I am considering retreating to more legalese and pre-filtering all my public statements for potential controversy with GPT-4, as a friend of mine suggested privately. I suspect this would be a loss for the EA Forum as a place where people can be honest and real and human with each other, but a gain for my own health as well as productivity.
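(For the curious: a minimal sketch of the kind of pre-filter my friend had in mind might look something like the following. This assumes the OpenAI Python client; the model choice, prompt wording, and function name are illustrative placeholders, not a recommendation or a description of anything I've actually built.)

```python
# Hypothetical sketch of a GPT-4 "controversy pre-filter" for draft comments.
# Assumes the OpenAI Python client (openai>=1.0) with OPENAI_API_KEY set;
# the prompt and model name below are illustrative only.
from openai import OpenAI

client = OpenAI()

def flag_potential_controversy(draft: str) -> str:
    """Ask the model to point out phrasings a reader might find condescending,
    dismissive, or otherwise inflammatory, with suggested rewordings."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You review draft forum comments. List any phrases a "
                    "reasonable reader might find condescending, dismissive, "
                    "or inflammatory, and suggest gentler rewordings."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Usage: print(flag_potential_controversy("My draft comment goes here."))
```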
Sorry, what does "bad idea" mean to you other than "this is not the best use of resources?" Does it have to mean net negative?
I'm sorry that you believe I misunderstood others' positions. Or that I'm playing the "I'm being cool and rational here" card. I don't personally think I'm being unusually cool here.
I have made some updates as well, though I need to reflect further on the wisdom of sharing them publicly.
I agree that what you said is a consideration, though I'm not sure it's an upside. E.g. I wasted a lot more time/sleep on this topic than I would have if I'd learned about it elsewhere and triaged accordingly, and I wouldn't be surprised if other members of the public did as well.
You didn’t provide an alternative
Taking a step back, I suspect part of the disagreement here is that I view my position as the default position, with alternative positions needing strong positive arguments for them, whereas (if I understand correctly) you and other commenters/agree-voters appear to believe that the position "public exposés are the best strategy" ought to be the default, and anything else needs strong positive arguments for it. Stated that way, I hope you can see why your position is irrational:
Nonlinear have their own funding, and lots of pre-existing ties to the community and EA public materials.
Sure, if people agreed with me about the general case and argued that the Nonlinear exposé was an unusual exception, I'd be more inclined to take their arguments seriously. I do think the external source of funding makes it plausible that Nonlinear specifically could not be defanged via other channels. And I did say earlier "I think the case for public writeups is strongest when the bad actors in question are too powerful for private accountability (eg SBF), or when somehow all other methods are ineffective."
A public expose has a much better chance of protecting newcomers from serious harm than some high-up EAs having a private critical doc.
People keep asserting this without backing it up with either numbers or data or even actual arguments (rather than just emotional assertions).
The impression I have of your view is that it would have been better if Ben hadn’t written or published his post and instead saved his time, and prefer that Nonlinear was quietly rejected by those in the know. Is that an accurate picture of your view?
Thanks for asking. I think a better use of Ben's time (though not necessarily the best use) is to spend 0.2x as much time on the Nonlinear investigation + follow-up work and then spend the remaining 0.8x of his time on other investigations. I think this strictly decreases the influence of bad actors in EA.
I read your comment as a passive-aggressive “Can’t you read?” attack which carelessly used language issues as a shield against being called out for being an attack.
Yes, this is an accurate reading. Except I dispute "carelessly."
- I did not call you racist and neither did Akhil. We called out issues with your comment. I hope you are mindful of the difference.
Hmm, sorry, how is the following statement not a claim that I was being racist, at least in that instance?
unacceptable comment, steeped with condescension and some racism.
If I say someone is doing something unacceptably racist, that is not exactly a subtle accusation! I mean, it's possible someone isn't overall racist in most ways but is racist in a specific way (eg think of an otherwise progressive voter/parent who still tries to persuade their children not to marry outside of their race). But I also contest that I was being racist in that comment specifically.
Thanks for engaging.
So I think this is a very reasonable position to have. I think it's the type of position that should lead someone to be comparatively much less interested in the "biology can't kill everyone"-style arguments, and comparatively more concerned about biorisk and AI misuse risk than about AGI takeover risk. Depending on the details of the collapse[1] and what counts as "negative infinity", you might also be substantially more concerned about nuclear risk as well.
But I don't see a case for climate change risk specifically approaching anywhere near those levels, especially on timescales of less than 100 years or so. My understanding is that the academic consensus on climate change is very far from it being a near-term (or medium-term) civilizational collapse risk, and when academic climate economists argue about the damage function, the boundaries of the debate are on the order of percentage points[2] of GDP. Which is terrible, sure, and arguably qualifies as a GCR, but pretty far away from a Mad Max apocalyptic state[3]. So on the object level, such claims would literally be wrong. That said, I think the wording of the CAIS statement, "societal-scale risks such as ...", is broad enough to be inclusive of climate change, so someone editing that statement to include climate change wouldn't directly be lying by my lights.
I'm often tempted to have views like this. But as my friend roughly puts it, "once you apply the standard of 'good person' to people you interact with, you'd soon find yourself without any allies, friends, employers, or idols."
There are many commonly-held views that I think are either divorced from reality or morally incompetent. Some people think AI risk isn't real. Some (actually, most) people think there are literal God(s). Some people think there is no chance that chickens are conscious. Some people think chickens are probably conscious but it's acceptable to torture them for food anyway. Some people think vaccines cause autism. Some people oppose human challenge trials. Some people think it's immoral to do cost-effectiveness estimates to evaluate charities. Some people think climate change poses an extinction-level threat in 30 years. Some people think it's acceptable to value citizens >1000x the value of foreigners. Some people think John Rawls is internally consistent. Some people have strong and open racial prejudices. Some people have secret prejudices that they don't display but that drive much of their professional and private lives. Some people think good internet discourse practices include randomly calling other people racist, or Nazis. Some people think evolution is fake. Some people believe in fan death.
And these are just viewpoints, when arguably it's more important to do good actions than to have good opinions. Even though I'm often tempted to want to only interact or work with non-terrible people, in terms of practical political coalition-building, I suspect the only way to get things done is by being willing to work with fairly terrible (by my lights) people, while perhaps still being willing to exclude extremely terrible people. The trick is creating the right incentive structures and/or memes and/or coalitions to build something great, or at least acceptable, out of the crooked timber of humanity.
ie is it really civilizational collapse if it's something that affects the Northern Hemisphere massively but leaves Australia and South America without a >50% reduction in standard of living? Reasonable people can disagree, I think.
Maybe occasionally low tens of percentage points? I haven't seen anything that suggests this, but I'm not well-versed in the literature here.
World GDP per capita was 50% lower in 2000, and I think most places in 2000 did not resemble a post-apocalyptic state, with the exception of a few failed states.