Consider whether you're comparatively advantaged to give to non-tax-deductible things.
(Not financial advice.) I think people -- especially donors who are giving >$100k/year -- often default to thinking that they should stick to tax-deductible giving, because they have an unusually high "501c3 multiplier" due to high marginal income tax rates or low cost basis for capital gains taxes. I claim this is a mistake for some donors, because what matters is whether your 501c3 multiplier is unusually high relative to the average dollar in the donor mix, which is...
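To make the arithmetic concrete, here's a deliberately stylized sketch (again, not tax advice; every rate below is a hypothetical placeholder, and real multipliers depend on the details of your tax situation):

```python
# Stylized "501c3 multiplier": dollars delivered to a tax-deductible charity per
# dollar of after-tax consumption you give up. All rates are made-up placeholders.

def c3_multiplier(marginal_income_tax_rate: float,
                  avoided_cap_gains_per_dollar: float = 0.0) -> float:
    # Deducting $1 of giving reduces your taxes by your marginal rate, and donating
    # appreciated assets can also avoid capital gains tax on part of that dollar,
    # so the net cost to you of putting $1 into a 501c3 is less than $1.
    return 1.0 / (1.0 - marginal_income_tax_rate - avoided_cap_gains_per_dollar)

my_multiplier  = c3_multiplier(0.45, 0.10)   # hypothetical high-income donor with low-basis stock
avg_multiplier = c3_multiplier(0.30)         # assumed average dollar in the donor mix

# The relevant question is the *relative* multiplier: if yours is well above the
# average dollar's, you're the wrong person to be covering non-deductible giving.
print(my_multiplier / avg_multiplier)
```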
Weak-downvoted; I think it's fair game to say an org acted in an untrustworthy way, but I think it's pretty essential to actually sketch the argument rather than screenshotting their claims and not specifying what they've done that contradicts the claims. It seems bad to leave the reader in a position of being like, "I don't know what the author means, but I guess Epoch must have done something flagrantly contradictory to these goals and I shouldn't trust them," rather than elucidating the evidence so the reader can actually "form their own judgment." Ben_...
(Speaking for myself as someone who has also recommended donating to Horizon, not Julian or OP)
I basically think the public outputs of the fellows are not a good proxy for the effectiveness of the program (or basically any talent program). The main impact of talent programs, including Horizon, seems better measured by where participants wind up shortly after the program (on which Horizon seems objectively strong), plus a subjective assessment of how good the participants are. There just isn't a lot of shareable data/info on the latter, so I can't do much be...
I appreciate these analyses, but given the very high sensitivity of the bottom lines to parameters like how welfare ranges correspond to neuron counts or other facts about the animals in question, I find it implausible that the best donation option is to fund the intervention with the highest mean estimate rather than either 1) fund more research into those parameters or 2) save/invest until such research has happened. Maybe future posts could examine the tradeoff between funding/waiting for such research versus funding the direct interventions now?
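To make the value-of-information point concrete, here's a toy sketch with entirely hypothetical numbers (not drawn from the analyses in question), where intervention A's cost-effectiveness hinges on one highly uncertain parameter and B's does not:

```python
# Toy value-of-information calculation with made-up numbers.
worlds = [(0.5, 0.001), (0.5, 1.0)]   # (probability, value of the uncertain parameter w)

def value(intervention: str, w: float) -> float:
    # A's value scales with w (e.g. a welfare-range weight); B's is fixed.
    return 1000 * w if intervention == "A" else 400.0

# Strategy 1: fund whichever option has the higher mean estimate today.
mean_A = sum(p * value("A", w) for p, w in worlds)   # 500.5
mean_B = sum(p * value("B", w) for p, w in worlds)   # 400.0
commit_now = max(mean_A, mean_B)

# Strategy 2: fund research (or wait) until w is known, then fund the better option.
learn_first = sum(p * max(value("A", w), value("B", w)) for p, w in worlds)   # 700.0

print(commit_now, learn_first, learn_first - commit_now)   # research is worth ~200 per unit here
```

Whether that gap actually favors research over giving now of course depends on how much the research costs and how decision-relevant it ends up being.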
Thanks for the comment. I actually agree that funding research explicitly aiming to decrease the uncertainty about the effects on soil animals would be more cost-effective than funding the cheapest ways to save human lives. However, I do not know about any concrete donation opportunities to support that research. I asked people from RP, the Welfare Footprint Institute (WFI), and Wild Animal Initiative (WAI) about it 3 months ago, and said I would be happy to donate $3k myself. Only Cynthia Schuck-Paim from WFI replied, saying Wladimir Alonso from WFI is wo...
I think this is comparing apples and oranges: biological capabilities on benchmarks (AFAIK not that helpful in real-world lab settings yet) versus actual economic impact. The question is whether real world bio capabilities will outstrip real world broad economic capabilities.
It's certainly possible that an AI will trigger a biorisk if-then commitment before it has general capabilities sufficient for 10% cumulative GDP growth. But I would be pretty surprised if we get a system so helpful that it could counterfactually enable laypeople to dramatically surpass th...
I think this picture of EA ignoring stable totalitarianism is missing the longtime focus on China.
Also, see this thread on Open Phil's ability to support right-of-center policy work.
It feels like there's an obvious trade between the EA worldview on AI and Thiel's, where the strategy is "laissez faire for the kinds of AI that cause late-90s-internet-scale effects (~10% cumulative GDP growth), aggressive regulation for the kinds of AI that inspire the 'apocalyptic fears' that he agrees should be taken seriously, and require evaluations of whether a given frontier AI poses those risks at the pre-deployment stage so you know which of these you're dealing with."
Indeed, this is pretty much the "if-then" policy structure Holden proposes here...
I notice a pattern in my conversations where someone is making a career decision: the most helpful parts are often prompted by "what are your strengths and weaknesses?" and "what kinds of work have you historically enjoyed or not enjoyed?"
I can think of a couple cases (one where I was the recipient of career decision advice, another where I was the advice-giver) where we were kinda spinning our wheels, going over the same considerations, and then we brought up those topics >20 minutes into the conversation and immediately made more progress than the res...
Yeah interesting. To be clear, I'm not saying e.g. Manifund/Manival are net negative because of adverse selection. I do think additional grant evaluation capacity seems useful, and the AI tooling here seems at least more useful than feeding grants into ChatGPT. I suppose I agree that adverse selection is a smaller problem in general than those issues, though once you consider tractability, it seems deserving of some attention.
Cases where I'd be more worried about adverse selection, and would therefore more strongly encourage potential donors:
Can you say more about how this / your future plans solve the adverse selection problems? (I imagine you're already familiar with this post, but in case other readers aren't, I recommend it!)
Having a savings target seems important. (Not financial advice.)
I sometimes hear people in/around EA rule out taking jobs due to low salaries (sometimes implicitly, sometimes a little embarrassedly). Of course, it's perfectly understandable not to want to take a significant drop in your consumption. But in theory, people with high salaries could be saving up so they can take high-impact, low-paying jobs in the future; it just seems like, by default, this doesn't happen. I think it's worth thinking about how to set yourself up to be able to do it if you do ...
Relevant: I've been having some discussions with (non-EA) friends on why they don't donate more.
Some argue that they want enough money to take care of themselves in the extreme cases of medical problems and political disasters, but still with decent Bay Area lifestyles. I think the implication is that they will wait until they have around $10 million or so to begin thinking of donations. And if they have kids, maybe $30 million.
I obviously find this very frustrating, but also interesting.
Of course, I'd expect that if they would make more ...
I basically agree with this (and might put the threshold higher than $100, probably much higher for people actively pursuing policy careers), with the following common exceptions:
It seems pretty low-cost to donate to a candidate from Party X if...
I don't know the weeds of the moral parliament view, but my suspicion is that this argument operates at too low a level of ethical views (that is, it's "not meta enough"). That's still just a utilitarian frame with empirical uncertainty. The kind of "credences on different moral views" I have in mind is more like:
...I want my moral actions to be guided by some mix of like, 25% bullet-biting utilitarianism (in which case, insects are super important in expectation), 25% virtue ethics (in which case they're a small consideration -- you don't want to go out of your
I think it's reasonable to say "I put some credence on moral views that imply insect suffering is very important and some credence on moral views that imply it's not important; all things considered, I think it's moderately important."
A couple other comments are gesturing at this, but this logic could be applied to all kinds of things: existential risk is probably "either" extremely important or not at all important if you plug different empirical and ethical views into a formula and trust the answer; likewise present-day global health, or political polari...
I definitely agree there are plenty of ways we should reach elites and non-elites alike that aren't statistical models of timelines, and insofar as the resources going towards timeline models (in terms of talent, funding, bandwidth) are fungible with the resources going towards other things, maybe I agree that more effort should be going towards the other things (but I'm not sure -- I really think the timeline models have been useful for our community's strategy and for informing other audiences).
But also, they only sometimes create a sense of panic; I cou...
There's a grain that I agree with here, which is that people excessively plan around a median year for AGI rather than a distribution for various events, and that planning around that kind of distribution leads to more robust and high-expected-value actions (and perhaps less angst).
However, I strongly disagree with the idea that we already know "what we need." Off the top of my head, several ways narrowing the error bars on timelines -- which I'll operationalize as "the distribution of the most important decisions with respect to building transformative AI...
I agree that not everyone already knows what they need to know. Our crux issue is probably "who needs to get it and how will they learn it?" I think we more than have the evidence to teach and set an example of knowing for the public. I think you think we need to make a very respectable and detailed case to convince elites. I think you can take multiple routes to influencing elites and that they will be more receptive when the reality of AI risk is a more popular view. I don't think timelines are a great tool for convincing either of these groups because they create such a sense of panic and there's such an invitation to quibble with the forecasts instead of facing the thrust of the evidence.
Giving now vs giving later, in practice, is a thorny tradeoff. I think these add up to roughly equal considerations, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.)
In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):
Are you a US resident who spends a lot of money on rideshares + food delivery/pickup? If so, consider the following:
I think the opposite might be true: when you apply it to broad areas, you're likely to mistake low neglectedness for a signal of low tractability, and you should just look at "are there good opportunities at current margins." When you start looking at individual solutions, it starts being quite relevant whether they have already been tried. (This point already made here.)
- Would it be good to solve problem P?
- Can I solve P?
What is gained by adding the third thing? If the answer to #2 is "yes," then why does it matter if the answer to #3 is "a lot," and likewise in the opposite case, where the answers are "no" and "very few"?
Edit: actually yeah the "will someone else" point seems quite relevant.
Fair enough on the "scientific research is super broad" point, but I think this also applies to other fields that I hear described as "not neglected" including US politics.
Not talking about AI safety polling, agree that was highly neglected. My understanding, reinforced by some people who have looked into the actually-practiced political strategies of modern campaigns, is that it's just a stunningly under-optimized field with a lot of low-hanging fruit, possibly because it's hard to decouple political strategy from other political beliefs (and selection effects where especially soldier-mindset people go into politics).
But neglectedness as a heuristic is very good precisely for narrowing down what you think the good opportunity is. Every neglected field is a subset of a non-neglected field. So pointing out that great grants have come in some subset of a non-neglected field doesn't tell us anything.
To be specific, it's really important that EA identifies the area within that non-neglected field where resources aren't flowing, to minimize funging risk. Imagine that AI safety polling had not been neglected and that in fact there were tons of think tanks who planned to do AI saf...
I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.
Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be i...
I agree that a lot of EAs seem to make this mistake, but I don't think the issue is with the neglectedness measure; IME, people often incorrectly scope the area they are analysing and fail to notice that that specific area can be highly neglected whilst also being tractable and important, even if the wider area it's part of is not very neglected.
For example, working on information security in USG is imo not very neglected but working on standards for datacentres that train frontier LMs is.
Disagree-voted. I think there are issues with the Neglectedness heuristic, but I don’t think the N in ITN is fully captured by I and T.
For example, one possible rephrasing of ITN (certainly not covering all the ways in which it is used) is:
I think this is a great way to decompose some decision problems. For instance, it seems very useful for thinking about prioritizing research, because (3) helps you answer the important question "If I don’t solve P, will so...
Upvoted and disagree-voted. I still think neglectedness is a strong heuristic. I cannot think of any good (in my evaluation) interventions that aren't neglected.
Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion
I wouldn't think about it that way because "scientific research" is so broad. That feels kind of like saying shrimp welfare isn't neglected because a lot of money goes to animal shelters, and those both fall under the "animals" umbrella.
...US politics is a frequently cited example of a non-negl
Biggest disagreement between the average worldview of people I met with at EAG and my own is something like "cluster thinking vs sequence thinking," where people at EAG are like "but even if we get this specific policy/technical win, doesn't it not matter unless you also have this other, harder thing?" and I'm more like, "Well, very possibly we won't get that other, harder thing, but still seems really useful to get that specific policy/technical win, here's a story where we totally fail on that first thing and the second thing turns out to matter a ton!"
Thanks, glad to hear it's helpful!
I hope to eventually/maybe soon write a longer post about this, but I feel pretty strongly that people underrate specialization at the personal level, even as there are lots of benefits to pluralization at the movement level and large-funder level. There are just really high returns to being at the frontier of a field. You can be epistemically modest about what cause or particular opportunity is the best, not burn bridges, etc, while still "making your bet" and specializing; in the limit, it seems really unlikely that e.g. having two 20 hr/wk jobs in diffe...
Thanks for running this survey. I find these results extremely implausibly bearish on public policy -- I do not think we should be even close to indifferent between improving by 5% the AI policy of the country that can make binding rules on all of the leading labs plus many key hardware inputs, has a $6 trillion budget, and has the most powerful military on earth, and having $8.1 million more for a good grantmaker, or having 32.5 "good video explainers," or having 13 technical AI academics. I'm biased, of course, but IMO the surveyed population is massively overrating the importance of the alignment community relative to the US government.
I mostly agree with this. The counterargument I can come up with is that the best AI think tanks right now are asking for grants in the range of $2-5 million and seem to be pretty influential, so it's possible that a grantmaker who got $8 million could improve policy by 5%, in which case it's correct to equate those two.
I'm not sure how that fits with the relative technical/policy questions.
I think "5%" is just very badly defined. If I just go with the most intuitive definition to me, then 32.5 good video explainers would probably improve the AI x-risk relevant competence of the US government by more than 5% (which currently is very close to 0, and 5% of a very small number is easy to achieve).
But like, any level of clarification would probably wildly swing whatever estimates I give you. Disagreement on this question seems like it will inevitably just lead to arguing over definitions.
Fwiw, I think the main thing getting missed in this discourse is that if even 3 out of your 50 speakers (especially if they're near the top of the bill) are mostly known for a cluster of edgy views that are not welcome in most similar spaces, then people who really want to gather to discuss those edgy and typically unwelcome views will be a seriously disproportionate share of attendees, and this will have significant repercussions for the experience of the attendees who were primarily interested in the other 47 speakers.
I recommend the China sections of this recent CNAS report as a starting point for discussion (it's definitely from a relatively hawkish perspective, and I don't think of myself as having enough expertise to endorse it, but I did move in this direction after reading).
From the executive summary:
...Taken together, perhaps the most underappreciated feature of emerging catastrophic AI risks from this exploration is the outsized likelihood of AI catastrophes originating from China. There, a combination of the Chinese Communist Party’s efforts to accelerate AI
Yes, but it's kind of incoherent to talk about the dollar value of something without having a budget and an opportunity cost; it has to be your willingness-to-pay, not some dollar value in the abstract. Like, it's not the case that the EA funding community would pay $500B even for huge wins like malaria eradication, an end to factory farming, or a robust AI alignment solution, because it's impossible: we don't have $500B.
And I haven't thought about this much but it seems like we also wouldn't pay, say, $500M for a 1-in-1000 chance for a "$500B win" because un...
I think the core issue is that the lottery wins you government dollars, which you can't actually spend freely. Government dollars are simply worth less, to Pablo, than Pablo's personal dollars. One way to see this is that if Pablo could spend the government dollars on the other moonshot opportunities, then it would be fine that he's losing his own money.
So we should stipulate that after calculating abstract dollar values, you have to convert them, by some exchange rate, to personal dollars. The exchange rate simply depends on how much better the opportunit...
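A minimal sketch of that conversion, with every number below hypothetical:

```python
# Hypothetical conversion of an "abstract" payoff (e.g. government dollars you
# can't spend freely) into personal-dollar equivalents before comparing to cost.

p_win          = 1e-3     # chance the moonshot pays off
abstract_value = 500e9    # headline dollar value of the win
exchange_rate  = 1e-4     # personal-dollar value per abstract dollar (made up)
ticket_price   = 500e6    # what it costs out of your own budget

expected_personal_value = p_win * abstract_value * exchange_rate   # $50,000 here
print(expected_personal_value, expected_personal_value > ticket_price)   # not worth it at these numbers
```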
Well, it implies you could change the election with those amounts if you knew exactly how close the election would be in each state and spent optimally. But if you figure the estimates are off by an OOM, and half of your spending goes to states that turn out not to be useful (which matches a ~30 min analysis I did a few months ago), and you have significant diminishing returns such that $10M-$100M is 3x less impactful than the first $10M and $100M-$1B is another 10x less impactful, you still get:
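Chaining those discounts together looks roughly like this (same tiers and factors as above; everything else is hypothetical, and it only computes the combined discount rather than any bottom line about win probability):

```python
# Rough combined discount on $1B of election spending, using the factors stated above.
tiers = [            # (dollars in tier, impact per dollar relative to the first $10M)
    (10e6,  1.0),
    (90e6,  1/3),    # $10M-$100M: 3x less impactful per dollar
    (900e6, 1/30),   # $100M-$1B: another 10x less impactful per dollar
]

effective = sum(amount * rel for amount, rel in tiers)   # ~$70M at naive-estimate value
effective /= 10    # per-state estimates assumed off by an order of magnitude
effective /= 2     # ~half of spending lands in states that end up not mattering

print(f"${effective/1e6:.1f}M of naive-estimate-equivalent impact per $1B spent")   # ~$3.5M
```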
I think if you think there's a major difference between the candidates, you might put a value on the election in the billions -- let's say $10B for the sake of calculation.
You don't need to think there's a major difference between the candidates to conclude that the election of one candidate adds billions in value. The size of the US discretionary budget over the next four years is roughly three orders of magnitude larger than your $10B figure, and a president can have an impact of the sort EAs care about in ways that go beyond influencing the budget, such as regulating AI, setting immigration policy, eroding government institutions, and waging war.
It seems like you might be under-weighing the cumulative amount of resources -- even if you have some pretty heavy decay rate (which it's unclear you should -- usually we think of philanthropic investments compounding over time), avoiding nuclear war was a top global priority for decades, and it feels like we have a lot of intellectual and policy "legacy infrastructure" from that.
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.
I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves ...
I broadly want to +1 this. A lot of the evidence you are asking for probably just doesn’t exist, and in light of that, most people should have a lot of uncertainty about the true effects of any overton-window-pushing behavior.
That being said, I think there’s some non-anecdotal social science research that might make us more likely to support it. In the case of policy work:
Yes, some regulations backfire, and this is a good flag to keep in mind when designing policy, but to actually make the reference-class argument here work, you'd have to show that this is what we should expect from AI policy, which would include showing that failures like NEPA are either much more relevant for the AI case or more numerous than other, more successful regulations, like (in my opinion) the Clean Air Act, Sarbanes-Oxley, bans on CFCs or leaded gasoline, etc. I know it's not quite as simple as "I would simply design good regulations instead of ...
This post correctly identifies some of the major obstacles to governing AI, but ultimately makes an argument for "by default, governments will not regulate AI well," rather than the claim implied by its title, which is that advocating for (specific) AI regulations is net negative -- a type of fallacious conflation I recognize all too well from my own libertarian past.
Interesting! I actually wrote a piece on "the ethics of 'selling out'" in The Crimson almost 6 years ago (jeez) that was somewhat more explicit in its EA justification, and I'm curious what you make of those arguments.
I think randomly selected Harvard students (among those who have the option to do so) deciding to take high-paying jobs and donate double-digit percentages of their salary to places like GiveWell is very likely better for the world than the random-ish other things they might have done, and for that reason I strongly support this op-ed. But I ...
I object to calling funding two public defenders "strictly dominating" being one yourself; while being a public defender isn't an especially high-variance role with respect to performance compared to e.g. federal public policy, it doesn't seem that crazy that a really talented and dedicated public defender could be more impactful than the 2 or 3 marginal PDs they'd fund while earning to give.
The shape of my updates has been something like:
Q2 2023: Woah, looks like the AI Act might have a lot more stuff aimed at the future AI systems I'm most worried about than I thought! Making that go well now seems a lot more important than it did when it looked like it would mostly be focused on pre-foundation model AI. I hope this passes!
Q3 2023: As I learn more about this, it seems like a lot of the value is going to come from the implementation process, since it seems like the same text in the actual Act could wind up either specifically requiring things...
(Cross-posting from LW)
Thanks for these thoughts! I agree that advocacy and communications is an important part of the story here, and I'm glad for you to have added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions or work on the ground in Brussels should have been shifted at the margin to the kinds of public communica...
(I began working for OP on the AI governance team in June. I'm commenting in a personal capacity based on my own observations; other team members may disagree with me.)
OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo
FWIW I really don’t think OP is in the business of preserving the status quo. People who work on AI at OP have a range of opinions on just about every issue, but I don't think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoug...
Nitpick: I would be sad if people ruled themselves out for e.g. being "20th percentile conscientiousness" since in my impression the popular tests for OCEAN are very sensitive to what implicit reference class the test-taker is using.
For example, I took one a year ago and got third percentile conscientiousness, which seems pretty unlikely to be true given my abilities to e.g. hold down a grantmaking job, get decent grades in grad school, successfully run 50-person retreats, etc. I think the explanation is basically that this is how I respond to "I am ...
Yeah this is a good point; fwiw I was pointing at "<30th percentile conscientiousness" as a problem that I have, as someone who is often late to meetings by more than 1-2 minutes (including twice today). My guess is that my (actual, not perceived) level of conscientiousness is pretty detrimental to LTFF fund chair work, while yours should be fine? I also think "Harvard Law student" is just a very wacky reference class re: conscientiousness; most people probably come from a less skewed sample than yours.
Reposting my LW comment here:
Just want to plug Josh Greene's great book Moral Tribes here (disclosure: he's my former boss). Moral Tribes basically makes the same argument in different/more words: we evolved moral instincts that usually serve us pretty well, and the tricky part is realizing when we're in a situation that requires us to pull out the heavy-duty philosophical machinery.
In case it's helpful: as an attendee of this event I would say ~2.5 of these 5 were like "decently" represented (not saying that's sufficient)