Yeah, but this pledge is kind of weird for an altruist to actually follow, instead of donating more than the 10%. (Unless you think that almost everyone believes that most of the reason for them to take the GWWC pledge is to enforce the norm, and this causes them to donate 10%, which is more than they'd otherwise donate.)
[This is an excerpt from a longer post I'm writing]
Suppose someone’s utility function is
U = f(C) + D
where U is what they’re optimizing, C is their personal consumption, f is their selfish welfare as a function of consumption (log is a classic choice for f), and D is their amount of donations.
Suppose that they have diminishing utility wrt (“with respect to”) consumption (that is, df(C)/dC is strictly decreasing). Their marginal utility wrt donations is a constant, and their marginal utility wrt consumption is a decreasing function. There has t... (read more)
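To make the implied optimization concrete, here is a minimal worked version, under the assumption (mine, for illustration) of a fixed budget W split between consumption and donations:

```latex
\max_{C,\,D}\ U = f(C) + D \quad \text{s.t.} \quad C + D = W
% Substituting D = W - C and setting dU/dC = 0 gives the first-order condition
%   f'(C^*) = 1 ,
% i.e. consume until a marginal dollar of consumption is worth exactly one
% marginal dollar of donations, then donate everything above C^*.
% With f = log: f'(C) = 1/C, so C^* = 1 and D^* = W - 1 (in these units).
```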
[epistemic status: I'm like 80% sure I'm right here. Will probably post as a main post if no-one points out big holes in this argument, and people seem to think I phrased my points comprehensibly. Feel free to leave comments on the google doc here if that's easier.]
I think a lot of EAs are pretty confused about Shapley values and what they can do for you. In particular, Shapley values are basically irrelevant to problems related to coordination between a bunch of people who all have the same values. I want to talk about why.
So Shapley values are a sol... (read more)
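For concreteness, here is a minimal sketch of how Shapley values are computed, using a toy three-player game of my own (not an example from the post): each player's value is their marginal contribution, averaged over all orders in which the coalition could have assembled.

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy game: any coalition of two or more players produces 1 unit of value.
v = lambda s: 1.0 if len(s) >= 2 else 0.0
print(shapley_values(["A", "B", "C"], v))  # each player gets 1/3
```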
I am not sure. I think it’s pretty likely I would want to fund it after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.
I would personally be pretty down for funding reimbursements for past expenses.
This is indeed my belief about ex ante impact. Thanks for the clarification.
That might achieve the "these might be directly useful" and "produce interesting content" goals, if the reviewers knew how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don't. It wouldn't achieve any of the other goals, though.
Here's a crazy idea. I haven't run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff.
I share this concern, and I think a culture with more book reviews is a great way to achieve that (I've been happy to see all of Michael Aird's book summaries for that reason).
CEA briefly considered paying for book reviews (I was asked to write this review as a test of that idea). IIRC, the goal at the time was more about getting more engagement from people on the periphery of EA by creating EA-related content they'd find int... (read more)
I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.
Incidentally, you can't buy control of the New York Times on public markets; you'd have to do a private deal with the family that controls it.
Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA, the limiting factor on a grant for me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".
I think that the funds' RFMF (room for more funding) is only slightly real--I think that giving t... (read more)
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy them - comes from 'crazy... (read more)
I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and so that I know whether we ought to renew funding.
I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary.
I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But empirically I don't do it, and my intuitions apparently aren't that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard an... (read more)
Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.
Re 1: I don't think I would have granted more.
Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.
Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse;... (read more)
Re your 19 interventions, here are my quick takes on all of them:
Creating, scaling, and/or improving EA-aligned research orgs
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
Creating, scaling, and/or improving EA-aligned research training programs
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research... (read more)
I feel very unsure about this. I don't think my position on this question is very well thought through.
Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail ... (read more)
re 1: I expect to write similarly detailed writeups in future.
re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)
re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes ... (read more)
I don't think this has much of an advantage over other related things that I do, like
A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different involved parties, where the involved parties are:
Presumably this differs a lot between grants; I'd be interested in some typical figures.
This question is important because you need a sense of these numbers in order to make decisions about which of these parties you sho... (read more)
(I'd be very interested in your answer if you have one btw.)
Making up some random numbers:
This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.
This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.
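As a toy illustration of how shares like these might be used (the party labels below are my hypothetical stand-ins, since the original list of parties isn't reproduced here):

```python
# Hypothetical credit split for a strong active-grantmaking case (8/30/12/50).
shares = {
    "donors": 0.08,
    "fund managers": 0.30,
    "other EA infrastructure": 0.12,
    "grantee": 0.50,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares should exhaust 100%

grant_impact = 100.0  # total impact of one grant, in arbitrary units
for party, share in shares.items():
    print(f"{party}: {share * grant_impact:.0f} units of credit")
```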
Incidentally, I think that tracking work time is a kind of dangerous thing to do, because it makes it really tempting to make bad decisions that will cause you to work more. This is a lot of why I don't normally track it.
EDIT: however, it seems so helpful to track it some of the time that I overall strongly recommend doing it for at least a week a year.
I occasionally track my work time for a few weeks at a time; by coincidence I happen to be tracking it at the moment. I used to use Toggl; currently I just track my time in my notebook by noting the time whenever I start and stop working (where by "working" I mean "actively focusing on work stuff"). I am more careful about time tracking my work on my day job (working on longtermist technical research, as an individual contributor and manager) than working on the EAIF and other movement building stuff.
The first four days this week, I did 8h33m, 8h15m, 7h32m... (read more)
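A minimal sketch of the arithmetic behind the notebook method (the timestamps below are illustrative, not real data):

```python
from datetime import datetime

# Illustrative start/stop notes for one day (format: "HH:MM-HH:MM").
notes = ["09:12-11:47", "12:30-15:05", "15:40-18:03"]

total = 0  # focused minutes
for span in notes:
    start_s, stop_s = span.split("-")
    start = datetime.strptime(start_s, "%H:%M")
    stop = datetime.strptime(stop_s, "%H:%M")
    total += int((stop - start).total_seconds() // 60)

print(f"{total // 60}h{total % 60:02d}m")  # -> 7h33m for the notes above
```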
That seems correct, but doesn’t really defend Ben’s point, which is what I was criticizing.
I am glad to have you around, of course.
My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I'd be very interested to hear I was wrong about that.
I think that isn't the right counterfactual, since I got into EA circles despite having only minimal (and net negative) impressions of EA-related forums. So your claim is narrowly true; but if the counterfactual were instead that my first exposure to EA was the EA Forum, then yes, I think the prominence of this kind of post would have made me substantially less likely to engage.
But fundamentally if we're running either of these counterfactuals I think we're already leaving a bunch of value on the table, as expressed by EricHerboso's post about false dilemmas.
I am not sure whether I think it's a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren't obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're inf... (read more)
I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're infuriated by things EAs are saying.
If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.
I would guess it depends quite a bit on these people's total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is "not for them").
If we're imagining people who've already had 10 or even 100 hour... (read more)
I bounce off posts like this. Not sure if you'd consider me net positive or not. :)
More generally, I think our disagreement here probably comes down to something like this:
There's a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome. As you say, if we're skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.
But this comes at a cost. I personally feel much less excited about writing about certain topics because I'd have to be super careful about them. And most of t... (read more)
I don't disagree with any of that. I acknowledge there is real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of how bad what happened is exactly). I just think there is also some cost, which you are undervaluing and not acknowledging here, on the other side of that trade-off. Maybe we disagree on the exchange rate between these (welcomingness and unfiltered/candid communication)? I think that becoming more skillful at doing both well is an important skill for a community l... (read more)
(I'm writing these comments kind of quickly, sorry for sloppiness.)
With regard to
Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.
In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.
I would have no meta-level objection to a comment saying "I disagree that X is bad, I think it's actually fine".
I think that this totally misses the point. The point of this post isn't to inform ACE that some of the things they've done seem bad--they are totally aware that some people think this. It's to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.
I guess I don't know OP's goals, but yeah, if their goal is to publicly shame ACE, then publicly shaming ACE is a good way to accomplish that goal.
My point was a) sending a quick email to someone about concerns you have with their work often has a very high benefit-to-cost ratio, and b) despite this, I still regularly talk to people who have concerns about some organization but have not sent them an email.
I think those claims are relatively uncontroversial, but I can say more if you disagree.
I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor, but as an intuition pump imagine the following comment.
"On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem. On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I'm worried about the second-order effects of talking about this misconduct."
I guess my concern is that it seems li... (read more)
Whilst I agree with you that there is some risk in the pattern of not criticising bad thing X because of concerns about second-order effects, I think you chose a really bad substitution for 'X' here, and as a result can totally understand where Khorton's response is coming from (although I think 'campaigning against racism' is also a mischaracterisation of X here). Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that l... (read more)
I’d be interested to see comparisons of the rate at which rationalists and EAs have children compared to analogous groups, controlling for example for education, age, religiosity, and income. I think this might make the difference seem smaller.
To this I would add:
Beware the selection effect: I’d expect people with kids to be less likely to come to meetups, less likely to post on this forum, etc. than EAs with overall-similar levels of involvement, so it can look like there are fewer of them than there actually are, if you aren’t counting carefully.
For EA clusters in very-high-housing-cost areas specifically (Milan mentioned the Bay), I wouldn’t be surprised if the broader similar demographic is also avoiding children, since housing is usually the largest direct financial cost of having children,... (read more)
Great post, and interesting and surprising result.
An obvious alternative selection criterion would be something like “how good would it be if this person got really into EA”; I wonder if you would be any better at predicting that. This one takes longer to get feedback on, unfortunately.
My instinctual response to this was: "well it is not very helpful to admit someone for whom it would be great if they got into EA if they really seem like they won't".
However, since it seems like we are not particularly good at predicting whether they will get involved or not maybe this is a metric we should incorporate. (My intuition is that we would still want a baseline? There could be someone it would be absolutely amazing to have get involved but if they are extremely against EA ideas and disruptive that might lower the quality of the fellowship... (read more)
I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.
I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.
But I don’t think that this prediction is true: I... (read more)
My main objection to this post is that personal fit still seems really important when choosing what to do within a cause. I think that one of EA's main insights is "if you do explicit estimates of impact, you can find really big differences in effectiveness between cause areas, and these differences normally swamp personal fit"; that's basically what you're saying here, and it's totally correct IMO. But I think it's a mistake to apply the same style of reasoning within causes, because differences in effectiveness between jobs in the same cause are much smaller, so personal fit ends up dominating the estimate of which job will be better.
I'd be curious to hear why you think that these charities are excellent; eg I'd be curious for your reply to the arguments here.
I respect cluelessness arguments enough that I've removed "strongly" from "strongly believe" in my response; I was just in an enthusiastic mood.
My giving to charities focused on short-term impact (and GiveWell in particular) is motivated by a few things:
Oh man, I'm so sorry, you're totally right that this edit fixes the problem I was complaining about. When I read this edit, I initially misunderstood it in such a way that it didn't address my concern. My apologies.
How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong?
This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over what model to be using to answer the "are we at HoH" question, and each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in it). It seems like your outside view model assi... (read more)
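In symbols, the model averaging described here is just (notation mine):

```latex
P(\mathrm{HoH}) \;=\; \sum_i P(M_i)\, P(\mathrm{HoH} \mid M_i)
% where the M_i are the candidate models and P(M_i) is the credence in each.
% A model that assigns essentially zero probability to HoH contributes
% essentially nothing to the sum, however much credence it gets.
```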
Hmm, interesting. It seems to me that your priors cause you to think that the "naive longtermist" story, where we're in a time of perils and if we can get through it, x-risk goes basically to zero and there are no more good ways to affect similarly enormous amounts of value, has a probability which is basically zero. (This is just me musing.)
I agree with all this; thanks for the summary.
Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.
On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future.
This does make a lot more sense than what you wrote in your post.
Do you agree that, as written, the argument in your EA Forum post is quite flawed? If so, I think you should edit it to more clearly indicate that it was a mistake, given that people are still linking to it.
The comment I'd be most interested in from you is whether you agree that your argument forces you to believe that x-risk is almost surely zero, or that we are almost surely not going to have a long future.
Richard’s response is about right. My prior with respect to influentialness is such that at least one of the following holds: x-risk is almost surely zero; or we are almost surely not going to have a long future; or x-risk is higher now than it will be in the future, but harder to prevent than it will be in the future; or in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value; or the idea that most of the value is in the future is false.
I do think we should update away from those priors, and I think that update is sufficient ... (read more)
“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. A credence of one-in-a-thousand requires a thousand times more evidence than a credence of one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me.
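A generic Bayesian illustration of the quantitative point (my own gloss, not from either author): posterior odds are prior odds times the Bayes factor, so for small probabilities, moving a credence up by a further factor of a thousand requires a Bayes factor roughly a thousand times larger.

```latex
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{Bayes factor}}
\times
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
```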
So you are saying that you do think that the evidence for longtermism/x-risk is enough to push ... (read more)
My claim is that patient philanthropy is automatically making the claim that now is the time when patient philanthropy does unusually much expected good, because we're so early in history that the best giving opportunities are almost surely ahead of us.
I've added a link to the article to the top of my post. Those changes seem reasonable.
This is indeed what I meant, thanks.
But if, as this talk suggests, it’s not obvious whether donating to near term interventions is good or bad for the world, why are you interested in whether you can pitch friends and family to donate to them?
I basically agree with the claims and conclusions here, but I think about this kind of differently.
I don’t know whether donating to AMF makes the world better or worse. But this doesn’t seem very important, because I don’t think that AMF is a particularly plausible candidate for the best way to improve the long term future anyway—it would be a reasonably surprising coincidence if the top recommended way to improve human lives right now was also the most leveraged way to improve the long term future.
So our attitude should be more like "I don’t know if... (read more)
I think Carl Shulman makes some persuasive criticisms of this research here:
My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.
If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in sur
My guess is that this feedback would be unhelpful and probably push the grantmakers towards making worse grants that were less time-consuming to justify to uninformed donors.
Inasmuch as you expect people to keep getting richer, it seems reasonable to hope that no generation has to be more frugal than the previous.
when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to the state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly unimpressed with the sophistication of the arguments offered.
I would love to see better evidence about this. Eg it doesn't match my experience of talking to physicists.