All of Buck's Comments + Replies

Buck's Shortform

Yeah but this pledge is kind of weird for an altruist to actually follow, instead of donating more above the 10%. (Unless you think that almost everyone believes that most of the reason for them to do the GWWC pledge is to enforce the norm, and this causes them to donate 10%, which is more than they'd otherwise donate.)

2Linch3dI thought you were making an empirical claim with the quoted sentence, not a normative claim.
Buck's Shortform

[This is an excerpt from a longer post I'm writing]

Suppose someone’s utility function is

U = f(C) + D

Where U is what they’re optimizing, C is their personal consumption, f is their selfish welfare as a function of consumption (log is a classic choice for f), and D is their amount of donations.

Suppose that they have diminishing utility wrt (“with respect to”) consumption (that is, df(C)/dC is strictly monotonically decreasing). Their marginal utility wrt donations is a constant, and their marginal utility wrt consumption is a decreasing function. There has t... (read more)
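A quick sketch of where this setup seems to be heading, assuming f(C) = log C and adding a budget constraint C + D = Y (the budget constraint and units are my additions, not stated above):

```latex
\max_{C,\,D}\ \log C + D \quad \text{s.t.}\quad C + D = Y,\ D \ge 0
% Interior first-order condition: marginal utility of consumption
% equals the constant marginal utility of donations.
\frac{1}{C^{*}} = 1 \;\Rightarrow\; C^{*} = 1
% So consumption is capped at C^{*} (in units where a donated dollar
% is worth 1 util), and all income above the cap is donated:
D^{*} = \max\left(0,\ Y - C^{*}\right)
```

In other words, under these assumptions the optimizer consumes up to a fixed threshold and donates 100% of income above it, which is the kind of policy the replies below are reacting to.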

3Stefan_Schubert3dThe GWWC pledge is akin to a flat tax, as opposed to a progressive tax - which gives you a higher tax rate when you earn more. I agree that there are some arguments in favour of "progressive donations". One consideration is that extremely high "donation rates" - e.g. donating 100% of your income above a certain amount - may affect incentives to earn more adversely, depending on your motivations. But in a progressive donation rate system with a more moderate maximum donation rate that would probably not be as much of a problem.
2Linch3dWait, the standard GWWC pledge is 10% of your income, presumably based on cultural norms like tithing, which in themselves might reflect an implicit understanding that (if we assume log utility) a constant fraction of consumption is equally costly to any individual, so made for coordination rather than single-player reasons.
Buck's Shortform

[epistemic status: I'm like 80% sure I'm right here. Will probably post as a main post if no-one points out big holes in this argument, and people seem to think I phrased my points comprehensibly. Feel free to leave comments on the google doc here if that's easier.]

I think a lot of EAs are pretty confused about Shapley values and what they can do for you. In particular Shapley values are basically irrelevant to problems related to coordination between a bunch of people who all have the same values. I want to talk about why. 

So Shapley values are a sol... (read more)

2NunoSempere3dThis seems correct.

This misses some considerations around cost-efficiency/prioritization. If you look at your distorted "Buck values", you come away thinking that Buck is super cost-effective; responsible for a large fraction of the optimal plan using just one salary. If we didn't have a mechanistic understanding of why that was, trying to get more Buck would become an EA cause area. In contrast, if credit was allocated according to Shapley values, we could look at the groups whose Shapley value is the highest, and try to see if they can be scaled.

The section about "purely local" Shapley values might be pointing to something, but I don't quite know what it is, because the example is just Shapley values but missing a term? I don't know. You also say "by symmetry...", and then break that symmetry by saying that one of the parts would have been able to create $6,000 in value and the other $0. Needs a crisper example.

Re: coordination between people who have different values using SVs, I have some stuff here [https://forum.effectivealtruism.org/posts/3NYDwGvDbhwenpDHb/shapley-values-ii-philantropic-coordination-theory-and-other#Philantropic_Coordination_Theory_], but looking back the writing seems too corny.

Lastly, to some extent, Shapley values are a reaction to people calculating their impact as their counterfactual impact. This leads to double/triple counting impact for some organizations/opportunities, but not others, which makes comparison between them more tricky. Shapley values solve that by allocating impact such that it sums to the total impact & other nice properties. Then someone like OpenPhilanthropy or some EA fund can come and see
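As a minimal illustration of the efficiency property mentioned here (Shapley credit sums to the total impact), here is a toy two-player computation; the players and dollar figures are invented, not taken from the post:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over every possible arrival order of the players."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# Invented example: a funder and a charity create nothing alone but $10,000
# of impact together. Naive counterfactual reasoning credits each of them
# with the full $10,000 (double counting); Shapley credit sums to the total.
v = lambda coalition: 10_000 if coalition == frozenset({"funder", "charity"}) else 0
print(shapley_values(["funder", "charity"], v))  # {'funder': 5000.0, 'charity': 5000.0}
```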
EA Infrastructure Fund: Ask us anything!

I am not sure. I think it’s pretty likely I would want to fund after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.

6Jonas Vollmer20dI'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority). One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.
EA Infrastructure Fund: Ask us anything!

I would personally be pretty down for funding reimbursements for past expenses.

2Linch21dThat's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point?
4Max_Daniel21dI haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this. So if you're reading this and are wondering if it could be worth it to submit an application for funding for past expenses, then I think the answer is we'd at least consider it and so potentially yes. If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all that we have are the personal views of two EAIF managers not a considered opinion or policy of all fund managers or the fund as a whole or anything like that.
4Habryka21dI would also be in favor of the LTFF doing this.
EA Infrastructure Fund: Ask us anything!

This is indeed my belief about ex ante impact. Thanks for the clarification.

Buck's Shortform

That might achieve the "these might be directly useful goal" and "produce interesting content" goals, if the reviewers knew about how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don't. It wouldn't achieve any of the other goals, though.

4Khorton2moI wonder if there are better ways to encourage and reward talented writers to look for outside ideas - although I agree book reviews are attractive in their simplicity!
Buck's Shortform

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in
... (read more)
4saulius14dI would also ask these people to optionally write or improve a summary of the book in Wikipedia if it has a Wikipedia article (or should have one). In many cases, it's not only EAs who would do more good if they knew the ideas in a given book, especially when it's on a subject like pandemics or global warming rather than topics relevant to non-altruistic work too, like management or productivity. When you google a book, Wikipedia is often the first result and so these articles receive quite a lot of traffic (you can see here [https://stats.wikimedia.org/#/en.wikipedia.org/reading/total-page-views/normal%7Cbar%7CAll%7C~total] how much traffic a given article receives).
3Jordan Pieters1moPerhaps it would be worthwhile to focus on books like those in this [https://forum.effectivealtruism.org/posts/KNZLGbGevnjStgzHt/i-scraped-all-public-effective-altruists-goodreads-reading#Most_commonly_planned_to_read_books_that_have_not_been_read_by_anyone_yet] list of "most commonly planned to read books that have not been read by anyone yet"
3MichaelA1moYeah, this seems good to me. I also just think in any case [https://forum.effectivealtruism.org/posts/zCJDF6iNSJHnJ6Aq6/a-ranked-list-of-all-ea-relevant-audio-books-i-ve-read#Suggestion__Make_Anki_cards__share_them_as_posts__and_share_key_updates] more people should post their notes, key takeaways, and (if they make them) Anki cards to the Forum, as either top-level posts or shortforms. I think this need only take ~30 mins of extra time on top of the time they spend reading or note-taking or whatever for their own benefit. (But doing what you propose would still add value by incentivising more effortful and even more useful versions of this.) Yeah, I think this is worth emphasising, since: * Those are things existing, non-EA summaries of the books are less likely to provide * Those are things that even another EA reading the same book might not think of * Coming up with key takeaways is an analytical exercise and will often draw on specific other knowledge, intuitions, experiences, etc. the reader has Also, readers of this shortform may find posts tagged effective altruism books [https://forum.effectivealtruism.org/tag/effective-altruism-books] interesting.
5Peter Wildeford2moI've thought about this before and I would also like to see this happen.

I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff.

I share this concern, and I think a culture with more book reviews is a great way to achieve that (I've been happy to see all of Michael Aird's book summaries for that reason).

CEA briefly considered paying for book reviews (I was asked to write this review as a test of that idea). IIRC, the goal at the time was more about getting more engagement from people on the periphery of EA by creating EA-related content they'd find int... (read more)

7casebash2moI'd be interested in this. I've been posting book reviews of the books I read to Facebook - mostly for my own benefit. These have mostly been written quickly, but if there was a decent chance of getting $500 I could pick out the most relevant books and relisten to them and then rewrite them.
1Khorton2moYou can already pay for book reviews - what would make these different?
8Habryka2moYeah, I really like this. SSC currently already has a book-review contest running on SSC, and maybe LW and the EAF could do something similar? (Probably not a contest, but something that creates a bit of momentum behind the idea of doing this)
3reallyeli2moQuick take is this sounds like a pretty good bet, mostly for the indirect effects. You could do it with a 'contest' framing instead of a 'I pay you to produce book reviews' framing; idk whether that's meaningfully better.
2Max_Daniel2moI don't think it's crazy at all. I think this sounds pretty good.
EA Infrastructure Fund: Ask us anything!

I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.

Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.

2Max_Daniel2moHmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital" I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principle-based 'utility function' I'd be surprised if it had returns that diminish much more strongly than logarithmic. (That's at least my initial intuition - not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%. (I guess there is also the question of what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree with "business as usual + this extra capital adds much less than 20%". In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
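For the logarithmic-returns point, the arithmetic is simply that each tenfold increase adds the same amount of utility:

```latex
\log(\$100\text{B}) - \log(\$10\text{B}) \;=\; \log 10 \;=\; \log(\$10\text{B}) - \log(\$1\text{B})
```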
EA Infrastructure Fund: Ask us anything!

Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".

I think that the funds' RFMF is only slightly real--I think that giving t... (read more)

9Jonas Vollmer21dJust wanted to flag briefly that I personally disagree with this: * I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive. * I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*. * Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments). I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization [https://reg-charity.org/] and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I was a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love if EAs generally spent less time worrying about money and more about recruiting talent, improving the trajectory of the community, and solving the problems on the object level. Overall, I want to continue funding good fundraising organizations.
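A rough sketch of the kind of ROI calculation gestured at above, with every number invented purely for illustration (not estimates from this comment):

```python
# Illustrative assumptions only.
money_raised = 1_000_000      # gross donations attributed to the fundraising project
credit_share = 0.5            # credit remaining after the donors' own counterfactual share
years_until_deployed = 2      # delay before the raised money is actually granted out
discount_rate = 0.12          # ~10-15% annual discount, per the comment above
talent_cost = 400_000         # staff time valued well above private-sector rates

effective_money_moved = (
    money_raised * credit_share / (1 + discount_rate) ** years_until_deployed
)
roi = effective_money_moved / talent_cost
print(f"effective money moved: ${effective_money_moved:,.0f}, ROI: {roi:.2f}x")
```

On these made-up numbers the project roughly breaks even, which looks much worse than a naive 'dollars raised per dollar spent' figure would suggest.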
9Linch2moI'm curious how much $s you and others think that longtermist EA has access to right now/will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to 100B than if we currently have access to 100M.
6Max_Daniel2moI think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective. However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/ea-infrastructure-fund-ask-us-anything?commentId=yXcHQWoHEYdZdrNvr], i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors. One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a 'hobby', are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in its communications, it at least sometimes mentions other potentially impactful behaviors. So, e.g., GiveWell by these lights looks much better than REG, which in turn looks much better than, say, buying Facebook ads for AMF. (I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving - even in a 'good' way - were significantly net negative.)

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy them - comes from 'crazy... (read more)

EA Infrastructure Fund: Ask us anything!

I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding.

I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary.

I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But I empirically don't, and my intuitions apparently don't feel that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard an... (read more)

EA Infrastructure Fund: Ask us anything!

Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.

EA Infrastructure Fund: Ask us anything!

Re 1: I don't think I would have granted more

Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.

Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse;... (read more)

EA Infrastructure Fund: Ask us anything!

Re your 19 interventions, here are my quick takes on all of them

Creating, scaling, and/or improving EA-aligned research orgs

Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.

Creating, scaling, and/or improving EA-aligned research training programs

I am in favor of this. I think one of the biggest bottlenecks here is finding people  who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research... (read more)

4Linch2moSorry, minor confusion about this. By "top 25%," do you mean 75th percentile? Or are you encompassing the full range here?
3MichaelA2moI'm pretty surprised by the strength of that reaction. Some followups: 1. How do you square that with the EA Funds (a) funding things that would increase the amount/quality/impact of EA-aligned research(ers), and (b) indicating in some places (e.g. here [https://forum.effectivealtruism.org/posts/nLxpFeEs6kAdgjRWz/the-long-term-future-fund-has-room-for-more-funding-right] ) the funds have room for more funding? * Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)? * Do you disagree that the funds have room for more funding? 2. Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)? 3. Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
2MichaelA2moFWIW, I agree that your concerns about "Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers" are well worth bearing in mind and that they make at least some versions of this intervention much less valuable or even net negative.
2MichaelA2moI think I agree with this, though part of the aim for the database would be to help people find mentors (or people/resources that fill similar roles). But this wasn't described in the title of that section, and will be described in the post coming out in a few weeks, so I'll leave this topic there :)
2MichaelA2moThanks for this detailed response! Lots of useful food for thought here, and I agree with much of what you say. Regarding Effective Thesis: * I think I agree that "most research areas relevant to longtermism require high context in order to contribute to", at least given our current question lists and support options. * I also think this is the main reason I'm currently useful as a researcher despite (a) having little formal background in the areas I work in and (b) there being a bunch of non-longtermist specialists who already work in roughly those areas. * On the other hand, it seems like we should be able to identify many crisp, useful questions that are relatively easy to delegate to people - particularly specialists - with less context, especially if accompanied with suggested resources, a mentor with more context, etc. * E.g., there are presumably specific technical-ish questions related to pathogens, antivirals, climate modelling, or international relations that could be delegated to people with good subject area knowledge but less longtermist context. * I think in theory Effective Thesis or things like it could contribute to that * After writing that, I saw you said the following, so I think we mostly agree here: "I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff." * OTOH, in terms of examples of this happening, I think at least Luke Muehlhauser seems to believe s
9Max_Daniel2moI would be enthusiastic about this. If you don't do it, I might try doing this myself at some point. I would guess the main challenge is to get sufficient inter-rater reliability; i.e., if different interviewers used this interview to interview the same person (or if different raters watched the same recorded interview), how similar would their ratings be? I.e., I'm worried that the bottleneck might be something like "there are only very few people who are good at assessing other people" as opposed to "people typically use the wrong method to try to assess people".
EA Infrastructure Fund: Ask us anything!

I feel very unsure about this. I don't think my position on this question is very well thought through.

Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail ... (read more)

2Linch2moAm I correct in understanding that this is true for your beliefs about ex ante rather than ex post impact? (in other words, that 1/4 of grants you pre-identified as top-25% will end up accounting for more than 50% of your positive impact) If so, is this a claim about only the positive impact of the grants you make, or also about the absolute value of all grants you make? See related question [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/ea-infrastructure-fund-ask-us-anything?commentId=LTi9G82bJqn2XZnon] .
EA Infrastructure Fund: Ask us anything!

re 1: I expect to write similarly detailed writeups in future.

re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)

re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes ... (read more)

EA Infrastructure Fund: Ask us anything!

I don't think this has much of an advantage over other related things that I do, like

  • telling people that they should definitely tell me if they know about projects that they think I should fund, and asking them why
  • asking people for their thoughts on grant applications that I've been given
  • asking people for ideas for active grantmaking strategies
EA Infrastructure Fund: Ask us anything!

A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different involved parties, where the involved parties are:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics)
  • The grantee

Presumably this differs a lot between grants; I'd be interested in some typical figures.

This question is important because you need a sense of these numbers in order to make decisions about which of these parties you sho... (read more)

(I'd be very interested in your answer if you have one btw.)

3Linch1mo(No need for the EAIF folks to respond; I think I would also find it helpful to get comments from other folks) I'm curious about a set of related questions probing this at a more precise level of granularity. For example: suppose for the sake of argument that the RP internship resulted in better career outcomes than the interns counterfactually would have.* For the difference in impact between the internship and the next-best option, what fraction of credit should be allocated to:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure
  • RP for selecting interns and therefore providing a signaling mechanism either to the interns themselves or for future jobs
  • RP for managing/training/aiding interns to hopefully excel
  • The work of the interns themselves

I'm interested in whether the ratio between the first 3 bullet points has changed (for example, maybe with more $s per grant, donor $s are relatively less important and the grantmaker effort/$ ratio is lower). I'm also interested in the appropriate credit assignment (breaking down all of Jonas' 75%!) of the last 3 bullet points. For example, if most people see the value of RP's internship program to the interns as primarily via RP's selection methods, then it might make sense to invest more management/researcher time into designing better pre-internship work trials. I'm also interested in even more granular takes, but perhaps this is boring to other people. (I work for RP. I do not speak for the org).

*(for reasons like it a) sped up their networking, b) tangible outputs from the RP internship allowed them to counterfactually get jobs where they had more impact, c) it was a faster test for fit and made the interns correctly choose not to go into research, saving time, d) they learned actually valuable skills that made their career trajectory go smoother, etc.)

Making up some random numbers:

  • The donors to the fund – 8%
  • The grantmakers – 10%
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics) – 7%
  • The grantee – 75%

This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.

This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.

How much do you (actually) work?

Incidentally, I think that tracking work time is a kind of dangerous thing to do, because it makes it really tempting to make bad decisions that will cause you to work more. This is a lot of why I don't normally track it.

 

EDIT: however, it seems so helpful to track it some of the time that I overall strongly recommend doing it for at least a week a year.

How much do you (actually) work?
Answer by Buck, May 21, 2021

I occasionally track my work time for a few weeks at a time; by coincidence I happen to be tracking it at the moment. I used to use Toggl; currently I just track my time in my notebook by noting the time whenever I start and stop working (where by "working" I mean "actively focusing on work stuff"). I am more careful about time tracking my work on my day job (working on longtermist technical research, as an individual contributor and manager) than working on the EAIF and other movement building stuff.

The first four days this week, I did 8h33m, 8h15m, 7h32m... (read more)


Concerns with ACE's Recent Behavior

That seems correct, but doesn’t really defend Ben’s point, which is what I was criticizing.

Concerns with ACE's Recent Behavior

I am glad to have you around, of course.

My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I'd be very interested to hear I was wrong about that.

I think that isn't the right counterfactual since I got into EA circles despite having only minimal (and net negative) impressions of EA-related forums.  So your claim is narrowly true, but if instead the counterfactual was if my first exposure to EA was the EA forum, then I think yes the prominence of this kind of post would have made me substantially less likely to engage.

But fundamentally if we're running either of these counterfactuals I think we're already leaving a bunch of value on the table, as expressed by EricHerboso's post about false dilemmas.

Concerns with ACE's Recent Behavior

I am not sure whether I think it's a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren't obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're inf... (read more)

7tamgent3moI agree it's good for a community to have an immune system that deters people who would hurt its main goals, EA included. But, and I hear you do care about calibrating on this too, we want to avoid false positives. Irving below seems like an example, and he said it better than I could: we're already leaving lots of value on the table. I expect our disagreement is just empirical and about that, so happy to leave it here as it's only tangentially relevant to the OP. Aside: I don't know about Will's intentions, I just read his comment and your reply, and don't think 'he could have made a different comment' is good evidence of his intentions. I'm going to assume you know much more about the situation/background than I do, but if not I do think it's important to give people benefit of the doubt on the question of intentions. [Meta: in case not obvious, I want to round off this thread, happy to chat in private sometime]

I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're infuriated by things EAs are saying.

[...]

If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.

I would guess it depends quite a bit on these people's total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is "not for them").

If we're imagining people who've already had 10 or even 100 hour... (read more)

I bounce off posts like this.  Not sure if you'd consider me net positive or not. :)

Concerns with ACE's Recent Behavior

More generally, I think our disagreement here probably comes down to something like this:

There's a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome.  As you say, if we're skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.

But this comes at a cost. I personally feel much less excited about writing about certain topics because I'd have to be super careful about them. And most of t... (read more)

9tamgent3moI appreciate you trying to find our true disagreement here.

I don't disagree with any of that. I acknowledge there is real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of how bad what happened is exactly).

I just think there is also some cost, that you are undervaluing and not acknowledging here, in the other side of that trade-off. Maybe we disagree on the exchange rate between these (welcomingness and unfiltered/candid communication)?

I think that becoming more skillful at doing both well is an important skill for a community l... (read more)

Concerns with ACE's Recent Behavior

(I'm writing these comments kind of quickly, sorry for sloppiness.)

With regard to

Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.

In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.

I would have no meta-level objection to a comment saying "I disagree that X is bad, I think it's actually fine".

1tamgent3moI think the meta-level objection you raised (which I understood as: there may be costs of not criticising bad things because of worry about second-order effects) is totally fair and there is indeed some risk in this pattern (said this in the first line of my comment). This is not what I took issue with in your comment. I see you've responded to our main disagreement though, so I'll respond on that branch.
Concerns with ACE's Recent Behavior

I think that this totally misses the point. The point of this post isn't to inform ACE that some of the things they've done seem bad--they are totally aware that some people think this. It's to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.

8Neel Nanda3moThe key part of running feedback by an org isn't to inform the org of the criticism, it's to hear their point of view, and see whether any events have been misrepresented (from their point of view). And, ideally, to give them a heads up to give a response shortly after the criticism goes up

I guess I don't know OP's goals but yeah if their goal is to publicly shame ACE then publicly shaming ACE is a good way to accomplish that goal.

My point was a) sending a quick email to someone about concerns you have with their work often has a very high benefit to cost ratio, and b) despite this, I still regularly talk to people who have concerns about some organization but have not sent them an email.

I think those claims are relatively uncontroversial, but I can say more if you disagree.

Concerns with ACE's Recent Behavior

I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor,  but as an intuition pump imagine the following comment.

"On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem.  On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I'm worried about the second-order effects of talking about this misconduct."

I guess my concern is that it seems li... (read more)

Whilst I agree with you that there is some risk in the pattern of not criticising bad thing X because of concerns about second-order effects, I think you chose a really bad substitution for 'X' here, and as a result can totally understand where Khorton's response is coming from (although I think 'campaigning against racism' is also a mischaracterisation of X here).

Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that l... (read more)

1Khorton3moNo one is enthusiastic about sexual harassment, and actively campaigning against racism has nothing in common with sexual harassment.
Why do so few EAs and Rationalists have children?

I’d be interested to see comparisons of the rate at which rationalists and EAs have children compared to analogous groups, controlling for example for education, age, religiosity, and income. I think this might make the difference seem smaller.

To this I would add:

Beware of the selection effect where I’d expect people with kids are less likely to come to meetups, less likely to post on this forum, etc. than EAs with overall-similar levels of involvement, so it can look like there are fewer than is actually the case, if you aren’t counting carefully.

For EA clusters in very-high-housing-cost areas specifically (Milan mentioned the Bay), I wouldn’t be surprised if the broader similar demographic is also avoiding children, since housing is usually the largest direct financial cost of having children,... (read more)

2Milan_Griffes4moI believe Mormons and Catholics are punching above their weight in the US.
Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement

Great post, and interesting and surprising result.

An obvious alternative selection criterion would be something like “how good would it be if this person got really into EA”; I wonder if you would be any better at predicting that. This one takes longer to get feedback on, unfortunately.

My instinctual response to this was: "well it is not very helpful to admit someone for whom it would be great if they got into EA if they really seem like they won't".

 However, since it seems like we are not particularly good at predicting whether they will get involved or not maybe this is a metric we should incorporate. (My intuition is that we would still want a baseline? There could be someone it would be absolutely amazing to have get involved but if they are extremely against EA ideas and disruptive that might lower the quality of the fellowship... (read more)

Buck's Shortform

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I... (read more)

3EdoArad6moI tried searching the literature a bit, as I'm sure that there are studies on the relation between rationality and altruistic behavior. The most relevant paper I found (from about 20 minutes of search and reading) is The cognitive basis of social behavior (2015) [The cognitive basis of social behavior]. It seems to agree with your hypothesis. From the abstract: Also relevant is This Review (2016) by Rand [https://journals.sagepub.com/doi/full/10.1177/0956797616654455?casa_token=HsJV25cFr2EAAAAA%3A1Hl1Y8q7waIsYao5Cv9wVer6wlEFmhS0zvaHXFqX8q_SIBJqXxRLnTloF7OAbKdl2xHYeuMZqzWI1Q] : And This Paper (2016) on Belief in Altruism and Rationality [http://cess.nyu.edu/schotter/wp-content/uploads/2010/02/%E2%80%9CSelfishness-Altruism-and-Rationality-A-Theory-of-Social-Choice%E2%80%9D.pdf] claims that Where belief in altruism is a measure of how much people believe that other people are acting out of care or compassion to others as opposed to self-interest. Note: I think that this might be a delicate subject in EA and it might be useful to be more careful about alienating people. I definitely agree that better epistemics is very important to the EA community and to doing good generally and that the ties to the rationalist community probably played (and plays) a very important role, and in fact I think that it is sometimes useful to think of EA as rationality applied to altruism. However, many amazing altruistic people have a totally different view on what would be good epistemics (nevermind the question of "are they right?"), and many people already involved in the EA community seem to have a negative view of (at least some aspects of) the rationality community, both of which call for a more kind and appreciative conversation. In this shortform post, the most obvious point where I think that this becomes a problem is the example This is supposed to be an example of a case where people are not behaving rationally since that would stop them from having fun. You could have us
If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant

My main objection to this post is that personal fit still seems really important when choosing what to do within a cause. I think that one of EA's main insights is "if you do explicit estimates of impact, you can find really big differences in effectiveness between cause areas, and these differences normally swamp personal fit"; that's basically what you're saying here, and it's totally correct IMO. But I think it's a mistake to try to apply the same style of reasoning within causes, because the effectiveness of different jobs within a cause is much more similar, and so personal fit ends up dominating the estimate of which one will be better.

Where are you donating in 2020 and why?

I'd be curious to hear why you think that these charities are excellent; eg I'd be curious for your reply to the arguments here.

6Aaron Gertler8moAs for replying more directly to the arguments you linked: my views combine a bit of Khorton [https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term?commentId=ZiZ2W8kLj3sFAXEgA] , a bit of both Aidan [https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term?commentId=6dhxCxXgfhEYfPyef] responses [https://forum.effectivealtruism.org/posts/ajZ8AxhEtny7Hhbv7/if-you-value-future-people-why-do-you-consider-near-term?commentId=dKAusaZxxrtfkgCvp] ... ...and also, a lot of credence in most of those arguments. That's why I work a meta job, spend some of my free time on meta projects, and advise people toward meta giving when I can -- including the foundation I work with, which recently made its first meta grant after decades of exclusively near-term giving. (By the way, this was a good question! I didn't even hint at this stuff in my original answer, and I'm glad for the chance to clarify my beliefs.)

I respect cluelessness arguments enough that I've removed "strongly" from "strongly believe" in my response; I was just in an enthusiastic mood.

My giving to charities focused on short-term impact (and GiveWell in particular) is motivated by a few things:

  1. I believe that my work currently generates much more value for CEA than the amount I donate to other charities, which means that almost all of my impact is likely of a meta/longtermist variety. But I am morally uncertain, and place enough credence on moral theories emphasizing short-term value that I want a
... (read more)
Thoughts on whether we're living at the most influential time in history

Oh man, I'm so sorry, you're totally right that this edit fixes the problem I was complaining about. When I read this edit, I initially misunderstood it in such a way that it didn't address my concern. My apologies.

Thoughts on whether we're living at the most influential time in history

How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong? 

This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over what model to be using to answer the "are we at HoH" question, and each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in it).  It seems like your outside view model assi... (read more)

Thoughts on whether we're living at the most influential time in history

Hmm, interesting. It seems to me that your priors cause you to think that the "naive longtermist" story, where we're in a time of perils and if we can get through it, x-risk goes basically to zero and there are no more good ways to affect similarly enormous amounts of value, has a probability which is basically zero. (This is just me musing.)

Thoughts on whether we're living at the most influential time in history

Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.

Thoughts on whether we're living at the most influential time in history

On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future.

 

This does make a lot more sense than what you wrote in your post. 

Do you agree that, as written, the argument in your EA Forum post is quite flawed? If so, I think you should edit it to more clearly indicate that it was a mistake, given that people are still linking to it.

3William_MacAskill9moYeah, I do think the priors-based argument given in the post was poorly stated, and therefore led to unnecessary confusion. Your suggestion is very reasonable, and I've now edited the post. [https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1]
Thoughts on whether we're living at the most influential time in history

The comment I'd be most interested in from you is whether you agree that your argument forces you to believe that x-risk is almost surely zero, or that we are almost surely not going to have a long future.

Richard’s response is about right. My prior with respect to influentialness is such that either: x-risk is almost surely zero, or we are almost surely not going to have a long future, or x-risk is higher now than it will be in the future but harder to prevent than it will be in the future, or there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value in the future, or the idea that most of the value is in the future is false.

I do think we should update away from those priors, and I think that update is sufficient ... (read more)

5richard_ngo9moI think you'd want to modify that to preventable x-risk, to get Will to agree; and also to add a third part of the disjunction, that preventing x-risk might not be of overriding moral importance (since he raises the possibility that longtermism is false in a comment below, with the implication that if so, even preventing x-risk wouldn't make us "influential" by his standards). However, if Will thinks his argument as currently phrased holds, then it seems to me that he's forced to agree with similar arguments that use slightly different definitions of influentialness (such as influentialness = the expected amount you can change other people's lives, for better or worse). Or even a similar argument which just tries to calculate directly the probability that we're at the time with the most x-risk, rather than talking about influentialness at all. At that point, the selection effect I described in another comment starts to become a concern.
Thoughts on whether we're living at the most influential time in history

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me.

So you are saying that you do think that the evidence for longtermism/x-risk is enough to push ... (read more)

6vaniver9moI think you're saying "if you believe that x-risk this century is 0.1%, then survival probability this century is 99.9%, and for total survival probability over the next trillion years to be 0.01%, there can be at most 9200 centuries with risk that high over the next trillion years (.999^9200=0.0001), which means we're in (most generously) a one-in-one-million century, as a trillion years is 10 billion centuries, which divided by ten thousand is a million." That seem right?
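Spelling that arithmetic out:

```latex
0.999^{9200} \approx e^{-9.2} \approx 10^{-4} = 0.01\%,
\qquad
\frac{10^{10}\ \text{centuries}}{9200} \approx 1.1 \times 10^{6}
```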
Thoughts on whether we're living at the most influential time in history

My claim is that patient philanthropy is automatically making the claim that now is the time where patient philanthropy does wildly unusually much expected good, because we're so early in history that the best giving opportunities are almost surely after us.

Thoughts on whether we're living at the most influential time in history

I've added a link to the article to the top of my post. Those changes seem reasonable.

3jackmalde9moHow does this differ from response 5 in the post?
Evidence, cluelessness, and the long term - Hilary Greaves

But if, as this talk suggests, it’s not obvious whether donating to near term interventions is good or bad for the world, why are you interested in whether you can pitch friends and family to donate to them?

9akrolsmir9moMy rough framing of "why pitch friends and family on donating" is that donating is a credible commitment towards altruism. It's really easy to get people to say "yeah, helping people is a good idea" but really hard to turn that into something actionable. Even granting that the long term and thus actual impact of AMF is uncertain, I feel like the transition from "typical altruistic leaning person" to "EA giver" is much more feasible, and sets up "EA giver" to "Longtermist". Once someone is already donating 10% of their income to one effective charity, it seems easier to make a case like the one OP outlined here. I guess one thing that would change my mind: do you know people who did jump straight into longtermism?
Evidence, cluelessness, and the long term - Hilary Greaves

I basically agree with the claims and conclusions here, but I think about this kind of differently.
 

I don’t know whether donating to AMF makes the world better or worse. But this doesn’t seem very important, because I don’t think that AMF is a particularly plausible candidate for the best way to improve the long term future anyway—it would be a reasonably surprising coincidence if the top recommended way to improve human lives right now was also the most leveraged way to improve the long term future.

So our attitude should be more like "I don’t know if... (read more)

2Milan_Griffes5moDo you agree with the decision-making frame I offered here [https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless] , or are you suggesting doing something different from that?
2Michael_Wiebe8moWhat's your distribution for the value of donating to AMF?
1jackmalde9moWhat do you mean by allocate your time "elsewhere"?
Existential Risk and Economic Growth

I think Carl Shulman makes some persuasive criticisms of this research here:

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in sur

... (read more)

My guess is that this feedback would be unhelpful and probably push the grantmakers towards making worse grants that were less time-consuming to justify to uninformed donors.

Evidence on correlation between making less than parents and welfare/happiness?

Inasmuch as you expect people to keep getting richer, it seems reasonable to hope that no generation has to be more frugal than the previous.

In defence of epistemic modesty

when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field. I see similar experts on animal consciousness, quantum mechanics, free will, and so on similarly be deeply unimpressed with the sophistication of argument offered.

I would love to see better evidence about this. Eg it doesn't match my experience of talking to physicists.
