All of Buck's Comments + Replies

We're Redwood Research, we do applied alignment research, AMA

Additionally, what are/how strong are the track records of Redwood's researchers/advisors?


The people we seek advice from on our research most often are Paul Christiano and Ajeya Cotra. Paul is a somewhat experienced ML researcher, who among other things led some of the applied alignment research projects that I am most excited about.

On our team, the people with the most relevant ML experience are probably Daniel Ziegler, who was involved with GPT-3 and also several OpenAI alignment research projects, and Peter Schmidt-Nielsen. Many of our other staff have ... (read more)

We're Redwood Research, we do applied alignment research, AMA

So one thing to note is that I think that there are varying degrees of solving the technical alignment problem. In particular, you’ve solved the alignment problem more if you’ve made it really convenient for labs to use the alignment techniques you know about. If next week some theory people told me “hey we think we’ve solved the alignment problem, you just need to use IDA, imitative generalization, and this new crazy thing we just invented”, then I’d think that the main focus of the applied alignment community should be trying to apply these alignment tec... (read more)

We're Redwood Research, we do applied alignment research, AMA

We could operationalize this as “How does P(doom) vary as a function of the total amount of quality-adjusted x-risk-motivated AI alignment output?” (A related question is “Of the quality-adjusted AI alignment research, how much will be motivated by x-risk concerns?” This second question feels less well defined.)

I’m pretty unsure here. Today, my guess is like 25% chance of x-risk from AI this century, and maybe I imagine that being 15% if we doubled the quantity of quality-adjusted x-risk-motivated AI alignment output, and 35% if we halved that quantity. Bu... (read more)
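(Illustrative aside, not part of the original comment: one way to make these point estimates concrete is to interpolate between them, here log-linearly in the quantity of alignment output. The functional form is an arbitrary assumption; only the three probabilities come from the comment above.)

```python
import math

# Rough point estimates from the comment above: P(doom from AI this century) as a
# function of the relative quantity of quality-adjusted x-risk-motivated alignment
# output (1.0 = today's level). The log-linear interpolation is an assumption.
points = {0.5: 0.35, 1.0: 0.25, 2.0: 0.15}

def p_doom(relative_output: float) -> float:
    xs = sorted(points)
    if relative_output <= xs[0]:
        return points[xs[0]]
    if relative_output >= xs[-1]:
        return points[xs[-1]]
    for lo, hi in zip(xs, xs[1:]):
        if lo <= relative_output <= hi:
            t = (math.log(relative_output) - math.log(lo)) / (math.log(hi) - math.log(lo))
            return points[lo] + t * (points[hi] - points[lo])

print(round(p_doom(1.5), 3))  # ~0.19 under these assumptions
```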

We're Redwood Research, we do applied alignment research, AMA

Here are some things I think are fairly likely:

  • I think that there might be a bunch of progress on theoretical alignment, with various consequences:
    • More projects that look like “do applied research on various strategies to make imitative generalization work in practice” -- that is, projects where the theory researchers have specific proposals for ML training schemes that have attractive alignment properties, but which have practical implementation questions that might require a bunch of effort to work out. I think that a lot of the impact from applied align
... (read more)
casebash: What's the main way that you think resources for onboarding people have improved?
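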
We're Redwood Research, we do applied alignment research, AMA

I think this is a great question.

We are researching techniques that are simpler precursors to adversarial training techniques that seem most likely to work if you assume that it’s possible to build systems that are performance-competitive and training-competitive, and do well on average on their training distribution.

There are a variety of reasons to worry that this assumption won’t hold. In particular, it seems plausible that humanity will only have the ability to produce AGIs that will collude with each other if it’s possible for them to do so. This seem... (read more)

Lukas_Finnveden: Hm, could you expand on why collusion is one of the most salient ways in which "it’s possible to build systems that are performance-competitive and training-competitive, and do well on average on their training distribution" could fail? Is the thought here that — if models can collude — then they can do badly on the training distribution in an unnoticeable way, because they're being checked by models that they can collude with?
We're Redwood Research, we do applied alignment research, AMA

I think our work is aimed at reducing the theory-practice gap of any alignment schemes that attempt to improve worst-case performance by training the model on data that was selected in the hope of eliciting bad behavior from the model. For example, one of the main ingredients of our project is paying people to try to find inputs that trick the model, then training the model on these adversarial examples.


Many different alignment schemes involve some type of adversarial training. The kind of adversarial training we’re doing, where we just rely on human ingen... (read more)
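(Illustrative sketch, not Redwood's actual pipeline: roughly what one round of the human-in-the-loop adversarial training described above might look like. The model and red_team interfaces are hypothetical placeholders, not real APIs.)

```python
def adversarial_training_round(model, red_team, training_data, threshold=0.5):
    """One round of human-in-the-loop adversarial training (hypothetical sketch).

    model:         classifier scoring how likely a text snippet is to describe an injury
    red_team:      humans paid to craft snippets intended to fool the classifier
    training_data: list of (text, label) pairs the model was originally trained on
    """
    # 1. Collect failures: injurious snippets that the current model scores as safe.
    new_examples = []
    for text, is_injurious in red_team.submit_attempts():
        if is_injurious and model.predict(text) < threshold:
            new_examples.append((text, True))

    # 2. Add the discovered adversarial examples to the training set and retrain.
    model.fit(training_data + new_examples)
    return model, new_examples
```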

We're Redwood Research, we do applied alignment research, AMA

So there’s this core question: "how are the results of this project going to help with the superintelligence alignment problem?" My claim can be broken down as follows:

  • "The problem is relevant": There's a part of the superintelligence alignment problem that is analogous to this problem. I think the problem is relevant for reasons I already tried to spell out here.
  • "The solution is relevant": There's something helpful about getting better at solving this problem. This is what I think you’re asking about, and I haven’t talked as much about why I think the sol
... (read more)
We're Redwood Research, we do applied alignment research, AMA

So to start with, I want to note that I imagine something a lot more like “the alignment community as a whole develops promising techniques, probably with substantial collaboration between research organizations” than “Redwood does all the work themselves”. Among other things, we don’t have active plans to do much theoretical alignment work, and I’d be fairly surprised if it was possible to find techniques I was confident in without more theoretical progress--our current plan is to collaborate with theory researchers elsewhere.

In this comment, I mentioned ... (read more)

We're Redwood Research, we do applied alignment research, AMA

It seems definitely good on the margin if we had ways of harnessing academia to do useful work on alignment. Two reasons for this are that 1. perhaps non-x-risk-motivated researchers would produce valuable contributions, and 2. it would mean that x-risk-motivated researchers inside academia would be less constrained and so more able to do useful work.

Three versions of this:

  • Somehow cause academia to intrinsically care about reducing x-risk, and also ensure that the power structures in academia have a good understanding of the problem, so that its own qualit
... (read more)
We're Redwood Research, we do applied alignment research, AMA

This is a great question and I don't have a good answer.

We're Redwood Research, we do applied alignment research, AMA

One simple model for this is: labs build aligned models if the amount of pressure on them to use sufficiently reliable alignment techniques is greater than the inconvenience associated with using those techniques.

Here are various sources of pressure:

  • Lab leadership
  • Employees of the lab
  • Investors
  • Regulators
  • Customers

In practice, all of these sources of pressure are involved in companies spending resources on, eg, improving animal welfare standards, reducing environmental costs, or DEI (diversity, equity, and inclusion).

And here are various sources of inconvenien... (read more)

Jack R: Thanks for the response! I found the second set of bullet points especially interesting/novel.
We're Redwood Research, we do applied alignment research, AMA

I think that most questions we care about are either technical or related to alignment. Maybe my coworkers will think of some questions that fit your description. Were you thinking of anything in particular?

2Linch19dWell for me, better research on correlates of research performance would be pretty helpful for research hiring. Like it's an open question to me whether I should expect a higher or lower (within-distribution) correlation of {intelligence, work sample tests, structured interviews, resume screenings} to research productivity when compared to the literature on work performance overall. I expect there are similar questions for programming. But the selfish reason I'm interested in asking this is that I plan to work on AI gov/strategy in the near future, and it'll be useful to know if there are specific questions in those domains that you'd like an answer to, as this may help diversify or add to our paths to impact.
We're Redwood Research, we do applied alignment research, AMA

GPT-3 suggests: "We will post the AMA with a disclaimer that the answers are coming from Redwood staff. We will also be sure to include a link to our website in the body of the AMA, with contact information if someone wants to verify with us that an individual is staff."

Peter Wildeford: That's quite a good answer
We're Redwood Research, we do applied alignment research, AMA

I think the main skillsets required to set up organizations like this are: 

  • Generic competence related to setting up any organization--you need to talk to funders, find office space, fill out lots of IRS forms, decide on a compensation policy, make a website, and so on.
  • Ability to lead relevant research. This requires knowledge of running ML research, knowledge of alignment, and management aptitude.
  • Some way of getting a team, unless you want to start the org out pretty small (which is potentially the right strategy).
  • It’s really helpful to have a bunch o
... (read more)
We're Redwood Research, we do applied alignment research, AMA

Thanks for the kind words!

Our biggest bottlenecks are probably going to be some combination of:

  • Difficulty hiring people who are good at some combination of leading ML research projects, executing on ML research, and reasoning through questions about how to best attack prosaic alignment problems with applied research.
  • A lack of sufficiently compelling applied research available, as a result of theory not being well developed enough.
  • Difficulty with making the organization remain functional and coordinated as it scales.
We're Redwood Research, we do applied alignment research, AMA

In most worlds where we fail to produce value, I think we fail before we spend a hundred researcher-years. So I’m also going to include possibilities for wasting 30 researcher-years in this answer.

Here’s some reasons we might have failed to produce useful research: 

  • We failed to execute well on research. For example, maybe we were incompetent at organizing research projects, or maybe our infrastructure was forever bad, or maybe we couldn’t hire a certain type of person who was required to make the work go well.
  • We executed well on research, but failed o
... (read more)
We're Redwood Research, we do applied alignment research, AMA

Re 1:

It’s probably going to be easier to get good at the infrastructure engineering side of things than the ML side of things, so I’ll assume that that’s what you’re going for.

For our infra engineering role, we want to hire people who are really productive and competent at engineering various web systems quickly. (See the bulleted list of engineering responsibilities on the job page.) There are some people who are qualified for this role without having much professional experience, because they’ve done a lot of Python programming and web programming as hob... (read more)

We're Redwood Research, we do applied alignment research, AMA

I think the best examples would be if we tried to practically implement various schemes that seem theoretically doable and potentially helpful, but quite complicated to do in practice. For example, imitative generalization or the two-head proposal here. I can imagine that it might be quite hard to get industry labs to put in the work of getting imitative generalization to work in practice, and so doing that work (which labs could perhaps then adopt) might have a lot of impact.

Buck's Shortform

Redwood Research is looking for people to help us find flaws in our injury-detecting model. We'll pay $30/hour for this, for up to 2 hours; after that, if you’ve found interesting stuff, we’ll pay you for more of this work at the same rate. I expect our demand for this to last for maybe a month (though we'll probably need more in future).

If you’re interested, please email adam@rdwrs.com so he can add you to a Slack or Discord channel with other people who are working on this. This might be a fun task for people who like being creative, being tricky, and fi... (read more)

Nathan Young: If you tweet about this I'll tag it with @effective_jobs.
Why AI alignment could be hard with modern deep learning

In other words, if the disagreement was "bottom-up", then you'd expect that at least some people who are optimistic about misalignment risk would be pessimistic about other kinds of AI risk, such as what I call "human safety problems" (see examples here and here) but in fact I don't seem to see anyone whose position is something like, "AI alignment will be easy or likely solved by default, therefore we should focus our efforts on these other kinds of AI-related x-risks that are much more worrying."

 

 

FWIW I know some people who explicitly think th... (read more)

Sounds like their positions are not public, since you don't cite anyone by name? Is there any reason for that?

Linch's Shortform

What kinds of things do you think it would be helpful to do cost effectiveness analyses of? Are you looking for cost effectiveness analyses of problem areas or specific interventions?

5Linch2moHmm one recent example is that somebody casually floated to me an idea that can potentially entirely solve an existential risk (though the solution might have downside risks of its own) and I realized then that I had no idea how much to price the solution in terms of EA $s, like whether it should be closer to 100M, 1B or $100B. My first gut instinct was to examine the solution and also to probe the downside risks, but then I realized this is thinking about it entirely backwards. The downside risks and operational details don't matter if even the most optimistic cost-effectiveness analyses isn't enough to warrant this being worth funding!
9Denkenberger2moI think it would be valuable to see quantitative estimates of more problem areas and interventions. My order of magnitude estimate would be that if one is considering spending $10,000-$100,000, one should do a simple scale, neglectedness, and tractability analysis. But if one is considering spending $100,000-$1 million, one should do an actual cost-effectiveness analysis. So candidates here would be wild animal welfare, approval voting, improving institutional decision-making, climate change from an existential risk perspective, biodiversity from an existential risk perspective, governance of outer space [https://80000hours.org/problem-profiles/#space-governance]etc. Though it is a significant amount of work to get a cost-effectiveness analysis up to peer review publishable quality (which we have found requires moving beyond Guesstimate, e.g. here [http://allfed.info/wp-content/uploads/2018/11/Cost-effectiveness-of-interventions-for-alternate-food-in-the-United-States-to-address-agricultural-catastrophes.pdf] and here [https://link.springer.com/content/pdf/10.1007%2Fs13753-016-0097-2.pdf] ), I still think that there is value in doing a rougher Guesstimate model and having a discussion about parameters. One could even add to one of our Guesstimate models, allowing a direct comparison with AGI safety and resilient foods [https://www.getguesstimate.com/models/13082] or interventions for loss of electricity/industry [https://www.getguesstimate.com/models/11599] from a long-term perspective.
Buck's Shortform

When I was 19, I moved to San Francisco to do a coding bootcamp. I got a bunch better at Ruby programming and also learned a bunch of web technologies (SQL, Rails, JavaScript, etc).

It was a great experience for me, for a bunch of reasons.

  • I got a bunch better at programming and web development.
    • It was a great learning environment for me. We spent basically all day pair programming, which makes it really easy to stay motivated and engaged. And we had homework and readings in the evenings and weekends. I was living in the office at the time, with a bunch o
... (read more)
Aaron Gertler: See my comment here [https://forum.effectivealtruism.org/posts/Soutcw6ccs8xxyD7v/buck-s-shortform?commentId=6KPrgdc4vYmkT73w3], which applies to this Shortform as well; I think it would be a strong top-level post, and I'd be interested to see how other users felt about tech bootcamps they attended.
Jack R: This seems like really good advice, thanks for writing this! Also, I'm compiling a list of CS/ML bootcamps here [https://docs.google.com/spreadsheets/d/1pBBo28bCNVlKvmrzbSkkl2pQKDf_els-98i-S0Gdu6A/edit?usp=sharing] (anyone should feel free to add items).
Buck's Shortform

Doing lots of good vs getting really rich

Here in the EA community, we’re trying to do lots of good. Recently I’ve been thinking about the similarities and differences between a community focused on doing lots of good and a community focused on getting really rich.

I think this is interesting for a few reasons:

  • I found it clarifying to articulate the main differences between how we should behave and how the wealth-seeking community should behave.
  • I think that EAs make mistakes that you can notice by thinking about how the wealth-seeking community would beh
... (read more)
4Aaron Gertler1moI'm commenting on a few Shortforms I think should be top-level posts so that more people see them, they can be tagged, etc. This is one of the clearest cases I've seen; I think the comparison is really interesting, and a lot of people who are promising EA candidates will have "become really rich" as a viable option, such that they'd benefit especially from thinking about this comparisons themselves. Anyway, would you consider making this a top-level post? I don't think the text would need to be edited all — it could be as-is, plus a link to the Shortform comments.
2Ben_West2moThanks for writing this up. At the risk of asking obvious question, I'm interested in why you think entrepreneurship is valuable in EA. One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it's hard to tell if someone is a good CEO apart from "does their business do well", so they are forced to have their compensation tied closely to business outcomes (instead of something like "does their manager think they are doing a good job"), which have high variance; as a result of this variance and people being risk-averse, expected returns need to be high in order to compensate these entrepreneurs. It's not obvious to me that this information asymmetry exists in EA. E.g. I expect "Buck thinks X is a good group leader" correlates better with "X is a good group leader" than "Buck thinks X will be a successful startup" correlates with "X is a successful startup". It seems like there might be a "market failure" in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.
2Jamie_Harris2moMaybe there's some lesson to be learned. And I do think that EAs should often aspire to be more entrepreneurial. But maybe the main lesson is for the people trying to get really rich, not the other way round. I imagine both communities have their biases. I imagine that lots of people try entrepreneurial schemes for similar reasons to why lots of people buy lottery tickets. And Id guess that this often has to do with scope neglect, excessive self confidence / sense of exceptionalism, and/or desperation.
Ben Pace: Something I imagined while reading this was being part of a strangely massive (~1000 person) extended family whose goal was to increase the net wealth of the family. I think it would be natural to join one of the family businesses, it would be natural to make your own startup, and also it would be somewhat natural to provide services for the family that aren't directly about making the money yourself. Helping make connections, find housing, etc.

Thanks, this is an interesting analogy. 

If too few EAs go into more bespoke roles, then one reason could be risk-aversion. Rightly or wrongly, they may view those paths as more insecure and risky (for them personally; though I expect personal and altruistic risk correlate to a fair degree). If so, then one possibility is that EA funders and institutions/orgs should try to make them less risky or otherwise more appealing (there may already be some such projects).

In recent years, EA has put less emphasis on self-sacrifice, arguing that we can't expect p... (read more)

Buck's Shortform

Yeah but this pledge is kind of weird for an altruist to actually follow, instead of donating more above the 10%. (Unless you think that almost everyone believes that most of the reason for them to do the GWWC pledge is to enforce the norm, and this causes them to donate 10%, which is more than they'd otherwise donate.)

Linch: I thought you were making an empirical claim with the quoted sentence, not a normative claim.
Buck's Shortform

[This is an excerpt from a longer post I'm writing]

Suppose someone’s utility function is

U = f(C) + D

Where U is what they’re optimizing, C is their personal consumption, f is their selfish welfare as a function of consumption (log is a classic choice for f), and D is their amount of donations.

Suppose that they have diminishing utility wrt (“with respect to”) consumption (that is, df(C)/dC is strictly monotonically decreasing). Their marginal utility wrt donations is a constant, and their marginal utility wrt consumption is a decreasing function. There has t... (read more)
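(Worked algebra for the above, added for clarity and not part of the original excerpt. The fixed budget Y, with D = Y - C, is an extra assumption used only to make the implied optimum explicit.)

```latex
% Assumption: a fixed budget Y is split between consumption C and donations D = Y - C.
\[
U(C) = f(C) + (Y - C), \qquad
\frac{dU}{dC} = f'(C) - 1 = 0 \;\Longrightarrow\; f'(C^*) = 1.
\]
% With f(C) = a \log C this gives C^* = a: a fixed consumption level, independent of
% income, above which every additional dollar is donated.
```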

3Stefan_Schubert3moThe GWWC pledge is akin to a flat tax, as opposed to a progressive tax - which gives you a higher tax rate when you earn more. I agree that there are some arguments in favour of "progressive donations". One consideration is that extremely high "donation rates" - e.g. donating 100% of your income above a certain amount - may affect incentives to earn more adversely, depending on your motivations. But in a progressive donation rate system with a more moderate maximum donation rate that would probably not be as much of a problem.
Linch: Wait, the standard GWWC pledge is 10% of your income, presumably based on cultural norms like tithing, which in themselves might reflect an implicit understanding that (if we assume log utility) a constant fraction of consumption is equally costly to any individual, so made for coordination rather than single-player reasons.
Buck's Shortform

[epistemic status: I'm like 80% sure I'm right here. Will probably post as a main post if no-one points out big holes in this argument, and people seem to think I phrased my points comprehensibly. Feel free to leave comments on the google doc here if that's easier.]

I think a lot of EAs are pretty confused about Shapley values and what they can do for you. In particular Shapley values are basically irrelevant to problems related to coordination between a bunch of people who all have the same values. I want to talk about why. 

So Shapley values are a sol... (read more)
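(Toy illustration added here, not from the original shortform: the mechanics of a Shapley-value calculation for a hypothetical two-party case, a funder and a researcher who produce $10,000 of value only if both participate. Names and numbers are made up.)

```python
from itertools import permutations

def shapley_values(players, value):
    """Each player's marginal contribution, averaged over all join orders."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orderings) for p in players}

# Hypothetical value function: the project only happens if both parties join.
def v(coalition):
    return 10_000 if coalition == frozenset({"funder", "researcher"}) else 0

print(shapley_values(["funder", "researcher"], v))
# {'funder': 5000.0, 'researcher': 5000.0} -- credit splits equally by symmetry
```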

2NunoSempere3moThis seems correct -------------------------------------------------------------------------------- This misses some considerations around cost-efficiency/prioritization. If you look at your distorted "Buck values", you come away that Buck is super cost-effective; responsible for a large fraction of the optimal plan using just one salary. If we didn't have a mechanistic understanding of why that was, trying to get more Buck would become an EA cause area. In contrast, if credit was allocated according to Shapley values, we could look at the groups whose Shapley value is the highest, and try to see if they can be scaled. -------------------------------------------------------------------------------- The section about "purely local" Shapley values might be pointing to something, but I don't quite know what it is, because the example is just Shapley values but missing a term? I don't know. You also say "by symmetry...", and then break that symmetry by saying that one of the parts would have been able to create $6,000 in value and the other $0. Needs a crisper example. -------------------------------------------------------------------------------- Re: coordination between people who have different values using SVs, I have some stuff here [https://forum.effectivealtruism.org/posts/3NYDwGvDbhwenpDHb/shapley-values-ii-philantropic-coordination-theory-and-other#Philantropic_Coordination_Theory_] , but looking back the writting seems too corny. -------------------------------------------------------------------------------- Lastly, to some extent, Shapley values are a reaction to people calculating their impact as their counterfactual impact. This leads to double/triple counting impact for some organizations/opportunities, but not others, which makes comparison between them more tricky. Shapley values solve that by allocating impact such that it sums to the total impact & other nice properties. Then someone like OpenPhilanthropy or some EA fund can come and see
EA Infrastructure Fund: Ask us anything!

I am not sure. I think it’s pretty likely I would want to fund after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.

Jonas Vollmer: I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority). One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.
EA Infrastructure Fund: Ask us anything!

I would personally be pretty down for funding reimbursements for past expenses.

Linch: That's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point?
4Max_Daniel4moI haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this. So if you're reading this and are wondering if it could be worth it to submit an application for funding for past expenses, then I think the answer is we'd at least consider it and so potentially yes. If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all that we have are the personal views of two EAIF managers not a considered opinion or policy of all fund managers or the fund as a whole or anything like that.
Habryka: I would also be in favor of the LTFF doing this.
EA Infrastructure Fund: Ask us anything!

This is indeed my belief about ex ante impact. Thanks for the clarification.

Buck's Shortform

That might achieve the "these might be directly useful goal" and "produce interesting content" goals, if the reviewers knew about how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don't. It wouldn't achieve any of the other goals, though.

Khorton: I wonder if there are better ways to encourage and reward talented writers to look for outside ideas - although I agree book reviews are attractive in their simplicity!
Buck's Shortform

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in
... (read more)
4saulius3moI would also ask these people to optionally write or improve a summary of the book in Wikipedia if it has an Wikipedia article (or should have one). In many cases, it's not only EAs who would do more good if they knew ideas in a given book, especially when it's on a subject like pandemics or global warming rather than topics relevant to non-altruistic work too like management or productivity. When you google a book, Wikipedia is often the first result and so these articles receive a quite lot of traffic (you can see here [https://stats.wikimedia.org/#/en.wikipedia.org/reading/total-page-views/normal%7Cbar%7CAll%7C~total] how much traffic a given article receives).
Jordan Pieters: Perhaps it would be worthwhile to focus on books like those in this [https://forum.effectivealtruism.org/posts/KNZLGbGevnjStgzHt/i-scraped-all-public-effective-altruists-goodreads-reading#Most_commonly_planned_to_read_books_that_have_not_been_read_by_anyone_yet] list of "most commonly planned to read books that have not been read by anyone yet"
3MichaelA4moYeah, this seems good to me. I also just think in any case [https://forum.effectivealtruism.org/posts/zCJDF6iNSJHnJ6Aq6/a-ranked-list-of-all-ea-relevant-audio-books-i-ve-read#Suggestion__Make_Anki_cards__share_them_as_posts__and_share_key_updates] more people should post their notes, key takeaways, and (if they make them) Anki cards to the Forum, as either top-level posts or shortforms. I think this need only take ~30 mins of extra time on top of the time they spend reading or note-taking or whatever for their own benefit. (But doing what you propose would still add value by incentivising more effortful and even more useful versions of this.) Yeah, I think this is worth emphasising, since: * Those are things existing, non-EA summaries of the books are less likely to provide * Those are things that even another EA reading the same book might not think of * Coming up with key takeaways is an analytical exercise and will often draw on specific other knowledge, intuitions, experiences, etc. the reader has Also, readers of this shortform may find posts tagged effective altruism books [https://forum.effectivealtruism.org/tag/effective-altruism-books] interesting.
Peter Wildeford: I've thought about this before and I would also like to see this happen.

I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff.

I share this concern, and I think a culture with more book reviews is a great way to achieve that (I've been happy to see all of Michael Aird's book summaries for that reason).

CEA briefly considered paying for book reviews (I was asked to write this review as a test of that idea). IIRC, the goal at the time was more about getting more engagement from people on the periphery of EA by creating EA-related content they'd find int... (read more)

casebash: I'd be interested in this. I've been posting book reviews of the books I read to Facebook - mostly for my own benefit. These have mostly been written quickly, but if there was a decent chance of getting $500 I could pick out the most relevant books and relisten to them and then rewrite them.
Khorton: You can already pay for book reviews - what would make these different?
Habryka: Yeah, I really like this. SSC currently has a book-review contest running, and maybe LW and the EAF could do something similar? (Probably not a contest, but something that creates a bit of momentum behind the idea of doing this)
reallyeli: Quick take is this sounds like a pretty good bet, mostly for the indirect effects. You could do it with a 'contest' framing instead of an 'I pay you to produce book reviews' framing; idk whether that's meaningfully better.
Max_Daniel: I don't think it's crazy at all. I think this sounds pretty good.
EA Infrastructure Fund: Ask us anything!

I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.

Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family who runs it.

2Max_Daniel5moHmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital" I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principle-based 'utility function' I'd be surprised if it had returns than diminish much more strongly than logarithmic. (That's at least my initial intuition - not sure I could justify it.) And if it was logarithmic, going from $10B to $100B should add about as much value than going from $1B to $10B, and I feel like the former adds clearly more than 20%. (I guess there is also the question what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree with "business as usual + this extra capital adds much less than 20%". In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
EA Infrastructure Fund: Ask us anything!

Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA the limiting factor on a grant to me usually feels more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it" than "is this grant good enough to be worth the money".

I think that the funds' RFMF is only slightly real--I think that giving t... (read more)

9Jonas Vollmer4moJust wanted to flag briefly that I personally disagree with this: * I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive. * I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*. * Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments). I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization [https://reg-charity.org/] and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I was a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love if EAs generally spent less time worrying about money and more about recruiting talent, improving the trajectory of the community, and solving the problems on the object level. Overall, I want to continue funding good fundraising organizations.
Linch: I'm curious how much $s you and others think that longtermist EA has access to right now/will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to 100B than if we currently have access to 100M.
6Max_Daniel5moI think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective. However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/ea-infrastructure-fund-ask-us-anything?commentId=yXcHQWoHEYdZdrNvr] , i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors. One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a 'hobby', are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in communications it at least sometimes mentions other potentially impactful behaviors. So, e.g., GiveWell by these lights looks much better than REG, which in turns looks much better than, say, buying Facebook ads for AMF. (I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving - even in a 'good' way - were significantly net negative.)

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy them - comes from 'crazy... (read more)

EA Infrastructure Fund: Ask us anything!

I am planning on checking in with grantees to see how well they've done, mostly so that I can learn more about grantmaking and to know if we ought to renew funding.

I normally didn't make specific forecasts about the outcomes of grants, because operationalization is hard and scary.

I feel vaguely guilty about not trying harder to write down these proxies ahead of time. But I empirically don't, and my intuitions apparently don't feel that optimistic about working on this. I am not sure why. I think it's maybe just that operationalization is super hard an... (read more)

EA Infrastructure Fund: Ask us anything!

Like Max, I don't know about such a policy. I'd be very excited to fund promising projects to support the rationality community, eg funding local LessWrong/Astral Codex Ten groups.

EA Infrastructure Fund: Ask us anything!

Re 1: I don't think I would have granted more

Re 2: Mostly "good applicants with good proposals for implementing good project ideas" and "grantmaker capacity to solicit or generate new project ideas", where the main bottleneck on the second of those isn't really generating the basic idea but coming up with a more detailed proposal and figuring out who to pitch on it etc.

Re 3: I think I would be happy to evaluate more grant applications and have a correspondingly higher bar. I don't think that low quality applications make my life as a grantmaker much worse;... (read more)

EA Infrastructure Fund: Ask us anything!

Re your 19 interventions, here are my quick takes on all of them

Creating, scaling, and/or improving EA-aligned research orgs

Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.

Creating, scaling, and/or improving EA-aligned research training programs

I am in favor of this. I think one of the biggest bottlenecks here is finding people  who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research... (read more)

Linch: Sorry, minor confusion about this. By "top 25%," do you mean 75th percentile? Or are you encompassing the full range here?
3MichaelA5moI'm pretty surprised by the strength of that reaction. Some followups: 1. How do you square that with the EA Funds (a) funding things that would increase the amount/quality/impact of EA-aligned research(ers), and (b) indicating in some places (e.g. here [https://forum.effectivealtruism.org/posts/nLxpFeEs6kAdgjRWz/the-long-term-future-fund-has-room-for-more-funding-right] ) the funds have room for more funding? * Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)? * Do you disagree that the funds have room for more funding? 2. Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)? 3. Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
2MichaelA5moFWIW, I agree that your concerns about "Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers" are well worth bearing in mind and that they make at least some versions of this intervention much less valuable or even net negative.
2MichaelA5moI think I agree with this, though part of the aim for the database would be to help people find mentors (or people/resources that fill similar roles). But this wasn't described in the title of that section, and will be described in the post coming out in a few weeks, so I'll leave this topic there :)
2MichaelA5moThanks for this detailed response! Lots of useful food for thought here, and I agree with much of what you say. Regarding Effective Thesis: * I think I agree that "most research areas relevant to longtermism require high context in order to contribute to", at least given our current question lists and support options. * I also think this is the main reason I'm currently useful as a researcher despite (a) having little formal background in the areas I work in and (b) there being a bunch of non-longtermist specialists who already work in roughly those areas. * On the other hand, it seems like we should be able to identify many crisp, useful questions that are relatively easy to delegate to people - particularly specialists - with less context, especially if accompanied with suggested resources, a mentor with more context, etc. * E.g., there are presumably specific technical-ish questions related to pathogens, antivirals, climate modelling, or international relations that could be delegated to people with good subject area knowledge but less longtermist context. * I think in theory Effective Thesis or things like it could contribute to that * After writing that, I saw you said the following, so I think we mostly agree here: "I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff." * OTOH, in terms of examples of this happening, I think at least Luke Muehlhauser seems to believe s
9Max_Daniel5moI would be enthusiastic about this. If you don't do it, I might try doing this myself at some point. I would guess the main challenge is to get sufficient inter-rater reliability; i.e., if different interviewers used this interview to interview the same person (or if different raters watched the same recorded interview), how similar would their ratings be? I.e., I'm worried that the bottleneck might be something like "there are only very few people who are good at assessing other people" as opposed to "people typically use the wrong method to try to assess people".
EA Infrastructure Fund: Ask us anything!

I feel very unsure about this. I don't think my position on this question is very well thought through.

Most of the time, the reason I don't want to make a grant doesn't feel like "this isn't worth the money", it feels like "making this grant would be costly for some other reason". For example, when someone applies for a salary to spend some time researching some question which I don't think they'd be very good at researching, I usually don't want to fund them, but this is mostly because I think it's unhealthy in various ways for EA to fund people to flail ... (read more)

2Linch5moAm I correct in understanding that this is true for your beliefs about ex ante rather than ex post impact? (in other words, that 1/4 of grants you pre-identified as top-25% will end up accounting for more than 50% of your positive impact) If so, is this a claim about only the positive impact of the grants you make, or also about the absolute value of all grants you make? See related question [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/ea-infrastructure-fund-ask-us-anything?commentId=LTi9G82bJqn2XZnon] .
EA Infrastructure Fund: Ask us anything!

re 1: I expect to write similarly detailed writeups in future.

re 2: I think that would take a bunch more of my time and not clearly be worth it, so it seems unlikely that I'll do it by default. (Someone could try to pay me at a high rate to write longer grant reports, if they thought that this was much more valuable than I did.)

re 3: I agree with everyone that there are many pros of writing more detailed grant reports (and these pros are a lot of why I am fine with writing grant reports as long as the ones I wrote). By far the biggest con is that it takes ... (read more)

EA Infrastructure Fund: Ask us anything!

I don't think this has much of an advantage over other related things that I do, like

  • telling people that they should definitely tell me if they know about projects that they think I should fund, and asking them why
  • asking people for their thoughts on grant applications that I've been given
  • asking people for ideas for active grantmaking strategies
EA Infrastructure Fund: Ask us anything!

A question for the fund managers: When the EAIF funds a project, roughly how should credit be allocated between the different involved parties, where the involved parties are:

  • The donors to the fund
  • The grantmakers
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics)
  • The grantee

Presumably this differs a lot between grants; I'd be interested in some typical figures.

This question is important because you need a sense of these numbers in order to make decisions about which of these parties you sho... (read more)

(I'd be very interested in your answer if you have one btw.)

3Linch4mo(No need for the EAIF folks to respond; I think I would also find it helpful to get comments from other folks) I'm curious about a set of related questions probing this at a more precise level of granularity. For example, for the Suppose for the sake of the argument that the RP internship resulted in better career outcomes than the interns counterfactually would have.* For the difference of the impact from the internship vs the next-best-option, what fraction of credit assignment should be allocated to: * The donors to the fund * The grantmakers * The rest of the EAIF infrastructure * RP for selecting interns and therefore providing a signaling mechanism either to the interns themselves or for future jobs * RP for managing/training/aiding interns to hopefully excel * The work of the interns themselves I'm interested in whether the ratio between the first 3 bullet points has changed (for example, maybe with more $s per grant, donor $s are relatively less important and the grantmaker effort/$ ratio is lower) I also interested in the appropriate credit assignment (breaking down all Jonas' 75%!) of the last 3 bullet points. For example, if most people see the value of RP's internship program to the interns as primarily via RP's selection methods, then it might make sense to invest more management/researcher time into designing better pre-internship work trials. I'm also interested in even more granular takes, but perhaps this is boring to other people. (I work for RP. I do not speak for the org). *(for reasons like it a) speeded up their networking, b) tangible outputs from the RP internship allowed them to counterfactually get jobs where they had more impact, c) it was a faster test for fit and made the interns correctly choose to not go to research, saving time, d) they learned actually valuable skills that made their career trajectory go smoother, etc)

Making up some random numbers:

  • The donors to the fund – 8%
  • The grantmakers – 10%
  • The rest of the EAIF infrastructure (eg Jonas running everything, the CEA ops team handling ops logistics) – 7%
  • The grantee – 75%

This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.

This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.

How much do you (actually) work?

Incidentally, I think that tracking work time is a kind of dangerous thing to do, because it makes it really tempting to make bad decisions that will cause you to work more. This is a lot of why I don't normally track it.

 

EDIT: however, it seems so helpful to track it some of the time that I overall strongly recommend doing it for at least a week a year.

How much do you (actually) work?
Answer by Buck, May 21, 2021

I occasionally track my work time for a few weeks at a time; by coincidence I happen to be tracking it at the moment. I used to use Toggl; currently I just track my time in my notebook by noting the time whenever I start and stop working (where by "working" I mean "actively focusing on work stuff"). I am more careful about time tracking my work on my day job (working on longtermist technical research, as an individual contributor and manager) than working on the EAIF and other movement building stuff.

The first four days this week, I did 8h33m, 8h15m, 7h32m... (read more)


Concerns with ACE's Recent Behavior

That seems correct, but doesn’t really defend Ben’s point, which is what I was criticizing.

Concerns with ACE's Recent Behavior

I am glad to have you around, of course.

My claim is just that I doubt you thought that if the rate of posts like this was 50% lower, you would have been substantially more likely to get involved with EA; I'd be very interested to hear I was wrong about that.

I think that isn't the right counterfactual since I got into EA circles despite having only minimal (and net negative) impressions of EA-related forums.  So your claim is narrowly true, but if instead the counterfactual was if my first exposure to EA was the EA forum, then I think yes the prominence of this kind of post would have made me substantially less likely to engage.

But fundamentally if we're running either of these counterfactuals I think we're already leaving a bunch of value on the table, as expressed by EricHerboso's post about false dilemmas.

Concerns with ACE's Recent Behavior

I am not sure whether I think it's a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren't obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're inf... (read more)

7tamgent6moI agree it's good for a community to have an immune system that deters people who would hurt its main goals, EA included. But, and I hear you do care about calibrating on this too, we want to avoid false positives. Irving below seems like an example, and he said it better than I could: we're already leaving lots of value on the table. I expect our disagreement is just empirical and about that, so happy to leave it here as it's only tangentially relevant to the OP. Aside: I don't know about Will's intentions, I just read his comment and your reply, and don't think 'he could have made a different comment' is good evidence of his intentions. I'm going to assume you know much more about the situation/background than I do, but if not I do think it's important to give people benefit of the doubt on the question of intentions. [Meta: in case not obvious, I want to round off this thread, happy to chat in private sometime]

I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're infuriated by things EAs are saying.

[...]

If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.

I would guess it depends quite a bit on these people's total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is "not for them").

If we're imagining people who've already had 10 or even 100 hour... (read more)

I bounce off posts like this.  Not sure if you'd consider me net positive or not. :)

Concerns with ACE's Recent Behavior

More generally, I think our disagreement here probably comes down to something like this:

There's a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome.  As you say, if we're skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.

But this comes at a cost. I personally feel much less excited about writing about certain topics because I'd have to be super careful about them. And most of t... (read more)

tamgent: I appreciate you trying to find our true disagreement here.

I don't disagree with any of that. I acknowledge there is real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of how bad what happened is exactly).

I just think there is also some cost, that you are undervaluing and not acknowledging here, in the other side of that trade-off. Maybe we disagree on the exchange rate between these (welcomingness and unfiltered/candid communication)?

I think that becoming more skillful at doing both well is an important skill for a community l... (read more)

Concerns with ACE's Recent Behavior

(I'm writing these comments kind of quickly, sorry for sloppiness.)

With regard to

Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.

In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.

I would have no meta-level objection to a comment saying "I disagree that X is bad, I think it's actually fine".

tamgent: I think the meta-level objection you raised (which I understood as: there may be costs of not criticising bad things because of worry about second-order effects) is totally fair and there is indeed some risk in this pattern (said this in the first line of my comment). This is not what I took issue with in your comment. I see you've responded to our main disagreement though, so I'll respond on that branch.