AdamGleave

Comments

How to PhD

Publishing good papers is not the problem, deluding yourself is.

Big +1 to this. Doing things you don't see as a priority but which other people are excited about is fine. You can view it as a kind of trade: you work on something the research community cares about, and the research community is more likely to listen to (and work on) things you care about in the future.

But to make a difference you do eventually need to work on things you find impactful, so you don't want to pollute your own research taste by unquestioningly absorbing incentives or others' opinions.

How to PhD

You approximately can't get directly useful things done until you have tenure.

At least in CS, the vast majority of tenure-track professors at top universities do get tenure. The hardest part is getting in. Of course, all the junior professors I know work extremely hard, but I wouldn't characterize it as a publication rat race. This may not be true in other fields or outside the top universities.

The primary impediment to getting things done that I see is that professors are also doing administrative work and teaching, and that remains a problem post-tenure.

How to PhD

One important aspect of a PhD that I don't see explicitly called out in this post is what I'd describe as "research taste": how to pick which problems to work on. I think this is one of, if not the, most important parts of a PhD. You can only get so much faster at executing routine tasks or editing papers. But the difference in importance between the median and the best research problems can be huge.

Andrej Karpathy has a nice discussion of this:

When it comes to choosing problems you’ll hear academics talk about a mystical sense of “taste”. It’s a real thing. When you pitch a potential problem to your adviser you’ll either see their face contort, their eyes rolling, and their attention drift, or you’ll sense the excitement in their eyes as they contemplate the uncharted territory ripe for exploration. In that split second a lot happens: an evaluation of the problem’s importance, difficulty, its sexiness, its historical context (and possibly also its fit to their active grants). In other words, your adviser is likely to be a master of the outer loop and will have a highly developed sense of taste for problems. During your PhD you’ll get to acquire this sense yourself.

Clearly we might care about some of these criteria (like grants) less than others, but I think the same idea holds. I'd also recommend Chris Olah's exercises on developing research taste.

How to PhD

Thanks for writing this post, it's always useful to hear people's experiences! For others considering a PhD, I just wanted to chime in and say that my experience in a PhD program has been quite different (4th year PhD in ML at UC Berkeley). I don't know how much this is due to the field, the program, or just my personality. But I'd encourage everyone to seek a range of perspectives: PhDs are far from uniform.

I hear the point about academic incentives being bad a lot, but it doesn't really resonate with me. A summary of my view is that incentives are misaligned everywhere, not just in academia. Rather than seeking a place with good (in general) incentives, first figure out what you want to do, and then find a place where the incentives happen to be compatible with that (even if for the "wrong" reasons).

I've worked in quant finance, industry AI labs, and academic AI research. There were serious problems with incentives in all three. I found this particularly unforgivable in quantitative finance, where the goal is pretty clear: make money. You can even measure day to day whether you're making money! But getting the details right is hard. At one place I'm aware of, people were paid based on their group's profitability, divided by how risky their strategies were. This seems reasonable: profit good, risk bad. The problem was, it measured the risk of each group's strategy in isolation -- not how it affected the whole firm's risk levels. So different groups colluded to swap strategies, which made each of them seem less risky in isolation (so they would get paid more), without changing the firm's overall strategy at all!
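To make the swap mechanics concrete, here's a toy simulation (the numbers, the 50/50 swap, and the Sharpe-like pay rule are all invented for illustration -- this is not the actual scheme at any firm). Two groups run weakly correlated strategies; after each takes on half of the other's book, each group's standalone risk falls, so their measured pay rises, while the firm's aggregate risk is exactly unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250_000  # simulated trading days

# Two groups' strategy returns: same mean and volatility, weakly correlated.
mu, sigma, rho = 0.05, 1.0, 0.1
cov = [[sigma**2, rho * sigma**2], [rho * sigma**2, sigma**2]]
r_a, r_b = rng.multivariate_normal([mu, mu], cov, size=n).T

def pay(returns):
    # Hypothetical bonus rule: profit divided by the group's standalone risk.
    return returns.mean() / returns.std()

# Before the swap: each group holds only its own strategy.
print("pay before swap:", pay(r_a), pay(r_b))

# After the swap: each group holds half of each strategy. Diversification
# lowers each group's standalone volatility, so measured pay goes up...
mix_a = 0.5 * r_a + 0.5 * r_b
mix_b = 0.5 * r_b + 0.5 * r_a
print("pay after swap: ", pay(mix_a), pay(mix_b))

# ...but the firm's aggregate position is identical in both cases.
print("firm risk unchanged:",
      np.allclose((r_a + r_b).std(), (mix_a + mix_b).std()))
```

The diversification benefit is real; the bug is that the metric credits it to both groups at once. One natural fix (again, just a sketch) would be to measure each group's marginal contribution to firm-level risk rather than its standalone risk.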

Incentivizing research is an unusually hard problem. Agendas can take years to pay off. The best agendas are often really high variance, so someone might fail several times but still be doing great (in expectation) work. Given this backdrop, a PhD actually seems pretty reasonable.

It's pretty hard to get fired doing a PhD, and some (by no means all) advisors will let you work on pretty much whatever you want. So, you have a 3-5 year runway to just work on whatever topics you think are best. At the end of those 3-5 years, you have to convince a panel of experts (who you get to hand-pick!) that you did something that's "worth" a PhD.

As far as incentives go, this is incredibly flexible, as evidenced by the large number of people who goof off during their PhD. (This is the pitfall of weak incentives.) It also seems like a pretty reasonable incentive. If after 5 years of work you can't convince people that what you did was good, it might be that it's incredibly ahead of its time, but more likely you either need to communicate it better or the work just wasn't that great by the standards of the field.

The "by the standards of the field" is the key issue here. Some high impact work just doesn't fit well into the taste of a particular field. Perhaps it falls between disciplinary boundaries. Or it's more about distilling existing research, so isn't novel enough. That sucks, and academic research is probably the wrong venue to be pursuing this in -- but it doesn't make academic incentives bad per se. Just bad for that kind of research.

I think the bigger issue is the tacit social pressure to publish and make a name for yourself. This matters a fair bit for the job market, so it's a real pressure. But I think analogous or equal pressures exist outside of academia. If you work at an industry lab, there might be pressure to deliver flashy results or products. If you work as an independent researcher, funders will want to see publications or other signs of progress.

I'd love to see better incentives, but I think it's important to acknowledge that mechanism design for research is a hard problem, not just that academia is screwing it up uniquely badly.

Long-Term Future Fund: Ask Us Anything!

Thanks for picking up the thread here, Asya! I think I largely agree with this, especially about the competitiveness in this space. For example, with AI PhD applications, I often see extremely talented people get rejected who I'm sure would have got an offer a few years ago.

I'm pretty happy to see the LTFF offering effectively "bridge" funding for people who don't quite meet the hiring bar yet, but who I think are likely to in the next few years. However, I'd be hesitant about heading towards a large fraction of people working independently long-term. I think there are huge advantages to the structure and mentorship an org can provide. If orgs aren't scaling up fast enough, then I'd prefer to focus on trying to speed that up.

The main way I could see myself getting more excited about long-term independent research is if we saw flourishing communities forming amongst independent researchers. Efforts like LessWrong and the Alignment Forum help in terms of providing infrastructure. But right now it still seems much worse than working for an org, especially if you want to go down any of the more traditional career paths later. But I'd love to be proven wrong here.

Long-Term Future Fund: Ask Us Anything!

This is an important question. It seems like there's an implicit assumption here that the highest-impact path for the fund is to make the grants that the fund managers' inside view rates as highest impact, regardless of whether we can explain the grant. This is a reasonable position -- and thank you for your confidence! -- however, I think the fund being legible does have some significant advantages:

  1. Accountability generally seems to improve organisations' functioning. It'd be surprising if the LTFF were a complete exception to this, and legibility seems necessary for accountability.
  2. There's asymmetric information between us and donors, so less legibility will tend to mean fewer donations (and I think this is reasonable). So there's a tradeoff between greater counterfactual impact from scale vs. greater impact per $ moved.
  3. There may be community-building value in having a fund that is attractive to people without deep context on, or trust in, the fund managers.

I'm not sure what the right balance of legibility vs inside view is for the LTFF. One possibility would be to split into a more inside view / trust-based fund, and a more legible and "safer" fund. Then donors can choose what kind of worldview they want to buy into.

That said, personally I don't feel like I vote significantly differently with LTFF money vs. my own donations. The main difference is that I am much more cautious about conflicts of interest with LTFF money than with my personal money, but I don't think I'd want to change that. However, I do think I tend to have a more conservative taste in grants than some others in the long-termist community.

One thing to flag is that we do occasionally (with the applicant's permission) make recommendations to private donors rather than providing funding directly from the LTFF. This is often for logistical reasons, if something is tricky for CEA to fund, but it's also an option if a grant requires a lot of context to understand (which we can provide to an individual highly engaged donor, but not in a brief public write-up). I think this further decreases the number of grant decisions that are influenced by legibility considerations.

Long-Term Future Fund: Ask Us Anything!

Could you operationalize "more accurately" a bit more? Both sentences match my impression of the fund. The first is more informative as to what our aims are, the second is more informative as to the details of our historical (and immediate future) grant composition.

My sense is that the first will give people an accurate predictive model of the LTFF in a wider range of scenarios. For example, if next round we happen to receive an amazing application for a new biosecurity org, the majority of the round's funding could go on that. The first sentence would predict this, the second not.

But the second will give most people better predictions in a "business as usual" case, where our applications in future rounds are similar to those of current rounds.

My hunch is that knowing what our aims are is more important for most donors. In particular, many people reading this for the first time will be choosing between the LTFF and one of the other EA Funds, which focus on completely different cause areas. The high-level motivation seems more salient than our current grant composition for this purpose.

Ideally, of course, we'd communicate both. I'll think about whether we should add some kind of high-level summary of the % of grants going to different areas under the "Grantmaking and Impact" section, which occurs earlier. My main worry is that this kind of thing is hard to keep up to date, and as described above could end up misleading donors in the other direction if our application pool suddenly changes.

Long-Term Future Fund: Ask Us Anything!

This is a good point, and I do think having multiple large funders would help with this. If the LTFF's budget grew enough I would be very interested in funding scalable interventions, but it doesn't seem like our comparative advantage now.

I do think possible growth rates vary a lot between fields. My hot take is that new research fields are particularly hard to grow quickly. The only successful ways I've seen of teaching people how to do research involve apprenticeship-style programs (PhDs, residency programs, learning from a team of more experienced researchers, etc.). You can optimize this to allow senior researchers to mentor more people (e.g. peer advice, assistants to free up senior staff time, etc.), but that seems unlikely to yield more than a 2x increase in growth rate.

Most cases where orgs have scaled up successfully have drawn on a lot of existing talent. Tech startups can grow quickly but they don't teach each new hire how to program from scratch. So I'd love to see scalable ways to get existing researchers to work on priority areas like AI safety, biosecurity, etc.

It can be surprisingly hard to change what researchers work on, though. Researchers tend to be intrinsically motivated, so right now the best way I know is to just do good technical work to show that problems exist (and are tractable to solve), combined with clear communication. Funding can help here a bit: make sure the people doing the good technical work are not funding constrained.

One other approach might be to build better marketing: DeepMind, OpenAI, etc. are great at getting their papers a lot of attention. If we could promote relevant technical work, that might help draw more researchers to these problems. Although a lot of people in academia really hate these companies' self-promotion, so it could backfire if done badly.

The other way to scale up is to get people to skill up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet for absorbing most additional junior talent right now. This may beat the 10-30% figure I gave, but we'd still have to wait 3-5 years before the talent comes on tap, unfortunately.

Long-Term Future Fund: Ask Us Anything!

What types/lines of research do you expect would be particularly useful for informing the LTFF's funding decisions?

I'd be interested in better understanding the trade-off between independent vs. established researchers. Relative to other donors, we fund a lot of independent research. My hunch is that most independent researchers are less productive than they would be working at organisations -- although, of course, for many of them that's not an option (geographical constraints, organisational capacity, etc.). This makes me set a somewhat higher bar for funding independent research. Some other fund managers disagree with me and think independent researchers tend to be more productive, e.g. due to bad incentives in academic and industry labs.

I expect distillation-style work to be particularly useful. I expect there's already relevant research here: e.g. case studies of the most impressive breakthroughs, studies looking at different incentives in academic funding, etc. There probably won't be a definitive answer, so it'd also be important that I trust the judgement of the people involved, or that a variety of people with different priors come to similar conclusions.

Do you have thoughts on what types/lines of research would be particularly useful for informing other funders' funding decisions in the longtermism space?

While larger donors can suffer from diminishing returns, there are sometimes also increasing returns to scale. One important thing larger donors can do that isn't really possible at the LTFF's scale is to found new academic fields. More clarity into how to achieve this and have the field go in a useful direction would be great.

It's still mysterious to me how academic fields actually come into being. Equally importantly, what predicts whether they have good epistemics, whether they have influence, etc? Clearly part of this is the domain of study (it's easier to get rigorous results in category theory than economics; it's easier to get policymakers to care about economics than category theory). But I suspect it's also pretty dependent on the culture created by early founders and the impressions outsiders form of the field. Some evidence for this is that some very closely related fields can end up going in very different directions: e.g. machine learning and statistics.

Do you have thoughts on how the answers to those two questions might differ?

A key difference between the LTFF and some other funders is that we receive donations on a rolling basis, and I expect these donations to continue to increase over time. By contrast, many major donors have an endowment to spend down. So for them, how to time their donations is a really important question: how much should they give now vs. later? Whereas I think for us the case for just donating every $ we receive is pretty strong (except for keeping enough of a buffer to even out short-term fluctuations in application quality and donation revenue).
