I haven't read the comments and this has probably been said many times already, but it doesn't hurt to say it again: From what I understand, you've taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)
At the same time though, it seems like your objection is a fully general argument against fundamental breakthroughs ever being necessary at any point, which seems quite unlikely.
Sorry, what I wanted to say is that it seems unclear whether fundamental breakthroughs are needed. They might be needed, or not. I personally am pretty uncertain about this and think that both options are possible. I think it's also possible that any breakthroughs that do happen won't change the general picture described in the OP much. I agree with the rest of your comment!
I gave the comment a strong upvote because it's super clear and informative. I also really appreciate it when people spell out their reasons for "scale is not all you need", which doesn't happen that often.
That said, I don't agree with the argument or conclusion. Your argument, at least as stated, seems to be "tasks with the following criteria are hard for current RL with human feedback, so we'll need significant fundamental breakthroughs". The transformer was published 5 years ago. Back then, you could have used a very analogous argument to claim that language models would never do this or that task; but language models can now perform many of those tasks (emergent properties).
Yes, you can absolutely apply for conference and compute funding, separately from an application for salary, or in combination. E.g. if you're applying for salary funding anyway, it would be very common and normal to also apply for funding for a couple of conferences, equipment that you need, and compute. I think you would probably go for cloud compute, but I haven't thought about it much.
Sometimes this can cause mild tax issues (if you get the grant in one year but only spend the money on the conference in the next year; or, in some countries, if you receive the funding as a private person and therefore can't deduct expenses). Some organisations also offer funding via prepaid credit cards, e.g. for compute. Maybe there are also other options, like getting an affiliation with some place and using their servers for compute, but this will often be hard.
I think you could apply for funding from a number of sources. If the budget is small, I'd start with the Longterm Future Fund: https://funds.effectivealtruism.org/funds/far-future
relevant tweet I saw recently: https://twitter.com/scholl_adam/status/1556989092784615424
I'm excited about people thinking about this topic. It's a pretty crucial assumption in the "EA longtermist space", and relatively underexplored.
This post is a response to the thesis of Jan Brauner’s post The Expected Value of Extinction Risk Reduction Is Positive.
The post is by Jan Brauner AND Friederike Grosse-Holz. I think correcting this is particularly important because the EA community struggles with gender diversity, so dropping the female co-author is extra bad.
Given that Greg trained as an MD because he wanted to do good, this probably counts: https://80000hours.org/2012/08/how-many-lives-does-a-doctor-save/ (and the many medical doctors and students who read posts like this and then also changed their minds, including me :-) )