That makes sense, thanks. Although this will not apply to organisations/individuals that were promised funds from the Future Fund but didn't receive any, right? This case is pretty common, AFAICT.
Scott has sent me the following email (reproduced here with his approval). Scott wants to highlight that he doesn't know anything more than what's in the public posts on this issue.
I'd encourage people to email Scott; it's probably good for someone to have a list of interested donors.
------------------------------------
Scott's email:
SHORT VERSION
If you want to donate blindly and you can afford more than $250K, read here for details, then consider emailing Open Philanthropy at inquiries@openphilanthropy.org. If less than $250K, read h...
I haven't read the comments and this has probably been said many times already, but it doesn't hurt saying it again:
From what I understand, you've taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)
At the same time, though, your objection seems like a fully general argument that fundamental breakthroughs will never be necessary at any point, which seems quite unlikely.
Sorry, what I wanted to say is that it seems unclear whether fundamental breakthroughs are needed. They might be needed, or not. I personally am pretty uncertain about this and think that both options are possible. I think it's also possible that any breakthroughs that do happen won't change the general picture described in the OP much.
I agree with the rest of your comment!
I gave the comment a strong upvote because it's super clear and informative. I also really appreciate it when people spell out their reasons for "scale is not all you need", which doesn't happen that often.
That said, I don't agree with the argument or conclusion. Your argument, at least as stated, seems to be "tasks with the following criteria are hard for current RL with human feedback, so we'll need significant fundamental breakthroughs". The transformer was published 5 years ago. Back then, you could have used a very analogous argument to claim that language models would never do this or that task; but language models can now perform many of those tasks (emergent properties).
Yes, you can absolutely apply for conference and compute funding, separately from an application for salary, or in combination. E.g. if you're applying for salary funding anyway, it would be very common and normal to also apply for funding for a couple of conferences, equipment that you need, and compute. I think you would probably go for cloud compute, but I haven't thought about it much.
Sometimes this can cause mild tax issues (if you get the grant in one year but only spend the money on the conference in the next year; or, in some countries, if you...
I think you could apply for funding from a number of sources. If the budget is small, I'd start with the Longterm Future Fund: https://funds.effectivealtruism.org/funds/far-future
Relevant tweet I saw recently: https://twitter.com/scholl_adam/status/1556989092784615424
I'm excited about people thinking about this topic. It's a pretty crucial assumption in the "EA longtermist space", and relatively underexplored.
This post is a response to the thesis of Jan Brauner’s post The Expected Value of Extinction Risk Reduction Is Positive.
The post is by Jan Brauner AND Friederike Grosse-Holz. I think correcting this is particularly important because the EA community struggles with gender diversity, so dropping the female co-author is extra bad.
Given that Greg trained as an MD because he wanted to do good, this here probably counts: https://80000hours.org/2012/08/how-many-lives-does-a-doctor-save/
(and the many medical doctors and students who read posts like this and then also changed their minds, including me :-) )
This is a bit of a summary of what other people have said, and a bit of my own conceptualisation:
A) If the work is not competitive (not a winner-takes-all market), then:
I'd guess that quite often you'd either win anyway or lose anyway, and that the 20% don't make the difference. There are so many factors that matter for startup founder success (talent, hard work, network, credentials, luck) that it would be surprising if the competition were often so close that a 20% reduction in working time changes the outcome.
Another way to put this: it seems likely that Facebook would still be worth hundreds of billions of dollars, and Myspace ~$0, had the Facebook founders worked 20% less.
There has also been this post on cognitive enhancement research:
https://forum.effectivealtruism.org/posts/MojiqNw5MN6WMXETc/cause-profile-cognitive-enhancement-research-1
Here is my take on the value of extinction risk reduction, from some years ago: https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive
This post also contains links to many other posts related to the topic.
Some other posts that come to different conclusions:
Thanks so much for writing this! I expect this will be quite useful for many people.
I actually spent some time this week worrying a bit about a nuclear attack on the UK, bought some preparation stuff, figured out where I would seek shelter or when I'd move to the countryside, and so on. One key thing is that it's just so hard to know which probability to assign. Is it 1%? Then I should GTFO! Is it 0.001%? Then I shouldn't worry at all.
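To spell out the expected-value logic behind that (a toy model; all numbers are my own illustrative assumptions, not estimates):

\[
\text{leave iff}\;\; p \cdot C_{\text{stay}} > C_{\text{leave}}
\;\;\Longleftrightarrow\;\;
p > \frac{C_{\text{leave}}}{C_{\text{stay}}},
\]

where \(p\) is the probability of an attack, \(C_{\text{stay}}\) the expected harm from staying through one, and \(C_{\text{leave}}\) the cost of relocating. With, say, \(C_{\text{leave}} = \$3{,}000\) and \(C_{\text{stay}} = \$10{,}000{,}000\), the threshold is \(p^{*} = 3{,}000 / 10{,}000{,}000 = 0.03\%\), which sits right between the two probabilities above: at 1% you leave, at 0.001% you don't.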
Enlightenment at scale (provocative title :-) )
Values and Reflective Processes (?), X-risk (?)
A strong meditation practice promises enticing benefits to the meditator---less suffering, more control over one's attention and awareness, more insight, more equanimity. Brahmavihara practice promises the cultivation of loving-kindness, compassion, and empathetic joy. The world would be a much better place if everybody suffered less, had more equanimity, and felt strong compassion and empathy with other beings. But meditation is hard! Becoming a skilled meditator,...
AI alignment prize suggestion: Improve our ability to evaluate (and provide training signal for) fuzzy tasks
Artificial Intelligence
There are many tasks that we want AI systems to do, for which performance cannot be evaluated automatically (and thus providing a training signal is hard). If we don't make progress on our ability to train systems for such tasks, we might end up in a world full of systems that optimise for that which is easy to measure, rather than what we actually want. One example of such a task is the evaluation of free-form text; there is cur...
AI alignment prize suggestion: Demonstrate a true sandwiching project
Artificial Intelligence
Sandwiching projects are a concrete way to make progress on aligning narrowly superhuman models. They a) “sandwich” the model in between one set of humans which is less capable than it and another set of humans which is more capable than it at the fuzzy task in question, and b) figure out how to help the less-capable set of humans reproduce the judgments of the more-capable set of humans. For example, first fine-tune a coding model to write short functions solv...
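As a rough sketch of how progress on such a project could be scored (my own illustration, not part of the original proposal; the metric and all names are assumptions), assuming the fuzzy task reduces to binary judgments:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    prompt: str
    expert_judgment: bool  # ground truth from the more-capable humans

def sandwiching_gap_closed(
    tasks: List[Task],
    novice_judge: Callable[[str], bool],    # less-capable humans working alone
    assisted_judge: Callable[[str], bool],  # less-capable humans helped by the model
) -> float:
    """Fraction of the novice-expert gap that model assistance closes.

    1.0 = assisted novices fully reproduce the expert judgments;
    0.0 = assistance gives no improvement over novices alone.
    """
    def accuracy(judge: Callable[[str], bool]) -> float:
        return sum(judge(t.prompt) == t.expert_judgment for t in tasks) / len(tasks)

    novice_acc = accuracy(novice_judge)
    assisted_acc = accuracy(assisted_judge)
    if novice_acc == 1.0:
        return 1.0  # no gap left to close
    return (assisted_acc - novice_acc) / (1.0 - novice_acc)

# Toy usage with stand-in judges:
tasks = [Task("does f(n) return the n-th Fibonacci number?", True),
         Task("does g(s) correctly reverse the string s?", False)]
print(sandwiching_gap_closed(
    tasks,
    novice_judge=lambda p: True,                     # novices just guess "yes"
    assisted_judge=lambda p: "reverse" not in p,     # assisted: a better heuristic
))
```

The idea is simply that a successful project should push this number towards 1.0 on tasks where unassisted novices clearly lag behind the experts.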
AI alignment prize suggestion: Introduce AI Safety concepts into the ML community
Artificial Intelligence
Recently, there have been several papers published at top ML conferences that introduced concepts from the AI safety community into the broader ML community. Such papers often define a problem, explain why it matters, sometimes formalise it, often include extensive experiments to showcase the problem, and sometimes offer initial suggestions for remedies. Such papers are useful in several ways: they popularise AI alignment concepts, pave the way for fu...
Refinement of project idea #22, Prediction Markets
Add: "In particular, we'd like to see prediction platforms that do all three of the following: use real money, are very easy to use, and allow very easy creation of markets."
Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles
Effective Altruism
When it comes to the enhancement of productivity, health, and wellbeing, the EA community does not sufficiently utilise division of labour. Currently, community members need to obtain the relevant knowledge and do the related research, e.g. on health issues, themselves. We would like to see dedicated experts on these issues who offer optimal productivity, health, and wellbeing as a service. As a vision, a person working in a high-impact...
Reducing gain-of-function research on potentially pandemic pathogens
Biorisk
Lab outbreaks and other lab accidents with infectious pathogens happen regularly. When such accidents happen in labs that work on gain-of-function research (on potentially pandemic pathogens), the outcome could be catastrophic. At the same time, the usefulness of gain-of-function research seems limited; for example, none of the major technological innovations that helped us fight COVID-19 (vaccines, testing, better treatment, infectious disease modelling) was enabled by gain-of-func...
Cognitive enhancement research and development (nootropics, devices, ...)
Values and Reflective Processes, Economic Growth
Improving people's ability to think has many positive effects on innovation, reflection, and potentially individual happiness. We'd like to see more rigorous research on nootropics, devices that improve cognitive performance, and similar fields. This could target any aspect of thinking ability---such as long- and short-term memory, abstract reasoning, creativity---and any stage of the research and development pipeline---from wet lab research ...
This page, from Rob Wiblin, has been shared on Twitter recently. It contains some advice on a minimal version of preparation (e.g. buy potassium iodide tablets): https://nuclearadvice.org/
I have a similar knee-jerk reaction whenever I read a post "on research", so I wrote up my experience with different types of research: https://forum.effectivealtruism.org/posts/pHnMXaKEstJGcKP2m/different-types-of-research-are-different
(I'm not at all trying to imply that Rose should have caveated more in her post.)
First, check out this post: https://www.lesswrong.com/posts/YDF7XhMThhNfHfim9/ai-safety-needs-great-engineers
Are you talking about adversarial ML/adversarial examples? If so, that is certainly an area that's relevant to long-term AI safety; e.g. many proposals for aligning AGI include some adversarial training. In general, I'd say many areas of ML have some relevance to safety, and it mostly depends on how you pick your research project within an area.
Maybe this? Probably not, though: https://www.lesswrong.com/posts/puYfAEJJomeodeSsi/an-observation-of-vavilov-day
Yes, when we did the calculation, it was something like €2 per day (for ~6-8 hours per day). Still very cheap for a depression treatment :-)
Great to see this initiative; it seems like there is probably valuable work to be done in this area. I would make extra sure not to conflate "EA jobs" with "jobs at EA orgs" (not implying that you do conflate them). The latter just don't have that much capacity in the medium term.
In a way, it's easier to offer specific training for skills that are needed by EA orgs, and maybe this is more tractable. But I'd also be very excited about programmes that equip many people with the resources they need to pursue high-impact careers outside of the few main EA orgs (whatever these resources are: skills? personality traits? money? cultural shift in the EA community?).
I personally have benefitted massively from coaching. E.g. I recently wrote this about one of my coaches:
"Paul is a truly excellent coach. I had 40 sessions with him over the course of 2 years. In these, I made transformational progress on topics as broad as motivation/procrastination, communication/teamwork/leadership, time and project management, and decision-making. Paul's science-based and no-bullshit approach aims at long-term growth, not only fixing this week's issues."
Coaching increased my productivity a lot, but also helped me improve a lot in othe...
Jan's enthusiasm triggered me to start with coaching. I second that the deeper changes were the truly important ones in my personal coaching journey. I started coaching without having particular "issues" but quickly realized how much space there is between current me and best-possible me. Among other things, coaching helped me get into the habit of deliberate practice on minimizing this space. Nowadays I regularly do self-coaching sessions that have many elements of the previous "two brain" coaching sessions.
I just quickly wanted to say that this seems related to impact certificates: https://forum.effectivealtruism.org/tag/certificate-of-impact
There have been a few forum posts on this topic; you can just search the forum (or Google) for "impact certificate" and you will probably find some interesting arguments.
Hi Michael, I wrote this 2 years ago and have not worked in this area since. To give a really good answer, I'd probably have to spend several hours reading the text again. But from memory, I think that most arguments don't rest on the assumption of future agents being total utilitarians. In particular, none of the arguments requires the assumption that future agents will create lots of high-welfare beings. So I guess the same conclusions follow if you assume deontologist future agents, or ones with asymmetric population ethics. This is particularly true if you think that your idealised, reflected preferences would be close to those of the future agents.
I'm not completely sure if I understand what you are looking for, but:
http://epidemicforecasting.org/containment
https://www.bsg.ox.ac.uk/research/research-projects/oxford-covid-19-government-response-tracker
I wrote down some musings about this (including a few relevant links) in appendix 2 here.
I think I overheard Toby saying that the footnotes and appendices were dropped from the audiobook, and that, yes, the footnotes and appendices (which make up 50% of the book) should be the most interesting part for people already familiar with the X-risk literature.
So this is my very personal impression. I might be super wrong about this; that's why I asked this question. Also, I remember liking the main EA Facebook group quite a bit in the past, so maybe I just can't properly relate to how useful the group is for people who are newer to EA thinking.
Currently, I avoid reading the EA Facebook group the same way I avoid reading comments under YouTube videos. Reading the group makes me angry and sad because of the ignorance and aggression displayed in the posts and especially in the comments. I think many co...
I agree that the main EA Facebook group has many low-quality comments which "do not meet the bar for intellectual quality or epistemic standards that we should have EA associated with." That said, it seems that one of the main reasons for this is that the Facebook group contains many more people with very low or tangential involvement with EA. I think we should be pretty cautious about more heavily moderating or trying to exclude the contributions of newer or less involved members.
As an illustration: the 2018 EA Survey found >50% of respondents were memb
...I first thought that "counterproposal passed" meant that a proposal very different from the one you suggested had passed the ballot. But skimming the links, it seems that the counterproposals were actually similar to your original proposals?
Thanks for bringing this to my attention; I modified the title and the respective part of the post.
I didn't have the time to check in with CEA before writing the post, so I had to choose between writing the post as is or not writing it at all. That's why the first line says (in italics): "I’m not entirely sure that there is really no other official source for local group funding. Please correct me in the comments."
I think I could have predicted that this would not be enough to keep people from walking away with a false impression, so I should have chosen a different headline.
That mostly seems to be semantics to me. There could be other things that we are currently "deficient" in and we could figure that out by doing cognitive enhancement research.
As far as I know, the term "cognitive enhancement" is often used in the sense that I used it here, e.g. relating to exercise (we are currently deficient in exercise compared to our ancestors), taking melatonin (we are deficient in melatonin compared to our ancestors), and so on...
Great to hear that several people are involved with making the grant decisions. I also want to stress that my post is not at all intended as a critique of the CBG programme.
I agree that there is more to movement building than local groups and that the comparison to AI safety was not on the right level.
I still stand by my main point and think that it deserves consideration:
My main point is that there is a certain set of movement building efforts for which the CEA community building grant programme seems to be the only option. This set includes local groups and national EA networks but also other things. Some common characteristics might be that these efforts are oriented towards the earlier stages of the movement building fu...
FWIW, I am excited about Future Matters. I have experienced them as having great perspectives on how to effect change via policy and how to make movements successful and effective. I think they have a sufficiently different lens and expertise from many EA orgs that I'm really happy to have them working on these causes. I've also repeatedly donated to them over the years (one of my main donation targets).