80,000 Hours uses three factors to measure the effectiveness of working on different cause areas: scale, neglectedness, and solvability. But maybe urgency is important, too. Some areas can wait a long time for humans to work on them (for example, animal welfare or transhumanism; we could work on these 500 years from now, if we're still alive). But some problems are urgent, like AI safety and biorisk. Should we work more on areas that are more urgent for us to solve?
Yonadav Shavit (CS PhD student at Harvard) recently released a paper titled What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring.
The paper describes a compute monitoring regime that could allow governments to monitor training runs and detect deviations from training run regulations.
I think it's one of the most detailed public write-ups about compute governance, and I recommend AI governance folks read (or skim) it. A few highlights below (bolding mine).
As advanced machine learning systems' capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other's compliance...
I think it has potential!
Finally, I think the two approaches require very different sets of skills. My guess is that there are many more people in the EA community today (which skews young and quantitatively inclined) with skills well suited to evaluation-and-support than with skills equally well suited to design-and-execution. I worry that this skills gap increases the risk that people in the EA community accidentally cause harm while attempting the design-and-execution approach.
This paragraph is a critical component of...
I spent the last month or so trying to write a blog post that tries to capture what I view as the core argument for why advanced AI could be dangerous. As it turns out, this is not an easy thing to do. I went way over my ideal word count, and I still think there are a few missing details that may be important, and some arguments I might not have fleshed out well enough. In any case, I'm happy to have something I can finally point my friends and family to when they ask me what I do for work, even if it is flawed. I hope you find it similarly helpful.
Artificial intelligence — which describes machines that have learned to perform tasks typically associated...
Which issue are you referring to? (External credibility?)
I don’t see a reason to not share the paper, although I will caveat that it definitely was a rushed job. https://docs.google.com/document/d/1ctTGcmbmjJlsTQHWXxQmhMNqtnVFRPz10rfCGTore7g/edit
Note: manually cross-posted from LessWrong. See here for discussion on LW.
I recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered.
Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.
As an AI "alignment insider" whose current estimate of doom is around 5%,...
It could just be attention. If something would otherwise be too sweet, but some other part of it is salient (coldness, carbonation, bitterness, saltiness), those other parts will take some of your attention away from its sweetness, and it'll seem less sweet.
Would an AI governance book that covered the present landscape of gov-related topics (maybe like a book version of the FHI's AI Governance Research Agenda?) be useful?
We're currently at a weird point where there's a lot of interest in AI: news coverage, investment, etc. It feels weird not to be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and that many people are understandably wary of "politicising" issues like these, but it might be a good idea.
If it was written...
In this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments and morally demanding statements, such as “You are morally obligated to give to charity”. The paper is forthcoming in the Journal of Behavioural and Experimental Economics. (If you want an ungated copy, please get in touch with either Ben or Philipp.)
Thanks for posting this.
Just to check my understanding: did the participants actually donate their own money? Or were they asked how many fictional units of money they would theoretically donate?
As part of our work at EASE, we created a directory and community of EA service providers so that we can best support EA organizations (see intro post). In our group meeting today, we identified the service areas in which we currently don't have enough providers. We would very much like to grow our community of providers so that we can partner with professionals to make sure that all EA orgs' needs are accounted for.
The problem we discussed this week was closing the supply gap for important org services that can help all our orgs achieve maximal impact. We find ourselves in an interesting situation: on the one hand, we are convinced there is too little demand for services inside EA (such as marketing and executive coaching...
How did you identify "services that there is a high demand for but not enough supply"? Is it simply based on the "quick look" you did, or is there some other evidence?
The absence of EA services could simply be evidence of sufficient non-EA services, in which case it's probably worth thinking about the pros and cons of having EA services.
The most obvious justification seems to be to keep money in the community, and/or to provide services at a relative discount.
However, by relying on EA services there is a risk of missing out on the highest...
We are pleased to introduce Cause Innovation Bootcamp (CIB), a project that aims to train researchers interested in EA while vetting new potential cause areas in Global Health and Development. We achieve this by taking research fellows through a training bootcamp that upskills them in the basics of evidence-based research and then having them produce a shallow report (using a standardised template) on a cause area, all whilst being supported by a senior mentor. These reports will then be posted on the EA Forum and sent to relevant organisations for whom the research might be of particular interest and whose decision-making it might inform. Cause areas are selected through a rough prioritisation that helps us identify which ones we think are most likely...
When will the next fellowship take place? I'm interested.
Urgency in the sense you seem to have in mind is indeed a relevant consideration in cause prioritization, but I think it should be regarded as a heuristic for finding promising causes rather than as an additional factor in the ITN framework. See BrownHairedEevee's comment for one approach to doing this, proposed by Toby Ord. If you instead wanted to build 'urgency' into the framework, you would need to revise one of the existing factors so that the relevant units are canceled out when the three existing terms and this fourth new term are multiplied together.
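To make the unit-cancellation point concrete, here is a sketch of the standard ITN decomposition (roughly following the formulation 80,000 Hours uses; the exact unit labels are my own paraphrase, not a quote from them):

```latex
\underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{solvability}}
\times
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
=
\frac{\text{good done}}{\text{extra dollar}}
```

Each factor's denominator cancels against the next factor's numerator, so the product has clean units of good done per extra dollar. A fourth "urgency" term would need units that cancel against these (or be dimensionless), which in practice usually means folding urgency into the "good done" numerator of scale rather than multiplying in a new factor.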