
80,000 Hours uses three factors to measure the effectiveness of working on different cause areas: scale, neglectedness, and solvability. But maybe urgency is important, too. Some areas can wait a long time for humans to work on them, for example animal welfare or transhumanism; we could work on these 500 years from now (if we're still alive). But some problems are urgent, like AI safety and biorisk. Should we work more on areas that are more urgent for us to solve?

Answer by Pablo · Mar 26, 2023

Urgency in the sense you seem to have in mind is indeed a relevant consideration in cause prioritization, but I think it should be regarded as a heuristic for finding promising causes rather than as an additional factor in the ITN framework. See BrownHairedEevee's comment for one approach to doing this, proposed by Toby Ord. If you instead wanted to build 'urgency' into the framework, you would need to revise one of the existing factors so that the relevant units are canceled out when the three existing terms and this fourth new term are multiplied together... (read more)
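To make the unit-cancellation point concrete, here is a rough sketch of how the three factors multiply, roughly as 80,000 Hours presents them (a fourth 'urgency' term would need units chosen so the product still comes out in good done per extra dollar):

```latex
% Schematic of the unit cancellation in the ITN framework
\[
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{solvability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
= \frac{\text{good done}}{\text{extra dollar}}
\]
```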

Answer by david_reinstein · 5h
This example seems a bit under-specified; maybe you could flesh it out more? There seem to be a few things going on:
1. Some 'cause areas' (or 'problems') may be relevant now, but only relevant in the future with a certain probability (a rough sketch follows below).
   * e.g., 'animal welfare after the year 2200' is only relevant if humans make it to 2200
   * but 'animal welfare between 2023 and 2200' is relevant as long as we make it until then
2. Some cause areas (e.g., preventing a big meteor from hitting the earth in 2200) will affect the probability that the others are relevant (or for how long they are relevant).
3. Some problems may be deferred to 'solve later' without much cost.
   * Hard to find an example here ... maybe 'preventing suffering from meteor strikes predicted between the years 2200-2300, presuming we can do little to improve the technology to avoid that before, say, 2150'
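To make the first distinction concrete, here is the minimal toy model referenced above; the values and probabilities are purely made up for illustration:

```python
# Toy model (all numbers are made up): the expected value of deferring work on
# a problem is discounted by the probability that the work still matters, and
# is still possible, at the later date.

def value_of_deferring(value_if_done_then: float, p_still_matters: float) -> float:
    """Expected value of doing the work later rather than now."""
    return value_if_done_then * p_still_matters

# Deferrable cause, e.g. 'animal welfare after 2200': valuable later, but only
# in worlds that get there.
print(value_of_deferring(100.0, p_still_matters=0.8))   # 80.0

# Urgent cause, e.g. AI safety or biorisk: if the critical window has closed by
# the time we act, later work is worth little, so deferring forfeits most value.
print(value_of_deferring(100.0, p_still_matters=0.05))  # 5.0
```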
Charlie_Guthmann · 5h
1. You can only press one button per year due to time/resource/etc. constraints. Moreover, you can only press each button once.
2. No, I wasn't.
This is a linkpost for https://arxiv.org/abs/2303.11341

Yonadav Shavit (CS PhD student at Harvard) recently released a paper titled "What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring".

The paper describes a compute monitoring regime that could allow governments to monitor training runs and detect deviations from training run regulations.
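As a toy illustration of the kind of check such a regime might automate (not from the paper; the reporting format, field names, and thresholds below are all hypothetical), a regulator receiving per-run compute totals from data-centre logs could flag runs that exceed a cap or under-declare their compute:

```python
# Toy sketch only: flag training runs whose logged compute exceeds a regulated
# cap, or whose logged compute is well above what the lab declared.
# Field names, the cap, and the tolerance are hypothetical.

from dataclasses import dataclass

@dataclass
class TrainingRunReport:
    run_id: str
    declared_flop: float   # compute the lab declared for this run
    logged_flop: float     # compute inferred from chip / data-centre logs

def flag_violations(reports: list[TrainingRunReport],
                    flop_cap: float,
                    tolerance: float = 0.05) -> list[str]:
    """Return IDs of runs that exceed the cap or under-declare their compute."""
    flagged = []
    for r in reports:
        over_cap = r.logged_flop > flop_cap
        under_declared = r.logged_flop > r.declared_flop * (1 + tolerance)
        if over_cap or under_declared:
            flagged.append(r.run_id)
    return flagged

reports = [
    TrainingRunReport("run-a", declared_flop=1e24, logged_flop=1.02e24),
    TrainingRunReport("run-b", declared_flop=5e24, logged_flop=9e24),
]
print(flag_violations(reports, flop_cap=8e24))  # ['run-b']
```

The paper's actual scheme is about how to verify such figures rather than trusting self-reports; this snippet only conveys the shape of the rule being checked.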

I think it's one of the most detailed public write-ups about compute governance, and I recommend AI governance folks read (or skim) it. A few highlights below (bolding mine). 

Abstract:

As advanced machine learning systems' capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other's compliance...

I think it has potential!

Finally, I think the two approaches require very different sets of skills. My guess is that many more people in the EA community today (which skews young and quantitatively inclined) have skills that are a good fit for evaluation-and-support than have skills that are an equally good fit for design-and-execution. I worry that this skills gap increases the risk that people in the EA community accidentally cause harm while attempting the design-and-execution approach.

This paragraph is a critical component of... (read more)

Matt_Sharp · 7h
I liked this and would encourage you to publish it as a top-level post.

I spent the last month or so writing a blog post that tries to capture what I view as the core argument for why advanced AI could be dangerous. As it turns out, this is not an easy thing to do. I went way over my ideal word count, and I still think there are a few missing details that may be important, and some arguments I might not have fleshed out well enough. In any case, I'm happy to have something I can finally point my friends and family to when they ask me what I do for work, even if it is flawed. I hope you find it similarly helpful.


Summary

Artificial intelligence — which describes machines that have learned to perform tasks typically associated...

Harrison Durland · 18h
I think that another major problem is simply that there is no one-size-fits-all intro guide. I think I saw some guides by Daniel Eth (or someone else?) and a few other people that were denser than the guide you've written here, and yeah, the intro by Kelsey Piper is also quite good. I've wondered if it could be possible/valuable to have a curated list of the best intros, and perhaps even to make a modular system, so people can customize better for specific contexts. (Or maybe having numerous good articles would be valuable if someone eventually wanted to, and could, use them as part of a language model prompt to help write a guide tailored to a specific audience?)
Darren McKee · 3h
Interesting points. I'm working on a book which is not quite a solution to your issue but hopefully goes in the same direction.  And I'm now curious to see that memo :)

Which issue are you referring to? (External credibility?) 

I don’t see a reason to not share the paper, although I will caveat that it definitely was a rushed job. https://docs.google.com/document/d/1ctTGcmbmjJlsTQHWXxQmhMNqtnVFRPz10rfCGTore7g/edit

Note: manually cross-posted from LessWrong. See here for discussion on LW.

Introduction

I recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered. 

Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments.

As an AI "alignment insider" whose current estimate of doom is around 5%,...

It could just be attention. If something would otherwise be too sweet, but some other part of it is salient (coldness, carbonation, bitterness, saltiness), those other parts will take some of your attention away from its sweetness, and it'll seem less sweet.

Vasco Grilo · 4h
Thanks for the post, Quintin! Jaime Sevilla from Epoch (https://epochai.org/) mentioned here (https://hearthisidea.com/episodes/sevilla) that scaling of compute and algorithmic progress are each responsible for about half of the progress. Jaime also mentions that data has not been a bottleneck.

Would an AI governance book that covered the present landscape of gov-related topics (maybe like a book version of the FHI's AI Governance Research Agenda?) be useful?

We're currently at a weird point where there's a lot of interest in AI: news coverage, investment, etc. It feels strange not to be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and I'm aware that most people are highly sceptical of trying not to "politicise" issues like these, but it might be a good idea.

If it was written... (read more)

SUMMARY

In this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments and morally demanding statements, such as “You are morally obligated to give to charity”. The paper is forthcoming in the Journal of Behavioural and Experimental Economics. (If you want an ungated copy, please get in touch with either Ben or Philipp.)

  • We ran two pre-registered experiments with a total sample size of n=3700 participants.
  • We compared a control treatment to a moral argument treatment, and we also varied the level of moral demandingness of the donation request that participants read after the moral argument. We found that the moral argument increased the frequency and amount of donations; however, increasing the level of moral demandingness did not translate into higher or lower giving (a rough illustration of this comparison follows below).
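A rough sketch of the kind of comparison behind the first result (the counts below are hypothetical, and the paper's actual analysis is more involved): compare donation frequency in the control and moral-argument arms with a chi-squared test.

```python
# Hypothetical donation counts by treatment arm (not the paper's data).
# Illustrates the two-group comparison behind "the moral argument increased
# the frequency of donations".

import numpy as np
from scipy.stats import chi2_contingency

#                    donated   did not donate
observed = np.array([[450,     1400],    # control
                     [560,     1290]])   # moral argument

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```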

 

BACKGROUND

The...

Thanks for posting this.

Just to check my understanding: did the participants actually donate their own money? Or were they asked how many fictional units of money they would theoretically donate?

Jack Lewars · 6h
This is my intuition as well - the phrasing of the 'strong demandingness' seemed quite jarring compared to the usual language of donation page copy.

As part of our work at EASE, we created a directory and community of EA service providers so that we can best support our EA organizations (see intro post). In our group meeting today, we identified the service areas in which we currently don’t have enough providers. We would very much like to grow our community of providers so that we can partner with professionals to make sure that all EA orgs' needs are accounted for.
 

The problem we discussed this week was closing the supply gap for important org services that can help all our orgs achieve maximal impact. We find ourselves in an interesting situation: on the one hand, we are convinced there is too little demand for services inside EA (such as marketing and executive coaching...

How did you identify "services that there is a high demand for but not enough supply"? Is it simply based on the "quick look" you did, or is there some other evidence? 

The absence of EA services could simply be evidence of sufficient non-EA services, in which case it's probably worth thinking about the pros and cons of having EA services. 

The most obvious justification seems to be to keep money in the community, and/or to provide services at a relative discount. 

However, by relying on EA services there is a risk of missing out on the highest... (read more)

alex lawsen (previously alexrjl) · 9h
I'm a little confused about what "too little demand" means in the second paragraph. Both of the below seem like they might be the thing you are claiming:
  • There is not yet enough demand for a business only serving EA orgs to be self-sustaining.
  • EA orgs are making a mistake by not wanting to pay for these things even though they would be worth paying for.
I'd separately be curious to see more detail on why your guess at the optimal structure for the provision of the kind of services you are interested in is "EA-specific provider". I'm not confident that it's not, but my low-confidence guess would be that "EA orgs" are not similar enough that "context on how to work with EA orgs" becomes a hugely important factor.
Jonny Spicer · 14h
Could you expand a bit on "software implementation" being a missing service? At first glance I would've thought the Altruistic Agency would provide that, am I mistaken?

We are pleased to introduce Cause Innovation Bootcamp (CIB), a project that aims to train researchers interested in EA while vetting new potential cause areas in Global Health and Development. We achieve this by taking research fellows through a training bootcamp that upskills them on the basics of evidence-based research and then getting them to produce a shallow report (using a standardised template) on a cause area, all whilst being supported by a senior mentor. These reports will then be posted on the EA Forum and sent to relevant organisations for which the research might be of particular interest and might inform their decision-making. Cause areas are selected through a rough prioritisation which helps us identify which ones we think are most likely...

When will the next fellowship take place? I am interested.