The smart people were selected for having a good predictive track record on geopolitical questions with resolution times measured in months, a track record equaled or bettered by several* members of the concerned group. I think this is much less strong evidence of forecasting ability on the kinds of question discussed than you do.
*For what it's worth, I'd expect the skeptical group to do slightly better overall on e.g. non-AI GJP questions over the next 2 years; they do have better forecasting track records as a group on this kind of question, it's just not a stark difference.
I agree this is quite different from the standard GJ forecasting problem. And that GJ forecasters* are primarily selected for and experienced with forecasting quite different sorts of questions.
But my claim is not "trust them, they are well-calibrated on this". It's more "if your reason for thinking X will happen is a complex multi-stage argument, and a bunch of smart people with no particular reason to be biased, who are also selected for being careful and rational on at least some complicated emotive stuff, spend hours and hours on your argument an...
The first bullet point of the concerned group summarizing their own position was "non-extinction requires many things to go right, some of which seem unlikely".
This point was notably absent from the sceptics' summary of the concerned position.
Both the sceptics and the concerned group agreed that a different important point on the concerned side was that it's harder to use base rates for unprecedented events with unclear reference classes.
I think these both provide a much better characterisation of the difference than the quote you're responding to.
I'm still saving for retirement in various ways, including by making pension contributions.
If you're working on GCR reduction, you can always consider your pension savings a performance bonus for good work :)
I'm not officially part of the AMA but I'm one of the disagreevotes so I'll chime in.
As someone who's only recently started, the vibe this post gives (that it's hard for me to disagree with established wisdom or push the org to do things differently, so my only role is to 'just push out more money along the OP party line') is just miles away from what I've experienced.
If anything, I think how much ownership I've needed to take for the projects I'm working on has been the biggest challenge of starting the role. It's one that (I hope) I'm rising to...
I think 1 unfortunately ends up not being true in the intensive farming case. Lots of things are spread by close enough contact that even intense UVC wouldn't do much (and it would be really expensive).
I wouldn't expect the attitude of the team to have shifted much in my absence. I learned a huge amount from Michelle, who's still leading the team, especially about management. To the extent you were impressed with my answers, I think she should take a large amount of the credit.
On feedback specifically, I've retained a small (voluntary) advisory role at 80k, and continue to give feedback as part of that, though I also think that the advisors have been deliberately giving more to each other.
The work I mentioned on how we make introductions to others and tr...
This seems extremely uncharitable. It's impossible for every good thing to be the top priority, and I really dislike the rhetorical move of criticising someone who says their top priority is X for not caring at all about Y.
In the post you're replying to, Chana makes the (in my view) virtuous move of actually being transparent about what CH's top priorities are, a move which I think is unfortunately rare because of dynamics like this. You've chosen to interpret this as 'a decision not to have' [other nice things that you want], apparently realised that...
I'm fairly disappointed with how much discussion I've seen recently that either doesn't bother to engage with ways in which the poster might be wrong, or only engages with weak versions. It's possible that the "debate" format of the last week has made this worse, though not all of the things I've seen were directly part of that.
I think that not engaging at all, and merely presenting one side while saying that's what you're doing, seems better than presenting and responding to counterarguments (but only the weak ones), which still seems better than strawmanning arguments that someone else has presented.
Thank you for all of your work organizing the event, communicating about it, and answering people's questions. None of these seem like easy tasks!
I'm no longer on the team but my hot take here is that a good bet is just going to be trying really hard to work out which tools you can use to accelerate/automate/improve your work. This interview with Riley Goodside might be interesting to listen to, not only for tips on how to get more out of AI tools, but also to hear about how the work he does in prompting those tools has rapidly changed, and how he's stayed on the frontier because the things he learned have transferred.
Hey, it's not a direct answer but various parts of my recent discussion with Luisa cover aspects of this concern (it's one that frequently came up in some form or other when I was advising). In particular, I'd recommend skimming the sections on 'trying to have an impact right now', 'needing to work on AI immediately', and 'ignoring conventional career wisdom'.
It's not a full answer but I think the section of my discussion with Luisa Rodriguez on 'not trying hard enough to fail' might be interesting to read/listen to if you're wondering about this.
Responding here to parts of the third point not covered by "yep, not everyone needs identical advice, writing for a big audience is hard" (same caveats as the other reply):
..."And for years it just meant I ended up being in a role for a bit, and someone suggested I apply for another one. In some cases, I got those roles, and then I’d switch because of a bunch of these biases, and then spent very little time getting actually very good at one thing because I’ve done it for years or something." - are you sure this is actually bad? If each time you moved to somet
I don't think it's worth me going back and forth on specific details, especially as I'm not on the web team (or even still at 80k), but these proposals are different to the first thing you suggested. Without taking a position on whether this structure would overall be an improvement, it's obviously not the case that just having different sections for different possible users ensures that everyone gets the advice they need.
For what it's worth, one of the main motivations for this being an after-hours episode, which was promoted on the EA forum and my twitte...
[I left 80k ~a month ago, and am writing this in a personal capacity, though I showed a draft of this answer to Michelle (who runs the team) before posting and she agrees it provides an accurate representation. Before I left, I was line-managing the 4 advisors, two of whom I also hired.]
Hey, I wanted to chime in with a couple of thoughts on your followup, and then answer the first question (what mechanisms do we have in place to prevent this). Most of the thoughts on the followup can be summarised by ‘yeah, I think doing advising well is really hard’.
...Advis
Thanks for asking these! Quick reaction to the first couple of questions; I'll get to the rest later if I can (personal opinions, I haven't worked on the web team, no longer at 80k etc. etc.):
I don't think it's possible to write a single page that gives the right message to every user. Having looked at the pressing problems page, the second paragraph visible on that page is entirely caveat. It also links to an FAQ, multiple parts of which directly talk about whether people should just take the rankings as given. When you then click through to the...
[not on the LTFF and also not speaking for Open Phil, just giving a personal take]
A few reactions:
Can confirm that:
"sr EAs [not taking someone seriously if they were] sloppy in their justification for agreeing with them"
sounds right based on my experience being on both sides of the "meeting senior EAs" equation at various times.
(I don't think I've met Quinn, so this isn't a comment on anyone's impression of them or their reasoning)
So there's now a bunch of speculation in the comments here about what might have caused me and others to criticise this post.
I think this speculation puts me (and, FWIW, HLI) in a pretty uncomfortable spot for reasons that I don't think are obvious, so I've tried to articulate some of them:
- There are many reasons people might want to discuss others' claims but not accuse them of motivated reasoning/deliberately being deceptive/other bad faith stuff, including (but importantly not limited to):
a) not thinking that the mistake (or any other behav...
My comment wasn't about whether there are any positives in using WELLBYs (I think there are), it was about whether I thought that sentence and set of links gave an accurate impression. It sounds like you agree that it didn't, given you've changed the wording and removed one of the links. Thanks for updating it.
I think there's room to include a little more context around the quote from TLYCS.
...
In short, we do not seek to duplicate the excellent work of other charity evaluators. Our approach is meant to complement that work, in order to expand the list o
[Speaking for myself here]
I also thought this claim by HLI was misleading. I clicked several of the links and don't think James is the only person being misrepresented. I also don't think this is all the "major actors in EA's GHW space": TLYCS, for example, meet reasonable definitions of "major", but their methodology makes no mention of WELLBYs.
I find this surprising, given that I've heard numbers more like $100-200/hour claimed by people considerably more senior than top-uni community builders (and who are working in similar fields/with similar goals).
(I'm straight up guessing, and would be keen for an answer from someone familiar with this kind of study)
This also confused me. Skimming the study, I think they're calculating efficacy from something like how long it takes people to get malaria after the booster, which makes sense because you can get it more than once. Simplifying a lot (and still guessing), I think this means that if e.g. on average people get malaria once a week, and you reduce it to once every 10 weeks, you could say this has a 90% efficacy, even though if you looked at how many people ...
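To make my guess concrete, here's a toy version of the arithmetic I have in mind, with invented numbers (the study may well define efficacy differently):

```python
import math

# Hypothetical infection rates (infections per person per week) - invented numbers
control_rate = 1.0   # unvaccinated: malaria on average once a week
vaccine_rate = 0.1   # vaccinated: malaria on average once every 10 weeks

# Rate-based efficacy: the reduction in how often people get infected
efficacy = 1 - vaccine_rate / control_rate
print(f"rate-based efficacy: {efficacy:.0%}")  # 90%

# But over a long enough follow-up, almost everyone in both arms gets malaria
# at least once, so "share of people ever infected" would show ~0% efficacy.
weeks = 52
p_control = 1 - math.exp(-control_rate * weeks)
p_vaccine = 1 - math.exp(-vaccine_rate * weeks)
print(f"ever infected within a year: control {p_control:.0%}, vaccine {p_vaccine:.0%}")
```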
This is a useful consideration to point out, thanks. I push back a bit below on some specifics, but this effect is definitely one I'd want to include if I do end up carving out time to add a bunch more factors to the model.
I don't think having skipped the neglectedness considerations you mention is enough to call the specific example you quote misleading though, as it's very far from the only thing I skipped, and many of the other things point the other way. Some other things that were skipped:
Work after AGI likely isn't worth 0, especially with e.g. Me
Most podcast apps let you subscribe to an RSS feed, and an RSS feed of the audio is available on the site
I'm a little confused about what "too little demand" means in the second paragraph. Both of the below seem like they might be the thing you are claiming:
I'd separately be curious to see more detail on why your guess at the optimal structure for the provision of the kind of services you are interested in is "EA-specific provider". I'm not confident that it...
I think "different timelines don't change the EV of different options very much" plus "personal fit considerations can change the EV of a PhD by a ton" does end up resulting in an argument for the PhD decision not depending much on timelines. I think that you're mostly disagreeing with the first claim, but I'm not entirely sure.
In terms of your point about optimal allocation, my guess is that we disagree to some extent about how much the optimal allocation has changed, but that the much more important disagreement is about whether some kind of centrally pl...
(I'm excited to think more about the rest of the ideas in this post and might have further comments when I do)
Commenting briefly to endorse the description of my course as an MVP. I'd love for someone to make a better-produced version, and am happy for people to use any ideas from it that they think would be useful in producing that better version.
[context: I'm one of the advisors, and manage some of the others, but am describing my individual attitude below]
FWIW I don't think the balance you indicated is that tricky, and think that conceiving of what I'm doing when I speak to people as 'charismatic persuasion' would be a big mistake for me to make. I try to:
Epistemic status: I've thought about both how people should think about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar.
I read this comment as implying that HLI's reasoning transparency is currently better than GiveWell's, and think that this is both:
- False.
- Not the sort of thing it is reasonable to bring up before immediately hiding behind "that's just my opinion and I don't want to get into a debate about it here".
I therefore downvoted, as well as disagree voting. I don't think downvotes always need comments, but this one seemed worth explaining as the comment contains several statements people might reasonably disagree with.
I'm keen to listen to this, thanks for recording it! Are you planning to make the podcast available on other platforms (Stitcher, Google Podcasts, etc.)? I haven't found it on those.
whether you have a 5-10 year timeline or a 15-20 year timeline
Something that I'd like this post to address that it doesn't is that to have "a timeline" rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it's not clear to me that they reliably differentiate between these) ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.
I think a shift of 2 years in the median of your distribution looks like a shift of only a...
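To illustrate the kind of thing I mean with made-up distributions (a sketch, not anyone's actual forecast):

```python
import math

def lognormal_cdf(x, median, sigma):
    """CDF of a lognormal distribution parameterised by its median."""
    mu = math.log(median)
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# Two wide "years until transformative AI" distributions with medians 2 years apart
sigma = 1.0  # a lot of uncertainty in log-space (invented)
for median_years in (15, 13):
    p5 = lognormal_cdf(5, median_years, sigma)
    print(f"median {median_years} years -> P(within 5 years) = {p5:.0%}")

# ~14% vs ~17%: shifting the median by 2 years barely moves the probability
# mass on the next few years, which is often what actually drives decisions.
```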
Huh, I took 'confidently' to mean you'd be willing to offer much better odds than 1:1.
I'm going to try to stop paying so much attention to the story while it unfolds, which means I'm retracting my interest in betting. Feel free to call this a win (as with Joel).
No worries on the acknowledgement front (though I'm glad you found chatting helpful)!
One failure mode of the filtering idea is that the AGI corporation does not use it because of the alignment tax, or because they don't want to admit that they are creating something that is potentially dangerous
I think it's several orders of magnitude easier to get AGI corporations to use filtered safe data than to agree to stop using any electronic communication for safety research. Why is it appropriate to consider the alignment tax of "train on data that someone h...
the two-player zero-sum game can be a decent model of the by-default adversarial interaction
I think this is the key crux between you and the several people who've brought up points 1-3. The model you're operating with here is roughly that the alignment game we need to play goes something like this:
1. Train an unaligned ASI
2. Apply "alignment technique"
3. ASI either 'dodges' the technique (having anticipated it), or fails to dodge the technique and is now aligned.
I think most of the other people thinking about alignment are trying to prevent step...
If you haven't already seen them, you might find some of the posts tagged "Task Y" interesting to read.
EA fellowships and summer programmes should have (possibly more competitive) "early entry" cohorts with deadlines in September/October, where if you apply by then you get a guaranteed place, funding, and maybe some extra perk to encourage it (this could literally just be a Slack with the other participants).
Consulting, finance, etc. run really early recruitment processes; people feel pressure to accept those offers in case they don't get anything else, and then don't want to back out of them.
That last comment seems very far from the original post which claimed
We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL based AI route.
If we don't have a biological account of how BNNs can represent and perform symbolic manipulation, why do we have reason to believe that we know ANNs can't?
Without an ability to point to the difference, this isn't anything close to a reductio; it's just saying "yeah I don't buy it dude, I don't reckon AI will be that good"
Could you mechanistically explain how any of the 'very many ways' biological neurons are different means that the capacity for symbol manipulation is unique to them?
They're obviously very different, but what I don't think you've done is show that the differences are responsible for the impossibility of symbolic manipulation in artificial neural networks.
I live in London and have quite a lot of EA and non-EA friends/colleagues/acquaintances, and my impression is that group houses "by choice" are much more common among the EAs. It's noteworthy that group houses are common among students and lower paid/early stage working professionals for financial reasons though.
If you agree that bundles of biological neurons can have the capacity for symbolic thought, and that non-classical systems can create something symbolic, I don't understand why you think anything you've said shows that DL cannot scale to AGI, even granting your unstated assumption that symbolic thought is necessary for AGI.
(I think that last assumption is false, but don't think it's a crux here so I'm keen to grant it for now, and only discuss once we've cleared up the other thing)
Why do you think superforecasters who were selected specifically for assigning a low probability to AI x-risk are well described as "a bunch of smart people with no particular reason to be biased"?
For the avoidance of doubt, I'm not upset that the supers were selected in this way; it's the whole point of the study, made very clear in the write-up, and was clear to me as a participant. It's just that "your arguments failed to convince randomly selected superforecasters" and "your arguments failed to convince a group of superforecasters who were specifically selected for confidently disagreeing with you" are very different pieces of evidence.
One small clarification: the skeptical group was not all superforecasters. There were two domain experts as well. I was one of them.
I'm sympathetic to David's point here. Even though the skeptic camp was selected for their skepticism, I think we still get some information from the fact that many hours of research and debate didn't move their opinions. I think there are plausible alternative worlds where the skeptics come in with low probabilities (by construction), but update upward by a few points after deeper engagement reveals holes in their early thinking.