I work on AI Grantmaking at Open Philanthropy. Comments here are posted in a personal capacity.
Why do you think superforecasters who were selected specifically for assigning a low probability to AI x-risk are well described as "a bunch of smart people with no particular reason to be biased"?
For the avoidance of doubt, I'm not upset that the supers were selected in this way: it's the whole point of the study, was made very clear in the write-up, and was clear to me as a participant. It's just that "your arguments failed to convince randomly selected superforecasters" and "your arguments failed to convince a group of superforecasters who were specifically selected for confidently disagreeing with you" are very different pieces of evidence.
The smart people were selected for having a good predictive track record on geopolitical questions with resolution times measured in months, a track record equaled or bettered by several* members of the concerned group. I think this is much weaker evidence of forecasting ability on the kinds of questions discussed than you do.
*For what it's worth, I'd expect the skeptical group to do slightly better overall on e.g. non-AI GJP questions over the next 2 years; they do have better forecasting track records as a group on this kind of question. It's just not a stark difference.
The first bullet point of the concerned group summarizing their own position was "non-extinction requires many things to go right, some of which seem unlikely".
This point was notably absent from the sceptics' summary of the concerned position.
Both the sceptics and the concerned agreed that another important point on the concerned side was that it's harder to use base rates for unprecedented events with unclear reference classes.
I think these both provide a much better characterisation of the difference than the quote you're responding to.
I'm not officially part of the AMA but I'm one of the disagreevotes so I'll chime in.
As someone who's only recently started, the vibe this post gives — that it's hard for me to disagree with established wisdom or push the org to do things differently, so my only role is to 'just push out more money along the OP party line' — is just miles away from what I've experienced.
If anything, I think how much ownership I've needed to take for the projects I'm working on has been the biggest challenge of starting the role. It's one that (I hope) I'm rising to, but it's hard!
In terms of how open OP is to steering from within, it seems worth distinguishing 'how likely is a random junior person to substantially shift the worldview of the org', and 'what would the experience of that person be like if they tried to'. Luke has, from before I had an offer, repeatedly demonstrated that he wants and values my disagreement in how he reacts to it and acts on it, and it's something I really appreciate about his management.
I wouldn't expect the attitude of the team to have shifted much in my absence. I learned a huge amount from Michelle, who's still leading the team, especially about management. To the extent you were impressed with my answers, I think she should take a large amount of the credit.
On feedback specifically, I've retained a small (voluntary) advisory role at 80k, and continue to give feedback as part of that, though I also think that the advisors have been deliberately giving more to each other.
The work I mentioned on how we make introductions to others and track the effects of those, including collaborating with CH, was passed on to someone else a couple of months before I left, and in my view the robustness of those processes has improved substantially as a result.
This seems extremely uncharitable. It's impossible for every good thing to be the top priority, and I really dislike the rhetorical move of criticising someone who says their top priority is X for not caring at all about Y.
In the post you're replying to, Chana makes the (in my view) virtuous move of actually being transparent about what CH's top priorities are, a move which I think is unfortunately rare because of dynamics like this. You've chosen to interpret this as 'a decision not to have' [other nice things that you want]. You apparently considered the possibility that the thinking here isn't actually extremely shallow, but then dismissed the possibility of anyone on the team being capable of non-shallow thinking anyway, for reasons you haven't specified.
Editing this in rather than continuing a thread, as I don't feel able to do protracted discussion at the moment:
Inspect is open-source, and should be exactly what you're looking for given your stated interest in METR.