Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism.
I agree with this. Failing that, I feel strongly that CEA should change its name. There are costs to having a leader / manager / "coordinator-in-chief", and costs to not having such an entity; but the worst of both worlds is to have ambiguity about whether a person or org is filling that role. Then you end up with situations like "a bunch of EAs sit on their hands because they expect so...
Update Apr. 15: I talked to a CEA employee and got some more context on why CEA hasn't done an SBF investigation and postmortem. In addition to the 'this might be really difficult and it might not be very useful' concern, they mentioned that the Charity Commission investigation into EV UK is still ongoing a year and a half later. (Google suggests that statutory inquiries by the Charity Commission take an average of 1.2 years to complete, so the super long wait here is sadly normal.)
Although the Commission has said "there is no indication of wron...
The pendency of the CC statutory inquiry would explain hesitancy on the part of EVF UK or its projects to conduct or cooperate with an "EA" inquiry. A third-party inquiry is unlikely to be protected by any sort of privilege, and the CC may have means to require or persuade EVF UK to turn over anything it produced in connection with a third-party "EA" inquiry. However, it doesn't seem that this should be an impediment to proceeding with other parts of an "EA inquiry," especially to the extent this would be done outside the UK.
However, in the abstract -- if ...
I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which, whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures.
I agree with this.
...[...] I feel like we still want to know if anyone in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside". That's a signal someone is a bad leader, in my view.
"Just focus on the arguments" isn't a decision-making algorithm, but I think informal processes like "just talk about it and individually do what makes sense" perform better than rigid algorithms in cases like this.
If we want something more formal, I tend to prefer approaches like "delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments" or "subsidize a prediction market to resolve the question" over "just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are".
Knowing what people think is useful, especially if it's a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.)
Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn't want to assume that something's a good idea just because most EAs agree with it; I'd rather focus on the arguments for and against.
4/4 Update: An EA who was involved in EA's early response to the FTX disaster has given me their take on why there hasn't yet been an investigation. They think EA leaders' hesitancy (at least, the ones they talked to a lot at the time) had "little to do with a desire to protect the reputation of EA or of individual EAs", and had more to do with things like "general time constraints and various exogenous logistical difficulties".
See this comment for a lot more details, and a short response from Habryka.
Also, some corrections: I said that "there was a narrow investigati...
Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:
the choice is like "should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?"
I'm not the person quoted, but I agree with this part, and some of the reasons for why I expect the results of an investigation like this to be boring aren’t based on any private or confidential information, so perhaps worth sharing.
One key reason: I think rumor mills are not very effective fraud detection mechanisms.
(This seems almost definitionally true: if something w...
(I'm going to wrap up a few disparate threads together here, and this will probably be my last comment on this post, ~modulo a reply for clarification's sake. Happy to discuss further with you Rob or anyone via DMs/Forum Dialogue/whatever)
(to Rob & Oli - there is a lot of inferential distance between us and that's ok, the world is wide enough to handle that! I don't mean to come off as rude/hostile and apologies if I did get the tone wrong)
Thanks for the update Rob, I appreciate you tying this information together in a single place. And yet... I can't help b...
Fair! I definitely don't want to imply that there's been zero reflection or inquiry in the wake of FTX. I just think "what actually happened within EA networks, and could we have done better with different processes or norms?" is a really large and central piece of the puzzle.
The issue is that there are degrees of naiveness. Oliver's view, as I understand it, is that there are at least three positions:
I mean "done enough" in the sense that 80K is at fault for falling short, not in the sense that they should necessarily stop sharing that message.
I haven't heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I'll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.
Obviously this argument may not be compelling compared to what an actual proponent would say, and I'd guess I'm missing at least one key consideration here, so treat this as a mere conversation-starter.
Hypothetical EA: Why isn't EV's 2023 investigation enough? You want us to investigate; well, we investigated.
Rob: Th...
I think I agree with Hypothetical EA that we basically know the broad picture.
I guess I'm just... satisfied with that? You say:
But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded.
.. w...
It now turns out that this has changed into podcasts, which is better than nothing, but doesn't give room for conversation or accountability.
Formatting error; this is something Siebe is saying, not part of the Will quotation.
"Better vet risks from funders/leaders, have lower tolerance for bad behavior, and remove people responsible for the crisis from leadership roles."
I don't think any such removals have happened, and my sense is tolerance of bad behavior of the type that seems to me most responsible for FTX has gone up (in particular, heavy optimization for optics and a large tolerance for divergences between public narratives and what is actually going on behind the scenes).
I'd like to single out this part of your comment for extra discussion. On the Sam Harris podcast, Will M...
So, I think it's clear that a lot of leadership turnover has happened. However, my sense is that the kind of leadership turnover that has occurred is anti-correlated with what I would consider good. Most importantly, it seems to me that the people in EA leadership that I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn't live up to their ethical standards, or because they burned out trying to effect change and this recent period has been very stressful (or burned out for other reasons, unrelated to tryi...
I'm a pretty big fan of Nate's public write-up on his relationship to Sam and FTX. Though, sure, this is going to be scarier for people who were way more involved and who did stuff that twitter mobs can more easily get mad about.
This is part of why the main thing I'm asking for is a professional investigation, not a tell-all blog post by every person involved in this mess (though the latter are great too). An investigation can discover useful facts and share them privately, and its public write-up can accurately convey the broad strokes of what happened, a...
Here's a post with me asking the question flat out: Why hasn't EA done an SBF investigation and postmortem?
This seems like an incredibly obvious first step from my perspective, not something I'd have expected a community like EA to be dragging its heels on years after the fact.
We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response?
I've made a first attempt at this here: To what extent & how did EA indirectly contribute to financial crime - and what can be done now? One attempt at a review
I'd highlight that I found taking quite a structured approach helpful: breaking things down chronologically, and trying to answer specific questions like what's the mechanism, how much did this contribute, and what's a concrete recommendation?
..."I’ll suggest a framework for how that broader review might be conducted: for each topic the review could:
- Establish the details of EA involvement,
- Ind
We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response?
I disagree with this framing.
Something that I believe I got wrong pre-FTX was base rates/priors: I had assumed that if a company was making billions of dollars, had received investment from top-tier firms, complied with a bunch of regulat...
I'm against doing further investigation. I expressed why I think we have already spent too much time on this here.
I also think your comments are falling into the trap of referring to "EA" like it was an entity. Who specifically should do an investigation, and who specifically should they be investigating? (This less monolithic view of EA is also part of why I don't feel as bothered by the whole thing: so maybe some people in "senior" positions made some bad judgement calls about Sam. They should maybe feel bad. I'm not sure we should feel much collecti...
Overall I feel relatively supportive of more investigation and (especially) postmortem work. I also don't fully understand why more wasn't shared from the EV investigation[1].
However, I think it's all a bit more fraught and less obvious than you imply. The main reasons are:
To be fair, this could trigger lawsuits. I hope someone is reflecting on FTX, but I wouldn't expect anyone to be keen on discussing their own involvement with FTX publicly and in great detail.
Not to state the obvious, but the 'criticism of EA' posts didn't pose a real risk to the power structure. It is uhhhhh quite common for 'criticism' to be a lot more encouraged/tolerated when it isn't threatening.
4/2 Update: A former board member of Effective Ventures US, Rebecca Kagan, has shared that she resigned from the board in protest last year, evidently in part because of various core EAs' resistance to there being any investigation into what happened with Sam Bankman-Fried. "These mistakes make me very concerned about the amount of harm EA might do in the future."
Oliver Habryka says that I'm correct that EA still hasn't yet conducted any sort of investigation about what happened re SBF/FTX, beyond the narrow investigation into whether EV was facing legal r...
I'd add that I think 80K has done an awful lot to communicate "EA isn't just about earning-to-give" over the years. At some point it surely has to be the case that they've done enough. This is part of why I want to distinguish the question "did we play a causal role here?" from questions like "did we foreseeably screw up?" and "should we do things differently going forward?".
Yes, there are many indirect ways EA might have had a causal impact here, including by influencing SBF's ideology, funneling certain kinds of people to FTX, improving SBF's reputation with funders, etc. Not all of these should necessarily cause EAs to hand-wring or soul-search — sometimes you can do all the right things and still contribute to a rare disaster by sheer chance. But a disaster like this is a good opportunity to double-check whether we're living up to our own principles in practice, and also to double-check whether our principles and strategies are as beneficial as they sounded on paper.
It directly claims that the investigation was part of an "internal reflection process" and "institutional reform", and I have been shared on documents by CEA employees where the legal investigation was explicitly called out as not being helpful for facilitating a reflection process and institutional reform.
This seems like the headline claim to me. EAs should not be claiming false things in the Washington Post, of all things.
Every aspect of that summary of how MIRI's strategy has shifted seems misleading or inaccurate to me.
I find myself agreeing with Nora on temporary pauses - and I don't really understand the model by which a 6-month, or a 2-year, pause helps, unless you think we're less than 6 months, or 2-years, from doom.
This doesn't make a lot of sense to me. If we're 3 years away from doom, I should oppose a 2-year pause because of the risk that (a) it might not work and (b) it will make progress more discontinuous?
In real life, if smarter-than-human AI is coming that soon then we're almost certainly dead. More discontinuity implies more alignment difficulty, but...
Daniel Wyrzykowski replies:
...The contract is signed for when bad things and disagreements happen, not for when everything is going well. In my opinion, “I had no contract and everything was good” is not as good an example as “we didn’t have a contract, had a major disagreement, and everything still worked out” would be.
Even though I hate bureaucracy and admin work and I prefer to skip as much as reasonable to move faster, my default is to have a written agreement, especially if working with a given person/org for the first time. Generally, the weaker party should
Elizabeth van Nostrand replies:
...I feel like people are talking about written records like it's a huge headache, but they don't need to be. When freelancing I often negotiate verbally, then write an email with terms to the client, who can confirm or correct them. I don't start work until they've confirmed acceptance of some set of terms. This has enough legal significance that it lowers my business insurance rates, and takes seconds if people are genuinely on the same page.
What my lawyer parent taught me was that contracts can't prevent people f
Duncan Sabien replies:
...[...]
While I think Linda's experience is valid, and probably more representative than mine, I want to balance it by pointing out that I deeply, deeply, deeply regret taking a(n explicit, unambiguous, crystal clear) verbal agreement, and not having a signed contract, with an org pretty central to the EA and rationality communities. As a result of having the-kind-of-trust that Linda describes above, I got overtly fucked over to the tune of many thousands of dollars and many months of misery and confusion and alienation, and all of
Cross-posting Linda Linsefors' take from LessWrong:
...I have worked without legal contracts for people in EA I trust, and it has worked well.
Even if all the accusation of Nonlinear is true, I still have pretty high trust for people in EA or LW circles, such that I would probably agree to work with no formal contract again.
The reason I trust people in my ingroup is that if either of us screw over the other person, I expect the victim to tell their friends, which would ruin the reputation of the wrongdoer. For this reason both people have strong incentive to ac
I'd be happy to talk with you way more about rationalists' integrity fastidiousness, since (a) I'd expect this to feel less scary if you have a clearer picture of rats' norms, and (b) talking about it would give you a chance to talk me out of those norms (which I'd then want to try to transmit to the other rats), and (c) if you ended up liking some of the norms then that might address the problem from the other direction.
In your previous comment you said "it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation", "That’s a h...
Emerson approaches me to ask if I can set up the trip. I tell him I really need the vacation day for myself. He says something like “but organizing stuff is fun for you!”.
[...]
She kept insisting that I’m saying that because I’m being silly and worry too much and that buying weed is really easy, everybody does it.
😬 There's a ton of awful stuff here, but these two parts really jumped out at me. Trying to push past someone's boundaries by imposing a narrative about the type of person they are ('but you're the type of person who loves doing X!' 'you're only s...
Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation.
I think it's totally normal and reasonable to care about your reputation, and there are tons of actions someone could take for reputational reasons (e.g., "I'll wash the dishes so my roommate doesn't think I'm a slob", or "I'll tweet about my latest paper because I'm proud of it and I want people to see what I accompl...
I'm not sure I've imagined a realistic justifying scenario yet, but in my experience it's very easy to just fail to think of an example even though one exists. (Especially when I'm baking in some assumptions without realizing I'm baking them in.)
I do think the phrase is a bit childish and lacks some rigor
I think the phrase is imprecise, relative to phrases like "prevent human extinction" or "maximize the probability that the reachable universe ends up colonized by happy flourishing civilizations". But most of those phrases are long-winded, and it often doesn't matter in conversation exactly which version of "saving the world" you have in mind.
(Though it does matter, if you're working on existential risk, that people know you're being relatively literal and serious. A lot of people talk about "savi...
I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong.
Yep, I think this is a big problem.
More generally, I think a lot of EAs give lip service to the value of people trying weird new ambitious things, "adopt a hits-based approach", "if you're never failing then you're playing it too safe", etc.; but then we harshly punish visible failures, especially ones that are the least bit weird. In cases like those, I think...
Maybe I’m wrong— I really don’t know, and there have been a lot of “I don’t know” kind of incidents around Nonlinear, which does give me pause— but it doesn’t seem obviously unethical to me for Nonlinear to try to protect its reputation. That’s a huge rationalist no-no, to try to protect a narrative, or to try to affect what another person says about you, but I see the text where Kat is saying she could ruin Alice’s reputation as just a response to Alice’s threat to ruin Nonlinear’s reputation. What would you have thought if Nonlinear just shared, without ...
Can you give examples of EAs harshly punishing visible failures that weren't matters of genuine unethical conduct? I can think of some pretty big visible failures that didn't lead to any significant backlash (and actually get held up as positive examples of orgs taking responsibility). For example, Evidence Action discovering that No Lean Season didn't work and terminating it, or GiveDirectly's recent fraud problems after suspending some of their standard processes to get out money in a war zone. Maybe people have different standards for failure in longtermist/meta EA stuff?
I was also surprised by this, and I wonder how many people interpreted "It is acceptable for an EA org to break minor laws" as "It is acceptable for an EA org to break laws willy-nilly as long as it feels like the laws are 'minor'", rather than interpreting it as "It is acceptable for an EA org to break at least one minor law ever".
How easy is it to break literally zero laws? There are an awful lot of laws on the books in the US, many of which aren't enforced.
If someone uses the phrase "saving the world" on any level approaching consistent, run.
I use this phrase a lot, so if you think this phrase is a red flag, well, include me on the list of people who have that flag.
- If someone pitches you on something that makes you uncomfortable, but for which you can't figure out your exact objection - or if their argument seems wrong but you don't see the precise hole in their logic - it is not abandoning your rationality to listen to your instinct.
Agreed (here, and with most of your other points). Instincts like those can...
Yeah, a quick search finds 10,000+ hits for comments about "saving the world" on this forum, many of which are by me.
I do think the phrase is a bit childish and lacks some rigor, but I'm not sure what's a good replacement. "This project can avert 10^-9 to 10^-5 dooms defined as unendorsed human extinction or worse at 80% resilience" just doesn't quite have the same ring to it.
Yeah, though if I learned "Alice is just not the sort of person to loudly advocate for herself" it wouldn't update me much about Nonlinear at this point, because (a) I already have a fair amount of probability on that based on the publicly shared information, and (b) my main concerns are about stuff like "is Nonlinear super cutthroat and manipulative?" and "does Nonlinear try to scare people into not criticizing Nonlinear?".
Those concerns are less directly connected to the vegan-food thing, and might be tricky to empirically distinguish from the hypothesis...
I'd appreciate it if Nonlinear spent their limited resources on the claims that I think are most shocking and most important, such as the claim that Woods said "your career in EA would be over with a few DMs" to a former employee after the former employee was rumored to have complained about the company.
I agree that this is a way more important incident, but I downvoted this comment because:
This also updates me about Kat's take (as summarized by Ben Pace in the OP):
Kat doesn’t trust Alice to tell the truth, and that Alice has a history of “catastrophic misunderstandings”.
When I read the post, I didn't see any particular reason for Kat to think this, and I worried it might just be an attempt to dismiss a critic, given the aggressive way Nonlinear otherwise seems to have responded to criticisms.
With this new info, it now seems plausible to me that Kat was correct (even though I don't think this justifies threatening Alice or Ben in the way K...
I think that there's a big difference between telling everyone "I didn't get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!" and "they refused to get me vegan food and I barely ate for 2 days".
Agreed.
It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice's language throughout emphasizes how she'll be fine, it's no big deal [...] I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people.
100% agreed with this. The chat log paints a wildly different picture than what was included in Ben's original post.
...Given my experience with talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute d
I think a crux here is what you think the reason is for Alice being so unassertive towards Kat in the messages - was it because she was worried, based on experience, about angering her employers and causing negative consequences for herself (e.g. them saying she's being too difficult and refusing to help her at all), or some other reason more favourable to Kat and Emerson?
Without making any comment about the accuracy or inaccuracy of this post, I would just point out that nobody in EA should be shocked that an organization (e.g. Nonlinear) that is being libeled (in its view) would threaten a libel suit to deter the false accusations (as they see them), and to nudge the author (e.g. Ben Pace) towards making sure that their negative claims are factually correct and contextually fair.
Per https://www.dmlp.org/legal-guide/proving-fault-actual-malice-and-negligence (h/t kave):
...[...] To support a claim for defamation, in most stat
This seems particularly clear in the case of non-anonymous posts like Ben's. Ben posted a thing that risks damaging Nonlinear's reputation. In the process, he's put his own reputation at risk: Nonlinear can publicly respond with information that shows Ben was wrong (and perhaps negligent, unfair, etc.), causing us to put a lot less stock in Ben's word next time around.
Alice and Chloe are anonymous, but by having a named individual vouch for them to some degree, we create a situation where ordinary reputational costs can do a good job of incentivizing honesty on everyone's part.
Dustin Moskovitz comments on Twitter:
The deployment problem is part of societal response to me, not separate.
[...] Eg race dynamics, regulation (including ability to cooperate with competitors), societal pressure on leaders, investment in watchdogs (human and machine), safety testing norms, whether things get open sourced, infohazards.
"The deployment problem is hard and weird" comes from a mix of claims about AI (AGI is extremely dangerous, you don't need a planet-sized computer to run it, software and hardware can and will improve and proliferate by defau...
To chime in, I think it would be helpful to distinguish between:
1. AI risks on a 'business as usual' model, where society continues as it was before, ie not doing much
and
2. AI risks given different levels of society response.
I like this! Richard Ngo and Eliezer discuss this a bit in Ngo's view on alignment difficulty:
[Ngo] (Sep. 25 [2021] Google Doc) Perhaps the best way to pin down disagreements in our expectations about the effects of the strategic landscape is to identify some measures that could help to reduce AGI risk, and ask how seriously
Note that if it were costless to make the title way longer, I'd change this post's title from "AGI ruin mostly rests on strong claims about alignment and deployment, not about society" to the clearer:
The AGI ruin argument mostly rests on claims that the alignment and deployment problems are difficult and/or weird and novel, not on strong claims about society
Your example with humanity fails because humans have always been, and continue to be, a social species whose members depend on each other.
I would much more say that it fails because humans have human values.
Maybe a hunter-gatherer would have worried that building airplanes would somehow cause a catastrophe? I don't exactly see why; the obvious hunter-gatherer rejoinder could be 'we built fire and spears and our lives only improved; why would building wings to fly make anything bad happen?'.
Regardless, it doesn't seem like you can get much mileage via an analogy that...
For a STEM-capable AGI (or any intelligence for that matter) to do new science, it would have to interact with the physical environment to conduct experiments.
Or read arXiv papers and draw inferences that humans failed to draw, etc.
Doesn't this significantly throttle the speed of AGI gaining an advantage over humanity, giving us more time for alignment?
I expect there's a ton of useful stuff you can learn (that humanity is currently ignorant about) just from looking at existing data on the Internet. But I agree that AGI will destroy the world a little slower ...
I expect there's a ton of useful stuff you can learn (that humanity is currently ignorant about) just from looking at existing data on the Internet.
Thank you for the reply, I agree with this point. Now that I think about it, protein folding is a good example of how the data was already available but before AlphaFold, nobody could predict sequence to structure with high accuracy. Maybe a sufficiently smart AGI can get more knowledge out of existing data on the internet without performing too many new experiments.
How much more can it squeeze out of exi...
Gordon Worley adds:
yeah been complaining about this for a while. I'm not sure exactly when things started to fall apart, but it's been in about the last year. the quality of discussion there has fallen off a cliff because it now seems to be full of folks unfamiliar with the basics of rationality or even ea thought. ea has always not been exactly rationality, but historically there was enough overlap to make eaf a cool place. now it's full of people who don't share a common desire to understand the world.
(obviously still good folks on the forum, just enough others to make it less fun and productive to post there)
Metaculus isn't a prediction market; it's just an opinion poll of people who use the Metaculus website.
agree with "not a prediction market" but think "just an opinion poll" undersells it; people are evaluated and rewarded on their accuracy