I think rather than say that Eliezer is wrong about decision theory, you should say that Eliezer's goal is to come up with a decision theory that helps him get utility, and your goal is something else, and you have both come up with very nice decision theories for achieving your goal.
(what is your goal?)
My opinion on your response to the demon question is "The demon would never create you in the first place, so who cares what you think?" That is, I think your formulation of the problem includes a paradox - we assume the demon is always right, but also, tha...
I guess any omniscient demon reading this to assess my ability to precommit will have learned I can't even precommit effectively to not having long back-and-forth discussions, let alone cutting my legs off. But I'm still interested in where you're coming from here since I don't think I've heard your exact position before.
Have you read https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality ? Do you agree that this is our crux?
Would you endorse the statement "Eliezer, using his decision theory, will usually end out with...
Sorry if I misunderstood your point. I agree this is the strongest objection against FDT. I think there is some sense in which I can become the kind of agent who cuts off their legs (ie by choosing to cut off my legs), but I admit this is poorly specified.
I think there's a stronger case for, right now, having heard about FDT for the first time, deciding I will follow FDT in the future. Various gods and demons can observe this and condition on my decision, so when the actual future comes around, they will treat me as an FDT-following agent rather than a non...
Were there bright people who said they had checked his work, understood it, agreed with him, and were trying to build on it? Or just people who weren't yet sure he was wrong?
'Were there bright people who said they had checked his work, understood it, agreed with him, and were trying to build on it?'
Yes, I think. Though my impression (Guy can make a better guess of this than me, since he has maths background) is that they were an extreme minority in the field, and all socially connected to Mochizuki: https://www.wired.com/story/math-titans-clash-over-epic-proof-of-the-abc-conjecture/
'Between 12 and 18 mathematicians who have studied the proof in depth believe it is correct, wrote Ivan Fesenko of the University of Nottingh...
I don't want to get into a long back-and-forth here, but for the record I still think you're misunderstanding what I flippantly described as "other Everett branches" and missing the entire motivation behind Counterfactual Mugging. It is definitely not supposed to directly make sense in the exact situation you're in. I think this is part of why a variant of it is called "updateless", because it makes a principled refusal to update on which world you find yourself in, in order to (more flippant not-quite-right description) program the type of AIs that would w...
I won't comment on the overall advisability of this piece, but I think you're confused about the decision theory (I'm about ten years behind state of the art here, and only barely understood it ten years ago, so I might be wrong).
The blackmail situation seems analogous to the Counterfactual Mugging, which was created to highlight how Eliezer's decision theories sometimes (my flippant summary) suggest you make locally bad decisions in order to benefit versions of you in different Everett branches. Schwartz objecting "But look how locally bad this decision i...
'The blackmail situation seems analogous to the Counterfactual Mugging, which was created to highlight how Eliezer's decision theories sometimes (my flippant summary) suggest you make locally bad decisions in order to benefit versions of you in different Everett branches. Schwartz objecting "But look how locally bad this decision is!" isn't telling Eliezer anything he doesn't already know, and isn't engaging with the reasoning'
I just control-F searched the paper Schwarz reviewed, for "Everett", "quantum", "many-worlds" and "branch" and found zero hits. Can...
>that makes extremely bright people with math PhDs make simple dumb mistakes that any rando can notice
Bright math PhDs that have already been selected for largely buying into Eliezer's philosophy/worldview, which changes how you should view this evidence. Personally I don't think FDT is wrong as much as just talking past the other theories and being confused about that, and that's a much more subtle mistake that very smart math PhDs could very understandably make
This is starting to sound less like "Eliezer is a uniquely bad reasoner" and more like "there's something in the water supply here that makes extremely bright people with math PhDs make simple dumb mistakes that any rando can notice."
Independently of all the wild decision theory stuff, I don't think this is true at all. It's more akin to how for a few good years, people thought Mochizuki might have proven the ABC conjecture. It's not that he was right - just that he wrapped everything in so much new theory and terminology, that it took years for people to understand what he meant well enough to debunk him. He was still entirely wrong.
Thanks for writing this.
I understand why you can't go public with applicant-related information, but is there a reason grantmakers shouldn't have a private Slack channel where they can ask things like "Please PM me if any of you have any thoughts on John Smith, I'm evaluating a grant request for him now"?
Yeah we're working on something like this! There are a few logistical and legal details, but I think we can at least make something like this work between legible-to-us grantmakers (from my lights, LTFF, EAIF, OP longtermism, Lightspeed, Manifund, and maybe a few of the European groups like Longview and Effective Giving). Obviously there are still limitations (eg we can't systematically coordinate with academic groups, government bodies, and individual rich donors), but I think an expectation that longtermist nonprofit grantmakers talk to each other by default would be an improvement over the status quo.
(Note that weaker versions of this already happen, just not very systematically)
Okay, so GWWC, LW, and GiveWell, what are we going to do to reverse the trend?
Seriously, should we be thinking of this as "these sites are actually getting less effective at recruiting EAs" or as "there are so many more recruitment pipelines now that it makes sense that each one would drop in relative importance" or as "any site will naturally do better in its early years as it picks the low-hanging fruit in converting its target population, then do worse later"?
GWWC's membership has steadily grown in recent years, so it's not that GWWC isn't getting more people to give significantly and effectively! I think this highlights broader questions about what the focus of the current effective altruism community is, and what it should be.
GWWC team members have advocated for a "big tent" effective altruism where everyone who wants to do good effectively should feel that they can be a part of the community - but anecdotally we hear sometimes that people who are primarily interested in giving don't feel like the broader...
I think the elephant in the room might be OpenPhil spending at least $211m on "Effective Altruism Community Growth (Longtermism)", including 80k spending $2.6m in marketing in 2022.[1]
As those efforts get results I expect the % of EA growth from those sources to increase.
I also expect EA™ spaces where these surveys are advertised to over-represent "longtermism"/"x-risk reduction" (in part because of donor preferences, and in part because they are more useful for some EAs), so that would impact the % of people coming to these spaces from things like Gi...
The first point seems to be saying that we should factor in the chance that a program works into cost-effectiveness analysis. Isn't this already a part of all such analyses? If it isn't, I'm very surprised by that and think it would be a much more important topic for an essay than anything about PEPFAR in particular.
The second point, that people should consider whether a project is politically feasible, is well taken. It sounds like the lesson here is "if you find yourself in a situation where you have to recommend either project A or B, and both are good,...
Good points. Agree that "always go for a big push instead of incrementalism" is waaayyyy too simple and sweeping a lesson to draw from PEPFAR. Also, three cheers for not lying. I think the World Bank was right not to suppress its data on the low cost-effectiveness of ARV drugs circa the mid-2000s. But in retrospect, I think people drew bad policy conclusions from that data.
My piece above is largely a plea for a little bit of intellectual humility and introspection on the part of the cost-effectiveness crowd (of which I'm often an active participant). If we...
Update: I think Bing passes the high school essay bar, based on the section "B- Essays No More" at https://oneusefulthing.substack.com/p/i-hope-you-werent-getting-too-comfortable
...I find this interesting, but it's also somewhat hard to identify any meaningful patterns. For example, one could expect red points to be clustered at the top for Manifold, indicating that more forecasts equal better performance. But we don't see that here. The comparison may be somewhat limited anyway: In the eyes of the Metaculus community prediction, all forecasts are created equal. On Manifold, however, users can invest different amounts of money. A single user can therefore in principle have an outsized influence on the overall market price if they are will
Thank you. I misremembered the transcription question. I now agree with all of your resolutions, with the most remaining uncertainty on translation.
Thank you for doing this! I was working on a similar project and mostly came up with the same headline finding as you: the experts seemed well-calibrated. I did decide a few of the milestones a little differently, and would like to hear why you chose the way you did so I can decide whether or not to change mine.
This is great - thanks for this comment! I've gone through each to explain my reasoning. Your comments/sources changed my opinion on Starcraft and Explain - I've updated the post and scores to reflect this, and think the conclusion is now the same but slightly weaker, because the experts' Brier score is 0.2 points worse, but the comparative Brier scores are also worse by a similar amount. There's also my reasoning for other milestones in the appendix (and I've copy-pasted some of them below).
...Zach Stein-Perlman from AI Impacts said that he thought "efficien
Thanks for asking. One reason we decided to start with forecasting was because we think it has comparatively low risks compared to other fields like AI or biotech.
If this goes well and we move on to a more generic round, we'll include our thoughts on this, which will probably include a commitment not to oracular-fund projects that seem like they were risky when proposed, and maybe to ban some extremely risky projects from the market entirely. I realize we didn't explicitly say that here, which is because this is a simplified test round and we think t...
In 2018, I collected data about several types of sexual harassment on the SSC survey, which I will report here to help inform the discussion. I'm going to simplify by assuming that only cis women are victims and only cis men are perpetrators, even though that's bad and wrong.
Women who identified as EA were less likely to report lifetime sexual harassment at work than other women, 18% vs. 20%. They were also less likely to report being sexually harassed outside of work, 57% vs. 61%.
Men who identified as EA were less likely to admit to sexually harassing pe...
...Conditional on being a woman in California, being EA did make someone more likely to experience sexual harassment, consistently, as measured in many different ways. But Californian EAs were also younger, much more bisexual, and much more polyamorous than Californian non-EAs; adjusting for sexuality and polyamory didn't remove the gap, but age was harder to adjust for and I didn't try. EAs who said they were working at charitable jobs that they explicitly calculated were effective had lower harassment rates than the average person, but those working a
Minor object-level objection: you say we should predict that crypto exchanges like FTX will fail, but I tried to calculate the risk of this in the second part of my post, and the average FTX-sized exchange fails only very rarely.
I don't think this is our main point of disagreement though. My main point of disagreement is about how actionable this is and what real effects it can have.
I think that the main way EA is "affiliated with" crypto is that it has accepted successful crypto investors' money. Of people who have donated the most to EA, I thin...
Thanks for your thoughtful response.
I'm trying to figure out how much of a response to give, and how to balance saying what I believe vs. avoiding any chance to make people feel unwelcome, or inflicting an unpleasant politicized debate on people who don't want to read it. This comment is a bad compromise between all these things and I apologize for it, but:
I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. I think "everyone knows" (in Zvi's sense of the term, where it's such strong co...
Thanks, I realize this is a tricky thing to talk about publicly (certainly trickier for you, as someone whose name people actually know, than for me, who can say whatever I want!). I'm coming in with a stronger prior from "the outside world", where I've seen multiple friends ignored/disbelieved/attacked for telling their stories of sexual violence, so maybe I need to better calibrate for intra-EA-community response. I agree/hope that our goals shouldn't be at odds, and that's what I was trying to say that maybe did not come across: I didn't want people to ...
Predictably, I disagree with this in the strongest possible terms.
If someone says false and horrible things to destroy other people's reputation, the story is "someone said false and horrible things to destroy other people's reputation". Not "in some other situation this could have been true". It might be true! But discussion around the false rumors isn't the time to talk about that.
Suppose the shoe was on the other foot, and some man (Bob), made some kind of false and horrible rumor about a woman (Alice). Maybe he says that she only got a good position in...
Thank you, this is clarifying for me and I hope for others.
Responses to me, including yours, have helped me update my thinking on how the EA community handles gendered violence. I wasn't aware of these cases and am glad, and hope that other women seeing this might also feel more supported within EA knowing this. I realize there are obvious reasons why these things aren't very public, but I hope that somehow we can make it clearer to women that Kathy's case, and the community's response, was an outlier.
I would still push back against the gender-reversal fal...
EDIT: After some time to cool down, I've removed that sentence from the comment, and somewhat edited this comment which was originally defending it.
I do think the sentence was true. By that I mean that (this is just a guess, not something I know from specifically asking them) the main reason other people were unwilling to post the information they had, was because they were worried that someone would write a public essay saying "X doesn't believe sexual assault victims" or "EA has a culture of doubting sexual assault victims". And they all hoped some...
Hi Scott,
Thank you for both of your comments. I appreciate you explaining why you wrote a post about Kathy and I think it's useful context for people to understand as they are thinking about these issues. My intention was not to call anybody out, rather, to point to a pattern of behavior that I observed and describe how it made me (and could make others) feel.
Thanks for removing the sentence.
I'm sorry you've gotten flak. I don't think you deserve it. I think you did the right thing, and the silence of other people "in the know" doesn't reflect particularly well on them. (Not in the sense that we should call them out, but in the sense that they should maybe think about whether they knowingly let a likely-innocent person suffer unjust reputation harm.)
I think there's a culture of fear around these kinds of issues that it's useful to bring to the foreground if we want to model them correctly.
Agreed. I thin...
...I read about Kathy Forth, a woman who was heavily involved in the Effective Altruism and Rationalist communities. She committed suicide in 2018, attributing large portions of her suffering to her experiences of sexual harassment and sexual assault in these communities. She accuses several people of harassment, at least one of whom is an incredibly prominent figure in the EA community. It is unclear to me what, if any, actions were taken in response to (some) of her claims and her suicide. What is clear is the pages and pages of tumblr posts and Reddit thre
For the record, I knew Kathy for several years, initially through a local Less Wrong community, and considered her a friend for some time. I endorse Scott's assessment, but I'll emphasise that I think she believed the accusations she made.
Relevant to this post: Many people tried to help Kathy, from 3 groups that I'm aware of. People gave a lot of time and energy. Speaking for myself and what I observed in our local community, I believe we prioritised helping her over protecting our community and over our own wellbeing.
In the end things went poorly on all t...
I came to the comments here to also comment quickly on Kathy Forth's unfortunate death and her allegations. I knew her personally (she subletted in my apartment in Australia for 7 months in 2014, but more meaningfully in terms of knowing her, we also overlapped at Melbourne meetups many times, and knew many mutual people). Like Scott, I believe she was not making true accusations (though I think she genuinely thought they were true).
I would have said more, but will follow Scott's lead in not sharing more details. Feel free to DM me.
(some of) Kathy's accusations were false
just to draw some attention to the "(some of)", Kathy claimed in her suicide note that her actions had led to one person being banned from EA events. My understanding is that she made a mixture of accusations that were corroborated and ones that weren't, including the ones you refer to. I think this is interesting because it means both:
Regardless of the accuracy of this comment, it makes me sad that the top comment on this post is adversarial/argumentative and showing little emotional understanding/empathy (particularly the line "getting called out in posts like this one"). I think it unfortunately demonstrates well the point the author made about EA having an emotions problem:
...On the forum in particular and in EA discourse in general, there is a tendency to give less weight/be more critical of posts that are more emotion-heavy and less rational. This tendency makes sense based on EA prin
I'm glad you made your post about how Kathy's accusations were false. I believe that was the right thing to do -- certainly given the information you had available.
But I wish you had left this sentence out, or written it more carefully:
But they wouldn't do that, I'm guessing because they were all terrified of getting called out in posts like this one.
It was obvious to me reading this post that the author made a really serious effort to stay constructive. (Thanks for that, Maya!) It seems to me that we should recognize that, and you're erasi...
While this is important (clarifying misinformation), I want to mention that I don't think this takes away from the main message of the post. I think it's important to remember that even with a culture of rationality, there are times when we won't have enough information to say what happened (unlike in Scott's case), and for that reason Maya's post is very relevant and I am glad it was shared.
It also doesn't seem appropriate to mention this post as "calling out". While it's legitimate to fear reputations being damaged with unsubstantiated claims, this post doesn't strike me as doing such.
I want to strong agree with this post, but a forum glitch is preventing me from doing so, so mentally add +x agreement karma to the tally. [Edit: fixed and upvoted now]
I have also heard from at least one very credible source that at least one of Kathy's accusations had been professionally investigated and found without any merit.
Maybe also worth adding that the way she wrote the post would in a healthy person be intentionally misleading, and was at least incredibly careless for the strength of accusation. Eg there was some line to the effect of 'CFAR are i...
I think I wrote that piece in 2010 (based on timestamp on version I have saved, though I'm not 100% sure that's the earliest draft). I would have been 25-26 then. I agree that's the first EA-relevant thing I wrote.
For what it's worth, I still don't feel like I understand CEA's model of how having extra people present hurts the most prestigious attendees.
If you are (say) a plant-based meat expert, you are already surrounded by many AI researchers, epidemiologists, developmental economists, biosecurity analysts, community-builders, PR people, journalists, anti-factory-farm-activists, et cetera. You are probably going to have to plan your conversations pretty deliberately to stick to people within your same field, or who you are likely to have interesting things to sa...
I also don't get this. I can't help thinking about the Inner Ring essay by C.S. Lewis. I hope that's not what's happening.
Thanks for your response. I agree that the goal should be trying to hold the conference in a way that's best for the world and for EA's goals. If I were to frame my argument more formally, it would be something like - suppose that you reject 1000 people per year (I have no idea if this is close to the right number). 5% get either angry or discouraged and drop out of EA. Another 5% leave EA on their own for unrelated reasons, but would have stayed if they had gone to the conference because of some good experience they had there. So my totally made up Fermi ...
Hi Scott — it’s hard to talk about these things publicly, but yes a big concern of opening up the conference is that attendees’ time won’t end up spent on the most valuable conversations they could be having. I also worry that a two-tiered app system would cause more tension and hurt feelings than it would prevent. A lot of conversations aren’t scheduled through the app but happen serendipitously throughout the event. (Of the things you mentioned, I’m not particularly worried about attendees disrupting talks.)
We’ve thought a fair bit about the “how costly ...
I'm having trouble figuring out how to respond to this. I understand that it's kind of an academic exercise to see how cause prioritization might work out if you got very very rough numbers and took utilitarianism very seriously without allowing any emotional considerations to creep in. But I feel like that potentially makes it irrelevant to any possible question.
If we're talking about how normal people should prioritize...well, the only near-term cause close to x-risk here is animal welfare. If you tell a normal person "You can either work to preven...
Yes, I'm sorry, I talked to Claire about it and updated, sorry for the mixed messages and any stress this caused.
Thanks for the link, which I had previously missed and which does contain some important considerations.
I've been assuming that the people who set up the first impact market will have the opportunity to affect the "culture" around certificates, especially since many people will be learning what they are for the first time after the market starts to exist, but I agree that eventually it will depend on what buyers and sellers naturally converge to.
...One way that preference could be satisfied is to give each share a number. Funders will value the first shares m
Thanks, I had read that but failed to internalize how much it was saying this same thing. Sorry to Neel for accidentally plagiarizing him.
I didn't mean to imply that you were plagiarising Neel. I more wanted to point out that many reasonable people (see also Carl Shulman's podcast) are pointing out that the existential risk argument can go through without the longtermism argument.
I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn't mea...
No worries, I'm excited to see more people saying this! (Though I did have some eerie deja vu when reading your post initially...)
I'd be curious if you have any easy-to-articulate feedback re why my post didn't feel like it was saying the same thing, or how to edit it to be better?
(EDIT: I guess the easiest object-level fix is to edit in a link at the top to yours, and say that I consider you to be making substantially the same point...)
Thank you for doing this. I know many people debating this question, with some taking actions based on their conclusions, and this looks like a really good analysis.
I see the site lists "our bloggers", including Aria Babu, Sam Enright, Stian Weslake, etc. Are these people who are on your team (and not competing for the prize), or are these people who have already entered the competition?
The first two issues are the whole point of laundering your opinions through bloggers.
I don't mean the bloggers should post the documents publicly, or even a play-by-play of the documents ("First Will MacAskill said, then Peter Singer said..."). I mean the bloggers should read the documents, understand the arguments, and post the key points/conclusions, perhaps with a "thanks to some anonymous people who helped me develop these ideas".
I agree the last issue is important, but this could be solved by good channels of communication and explanation about what should/shouldn't be posted.
EA is producing a ton of thoughtful writing, but the majority takes place in internal discussions and private documents. For some discussions, this would be the only sensible way to have them. But having other discussions in public should help to raise the salience of EA in the broader discourse and bring more people in. It could also help spark new ideas.
Any thoughts about making some of this discussion available to bloggers so they can popularize it? Asking bloggers unconnected to the EA network to reinvent or equal the level of discourse that the top people have among themselves sounds much harder than figuring out a way to get the originals to the public.
I agree that this would be very valuable. I work at an EA org and even I miss out on a lot of discussions that happen between top people, on googledocs or over lunch. Things must be much worse for people not at EA orgs.
It would be useful if some of the top people could share why they prefer not to make these discussions public. I would guess that one reason is that people don't want arguments which they haven't backed up in formal ways to be classed as "the official view of EA leaders". Creating a forum for posts with shakier epistemic status seems valuable
The coronavirus Fast Grants were great, but their competitive advantage seems to have been that they were the first (and fastest) people to move in a crisis.
The overall Emergent Ventures idea is interesting and worth exploring (I say, while running a copy of it), but has it had proven cost-effective impact yet? I haven't been following the people involved but I don't remember MR formally following up.
Thank you for writing this. I've seen a lot of people get confused around this, and it's genuinely pretty confusing, and it's good to have a really good summary all in one place by someone who knows what's going on.
Thanks for asking!
On some of your graphs, eg https://ourworldindata.org/grapher/gdp-per-capita-maddison-2020, you have a box you can tick to get "relative change". On other graphs, eg https://ourworldindata.org/grapher/children-per-woman-un?tab=chart&time=1950..2015&country=OWID_WRL~HUN, you don't have that box. You can force the chart to do this by adding "?stackMode=relative" to the URL, but that is annoying and hard to remember. Please add the box to all graphs.
If you generate a graph like https://ourworldindata.org/grapher/children-per-woman-un...
Thanks for doing this. It's really interesting to see someone try to quantify the effects of activism. A few questions:
1. Can you further explain your estimate of a 0.5% - 10% higher chance of a bill passing because of climate activism?
2. Does that number claim that the Sunrise Movement in particular increased the chance that much, or that all activism (compared to some world with no active pro-climate grassroots movement) increased it that much? If the latter, is this being divided by the Sunrise Movement's budget, or by something else? Is the claim that ...
Whether or not you go this route, you might also want to talk to Lama, a rationalist-adjacent Saudi student who was in the last Emergent Ventures cohort. She might be able to give you some advice and connect you to any existing Saudi community. You can find her at https://lamaalrajih.com
Vox is looking for EA journalists. This is an opportunity to publicize EA and help shape its public perception. Their ad hints that they want people who are already in the movement, so take a look if you have any writing or journalism related skills.
I support the change. I mean, I would, as someone who's taken advantage of the ambiguity in the current pledge to donate to x-risk-related causes, but I think even independent of that I support the change.
The GWWC pledge is a good institution. It provides a unified community norm of "at least ten percent" and helps keep people honest. It's a piece of "social technology" that makes effective altruism easier.
As such, if GWWC restricted it to the developing world, I would expect and encourage the animal rights movement and the x-risk movem...
I thought we already agreed the demon case showed that FDT wins in real life, since FDT agents will consistently end up with more utility than other agents.
Eliezer's argument is that you can become the kind of entity that is programmed to do X, by choosing to do X. This is in some ways a claim about demons (they are good enough to predict even the choices you made with "your free will"). But it sounds like we're in fact positing that demons are that good - I don't know how to explain how they have 999,999/million success rate otherwise - so I think he is r...
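To make the "FDT agents consistently end up with more utility" arithmetic concrete, here's a minimal sketch using the conventional Newcomb payoffs ($1,000 transparent box, $1,000,000 opaque box - my choice of numbers, not from this thread) together with the 999,999-in-a-million predictor success rate mentioned above:

```python
# Expected payoffs in Newcomb's problem with a near-perfect predictor.
# Conventional payoffs (my assumption): the opaque box holds $1,000,000
# if the predictor expected one-boxing; the transparent box always holds $1,000.
ACCURACY = 999_999 / 1_000_000  # predictor success rate from the comment above

def expected_value(one_boxer: bool) -> float:
    if one_boxer:
        # The predictor almost always foresees one-boxing and fills the big box.
        return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    else:
        # A two-boxer gets $1,000 for sure, plus the big box only on the
        # rare occasions the predictor misfires.
        return 1_000 + (1 - ACCURACY) * 1_000_000

print(expected_value(True))   # one-boxer: about $999,999
print(expected_value(False))  # two-boxer: about $1,001
```

On these numbers the one-boxing (FDT-style) agent comes out roughly a thousand times ahead, which is the sense in which "FDT wins in real life" - granting the premise that predictors this accurate exist.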