if the AI is scheming against us, reading those posts won’t be very helpful to it, because those ideas have evidently already failed.
Pulling this sentence out for emphasis because it seems like the crux to me.
saving money while searching for the maximum seems bad
In the sense of "maximizing" you're using here, I agree entirely with this post. Aiming for the very best option according to a particular model and pushing solely on that as hard as you can will expose you to Goodhart problems, diminishing returns, model violations, etc.
However, I think the sense of "maximizing" used in the post you're responding to, and more broadly in EA when people talk about "maximizing ethics", is quite different. I understand it to mean something more like "doing the most g...
Like the idea of having the place in the name, but I think we can keep that while also making the name cool/fun?
Personally I wouldn't be opposed to calling EA spaces "constellations" in general, and just calling this one the "Harvard Constellation" or something. This is mostly because I think Constellation is an extraordinarily good name - it's when a bunch of stars get together to create something bigger that'll shine light into the darkness :)
Alternatively, "Harvard Hub" is both easy and very punchy.
I'm broadly on board with the points made here, but I would prefer to frame this as an addition to the pitch playbook, not a tweak to "the pitch".
Different people do need to hear different things. Some people probably do have the intuition that we should care about future people, and would react negatively to something like MacAskill's bottle example. But personally, I find that lots of people do react to longtermism with something like "why worry about the future when there are so many problems now?", and I think the bottle example might be a helpful intuition pump for those people.
The more I think about EA pitches the more I wonder if anyone has just done focus group testing or something...
Yup, sounds like we're on the same page - I think I steelmanned a little too hard. I agree that the people making these criticisms probably do in fact think that being shot by robots or something would be bad.
I propose we Taboo the phrase "most important"; I agree that it's quite vague. The claim I read Karnofsky as making, phrased more precisely, is something like:
In approximately this century, it seems likely that humanity will be exposed to a high level of X-risk, while also developing technology capable of eliminating almost all known X-risks.
This is the Precipice view of things - we're in a brief dangerous bottleneck, after which it seems like things will be much safer. I agree it takes a leap to forecast that no further X-risks will arise in the trillio...
tldr: I think this argument is in danger of begging the question, and rejecting criticisms that implicitly just say "EA isn't that important" by asserting "EA is important!"
There’s an analogy I think is instructive here
I think the fireman analogy is really fun, but I do have a problem with it. It's built around the mapping "fire = EA cause areas", and gets almost all of its mileage out of the implicit assumption that fires are important and need to be put out.
This is why the first class of critics in the analogy look reasonable, and the sec...
Agree that the impactfulness of working on better government is an important claim, and one you don't provide much evidence for. In the interest of avoiding an asymmetric burden of proof, I want to note that I personally don't have strong evidence against this claim either. I would love to see it further investigated and/or tried out more.
All else equal, I definitely like the idea of popularizing some sort of longtermist sentiment. I'm still unsure about the usefulness, though - I have some doubts about the proposed paths to impact. Personally, I think that a world with a mass-appeal version of longtermism would be a lot more pleasant for me to live in, but not necessarily much better off on the metrics that matter.
Thanks for this post - dealing with this phenomenon seems pretty important for the future of epistemics vs dogma in EA. I want to do some serious thinking about ways to reduce infatuation, accelerate doubt, and/or get feedback from distancing. Hopefully that'll become a post sometime in the near-ish future.
So, pulling out this sentence, because it feels like it's by far the most important and not that well highlighted by the format of the post:
what is desired is a superficial critique that stays within and affirms the EA paradigm while it also checks off the boxes of what ‘good criticism’ looks like and it also tells a story of a concrete win that justifies the prize award. Then everyone can feel good about the whole thing, and affirm that EA is seeking out criticism.
This reminds me a lot of a point mentioned in Bad Omens, about a certain aspect of EA which ...
This seems great! I really like the list of perspectives, it gave me good labels for some rough concepts I had floating around, and listed plenty of approaches I hadn't given much thought. Two bits of feedback:
Personal check-for-understanding: would this be a fair bullet-point summary?
Yup, existing EAs do not disappear if we go bust in this way. But I'm pretty convinced that it would still be very bad. Roughly, the community dies, even if the people making it up don't vanish. Trust/discussion/reputation dry up, the cluster of people who consider themselves "EA" are now very different from the current thing, and that cluster kinda starts doing different stuff on its own. Further community-building efforts just grow the new thing, not "real" EA.
I think in this scenario the best thing to do is for the core of old-fashioned EAs to basically dissociate from this new thing, come up with a different name/brand, and start the community-building project over again.
But I am also afraid that ... we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them
I've had a model of community building at the back of my mind for a while that's something like this:
"New folks come in, and pick up knowledge/epistemics/heuristics/culture/aesthetics from the existing group, for as long as their "state" (wrapping all these things up in one number for simplicity) is "less than the community average". But this is essentially a one way diffusion sort of dynamic, which m...
I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.
This is a great sentence, I will be stealing it :)
However, I think it's partially wishful thinking that "having good, legible epistemics" is sufficient to keep you from coming across as dogmatic. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.
I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I've fou...
Hey, I really like this re-framing! I'm not sure what you meant to say in the second and third sentences tho :/
Question for anyone who has interest/means/time to look into it: which topics on the EA forum are overrepresented/underrepresented? I would be interested in comparisons of (posts/views/karma/comments) per (person/dollar/survey interest) in various cause areas. Mostly interested in the situation now, but viewing changes over time would be great!
My hypothesis [DO NOT VIEW IF YOU INTEND TO INVESTIGATE]:
I expect longtermism to be WILDLY, like 20x, overrepresented. If this is the case I think it may be responsible for a lot of the recent angst about the relationship between longtermism and EA more broadly, and would point to some concrete actions to take.
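For concreteness, here's the shape of the computation I have in mind (all numbers and category names below are invented placeholders, not real data; actual inputs would be Forum post counts plus something like EA Survey cause-interest shares):

```python
# Hypothetical shares - NOT real data, just illustrating the ratio.
forum_post_share = {"longtermism": 0.40, "global health": 0.15, "animal welfare": 0.10}
survey_interest_share = {"longtermism": 0.20, "global health": 0.45, "animal welfare": 0.20}

for cause, post_share in forum_post_share.items():
    ratio = post_share / survey_interest_share[cause]
    status = "over" if ratio > 1 else "under"
    print(f"{cause}: {ratio:.1f}x {status}represented")
```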
This is a thing I and a lot of other organizers I've talked to have really struggled with. My pet theory that I'll eventually write up and post (I really will, I promise!) is that you need Alignment, Agency, and Ability to have a high impact. Would definitely be interested in actual research on this.
Nice work! Lots of interesting results in here that I think lead to concrete strategy insights.
only 7.4% of New York University students knew what effective altruism (EA) is. At the same time, 8.8% were extremely sympathetic to EA ... Interestingly, these EA-sympathetic students were largely ignorant about EA; only 14.5% knew about it before the survey.
This is a great core finding! I think I got a couple important lessons from these three numbers alone. Outreach could probably be a few times bigger without the proportion of EA students who know about it ge...
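Rough arithmetic from those three numbers, as my own inference from the quoted figures: 8.8% × 14.5% ≈ 1.3% of students are sympathetic and already aware, while 8.8% × 85.5% ≈ 7.5% are sympathetic but unaware - a pool of receptive students roughly 6x larger than the one outreach has already reached.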
I'm unsure if I agree or not. I think this could benefit from a bit of clarification on the "why this needs to be retired" parts.
For the first slogan, it seems like you're saying that this is not a complete argument for longtermism - just because the future is big doesn't mean it's tractable, or neglected, or valuable at the margin. I agree that it's not a complete argument, and if I saw someone framing it that way I would object. But I don't think that means we need to retire the phrase unless we see it being constantly used as a strawman or something? It'...
Yes, 100% agree. I'm just personally somewhat nervous about community building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don't know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads as implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.
This is good and I want to see explicit discussion of it. One framing that I think might be helpful:
It seems like the cause of a lot of the recent "identity crisis" in EA is that we're violating good heuristics. It seems like if you're trying to do the most good, really a lot of the time that means you should be very frugal, and inclusive, and beware the in-group, and stuff like that.
However, it seems like we might live in a really unusual world. If we are in fact massively talent constrained, and the majority of impact comes from really high-powered talen...
Talk to u/Infinity, I see them on the EA subreddit every now and then. They singlehandedly provide like 90% of the memes on there, and they're pretty good 👍
Hi Organizers! The US requires proof of a negative COVID test to enter the country, even for citizens. Will/could you provide some advice or facilities at the conference for getting this? I (and I imagine many others) know literally nothing about the UK health system, am going to have to fly back to the US after the conference, and really don't want to get stuck in airport hell :/
Oop, thanks for the correction. To be honest I'm not sure what exactly I was thinking originally, but maybe this is true for non-AI S-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho >:/
I'll hop on the "I'd love to see sources" train to a certain extent, but honestly we don't really need them. If this is happening it's super important, and even if it isn't happening right now it'll probably start happening somewhat soon. We should have a plan for this.
Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:
S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.
Why not?
Suggestion: the Future Fund should take ideas on a rolling basis, and assess them in rounds. EA is the kind of community where potentially good ideas bubble up all the time, and it would be a real shame if those were wasted because the funders only listen during narrow windows. Having an open drop-box to submit ideas costs FF almost nothing, and makes a bias-towards-action and constant passive brainstorming much easier.
Context: this idea
There's a 5-minute video about this kind of thing from Rob Miles:
I guess the takeaway is something like:
Counter-framing: AI alignment [via ambitious value learning] as analogous to [figuring out how to build a system that won't destroy the world when you try and train it like] raising a child.
If this were fiction, that would make Buck your manic-pixie-dream-girl, and I find that hilarious.
+1 to all the other resources in these answers, but never underestimate how useful it is to just get started! I keep this link bookmarked, which shows the currently-open Metaculus questions which will close soonest. Making quick predictions on these questions keeps the feedback loop as tight as possible (although it's still not that tight to be honest).
Also, Superforecasting is great but longer than it needs to be; I've heard that there are good summaries out there, but I don't personally know where they are.
This looks great! I'm concerned that it won't get the traffic it needs to be useful to people. Have you considered/attempted reaching out to 80K to put a link on the job board or something? That's my go-to careers resource, and I think it's the main way I'd learn about the existence of something like this once this post is off the front page.
Anecdotally, I've found that describing EA as "a community of people trying to do as much good as possible with our time and money" gets a good response.
Agree that this is worth a shot, would be Huge if it worked. But it seems like Mr Beast and Mark Rober might be selecting causes to avoid controversy, which would make it hard to get EA through. Both of their platforms are mainly built on mass appeal. Planting trees and cleaning up the oceans are extremely uncontroversial causes - nobody is out there arguing that they do net harm. This is not the case with EA.
That said, if any of you folks went to high school with Mark Rober or something, I would still be extremely excited to try this. I have a 3rd or 4th degree connection to him, but that seems a bit too far to do much of anything.
Not entirely sure if I interpreted your intentions right when I tried to write an answer. In particular, I'm confused by the line "I could create just a little more hedonium". My understanding is that hedonium refers to the arrangement of matter that produces utility most efficiently. Is the narrator deciding whether to convert themselves into hedonium?
I ended up interpreting things as if "hedonium" was meant to mean "utility", and the narrator is deciding what their last thought should be - how to produce just a little more utility with their last few computations before the universe winds down. Hopefully I interpreted correctly - or if I was incorrect, I hope this feedback is helpful :)
Bro this is really scary. Well done.
Observation: prion-catalysis or not, any vaccine-evasion measures at all seem extraordinarily dangerous. For a highly infectious threat, the fastest response we have right now is mass vaccine manufacture, and that seems just barely fast enough. But our vaccine tech is public knowledge, and an apocalyptic actor can take all the time they want to design a countermeasure.
Once a threat with any sort of countermeasure is released, we first have to go through a vaccine development cycle to find that out in the first plac...
I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable.
What I am concerned might be intractable is deploying results. Public health (and all health-relevant products) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by food, exercise, sexual health, self-help, and other industries.
There's so much noise out there, it seems unlikely that a few EAs will be able to get a word in edgewise.
Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).
I agree with most of your assessment here. But I think rather than "simple altruism", it would be better to focus on "altruistic intent". Making this substitution doesn't change much, the major differences are just that it includes EA itself, and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not specifically doing non-EA things.
That said, increasing altruistic intent...
I think this definition of "cause area" is roughly how the EA community uses the term in practice, and explains a lot of why/how it's useful. It helps facilitate good discussion by pointing towards the best people to talk to, since others in my cause area will have common knowledge and interests with myself and each other. On this view, "cause area" is just EA-speak for a subcommunity.
That makes it a bit hard to justify the common EA practice of "cause prioritization" though, since causes aren't really particularly homogeneous with regard to their impact. I think doing "intervention prioritization" would be a lot more useful, even though there are way more interventions than causes.
Is there some kind of up-to-date dashboard or central source for GiveWell's main "cost-per-expected-life" figure?
I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I'm not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions that would conflict with the conclusions of the work, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to pr...
If it costs $4000 to prevent a death from malaria, malaria deaths happen at age 20 on average, and life expectancy in Africa is 62 years, then the cost per hour of life saved is $0.0109.
If you make the average US income of $15.35/hour, this means that every marginal hour you work to donate can be expected to save 1,412 hours of life, if you take the very thoroughly researched, very scalable, low-risk baseline option. If you can only donate 10% of your income, then your leverage is reduced to a mere 141.2. Just by virtue of having been born in a deve...
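A quick sanity check of that arithmetic, using only the figures quoted above (a minimal sketch; the $4000, age-20, 62-year, and $15.35 numbers come from the comment itself, not independent sources):

```python
cost_per_death_averted = 4000       # dollars per malaria death prevented
years_saved = 62 - 20               # life expectancy minus average age at death
hours_saved = years_saved * 365 * 24

cost_per_hour = cost_per_death_averted / hours_saved
print(f"cost per hour of life saved: ${cost_per_hour:.4f}")  # ~$0.0109

wage = 15.35                        # average US hourly income, per the comment
print(f"hours of life saved per hour worked: {wage / cost_per_hour:.0f}")  # ~1412
```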
Hm. I think I agree with the point you're making, but not the language it's expressed in? I notice that your suggestion is a change in endorsed moral principles, but you make an instrumental argument, not a moral one. To me, the core of the issue is here:
This seems to me more of a matter of high-fidelity communication than a matt...