Sounds to me like that would count! Perhaps you could submit the entire sequence but highlight the critical posts.
Replying in personal capacity:
I hope the contest will consider lower effort but insightful or impactful submissions to account for this?
Yes, very short submissions count. And so should "low effort" posts, in the sense of "I have a criticism I've thought through, but I don't have time to put together a meticulous writeup, so I can either write something short/scrappy, or nothing at all." I'd much rather see unpolished ideas than nothing.
…Secondly, I'd expect people with the most valuable critiques to be more outside EA, since I would expect to find bli…
Just commenting to say this was a really useful resource summarising an important topic — thanks for the time you put into it!
This (and your other comments) is incredibly useful, thanks so much. Not going to respond to particular points right now, other than to say many of them stick out as well worth pursuing.
Thanks for this, I think I agree with the broad point you're making.
That is, I agree that basically all the worlds in which space ends up really mattering this century are worlds in which we get transformative AI (because scenarios in which we start to settle widely and quickly are scenarios in which we get TAI). So, for instance, I agree that there doesn't seem to be much value in accelerating progress on space technology. And I also agree that getting alignment right is basically a prerequisite to any of the longer-term 'flowthrough' considerations.
If I'…
I agree that fusion is feasible and will likely account for a large fraction (>20%) of energy supply by the end of the century, if all goes well. I agree that would be pretty great. And yeah, Helion looks promising.
But I don't think we should be updating much on headlines about achieving ignition or breakeven soon. In particular, I don't think these headlines should be significantly shifting forecasts like this one from Metaculus about timelines to >10% of energy supply coming from fusion. The main reason is that there is a very large gap between pro…
Thanks, that's a very good example.
I don't think this actually describes the curve of EA impact per $ overall
For sure.
Just wanted to comment that this was a really thoughtful and enjoyable post. I learned a lot.
In particular, I loved the point about how the relative value of trajectory change should depend on the smoothness of your probability distribution over the value of the long-run future.
I'm also now curious to know more about the contingency of the caste system in India. My (original) impression was that the formation of the caste system was somewhat gradual and not especially contingent.
For what it's worth I think I basically endorse that comment.
I definitely think an investigation that starts with a questioning attitude, and ends up less negative than the author's initial priors, should count.
That said, some people probably do already have useful, considered critiques in their heads that they just need to write out. It'd be good to hear them.
Also, presumably (convincing) negative conclusions for key claims are more informationally valuable than confirmatory ones, so it makes sense to explicitly encourage the kind of investigations that have the best chance of yielding those conclusions (because the claims they address look under-scrutinised).
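To make that concrete, here's a minimal sketch of the information-value point (the numbers are toy assumptions of mine, not anything from the contest):

```python
import math

def surprisal_bits(prior_true: float, claim_holds: bool) -> float:
    """Shannon surprisal (in bits) of learning a claim's truth value,
    given your prior credence that the claim is true."""
    p = prior_true if claim_holds else 1 - prior_true
    return -math.log2(p)

# Toy example: a key claim the community holds at 90% credence.
print(surprisal_bits(0.9, claim_holds=True))   # ~0.15 bits: confirmation
print(surprisal_bits(0.9, claim_holds=False))  # ~3.32 bits: refutation
```

On this simple measure, a convincing refutation of a widely-believed claim carries over twenty times as much information as another confirmation, which is the sense in which targeting under-scrutinised claims has the biggest expected payoff.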
Makes sense! Yeah, as long as this is explicit in the final announcement it seems fine. I also think "what's the best argument against X (and then separately, do you buy it?)" could be a good format.
Thank you, this is a really good point. By 'critical' I definitely intended to convey something more like "beginning with a critical mindset" (per JackM's comment) and less like "definitely ending with a negative conclusion in cases where you're critically assessing a claim you're initially unsure about".
This might not always be relevant. For instance, you might set out to find the strongest case against some claim, whether or not you end up endorsing it. As long as that's explicit, it seems fine.
But in cases where someone is embarking on something l…
Yes, totally. I think a bunch of the ideas in the comments on that post would be a great fit for this contest.
Thanks, great points. I agree that we should only be interested in good faith arguments — we should be clear about that in the judging criteria, and clear about what counts as a bad faith criticism. I think the Forum guidelines are really good on this.
Of course, it is possible to strongly disagree with a claim without resorting to bad faith arguments, and I'm hopeful that the best entrants can lead by example.
The downweighting of AI in DGB was a deliberate choice for an introductory text.
Thanks, that's useful to know.
I guess that kind of confirms the complaint that there isn't an obvious, popular book to recommend on the topic!
Embarrassingly wasn't aware of the last three items on this list; thanks for flagging!
Oh cool, wasn't aware other people were thinking about the QF idea!
Re your question about imprints — I think I just don't know enough about how they're typically structured to answer properly.
Thanks for sharing — you should post this as a shortform or top-level post, otherwise I'm worried it'll just get lost in the comments here :)
Thanks, this is a useful clarification. I think my original claim was unclear. Read as "very few people were thinking about these topics at the time when DGB came out", then you are correct.
(I think) I had in mind something like "at the time when DGB came out it wasn't the case that, say, > 25% of either funding, person-hours, or general discussion squarely within effective altruism concerned the topics I mentioned, but now it is".
I'm actually not fully confident in that second claim, but it does seem true to me.
AI alignment and existential risks have been key components from the very beginning. Remember, Toby worked for FHI before founding GWWC, and even from the earliest days MIRI was seen as an acceptable donation target to fulfill the pledge. The downweighting of AI in DGB was a deliberate choice for an introductory text.
I was aware but should have mentioned it in the post — thanks for pointing it out :)
Like Max mentioned, I'm not sure The Methods of Ethics is a good introduction to utilitarianism; I expect most people would find it difficult to read. But thanks for the pointer to the Very Short Introduction, I'll check it out!
Thanks very much for the pointer, just changed to something more sensible!
(For what it's worth, I had in mind this was much more of a 'dumb nerdy flourish' than 'the clearest way to convey this point')
Amazing! Just sent you a message.
Big fan of utilitarianism.net — not sure how I forgot to mention it!
Thanks Ed, this is really thoughtful.
+1 to the doomscrolling point — sometimes I feel like I have an obligation to read the news, especially when it's serious. But this is almost always a mistake: the world will not be a worse place if you take time away from the news.
Thanks for sharing Rose, this looks like an important and (hopefully) fruitful list. Would love to see more historians taking a shot at some of these questions.
My guess is that divesting your private investments isn't going to be an especially leveraged/impactful way to address the situation, and that the time you would spend researching this might be better spent finding direct donation opportunities, and sharing the results. But don't put a lot of weight on that.
This is a good analysis of divestment in general.
Thanks Alex, I appreciate this. Donated.
Thanks very much for putting this together. This section stood out to me —
…He is however optimistic on innovation in new social technologies and building new institutions. He believes that there are very few functional institutions and that most institutions are attempts at mimicking these functional institutions. He believes innovation in social technology is highly undersupplied today, and that individual founders have a significant shot at building them. He also believes that civilisation makes logistical jumps in complexity and scale in very short periods…
Thanks for the pointer, fixed now. I meant for an average century.
Thanks, these are great points.
Thank you for the kind words!
I think this is strong enough as a factor that I now update to the position that derisking our exposure to natural extinction risks via increasing the sophistication of our knowledge and capability to control those risks is actually bad and we should not do it.
I would feel a bit wary about making a sweeping statement like this. I agree that there might be a more general dynamic where (i) natural risks are typically small per century, and (ii) the technologies capable of controlling those risks might often be powerful enough to…
The pedant in me wants to point out that your third definition doesn't seem to be a definition of existential risk? You say —
Approximate Definition: On track to getting to the best possible future, or only within a small fraction of value away from the best possible future.
It does make (grammatical) sense to define existential risk as the "drastic and irrevocable curtailing of our potential". But I don't think it makes sense to literally define existential risk as "(Not) on track to getting to the best possible future, or only within a small fraction of value away from the best possible future"…
As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.
Seems to me that pretty much whenever anyone would actually consider 'splitting the timeline' on some big uncertain question, even if they didn't decide to split the timeline, there are still going to be fairly non-weird worlds in which they make both decisions?
Thanks for writing this — in general I am pro thinking more about what MWI could entail!
But I think it's worth being clear about what this kind of intervention would achieve. Importantly (as I'm sure you're aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here), or decrease the overall (subjective) chance of existential catastrophe.
But it could increase the chance of something like "at least [some small fraction]% of 'branches' survive catastrophe", or at the extreme "at least one 'branch' survives"…
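Here's a rough Monte Carlo sketch of that distinction (the model and numbers are toy assumptions of mine, not the post's): quantum-randomising between two strategies whose failures are independent leaves the expected fraction of surviving branches unchanged, but raises the chance that at least some branches survive.

```python
import random

def simulate(n_trials=100_000, p_a=0.5, p_b=0.5):
    """Compare committing every branch to strategy A against 'splitting',
    where half of all branches follow A and half follow B. Assumes the
    two strategies' catastrophe outcomes are independent."""
    det_frac = det_any = split_frac = split_any = 0.0
    for _ in range(n_trials):
        a_ok = random.random() < p_a    # does strategy A avoid catastrophe?
        b_ok = random.random() < p_b    # does strategy B avoid catastrophe?
        det_frac += a_ok                # committed: all branches follow A
        det_any += a_ok
        frac = 0.5 * a_ok + 0.5 * b_ok  # split: surviving fraction of branches
        split_frac += frac
        split_any += frac > 0
    print(f"E[surviving fraction]:  committed={det_frac/n_trials:.3f}, "
          f"split={split_frac/n_trials:.3f}")
    print(f"P(any branch survives): committed={det_any/n_trials:.3f}, "
          f"split={split_any/n_trials:.3f}")

simulate()  # expected fraction ~0.5 either way; P(any survives) 0.5 vs ~0.75
```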
Noting that this is a question I'm also interested in
Awesome, thanks so much for putting in the time to make this. Obviously this kind of resource is a great shortcut for people who haven't read the books it summarises, but I think it's easy to underrate how useful it also is for people who have already read the books, as a device for consolidating and refreshing your memory of their contents.
Ok, thanks for the reply Lukas. I think this clarifies some things, although I expect I should read some of your other posts to get fully clear.
The time seems right for more competent+ambitious EA entrepreneurship, and this seems like an excellent list. Thanks for putting it together!
Thanks for this post, it seems really well researched.
As I understand it, you're saying moral uncertainty implies or requires moral realism to make sense. But since moral uncertainty means "having a vague or unclear understanding of that reality", it's not clear you can justify moral realism from a position of moral uncertainty. And you're saying this tension is problematic for moral realism because it's hard to resolve.
But I'm not sure what makes you say that moral uncertainty implies or requires moral realism? I do think that moral uncertainty…
Just want to note that a project like this seems very good, and I'm interested in helping make something like it happen.
Just want to say this sounds great!
Thanks! Sounds right on both fronts.
Cool idea, I'll have a think about doing this for Hear This Idea. I expect writing the threads ourselves could take less time than setting up a bounty, finding the threads, paying out, etc. But a norm of trying to summarise (e.g. 80K) episodes in 10 or so tweets sounds hugely valuable. Maybe they could all use a common hashtag to make them easy to find — something like #EAPodcastRecap or #EAPodcastSummary.
Broadly agreed with this, but I'm a bit worried that contests with large prizes can have distortionary effects. That is, they might pull EAs towards using their time in ways which are not altruistically/impartially best. This would happen when an EA switches her marginal time to some contest with a big prize, where she otherwise would have been doing something expected to be more impactful (e.g. because she's a better fit for it), but which doesn't stand to win her as much money or acclaim.
For instance, I think the creative writing prize was a really great…
Thanks for speaking with us Mike!
Seconding the suggestion to check out the ALLFED jobs portal if anyone's interested in getting involved with the projects Mike talks about in the episode.
Seconded! I would maybe use the site 20% more if it had a good dark mode.
Good shout — iirc adding Coil is easy enough to be worth doing (it's just a <meta> tag). But I doubt it'll raise much money!
Amazing, thanks so much!
Great, thanks for letting me know!
Thanks very much for writing this — I'm inclined to agree that results from the happiness literature are often surprising and underrated for finding promising neartermist interventions and thinking about the value of economic growth. I also enjoyed hearing this talk in person!
The "aren't people's scales adjusting over time?" story ('scale norming') is most compelling to me, and I think I'm less sure that we can rule it out. For instance — if I'm reading you right, you suggest that one reason to be skeptical that people are adjusting their scales over time ... (read more)