All of finm's Comments + Replies

Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies

Thanks very much for writing this — I'm inclined to agree that results from the happiness literature are often surprising and underrated for finding promising neartermist interventions and thinking about the value of economic growth. I also enjoyed hearing this talk in person!

The "aren't people's scales adjusting over time?" story ('scale norming') is most compelling to me, and I think I'm less sure that we can rule it out. For instance — if I'm reading you right, you suggest that one reason to be skeptical that people are adjusting their scales over time ... (read more)

Announcing a contest: EA Criticism and Red Teaming

Sounds to me like that would count! Perhaps you could submit the entire sequence but highlight the critical posts.

Announcing a contest: EA Criticism and Red Teaming

Replying in personal capacity:

I hope the contest will consider lower effort but insightful or impactful submissions to account for this?

Yes, very short submissions count. And so should "low effort" posts, in the sense of "I have a criticism I've thought through, but I don't have time to put together a meticulous writeup, so I can either write something short/scrappy, or nothing at all." I'd much rather see unpolished ideas than nothing at all.

Secondly, I'd expect people with the most valuable critiques to be more outside EA since I would expect to find bli

... (read more)
The pandemic threat of DEEP VZN - notes on a podcast with Kevin Esvelt

Just commenting to say this was a really useful resource summarising an important topic — thanks for the time you put into it!

Space governance - problem profile

This (and your other comments) is incredibly useful, thanks so much. Not going to respond to particular points right now, other than to say many of them stick out as well worth pursuing.

Space governance - problem profile

Thanks for this, I think I agree with the broad point you're making.

That is, I agree that basically all the worlds in which space ends up really mattering this century are worlds in which we get transformative AI (because scenarios in which we start to settle widely and quickly are scenarios in which we get TAI). So, for instance, I agree that there doesn't seem to be much value in accelerating progress on space technology. And I also agree that getting alignment right is basically a prerequisite to any of the longer-term 'flowthrough' considerations.

If I'... (read more)

Harrison Durland, 2mo:

That is mostly correct: I wasn't trying to respond to near-term space governance concerns, such as how to prevent space development or space-based arms races, which I think could indeed play into long-term/x-risk considerations (e.g., undermining cooperation in AI or biosecurity), and may also have near-term consequences (e.g., destruction of space satellites which undermines living standards and other issues).

To summarize the point [https://forum.effectivealtruism.org/posts/6fFuPpENfBrjrywLj/space-governance-problem-profile-1?commentId=huaJTJjGdQ7ZmBM4A] I made in response to Charles (which I think is similar, but correct me if I'm misunderstanding): I think that if an action is trying to improve things now (e.g., health and development, animal welfare, improving current institutional decision-making or social values), it can be justified under neartermist values (even if it might get swamped by longtermist calculations). But it seems that if one is trying to figure out "how do we improve governance of space settlements and interstellar travel that could begin 80–200 years from now," they run the strong risk of their efforts having effectively no impact on affairs 80–200 years from now because AGI might develop before their efforts ever matter towards the goal, and humanity either goes extinct or the research is quickly obsolesced.

Ultimately, any model of the future needs to take into account the potential for transformative AI, and many of the pushes such as for Mars colonization just do not seem to do that, presuming that human-driven (vs. AI-driven) research and efforts will still matter 200 years from now.

I'm not super familiar with these discussions, but to me this point stands out so starkly as 1) relatively easy to explain (although it may require introductions to superintelligence for some people); 2) substantially impactful on ultimate conclusions/recommendations, and 3) frequently neglected in the discussions/models I've heard so far. Personally,
Nuclear Fusion Energy coming within 5 years

I agree that fusion is feasible and will likely account for a large fraction (>20%) of energy supply by the end of the century, if all goes well. I agree that would be pretty great. And yeah, Helion looks promising.

But I don't think we should be updating much on headlines about achieving ignition or breakeven soon. In particular, I don't think these headlines should be significantly shifting forecasts like this one from Metaculus about timelines to >10% of energy supply coming from fusion. The main reason is that there is a very large gap between pro... (read more)

Guy Raveh, 2mo:
Another response could be that abundant energy means more destructive power for humanity, and so even more risks. Though in reality I do tend towards the "sounds good but there's nothing we in particular should do about it" side.
Concave and convex altruism

Thanks, that's a very good example.

I don't think this actually describes the curve of EA impact per $ overall

For sure.

Past and Future Trajectory Changes

Just wanted to comment that this was a really thoughtful and enjoyable post. I learned a lot.

In particular, I loved the point about how the relative value of trajectory change should depend on the smoothness of your probability distribution over the value of the long-run future.

I'm also now curious to know more about the contingency of the caste system in India. My (original) impression was that the formation of the caste system was somewhat gradual and not especially contingent.

N N, 3mo:
Thank you!
Pre-announcing a contest for critiques and red teaming

For what it's worth I think I basically endorse that comment.

I definitely think an investigation that starts with a questioning attitude, and ends up less negative than the author's initial priors, should count.

That said, some people probably do already just have useful, considered critiques in their heads that they just need to write out. It'd be good to hear them.

Also, presumably (convincing) negative conclusions for key claims are more informationally valuable than confirmatory ones, so it makes sense to explicitly encourage the kind of investigations that have the best chance of yielding those conclusions (because the claims they address look under-scrutinised).

Makes sense! Yeah, as long as this is explicit in the final announcement it seems fine. I also think "what's the best argument against X (and then separately, do you buy it?)" could be a good format.

Pre-announcing a contest for critiques and red teaming

Thank you, this is a really good point. By 'critical' I definitely intended to convey something more like "beginning with a critical mindset" (per JackM's comment) and less like "definitely ending with a negative conclusion in cases where you're critically assessing a claim you're initially unsure about". 

This might not always be relevant. For instance, you might set out to find the strongest case against some claim, whether or not you end up endorsing it. As long as that's explicit, it seems fine.

But in cases where someone is embarking on something l... (read more)

Pre-announcing a contest for critiques and red teaming

Yes, totally. I think a bunch of the ideas in the comments on that post would be a great fit for this contest.

Pre-announcing a contest for critiques and red teaming

Thanks, great points. I agree that we should only be interested in good faith arguments — we should be clear about that in the judging criteria, and clear about what counts as a bad faith criticism. I think the Forum guidelines are really good on this.

Of course, it is possible to strongly disagree with a claim without resorting to bad faith arguments, and I'm hopeful that the best entrants can lead by example.

Chris Leong, 2mo:
"Clear about what counts as a bad faith criticism" I guess one of my points was that there's a limit to how "clear" you can be about what counts as "bad faith", because someone can always find a loophole in any rules you set.
EA Projects I'd Like to See

The downweighting of AI in DGB was a deliberate choice for an introductory text.

Thanks, that's useful to know.

EA Projects I'd Like to See

I guess that kind of confirms the complaint that there isn't an obvious, popular book to recommend on the topic!

EA Projects I'd Like to See

Embarrassingly wasn't aware of the last three items on this list; thanks for flagging!

Jack Malde, 3mo:
Neither was I before I looked at the bibliography in the first book!
EA Projects I'd Like to See

Oh cool, wasn't aware other people were thinking about the QF idea! 

Re your question about imprints — I think I just don't know enough about how they're typically structured to answer properly.

EA Projects I'd Like to See

Thanks for sharing — you should post this as a shortform or top-level post, otherwise I'm worried it'll just get lost in the comments here :)

aogara, 3mo:
So true. Scared of being stupid on the front page I guess. Compromised by moving to my shortform, thanks again for the inspiration!
EA Projects I'd Like to See

Thanks, this is a useful clarification. I think my original claim was unclear. If it's read as "very few people were thinking about these topics at the time when DGB came out", then you are correct.

(I think) I had in mind something like "at the time when DGB came out it wasn't the case that, say, > 25% of either funding, person-hours, or general discussion squarely within effective altruism concerned the topics I mentioned, but now it is".

I'm actually not fully confident in that second claim, but it does seem true to me.

AI alignment and existential risks have been key components from the very beginning. Remember, Toby worked for FHI before founding GWWC, and even from the earliest days MIRI was seen as an acceptable donation target to fulfill the pledge. The downweighting of AI in DGB was a deliberate choice for an introductory text.

EA Projects I'd Like to See

I was aware but should have mentioned it in the post — thanks for pointing it out :)

EA Projects I'd Like to See

Like Max mentioned, I'm not sure The Methods of Ethics is a good introduction to utilitarianism; I expect most people would find it difficult to read. But thanks for the pointer to the Very Short Introduction, I'll check it out!

Jack Malde, 3mo:
Also just copying from my kindle version of the very short introduction book:
Jack Malde, 3mo:
That's fair enough. I haven't read the Singer book based on Sidgwick, but I suspect it would be far more accessible and a good book for someone to read if they are already familiar with the key ideas of utilitarianism. Interestingly the books I mentioned aren't in the utilitarianism.net list of books [https://www.utilitarianism.net/books]. Not sure why.
EA Projects I'd Like to See

Thanks very much for the pointer, just changed to something more sensible!

(For what it's worth, I had in mind this was much more of a 'dumb nerdy flourish' than 'the clearest way to convey this point')

Benjamin Stewart, 3mo:
Fair enough!
EA Projects I'd Like to See

Amazing! Just sent you a message.

EA Projects I'd Like to See

Big fan of utilitarianism.net — not sure how I forgot to mention it!

How are you keeping it together?

Thanks Ed, this is really thoughtful.

+1 to the doomscrolling point — sometimes I feel like I have an obligation or responsibility to read the news, especially when it's serious. But this is almost always a mistake: in close to every instance, the world will not be a worse place if you take time away from the news.

DonyChristie, 4mo:
I deleted my Twitter app again and I haven't been reading the Facebook news feed for a long while.
Some research ideas on the history of social movements

Thanks for sharing Rose, this looks like an important and (hopefully) fruitful list. Would love to see more historians taking a shot at some of these questions.

Punishing Russia through private disinvestment?
Answer by finm, Feb 27, 2022:

My guess is that divesting your private investments isn't going to be an especially leveraged/impactful way to address the situation, and that the time you would spend researching this might be better spent finding direct donation opportunities and sharing the results. But don't put a lot of weight on that.

This is a good analysis of divestment in general.

Samo Burja on Effective Altruism

Thanks very much for putting this together. This section stood out to me —

He is however optimistic on innovation in new social technologies and building new institutions. He believes that there are very few functional institutions and that most institutions are attempts at mimicking these functional institutions. He believes innovation in social technology is highly undersupplied today, and that individual founders have a significant shot at building them. He also believes that civilisation makes logistical jumps in complexity and scale in very short perio

... (read more)
Risks from Asteroids

Thanks for the pointer, fixed now. I meant for an average century.

Risks from Asteroids

Thanks, these are great points.

Risks from Asteroids

Thank you for the kind words!

I think this is strong enough as a factor that I now update to the position that derisking our exposure to natural extinction risks via increasing the sophistication of our knowledge and capability to control those risks is actually bad and we should not do it.

I would feel a bit wary about making a sweeping statement like this. I agree that there might be a more general dynamic where (i) natural risks are typically small per century, and (ii) the technologies capable of controlling those risks might often be powerful enough to ... (read more)

DonyChristie, 4mo:
I think the meme of x-risk and related will spread and degrade beyond careful thinkers such as readers of this forum, and a likely subset of responses to a perception of impending doom are to take drastic actions to gain perceived control, exacerbating risk. The concept of x-risk is itself dual-use.
Linch's Shortform

The pedant in me wants to point out that your third definition doesn't seem to be a definition of existential risk. You say —

Approximate Definition: On track to getting to the best possible future, or only within a small fraction of value away from the best possible future.

It does make (grammatical) sense to define existential risk as the "drastic and irrevocable curtailing of our potential". But I don’t think it makes sense to literally define existential risk as “(Not) on track to getting to the best possible future, or only within a small fractio... (read more)

Linch, 5mo:

Yeah, I think you raise a good point. After I wrote the shortform (and after our initial discussion), I now lean more towards just defining "existential risk" as something in the cluster of "reducing P(doom)", and treating alternative methods of increasing the probability of utopia as a separate consideration. I still think highlighting the difference is valuable. For example, I know others disagree, and consider (e.g.) theoretically non-irrevocable flawed realizations as a form of existential risk even in the classical sense.
Splitting the timeline as an extinction risk intervention

As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.

Seems to me that pretty much whenever anyone would actually consider 'splitting the timeline' on some big uncertain question, then even if they didn't decide to split the timeline, there are still going to be fairly non-weird worlds in which they make both decisions?

NunoSempere, 5mo:

But this requires a quantum event/events to influence the decision, which seems more and more unlikely the closer you are to the decision. Though per this comment [https://forum.effectivealtruism.org/posts/LKdwFsJXaFKHCE9ms/splitting-the-timeline-as-an-extinction-risk-intervention?commentId=LwoszJEaZ3DugiHBo#comments], you could also imagine that different people were born and would probably make different decisions.
Splitting the timeline as an extinction risk intervention

Thanks for writing this — in general I am pro thinking more about what MWI could entail!

But I think it's worth being clear about what this kind of intervention would achieve. Importantly (as I'm sure you're aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here), or decrease the overall (subjective) chance of existential catastrophe.

But it could increase the chance of something like "at least [some small fraction]% of'branches' survive catastrophe", or at the extreme "at least one 'branc... (read more)

Derek Shiller, 5mo:
What makes you think that? So long as value can change with the distribution of events across branches (as perhaps with the Mona Lisa) the expected value of the future could easily change.
NunoSempere, 5mo:

Yes, I agree. This analogy isn't perfect. I'd prefer the analogy that, in a trolley problem in which the hostages were your family, one may care some small amount about ensuring at least one family member survives (in opposition/contrast to maximizing the number of family members which survive).

Yeah, when thinking more about this, this does seem like the strongest objection, and here is where I'd like an actual physicist to chip in. If I had to defend why that is wrong, I'd say something like:

* Yeah, but because quantum effects don't really interact with macroscopic effects all that much, this huge number of worlds are all incredibly correlated.
* As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.
* Like, when I imagine a world in which quantum effects prevent an x-risk (AGI for illustration purposes) in the absence of human nudging, I imagine something like: quantum effects become large enough that the first few researchers who come up with how to program an AGI mysteriously die from aneurysms until the world notices and creates a world government to prevent AGI research (?)
* I notice that I don't actually think this is the scenario that requires the least quantum intervention, but I think that the general point kind of stands.
New EA Cause Area: Run Blackwell's Bookstore

Noting that this is a question I'm also interested in

Ray Dalio's Principles (full list)

Awesome, thanks so much for putting in the time to make this. Obviously resources like this are a great shortcut for people who haven't read the books they're summarising, but I think it's easy to underrate how useful they also are for people who have already read the books, as a device for consolidating and refreshing your memory of the contents.

Moral Uncertainty and Moral Realism Are in Tension

Ok, thanks for the reply Lukas. I think this clarifies some things, although I expect I should read some of your other posts to get fully clear.

So you want to be a charity entrepreneur. Read these first.

The time seems right for more competent+ambitious EA entrepreneurship, and this seems like an excellent list. Thanks for putting it together!

Mathieu Putz, 5mo:
Thanks for saying that!
Moral Uncertainty and Moral Realism Are in Tension

Thanks for this post, it seems really well researched.

As I understand, it sounds like you're saying moral uncertainty implies or requires moral realism to make sense, but since moral uncertainty means "having a vague or unclear understanding of that reality", it's not clear you can justify moral realism from a position of moral uncertainty. And you're saying this tension is problematic for moral realism because it's hard to resolve.

But I'm not sure what makes you say that moral uncertainty implies or requires moral realism? I do think that moral unce... (read more)

Lukas_Gloor, 5mo:

I don't say that moral uncertainty implies or requires moral realism to make sense. Primarily, my post is about how the only pathway to confident moral realism requires moral certainty. (So the post is primarily against confident moral realism, not against moral uncertainty.) I do say that moral uncertainty often comes up in a moral realist context.

Related to that, perhaps the part you're replying to is this part: "Since moral uncertainty often comes up in a moral realist context, I think this causes some problems for the concept." By "problems" (I think that phrasing was potentially misleading), I don't mean that moral uncertainty is altogether unworkable or not useful. I mean only that, if we make explicit that moral uncertainty also includes uncertainty between moral realism vs. moral anti-realism, it potentially changes the way we'd want to deal with our uncertainty (because it changes what we're uncertain about).

A further premise here is that anti-realism doesn't deserve the connotations of the term "nihilism." (I argue for that in previous [https://forum.effectivealtruism.org/posts/6nPnqXCaYsmXCtjTk/why-realists-and-anti-realists-disagree#Points_of_agreement] posts [https://forum.effectivealtruism.org/posts/C2GpA894CfLcTXL2L/against-irreducible-normativity#2__Normative_anti_realism_is_existentially_satisfying__at_least_it_can_be_].) If someone thought anti-realism is the same as nihilism, in the sense of "nothing matters under nihilism and we may as well ignore the possibility, for all practical purposes," then my point wouldn't have any interesting implications. However, if the way things can matter under anti-realism is still relevant for effective altruists, then it makes a difference how much of our "moral uncertainty" expects moral realism vs. how much of it expects anti-realism.

To summarize, the "problem" with moral uncertainty is just that it's not precise enough; it doesn't quite carve reality at its joints. Ideally, we'd want more precise
List of important ways we may be wrong

Just want to note that a project like this seems very good, and I'm interested in helping make something like it happen.

Nathan Young's Shortform

Thanks! Sounds right on both fronts.

Nathan Young's Shortform

Cool idea, I'll have a think about doing this for Hear This Idea. I expect writing the threads ourselves could take less time than setting up a bounty, finding the threads, paying out etc. But a norm of trying to summarise (e.g. 80K) episodes in 10 or so tweets sounds hugely valuable. Maybe they could all use a similar hashtag to find them — something like #EAPodcastRecap or #EAPodcastSummary

Nathan Young, 5mo:

I recommend a thread of them. I rarely see people using hashtags currently. And I probably agree you could/should write them yourselves, but:
- other people might think different things are interesting than you do
We Should Run More EA Contests

Broadly agreed with this, but I'm a bit worried that contests with large prizes can have distortionary effects. That is, they might pull EAs towards using their time in ways which are not altruistically/impartially best. This would happen when an EA switches her marginal time to some contest with a big prize, where she otherwise would have been doing something expected to be more impactful (e.g. because she's a better fit for it), but which doesn't stand to win her as much money or acclaim.

For instance, I think the creative writing prize was a really great... (read more)

Chris Leong, 5mo:

I think the forum prize should have focused on EAs not at orgs, because EAs at orgs are already sufficiently incentivised to do good work, and when the prizes are dominated by people already at orgs this dilutes the ability of the forum prizes to highlight and encourage new talent.
Hear This Idea - Mike on Nuclear Winter and ensuring mid to long term food security

Thanks for speaking with us Mike!

Seconding the suggestion to check out the ALLFED jobs portal if anyone's interested in getting involved with the projects Mike talks about in the episode.

EA Forum feature suggestion thread

Seconded! I would maybe use the site 20% more if it had a good dark mode.

EA Forum feature suggestion thread

Good shout — iirc adding Coil is easy enough to be worth doing (it's just a <meta> tag). But I doubt it'll raise much money!

Two Podcast Opportunities

Amazing, thanks so much!

Two Podcast Opportunities

Great, thanks for letting me know!
