Denise_Melchin

Why I am probably not a longtermist

On your second bullet point, what I would add to Carl's and Ben's posts that you link to is that suffering is not the only type of disvalue, or at least "nonvalue" (meaninglessness comes to mind, for example). Framing this in terms of Haidt's moral foundations theory, suffering only addresses the care/harm foundation.

Also, I absolutely value positive experiences! More so for making existing people happy, but also somewhat for creating happy people. I think I just prioritise them a bit less, relative to avoiding misery, than the longtermists around me do.

I will try to respond to the s-risk point elsewhere.

Why I am probably not a longtermist

Thank you everyone for the many responses! I will address one point that came up in multiple comments in this top-level comment, and otherwise respond to individual comments.

Regarding the length of the long-term future: My main concern here is that it seems really hard to reach existential security (i.e. extinction risks falling to smaller and smaller levels), especially given that extinction risks have been rising in recent decades. If we do not reach existential security, the future population is accordingly much smaller and gets less weight in my considerations. I take concerns around extinction risks seriously, but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from 'extinction risks are rising so much, we must prioritise them!' to 'there is lots of value in the long-term future'. The latter is only true if we manage to get rid of those extinction risks.

The line about totalitarianism is not central for me. Oops. Clearly should not have opened the section with a reference to it.

I think that even with totalitarianism, reaching existential security is really hard: the world would need to be permanently locked into a totalitarian state.

I recommend reading this shortform discussion on reaching existential security.

Something that stood out to me both in that discussion and in Toby's EAG Reconnect AMA is how much the belief that we can reach existential security might be based on a higher level of baseline optimism about humanity than I have. (From a comment by Paul Christiano in that discussion: "Stepping back, I think the key object-level questions are something like 'Is there any way to build a civilization that is very stable?' and 'Will people try?' It seems to me you should have a fairly high probability on 'yes' to both questions.")

Why I am probably not a longtermist

Thanks for trying to summarise my views! This is helpful for me to see where I got the communication right and where I did not. I'll edit your summary accordingly where you are off:

  1. You have person-affecting tendencies which make you ~~unconcerned~~ less concerned with reducing extinction risks than longtermists, although you are still concerned about the nearterm impacts and put at least some value on the loss of future generations (which also depends on how long/big we can expect the future to be)
  2. You are suffering-focused [Edit: I would not have previously described my views that way, but I guess it is an accurate enough description]
  3. You don’t think humanity is very good now nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you ~~unenthusiastic~~ want to prioritise ~~about~~ making the future good over making it long or big
  4. You don’t think the future will be long (unless we have totalitarianism) which reduces the scope for doing good by focusing on the future
  5. You’re ~~sceptical~~ clueless whether there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point of trying to affect them beyond this time horizon
UK's new 10-year "National AI Strategy," released today

Wow. I am still reading through this, but I am impressed with the quality of the input the UK government has clearly received and how well they wrote up their considerations and conclusions. Maybe this is normal for UK government strategy documents as a reference class (if so, I was unaware), but it is not something I was expecting.

Does the Forum Prize lead people to write more posts?

> More on that qualitative feedback: While people generally react quite positively to winning the prize, few people have explicitly told us it made them want to write more (even when we asked directly), and our surveys of the Forum's userbase haven’t found many people saying that the chance of winning a prize leads them to invest more time in writing.

Previously I think I responded that it did not motivate me to write more or better, but in retrospect I think that is just false. At least for me, it feels very arrogant to hope that I could win a prize, which encourages me to be dishonest with myself about whether the prize motivates me. I expect this to be similar for other people.

Pandemic prevention in German parties' federal election platforms

Thank you, this was really interesting! I voted already, mostly based on global aid, refugee, animal welfare and climate change considerations, but I would have wanted to look at pandemic considerations too if I had known a quick way to do it at the time. So I expect this to be a very helpful overview for others!

Frank Feedback Given To Very Junior Researchers

I agree with the gist of this comment, but just a brief note that you do not need to do direct work to be "part of the EA community". Donating is good as well. :-)

More EAs should consider “non-EA” jobs

I didn't consider them originally, but then did when I could not get an offer for an EA job.

I do think that in many cases EA org jobs will be better in terms of impact (or, more specifically, that high-impact non-EA org jobs are hard to find), so I do not necessarily consider this approach wrong. Once you fail to get an EA job, you will eventually be forced to consider getting a non-EA job.

Denise_Melchin's Shortform

Thank you for providing more colour on your view, that's useful!

Denise_Melchin's Shortform

I am still confused whether you are talking about full-time work. I'd very much hope a full-time community builder produces more value than a donation of a couple of thousand dollars to the EA Funds.

But if you are not discussing full-time work and instead part-time activities like occasionally hosting dinners on EA-related themes, it makes sense to compare this to 10% donations (though I also don't know why you are valuing 10% donations at ~$2000; the median salary in most rich countries is more than ten times that).

But then it doesn't make sense to compare 10% donations and part-time activities to the very demanding direct work paths (e.g. AI safety research). Donating $2000 (or 10% generally, unless someone is poor) requires way less dedication than fully focussing your career on a top priority path.

Someone who would be dedicated enough to pursue a priority path but is unable to should in many cases be able to donate way more than $2000. Let's say they are "only" in the 90th percentile for ability in a rich country and will draw a 90th percentile salary, which is above £50,000 in the UK (source). If they have the same dedication level as someone in a top priority path they should be able to donate ~£15,000 of that. That is 10 times as much as $2000!
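As a rough check of those numbers (assuming an exchange rate of roughly $1.35 per pound, which is my own assumption rather than anything stated above):

\[
\pounds 15{,}000 \approx 0.3 \times \pounds 50{,}000, \qquad \pounds 15{,}000 \times 1.35\ \tfrac{\$}{\pounds} \approx \$20{,}000 \approx 10 \times \$2000.
\]

So the comparison holds: donating around 30% of a 90th-percentile UK salary comes to roughly ten times the ~$2000 figure.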
