Happy to chat about AI policy, metaethics, movement building, and many other things. Schedule a call: https://calendly.com/konstantinpilz
Sounds great! However, these times are hard to make from Europe. Are the talks recorded?
Thanks, everyone!
I've selected these words as the most promising and am currently running a survey of the general public to evaluate which ones sound best to people unfamiliar with longtermism.
I agree. I think it's interesting that the field of "Zukunftsethik" (ethics of the future) exists, but I wouldn't use the term as a name for a movement.
I think these are all valuable, but not much more valuable in a world with short timelines. What I wanted to express is that I'm not sure how we should change our approach in such a world. So I think these ideas are net positive, but I'm uncertain whether they amount to much of an update.
I agree that AGI timelines may be very short; even Holden Karnofsky assigns a 10% probability to AGI in the next 15 years. I think everyone should at least consider what they would do if they knew for certain that AGI was coming in the next 15 years, and then do at least 10% of that (if not more, since in a world where AGI comes soon you have a lot more impact, as there are fewer EAs around). However, I don't yet see clearly what to do about it. Focusing outreach on groups that are more likely to start working on AI safety, such as circles of ML researchers, makes sense. Encouraging EAs currently working in other areas to move into alignment or AI governance also makes sense. Curious what others think.
Thanks for the explanation!
I agree that it is great to do something for people that they will be thankful for later. But newly created people seem just as good for this, and if you care a lot about preferences, you could create them in such a way that they will be very thankful and their very creation is fulfilling for them. So I still don't see the value of resurrection versus new people. I think my main problem with preference utilitarianism is that it can't say whether creating preferences is good or bad, since both answers have unintuitive consequences.
Existing German texts on this term are very negative, and it's always apparent that it is a foreign idea. I want to make sure I'm not overlooking anything.
It may well be, though, that we end up going with "Longtermism" after all.
Ideas so far from the EA Germany Slack (anonymous)
Interesting, thanks! Though I don't see why you'd only resurrect humans, since animals seem to have a preference to survive as well. Anyway, I think preferences are often misleading and not a good proxy for what would really be fulfilling. It also seems odd to me to say that a preference remains even when the person no longer exists. Do you believe in souls, or how do you make that work? (Sorry for the naivety; happy about any recommendations on the topic.)
Seems useful, especially for critical posts. I may want to upvote them to show my appreciation and get more people to read them, while still disagreeing with, e.g., the conclusion they draw.