pretraining data safety; responsible AI/ML
With a long timeline and less than 10% probability: my hot take is that these are co-dependent - prioritizing only extinction is not feasible. Additionally, does a scenario where only one human survives while all others die count as non-extinction? What if only one group of humans survives? How would that group be selected? It could quickly and dangerously slide into fascism. It would likely benefit only the group currently facing little to no suffering risk, which unfortunately correlates with the wealthiest group. When we "dimension-reduce" the human race to a single point, we ignore the individuals. To me, this goes against the intuition of altruism.
I fundamentally disagree with the winner-take-all style of cause prioritization - instead, allocate resources to each area; unfortunately, there may be multiple battles to fight.
To analyze people's responses, I can see this question being adjusted to surface prior assumptions: 1. How satisfied are you with how the world is currently doing? What are the biggest gaps between it and your ideal world? 2. What is your assessment of the timeline, and of the current probability of extinction risk, and from what causes?
An example of large-scale deepfakes that is pretty messed up: https://www.pbs.org/newshour/world/in-south-korea-rise-of-explicit-deepfakes-wrecks-womens-lives-and-deepens-gender-divide
Another example off the top of my head is fake LinkedIn profiles.
Not sure how to address the question otherwise; one thought is that there might be deepfakes we cannot yet detect or identify as deepfakes.
It also worries me, in the context of marginal contributions, when some people (not all) start to treat "marginal" as a sentiment rather than an actual measurement (getting to know the area, its actual resources, the amount of spending, and what the real needs/problems may be) when reasoning about cause prioritization and donations. Sentiment toward a cause area does not always mean the cause area received the attention/resources it was actually asking for.
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
I find it surprising when people (people in general, not EA-specific) do not seem to understand the moral perspective of "do no harm to other people". This confuses me, and I wonder what aspects/experiences contribute to some people understanding this while others do not.
Great initiative; thanks! Would "This is a Draft Amnesty Week draft." also apply to quick notes?
Thanks for the thoughtful and organized feedback; I have to say I shared very similar views after the intro to EA course - it seemed to me back then that EA has a lot of subgroups/views. I appreciate the write-up, which probably speaks for many more people!