MarcelE

11 karma · Joined Feb 2022

Bio

PhD Student.

Comments (5)

Because if he does not choose the boat himself, he may not use it.
In other words: the man might not need a boat but rather a fishing rod, or some other tool not even meant for fishing, and he knows best which tool is most worth buying.

I'd be interested to hear the arguments/the best case you've heard in your conversations for why the AI security folks are wrong and AGI is not, in principle, such a risk. I am looking for the best case against AGI x-risk, since many professional AI researchers seem to hold this view, mostly without writing down their reasons, which might be really relevant to the discussion.

I saw these articles [1-2]/tweet [3] about a researcher claiming that China's population is significantly lower than what is stated in the official China/UN sources; he estimates 1.28 billion instead of 1.41 billion.
I am curious whether anyone knows if there is some truth to that claim, and whether the UN takes the official national data at face value or makes independent estimates of some kind.
[1] https://www.project-syndicate.org/commentary/china-2020-census-inflates-population-figures-downplays-demographic-challenge-by-yi-fuxian-2021-08 [Paywall]
[2] https://www.reuters.com/world/china/researcher-questions-chinas-population-data-says-it-may-be-lower-2021-12-03/
[3] https://twitter.com/fuxianyi/status/1546716386290008064

Thanks for this survey. I don't think we should decide whether to use "Longtermism" or "Zukunftsschutz" based on this survey alone. To get a deeper understanding of what could work, a more detailed survey would be helpful, and I would suggest some changes to the questions:
(1) The sample size could be increased to allow for a representative sample and for analysing how different groups (such as academics, PhDs, age groups, university discipline) evaluate the concept and the terms, and whether they have heard of it before. Maybe the survey could focus on academics.
(2a)"Future generations" in the first questions may be more associated with the general concept of sustainability where this phrase is used for decades as well although it has a more short-term vibe (e.g.  2-4 generations not 100s). For most people familiar with the concept of sustainability to protect future generations, this would not be perceived as a concept differentiated from that in my estimation. Vorschlag "X" ist die Einstellung, dass die langfristige Entwicklung und das Überleben der Menschheit stärker priorisiert werden soll." (low certainty regarding this framing of the question)
(2b) The example of pandemic preparedness in the 2nd question could be misleading for people who hear about the concept of longtermism for the first time. Probably most people would associate it with Covid from a near-term perspective (e.g. avoiding future lockdowns, less economic damage) and not think of GCBRs. I have no clear solution on how to better phrase the question, but it should at least touch on the concept of x-risk or, positively framed, a thriving future of humanity. Maybe one could say: "Clara, a committed X, advocates investing significantly more money in the prevention of catastrophes that threaten the survival of humanity (for example pandemics, asteroid impacts, malicious and powerful artificial intelligences, etc.)" (medium certainty regarding this framing of the question, low certainty for the AI example.)
A positive framing is much harder to find.

(3) In addition to a survey among the general public or academics in general, you could simply run a survey among German EAs who are aware of the longtermism framework and allow for qualitative comments on the different words alongside a rating on a scale.

[This comment is no longer endorsed by its author]

Thanks for the great post. I appreciate the idea of an EA university or a network of institutes like the Max Planck Society; both aim at rigorous EA research. I would like to add another, similar idea (I don't know whether it has already been discussed and dismissed) which would allow academics everywhere to participate in EA-related research:

Fund an open-access EA journal [specifically focused on EA causes and cause prioritisation] without major hurdles for scientists to publish, with an up-to-date peer-review process, e.g. paying referees a reasonable fee, allowing for comments after publication, etc. (I am not an expert in establishing journals or in how a peer-review process should be optimised, so take this idea with a grain of salt). This would allow academics around the world to participate in high-quality research on classic EA topics such as longtermism, AI safety, biosecurity, global health and development, animal welfare, and cause prioritisation. It should be a serious journal with a different focus but a high quality level, so that graduates can use the published papers as milestones in their careers.

Maybe such an official journal could be (or at least be perceived as?) more rigorous than a forum with a comment section?

[This comment is no longer endorsed by its author]