rodeo_flagellum

Working (0-5 years experience)
277 karma · Joined Jul 2021

Participation
2

  • Attended more than three meetings with a local EA group
  • Attended an EAGx conference

Comments
51

Glad to hear that the links were useful!

Keeping to Holden's timeline sounds good, and I agree that AGI > HLMI in terms of recognizability. I hope the quiz goes well once it is officially released!

I am not the best person to ask this question (@so8res, @katja_grace, and @holdenkarnofsky would be better suited), but I will try to offer some points.

  • These links should be quite useful: 
  • I don't know of any recent AI expert surveys on transformative AI timelines specifically, but I have pointed you to very recent ones on human-level machine intelligence and AGI. 
  • For comprehensiveness, I think you should cover both transformative AI (AI that precipitates a change of equal or greater magnitude to the agricultural or industrial revolution) and HLMI. I have yet to read Holden's AI Timelines post, but believe it's likely a good resource to defer to, given Holden's epistemic track record, so I think you should use it for the transformative AI timelines. For the HLMI timelines, I think you should use the 2022 expert survey (the first link). Additionally, if you trust that a techno-optimist-leaning crowd's forecasting accuracy generalizes to AI timelines, it might be worth checking out Metaculus as well.
  • Lastly, I think it might be useful to ask under the existential risk section what percentage of ML/AI researchers think AI safety research should be prioritized (from the survey: "The median respondent believes society should prioritize AI safety research “more” than it is currently prioritized. Respondents chose from “much less,” “less,” “about the same,” “more,” and “much more.” 69% of respondents chose “more” or “much more,” up from 49% in 2016.")

I completed the three quizzes and enjoyed them thoroughly. 

Without any further improvements, I think these quizzes would still be quite effective. It would be nice to have a completion counter (e.g., X/Total questions completed) at the bottom of the quizzes, but I don't know if this is possible on quizmanity. 

Got through about 25% of the essay and I can confirm it's pretty good so far. 

Strong upvote for introducing me to the authors and the site. Thank you for posting. 

Every time I think about how I can do the most good, I am burdened by questions roughly like

  • How should value be measured? 
  • How should well-being be measured? 
  • How might my actions engender unintended, harmful outcomes? 
  • How can my impact be measured? 

I do not have good answers to these questions, but I would bet on some actions being positively impactful on net.

For example:

  • Promoting vegetarianism or veganism
  • Providing medicine and resources to those in poverty
  • Building robust political institutions in developing countries
  • Promoting policy to monitor developments in AI

W.r.t. the action that is most positively impactful, my intuition is that it would take the form of safeguarding humanity's future or protecting life on Earth. 

Some possible actions that might fit this bill:  

  1. Work that robustly illustrates the theoretical limits of the dangers from and capabilities of superintelligence.
  2. Work that accurately encodes human values digitally.
  3. A global surveillance system for human and machine threats.
  4. A system that protects Earth from solar weather and NEOs.

The problem here is that some of these actions might spawn harm, particularly (2) and (3). 

Thoughts and Notes: October 5th 0012022 (1) 

As per my last shortform, over the next couple of weeks I will be moving my brief profiles for different catastrophes from my draft existential risk frameworks post into shortform posts, to make the existential risk frameworks post lighter and simpler. 

In my last shortform, I included the profile for the use of nuclear weapons and today I will include the profile for climate change. 

Climate change 

Answer by rodeo_flagellum · Oct 05, 2022

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to just include books on X-risk / GCRs in general, but can also include books on individual catastrophic events, such as nuclear war. 

Here is my current resource landscape (these are books that I have personally looked at and can vouch for; the entries came to my mind as I wrote them - I do not have a list of GCR / X-risk books at the moment; I have not read some of them in full): 

General:

AI Safety 

Nuclear risk

General / space

Biosecurity 

Thoughts and Notes: October 3rd 0012022 (1)

I have been working on a post which introduces a framework for existential risks that I have not seen covered on either LW or the EAF, but I think I've impeded my progress by setting out to do more than I originally intended. 

Rather than simply introduce the framework and compare it to Bostrom's 2013 framework and the Wikipedia page on GCRs, I've tried to aggregate all global and existential catastrophes I could find under the "new" framework. 

Creating an anthology of global and existential catastrophes is something I would like to complete at some point, but doing so in the post I've written would be overkill and would not be in line with the goal of "making the introduction of this little-known framework brief and simple". 

To make my life easier, I am going to remove the aggregated catastrophes section of my post. I will work incrementally (and somewhat informally) on accumulating links and notes for, and thinking about, each global and/or existential catastrophe through shortform posts. 

Each shortform post in this vein will pertain to a single type of catastrophe. Of course, I may post other shortforms in between, but my goal generally is to cover the different global and existential risks one by one via shortform. 

As was the case in my original post, I include DALLE-2 art with each catastrophe, and the loose structure for each catastrophe is Risk, Links, Forecasts. 

Here is the first catastrophe in the list. Again, note that I am not aiming for comprehensiveness here, but rather am trying to get the ball rolling for a more extensive review of catastrophic and existential risks that I plan to complete at a later date. The forecasts were observed on October 3rd 0012022 and represent the community's uniform median forecast. 

Use of Nuclear Weapons (Anthropogenic, Current, Preventable) 

While I have much less experience in this domain, i.e. EA outreach, than you, I too fall on the side of the debate that holds the amount spent is justified, or at least not negative in value. Even if those who've learned about EA or contributed to it in some way don't identify with EA completely, it seems that in the majority of instances some collective benefit was had, whether from these people's skepticism, feedback, and input on the EA movement and on doing good, or from the learning and resources they tapped into and benefited from by being exposed to EA. 

In my experience as someone belonging to the WEIRD demographic, males in heterosexual relationships provide less domestic or child support, on average, than their spouses, where by "less" I mean both lower frequency and lower quality in terms of the attention and emotional support provided. Males seem entirely capable of learning such skills, but there does seem to be some discrepancy in the amount of support actually provided. I would be convinced otherwise were someone to show me a meta-analysis or two of parental care behaviors in heterosexual relationships that found, generally speaking, that males and females provide analogous levels of care. In my demographic, though, this does not seem to be the case. 
