QubitSwarm99

287 karma · Joined Jul 2021 · Working (0-5 years)

Participation: 2

  • Attended more than three meetings with a local EA group
  • Attended an EAGx conference

Comments: 53

I should have chosen a clearer phrase than "not through formal channels". What I meant was that much of my forecasting work and experience came about through my participation on Metaculus, which is "outside" of academia; this participation did not manifest as forecasting publications or assistantships (as it would through a Master's or PhD program), but rather as my track record (the CV links to my Metaculus profile) and my GitHub repositories. There was also a forecasting tournament I won, which is also linked in the CV.

I agree with this.

"Number of publications" and "Impact per publication" are separate axes, and leaving the latter out produces a poorer landscape of X-risk research. 

Glad to hear that the links were useful!

Going by Holden's timeline sounds good, and I agree that AGI > HLMI in terms of recognizability. I hope the quiz goes well once it is officially released!

I am not the best person to ask this question (@so8res, @katja_grace, or @holdenkarnofsky would be better suited), but I will try to offer some points.

  • These links should be quite useful: 
  • I don't know of any recent AI expert surveys on transformative AI timelines specifically, but I have pointed you to very recent ones on human-level machine intelligence and AGI.
  • For comprehensiveness, I think you should cover both transformative AI (AI that precipitates a change of equal or greater magnitude than the agricultural or industrial revolution) and HLMI. I have yet to read Holden's AI Timelines post, but I believe it's likely a good resource to defer to, given Holden's epistemic track record, so I think you should use this for the transformative AI timelines. For the HLMI timelines, I think you should use the 2022 expert survey (the first link). Additionally, if you trust that a techno-optimist-leaning crowd's forecasting accuracy generalizes to AI timelines, then it might be worth checking out Metaculus as well.
  • Lastly, I think it might be useful to ask, under the existential risk section, what percentage of ML/AI researchers think AI safety research should be prioritized (from the survey: "The median respondent believes society should prioritize AI safety research “more” than it is currently prioritized. Respondents chose from “much less,” “less,” “about the same,” “more,” and “much more.” 69% of respondents chose “more” or “much more,” up from 49% in 2016.").

I completed the three quizzes and enjoyed them thoroughly.

Even without any further improvements, I think these quizzes would be quite effective. It would be nice to have a completion counter (e.g., "X/Total questions completed") at the bottom of the quizzes, but I don't know if this is possible on quizmanity.

Got through about 25% of the essay and I can confirm it's pretty good so far. 

Strong upvote for introducing me to the authors and the site. Thank you for posting. 

Every time I think about how I can do the most good, I am burdened by questions roughly like the following:

  • How should value be measured? 
  • How should well-being be measured? 
  • How might my actions engender unintended, harmful outcomes? 
  • How can my impact be measured? 

I do not have good answers to these questions, but I would bet on some actions being positively impactful on net.

For example:

  • Promoting vegetarianism or veganism
  • Providing medicine and resources to those in poverty
  • Building robust political institutions in developing countries
  • Promoting policy to monitor developments in AI

W.r.t. the single most positively impactful action, my intuition is that it would take the form of safeguarding humanity's future or protecting life on Earth.

Some possible actions that might fit this bill:  

  1. Work that robustly illustrates the theoretical limits of the dangers from, and capabilities of, superintelligence.
  2. Work that accurately encodes human values digitally.
  3. A global surveillance system for human and machine threats.
  4. A system that protects Earth from solar weather and NEOs.

The problem here is that some of these actions might spawn harm, particularly (2) and (3). 

Thoughts and Notes: October 5th 0012022 (1) 

As per my last shortform, over the next couple of weeks I will be moving my brief profiles of different catastrophes from my draft existential risk frameworks post into shortform posts, to make the existential risk frameworks post lighter and simpler.

In my last shortform, I included the profile for the use of nuclear weapons; today I will include the profile for climate change.

Climate change 

Does anyone have a good list of books related to existential and global catastrophic risk? This doesn't have to include just books on X-risk / GCRs in general; it can also include books on individual catastrophic events, such as nuclear war.

Here is my current resource landscape (these are books that I have personally looked at and can vouch for; the entries came to mind as I wrote them, since I do not have a standing list of GCR / X-risk books at the moment, and I have not read all of them in full):

General

AI Safety 

Nuclear risk

General / space

Biosecurity 

Thoughts and Notes: October 3rd 0012022 (1)

I have been working on a post that introduces a framework for existential risks that I have not seen covered on either LW or the EAF, but I think I've impeded my progress by setting out to do more than I originally intended.

Rather than simply introduce the framework and compare it to Bostrom's 2013 framework and the Wikipedia page on GCRs, I've tried to aggregate all global and existential catastrophes I could find under the "new" framework.

Creating an anthology of global and existential catastrophes is something I would like to complete at some point, but doing so in the post I've written would be overkill and would not be in line with the goal of "making the introduction of this little-known framework brief and simple".

To make my life easier, I am going to remove the aggregated-catastrophes section of my post. Instead, I will work incrementally (and somewhat informally) on accumulating links and notes for, and thinking about, each global and/or existential catastrophe through shortform posts.

Each shortform post in this vein will pertain to a single type of catastrophe. Of course, I may post other shortforms in between, but my goal generally is to cover the different global and existential risks one by one via shortform. 

As was the case in my original post, I include DALL-E 2 art with each catastrophe, and the loose structure for each catastrophe is Risk, Links, Forecasts.

Here is the first catastrophe in the list. Again, note that I am not aiming for comprehensiveness here, but rather am trying to get the ball rolling for the more extensive review of catastrophic and existential risks that I plan to complete at a later date. The forecasts were observed on October 3rd, 0012022 and represent the community's uniform median forecast.

Use of Nuclear Weapons (Anthropogenic, Current, Preventable)
