finnhambly
26 karma · Joined Dec 2018
finnhambly.com/

Comments: 6
Topic Contributions: 2

Okay great, that makes sense to me. Thank you very much for the clarification!

I am unsure what you mean by AGI. You say:

For purposes of our definitions, we’ll count it as AGI being developed if there are AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in such a world [where cheap AI systems are fully substitutable for human labor].

and:

causing human extinction or drastically limiting humanity’s future potential may not show up as rapid GDP growth, but automatically counts for the purposes of this definition.

If someone used AI capabilities to create a synthetic virus (which they wouldn't have been able to do in the counterfactual world without that AI-generated capability) and thereby caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?

My instinct is that this should not count as AGI — since it is the result of just a narrow AI plus a human. However, the caveat implies that it would count, because an AI system would have powered human extinction.

I get the impression you want to count 'comprehensive AI systems' as AGI if the system is able to act ~autonomously from humans.[1] Is that correct?

  1. ^

    Putting it another way:
    If a company employs both humans and lots of AI technologies, and it brings about a "profound transformation (in economic terms or otherwise)", I assume the combined capability of the company's AI elements should need to be as general as a single AGI would be for it to count.

    If it does not sum to that level of generality, but is still used to bring about a transformation, I don't think it should resolve 'AGI developed' positively. However, as currently written, it looks like it would.

Thanks for this!

For others, as well as fixing/removing the misplaced percent symbol, you also need to do the following:

  1. In a new tab, type or paste about:config in the address bar and press Enter/Return. Click the button accepting the risk.
  2. In the search box above the list, type or paste userprof and pause while the list is filtered. If you do not see anything on the list, please ignore the rest of these instructions. You can close this tab now.
  3. Double-click the toolkit.legacyUserProfileCustomizations.stylesheets preference to switch the value from false to true.
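For context, that preference only tells Firefox to load user stylesheets (userChrome.css for the browser UI, userContent.css for pages) from a chrome folder inside your profile directory — you still need to create that folder and file yourself. A minimal sketch, assuming a Linux-style profile path (the directory name here is a placeholder; find your real one via about:support under "Profile Folder"):

```shell
# Placeholder profile path -- substitute the one shown in about:support.
PROFILE_DIR="${PROFILE_DIR:-$HOME/.mozilla/firefox/example.default-release}"

# Firefox looks for user stylesheets in a "chrome" subfolder of the profile.
mkdir -p "$PROFILE_DIR/chrome"

# Create an empty userContent.css; paste the stylesheet rules into it.
touch "$PROFILE_DIR/chrome/userContent.css"
```

After restarting Firefox, any rules in that file will be applied to web pages.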

I can see this getting a bit annoying/confusing, as it also blocks out commenters' usernames, but you can always hover over the empty space and read the username from the link preview at the bottom-left of the window.

I enjoyed reading these updated thoughts!

A benefit of some of the agency discourse, as I tried to articulate in this post, is that it can foster a culture of encouragement. I think EA is pretty cool for giving people the mindset to actually go out and try to improve things; tall poppy syndrome and 'cheems mindsets' are still very much the norm in many places!

I think a norm of encouragement is distinct from instilling an individualistic sense of agency in everyone, though. The former should reduce the chances of Goodharting, since you'll ideally be working out your goals iteratively with likeminded people (mitigating the risk of single-mindedly pursuing an underspecified goal). It's great to have conviction — but having conviction in everything you do by default could stop you from finding the things you really believe in.

I would happily vouch for the value of these events, as an attendee of the York group. They're fun, engaging, and definitely give an opportunity for members to dive into EA concepts.

It's just fun to hang out with a group of engaged EAs in nice cafés regularly (with interesting topics to talk about)!