All of finnhambly's Comments + Replies

I don't think this disclosure shows that much awareness, as the notes seem to dismiss it as a problem, unless I'm misunderstanding what Holden means by "don’t assume things about my takes on specific AI labs due to this". It sounds like he's claiming he's able to assess these things neutrally, which is quite a big claim!

Sorry, I didn't mean to dismiss the importance of the conflict of interest or say it isn't affecting my views.

I've sometimes seen people reason along the lines of "Since Holden is married to Daniela, this must mean he agrees with Anthropic on specific issue X," or "Since Holden is married to Daniela, this must mean that he endorses taking a job at Anthropic in specific case Y." I think this kind of reasoning is unreliable and has been incorrect in more than one specific case. That's what I intended to push back against.

Why is this getting downvoted? This comment seems plainly helpful; it's an important thing to highlight.

I can see why some people think the publicity effects of the letter might be valuable, but — when it comes to the 6-month pause proposal itself — I think Matthew's reasoning is right.

I've been surprised by how many EA folk are in favour of the actual proposal, especially given that AI governance literature often focuses on the risks of fuelling races. I'd be keen to read people's counterpoints to Matthew's thread(s); I don't think many expect GPT-5 will pose an existential threat, and I'm not yet convinced that 'practice' is a good enough reason to pursue a bad policy.

I don't think gossip ought to be that public or legible. 

Firstly, I don't think it would work for achieving your goals; I would still hesitate about having my opinions uploaded without feeling very confident in them (rumours are powerful weapons and I wouldn't want to start one if I was uncertain).

Secondly, I don't think it's worth the costs of destroying trust. A whole bunch more people will distance themselves from EA if they know their public reputation is on the line with every interaction. (I also agree with Lawrence on the Slack leaks, FWIW).

I s... (read more)

9
Nathan Young
1y
1. I disagree. I upload 60% opinions all the time. I would about gossip if I thought I could control it.
2. I think we could build systems to handle this. I think there is something whistleblower-marketty.
3. I think he would have as FTX got going. Also he might in 2018.

I think the main problem being faced again and again is that internal reporting lacks teeth. 

I think public reporting is an inadequate alternative. It's a big demand to ask people to become public whistleblowers, especially since most things worth reporting aren't black and white. It's hard to speak out publicly about things you're not certain of (e.g. because of self-doubt, wondering whether it's even worth bothering, the reputation you'd create for yourself, etc.).

Additionally, the subsequent discourse seems to put additional burden on those spe... (read more)

5
Nathan Young
1y
I think that the wiki could solve this: having public records that someone hard-nosed (like me) could write on others' behalf. I know that my messing with prediction markets around this hasn't always gone well (sorry), but I think there is something good in that space too. I think Sam's "chance of fraud" would have been higher than anyone else's.

Okay great, that makes sense to me. Thank you very much for the clarification!

I am unsure what you mean by AGI. You say:

> For purposes of our definitions, we’ll count it as AGI being developed if there are AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in such a world [where cheap AI systems are fully substitutable for human labor].

and:

> causing human extinction or drastically limiting humanity’s future potential may not show up as rapid GDP growth, but automatically counts for the purposes of this definition.

If someone uses AI capabilities to create a synthetic virus (wh... (read more)

2
Nick_Beckstead
1y
Thanks, I think this is subtle and I don't think I expressed this perfectly.

> If someone uses AI capabilities to create a synthetic virus (which they wouldn't have been able to do in the counterfactual world without that AI-generated capability) and caused the extinction or drastic curtailment of humanity, would that count as "AGI being developed"?

No, I would not count this. I'd probably count it if the AI a) somehow formed the intention to do this and then developed the pathogen and released it without human direction, but b) couldn't yet produce as much economic output as full automation of labor.

Thanks for this!

For others, as well as fixing/removing the misplaced percent symbol, you also need to do the following:

  1. In a new tab, type or paste about:config in the address bar and press Enter/Return. Click the button accepting the risk.
  2. In the search box above the list, type or paste userprof and pause while the list is filtered. If you do not see anything on the list, please ignore the rest of these instructions. You can close this tab now.
  3. Double-click the toolkit.legacyUserProfileCustomizations.stylesheets preference to switch the value from false to true.
... (read more)
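For anyone who would rather not click through about:config, the same preference can (as far as I'm aware) be set persistently via a `user.js` file in the Firefox profile directory, which Firefox reads at startup. A minimal sketch; the profile folder location varies by OS:

```js
// user.js — place in the Firefox profile directory (see about:profiles).
// Enables loading of userChrome.css / userContent.css from the profile's
// "chrome" subfolder — the same pref toggled in step 3 above.
user_pref("toolkit.legacyUserProfileCustomizations.stylesheets", true);
```

Note that `user.js` re-applies this value on every startup, so later changes made in about:config will be overwritten unless the file is edited or removed.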

I enjoyed reading these updated thoughts!

A benefit of some of the agency discourse, as I tried to articulate in this post, is that it can foster a culture of encouragement. I think EA is pretty cool for giving people the mindset to actually go out and try to improve things; tall poppy syndrome and 'cheems mindsets' are still very much the norm in many places!

I think a norm of encouragement is distinct from instilling an individualistic sense of agency in everyone, though. The former should reduce the chances of Goodharting, since you'll ideally be wo... (read more)

I would happily vouch for the value of these events, as an attendee of the York group. They're fun, engaging, and definitely give an opportunity for members to dive into EA concepts.

It's just fun to hang out with a group of engaged EAs in nice cafés regularly (with interesting topics to talk about)!