Nathan Young

Sales/Product Management @ Goodheart Labs (Software Development)
12808 karma · Joined May 2019 · Working (0-5 years) · London, UK

Bio

Participation
4

Create prediction markets and forecasting questions on AI risk and biorisk. I have been awarded an FTX Future Fund grant and I work part-time at a prediction market.

Use my connections on Twitter to raise the profile of these predictions and increase the chance that decision-makers discuss these issues.

I am uncertain about how to run an org for my FTXFF grant and whether prediction markets will be a good indicator in practice.

How others can help me

Talking to those in forecasting to improve my forecasting question generation tool

Writing forecasting questions on EA topics.

Meeting EAs I become lifelong friends with.

How I can help others

Connecting them to other EAs.

Writing forecasting questions on Metaculus.

Talking to them about forecasting.

Sequences
1

Moving In Step With One Another

Comments
1905

Topic Contributions
19

Thanks for your work, Claire. I am really grateful.

I feel frustrated that many people learned a new concept, "longtermism", which many misunderstand and associate with EA, but now even many EAs don't think this concept is that high a priority. Feels like an error on our part, one that could have been predicted beforehand.

I am grateful for all the hard work that went into popularising the concept and I think weak longtermism is correct. But I dunno, seems like an oops moment that it would be helpful for someone to acknowledge.

Hey Josh,

Is there a reason you haven't copied the whole post? I was surprised not to be able to read it here. 

I guess I think a "high feedback" culture needs to be:

  • Gracious - As @Kirsten says, I find gracious feedback usually goes a lot better, even if someone has behaved really badly. 
  • Consensual as a norm, unless it's about "really bad" stuff - As a community, I think we should want to be opting into feedback rather than assuming everyone wants it. People assume I want feedback a lot and frankly, I do, but some of it can be brutal. And I have pretty thick skin. I have been sad for days after EA feedback. I wouldn't want other people to be treated like this without opting into it.
  • A high gratefulness culture - Sometimes I think in EA we are all waiting to correct one another but forget how large correction can loom in the group psyche. I know that we are all grateful for each other's efforts, but sometimes it doesn't feel like that. It would be easier to feel safe if people were more thankful to one another too. Several people have told me they dislike posting on the forum, and I guess this is part of it.

In particular, I sense most EAs should work on these things, rather than giving more feedback, unless the person has asked for it or is doing more than, say, $100k of harm.

Also, as a side note, sometimes a desire for feedback can be unhealthy. It can be a desire to give feedback to others, or to not do the work of figuring out what is right and wrong - "if everyone can give feedback and they aren't, my behaviour must be fine". Sometimes I ask for feedback out of a desire to hurt myself. I think in general feedback is good, but at times it can become pathological. I sense this isn't the case for most people. 

I imagine that it has cost, and does cost, 80k to push for AI safety stuff, even back when it was weird; now it seems mainstream.

Like, I think an interesting metric is when people say something that shifts some kind of group vibe. And sure, catastrophic risk folks are into it, but many EAs aren't and would have liked a more holistic approach (I guess). 

So it seems a notable tradeoff.

Confusion

I get why I and others give to GiveWell rather than catastrophic risk - sometimes it's good to know your "impact account" is positive even if all the catastrophic risk work was useless. 

But why do people not give to animal welfare in this case? Seems higher impact?

And if it's just that we prefer humans to animals, that seems like something we should be clear with ourselves about.

Also I don't know if I like my mental model of an "impact account". Seems like my giving has maybe once again become about me rather than impact. 

ht @Aaron Bergman for surfacing this

I think that's part of the problem.

Who is loyal to the Chinese people? 

And I don't think I'm good here. I think I try to be loyal to them, but I don't know what the Chinese people want, and I think if I try to guess I'll get it wrong in some key areas.

I'm reminded of when GiveWell (I think?) asked recipients how they would trade money for children's lives, and they really fucking loved saving children's lives. If we are doing things for others' benefit, we should take their weightings into account.

There is more to get into here but two main things:

  • I guess some EAs, including some who I think do really good work, do literally believe in literal gods
  • I don't actually think this is that predictive. I know some theists who are great at thinking carefully and many atheists who aren't. I reckon I could distinguish the two in a discussion better than by rejecting the former out of hand.

And since posting this I've said this to several people, and one was like "yeah no, I would downrate religious people too".

I think a poll on this could make pretty uncomfortable reading. If you don't think so, run it and see. 

To put it another way, would EAs discriminate against people who believe in astrology? I imagine more than the base rate. Part of me agrees with that; part of me thinks it's norm-harming to do. But I don't think this one is "less than the population".

Also, I guess that current proposals would benefit OpenAI, Google DeepMind and Anthropic. If large training runs need to be registered, they have more money and infrastructure, and smaller orgs would need to build that if they wanted to compete. It just probably would benefit them.

As you say, I think it's wrong to say this is their primary aim (what other CEOs would say their products might kill us all in order to achieve regulatory capture?), but there is real benefit.
