
Phib

118 karma · Joined Mar 2023

Bio

Hi - a lil bit of anonymity is intended here so I can feel more free to contribute and engage more closely with this community. Trying to learn quickly by experimenting.

It turns out I'm also rather busy, so I may not have a lot of time generally. Still trying to iterate, tho.

Comments (25)

Answer by Phib · Oct 16, 2023

Hi! I have little time, but I've spoken with someone who was really excited about the potential of:

https://www.ucl.ac.uk/news/2023/may/study-reveals-unique-molecular-machinery-woman-who-cant-feel-pain

https://www.faroutinitiative.com/

  • This seems to be an org pursuing this line of research.

Re: the Existential Risk Persuasion Tournament, one thing to consider with forecasters is that they think a lot about the future; asking them to then imagine a future where a ton of their preconceived predictions may not occur could be a significant model shift. Or something like:

Forecasters are biased toward the status quo because it is easier to predict from. Imagine you had to take everything into account all at once in your predictions: "Will X marry by 2050? Well, there's a 1% chance both parties are dead because of AI…" is absurd.

But I guess forecasters also have a more accurate world model anyway. Still, this felt like something I wanted to write out, since I was trying to explain the forecasters' low x-risk estimates. (Again: status quo bias against extreme changes.)

FWIW, I think this post: https://forum.effectivealtruism.org/posts/J4cLuxvAwnKNQxwxj/how-does-ai-progress-affect-other-ea-cause-areas

is a way better version of what I was trying to get at here, and MacAskill's answer is pretty good.

Answer by Phib · Jun 17, 2023

In my extreme case, I think it looks something like: we've got really promising alignment solutions in the works, and they will arrive in time to actually be implemented.

Or perhaps, in the case where solutions aren't forthcoming, we have some really robust structures (international governments and big AI labs coordinating, or something) to avoid developing AGI.

Nice post, and I appreciate you noticing something that bugged you and posting about it in a pretty constructive manner.

Casual comment on my own post here, but even the x-risk tag symbol is a supervolcano. Imagine we presented, over and over again, symbols that… aren't actually very representative of the field? I guess that's my main argument here.

I think this is a fair point, thanks for making it. I certainly overgeneralize at times here: I believe I've experienced moments that suggest such a schism, but not enough to just label it as such in a public post. Idk!

This is odd - did you copy 'more better's comment?
