All of Asa Cooper Stickland's Comments + Replies

Can you expand a bit on what you mean by these ideas applying better to near-termism?

E.g. out of 'hey, it seems like machine learning systems are getting scarily powerful, maybe we should do something to make sure they're aligned with humans' vs. 'you might think it's most cost-effective to help extremely poor people or animals, but actually if you account for the far future it looks like existential risks are more important, and AI is one of the most credible existential risks, so maybe you should work on that', the first one seems like a more scalable/legible message. Obviously I've strawmanned the second one a bit to make a point, but I'm curious what your perspective is!

2
Chris Leong
1y
Maybe I should have said global health and development, rather than near-termism.

Is the romantic relationship that big a deal? They were known to be friends and colleagues, both were known to be involved with EA and the FTX Future Fund, and I thought it was basically common knowledge that Alameda was deeply connected with FTX, as you show with those links. It just seems kind of obvious, given that FTX was composed of former Alameda employees and they shared an office space or something like that.

Romantic love is a lot more intense than mere friendship! Makes conflicts of interest way more likely.

My $0.02: (almost) the entire financial and crypto world, including many prominent VC funds that invested in FTX directly, seems to have been blindsided by the FTX blowup. So I'm less concerned about the ability to foresee that. However, the 2018 dispute at Alameda seems like good reason to be skeptical, and I'm very curious what was known by prominent EA figures, what steps they took to make it right, and whether SBF was confronted about it before prominent people joined the Future Fund etc.

+1, I think people are applying too much hindsight here.* The main counter-consideration: to the degree that EAs had info that VCs didn't have, it should've made us do better.

*It's still important to analyze what went wrong and learn from it.

It would be awesome for the names of senior people who knew to be made public, plus the exact nature of what they were told and their response or lack thereof.

5
Ozzie Gooen
1y
I think this could be a nice-to-have, but really, I think it's too much to ask that for every senior EA we get a long list of exactly what they knew about SBF. This would probably be a massive pain, and much of the key information will be confidential (for example, informants who want to remain anonymous). My guess is that there were a bunch of flags that were more apparent than nbouscal's stories. I do think we should have really useful summaries of the key results. If there were a few people who were complicit or highly negligent, then that should be reported and appropriate actions taken.

"Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers." I'm kind of skeptical of this.

Outside of Giada Pistilli and Talia Ringer, I don't think these tweets would appear on the typical ML researcher's timeline; they seem closer to niche rationality/EA shitposting.

Whether the typical ML person would think alignment/AI x-risk is really dumb is a different question, and I don't really know the answer to that one!

3
Asa Cooper Stickland
2y
Although to be clear, it's still nice to have a bunch of different critical perspectives! This post exposed me to some people I didn't know of.

Would it be possible to get a job tutoring high school students (or younger students)?

Maybe you could reach out to local EA orgs and see if they have any odd jobs they could pay you for?

Also if it's at all financially possible for you, I would recommend self-study in whatever you're interested in (e.g. programming/math/social science blah blah blah) with the hope of getting a job offering more career capital later on, rather than stressing out too much about getting a great job right now.

Well, AI Safety is strongly recommended by 80k, gets a lot of funding, and is seen as prestigious/important by people (the last one is just in my experience). And the funding and attention given to longtermism is increasing. So I think it's fair to criticize these aspects if you disagree with them, although I guess charitable criticism would note that global poverty etc. got a lot more attention in the beginning and is still well funded and well regarded within EA.

I'm pretty skeptical about arguments from optics, unless you're doing marketing for a big organization or whatever. I just think it's really valuable to have a norm of telling people your true beliefs rather than some different version of your beliefs designed to appeal to the person you're speaking to. This way people get a more accurate idea of what a typical EA person thinks if they talk to them, and you're likely better able to defend your own beliefs vs the optics-based ones if challenged. (The argument about there being so much funding in longtermism... (read more)

EDIT: I made this comment assuming the comment I'm replying to is a critique of longtermism, but I'm no longer convinced that's the correct reading 😅 Here's the response anyway:

Well, it's not so much that longtermists ignore such suffering; it's that anyone who is choosing a priority (so any EA, regardless of their stance on longtermism) in our current broken system will end up ignoring (or at least not working on alleviating) many problems.

For example the problem of adults with cancer in the US is undoubtedly tragic but is well understood and reasonab... (read more)

AI Safety Academic Conference

Technical AI Safety

The idea is to fund and provide logistical/admin support for a reasonably large AI safety conference along the lines of NeurIPS etc. Academic conferences provide several benefits: 1) potentially increasing the prestige of an area and boosting the career capital of people who get papers accepted; 2) networking and sharing ideas; 3) providing feedback on submitted papers and highlighting important/useful papers. This conference would be unusual in that the work submitted shares approximately t... (read more)

9
MaxRa
2y
As Gavin mentioned somewhere here, one significant downside would be to silo AI Safety work from the broader AI community.