ACS

Asa Cooper Stickland

180 karma · Joined Mar 2022

Comments (11)

Can you expand a bit on what you mean by these ideas applying better to near-termism?

E.g. out of 'hey, it seems like machine learning systems are getting scarily powerful, maybe we should do something to make sure they're aligned with humans' vs. 'you might think it's most cost-effective to help extremely poor people or animals, but actually if you account for the far future it looks like existential risks are more important, and AI is one of the most credible existential risks, so maybe you should work on that', the first one seems like a more scalable/legible message or something. Obviously I've strawmanned the second one a bit to make a point, but I'm curious what your perspective is!

Is the romantic relationship that big a deal? They were known to be friends and colleagues, both were known to be involved with EA and the FTX Future Fund, and I thought it was basically common knowledge that Alameda was deeply connected with FTX, as you show with those links. It just seems kind of obvious, with FTX being composed of former Alameda employees and the two sharing an office space or something like that.

My $0.02: (almost) the entire financial and crypto world, including many prominent VC funds that invested in FTX directly, seems to have been blindsided by the FTX blowup. So I'm less concerned about the ability to foresee that. However, the 2018 dispute at Alameda seems like a good reason to be skeptical, and I'm very curious what was known by prominent EA figures, what steps they took to make it right, and whether SBF was confronted about it before prominent people joined the Future Fund, etc.

It would be awesome for the names of senior people who knew to be made public, plus the exact nature of what they were told and their response or lack thereof.

Although, to be clear, it's still nice to have a bunch of different critical perspectives! This post exposed me to some people I didn't know of.

"Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers." I'm kind of skeptical of this.

Outside of Giada Pistilli and Talia Ringer, I don't think these tweets would appear on the typical ML researcher's timeline; they seem closer to niche rationality/EA shitposting.

Whether the typical ML person would think alignment/AI x-risk is really dumb is a different question, and I don't really know the answer to that one!

Would it be possible to get a job tutoring high school students (or younger students)?

Maybe you could reach out to local EA orgs and see if they have any odd jobs they could pay you for?

Also if it's at all financially possible for you, I would recommend self-study in whatever you're interested in (e.g. programming/math/social science blah blah blah) with the hope of getting a job offering more career capital later on, rather than stressing out too much about getting a great job right now.

Well, AI safety is strongly recommended by 80k, gets a lot of funding, and is seen as prestigious/important by people (the last one is just my experience). And the funding and attention given to longtermism is increasing. So I think it's fair to criticize these aspects if you disagree with them, although I guess charitable criticism would note that global poverty etc. got a lot more attention in the beginning and is still well funded and well regarded by EA.

I'm pretty skeptical of arguments from optics, unless you're doing marketing for a big organization or whatever. I just think it's really valuable to have a norm of telling people your true beliefs rather than some different version of your beliefs designed to appeal to the person you're speaking to. This way people get a more accurate idea of what a typical EA person thinks if they talk to them, and you're likely better able to defend your own beliefs than the optics-based ones if challenged. (The argument that there's so much funding in longtermism that the best opportunities are already funded is, I think, pretty separate from the optics one, and I don't have any strong opinions there.)

My take: donate wherever you think the EV is highest, and if that turns out to be longtermism, think about a clear and non-jargony way to explain that to non-EA people, i.e. say something like 'I'm concerned about existential risks from things like nuclear war, future pandemics, and risks from emerging technologies like AI, so I donate some money to a fund trying to alleviate those risks' (rather than talking about the 10^100 humans who will be living across many galaxies, etc.). A nice side effect of having to explain your beliefs might be convincing some more people to go check out this 'longtermism' stuff!

EDIT: I made this comment assuming the comment I'm replying to is making a critique of longtermism, but I'm no longer convinced that's the correct reading 😅 Here's the response anyway:

Well, it's not so much that longtermists ignore such suffering; it's that anyone choosing a priority in our current broken system (so any EA, regardless of their stance on longtermism) will end up ignoring (or at least not working on alleviating) many problems.

For example, the problem of adults with cancer in the US is undoubtedly tragic, but it is well understood and reasonably well funded by the government and charitable organizations; I would argue it fails the 'neglectedness' part of the traditional EA neglectedness, tractability, importance framework. Another example: people trapped in North Korea, which I think would fail on tractability, given the lack of progress over the decades. I haven't thought about those two particularly deeply and could be totally wrong, but this is just the traditional EA framework for prioritizing among different problems, even if those problems are heartbreaking to have to set aside.
