
akash

88 karma · Joined Sep 2022 · Pursuing a doctoral degree (e.g. PhD)

Participation: 3

Comments: 17

akash
18d

Hello from another group organizer in the Southwest! We are in Tucson, AZ, just a six-hour drive away. Hopefully, someday in the not-so-far future, organizing a southwestern meetup / retreat / something would be feasible and super cool!

akash
22d

Nice, I didn't know! Their research goals seem quite broad, which is good. Within the context of AI existential risk, this project looks interesting.

Answer by akash · May 06, 2023

I think the better question might be, "which professors or academic research groups in AI Safety are the best to work with?"

Two meta-points that I think are important:

  • For PhDs, the term "best university" doesn't mean much (there are some cases in which infrastructure makes a difference, but R1 schools, private or public, generally seem to have good research infrastructure). Your output as a graduate student heavily depends on which research group/PI you work with.
  • Specifically for AI safety, the sample size of academics is really low. So, I don't think we can rank them from best-to-eh. Doing so becomes more challenging because their research focus might differ, so a one-to-one comparison would be unsound.

With that out of the way, three research groups in academia come to mind:

Others:

  • The Centre for Human-Inspired AI (CHIA) is a new research centre at Cambridge; I don't know whether their research will focus on subdomains of Safety; someone could look into this more.
  • I remember meeting two lovely folks from Oregon State University working on Safety at EAGx Berkeley. I cannot find their research group, and I forget what exactly they were working on; perhaps someone who knows more about this could comment.
  • An interesting route for a Safety-focused Ph.D. could be to find a really good professor at a university who agrees to take on an outside Safety researcher as a co-advisor. I am guessing that more and more academics will want to start working on the Safety problem, so such collaborations would be quite welcome, especially if the academics are themselves new to the domain.
    • One thing to watch out for: which research groups get funded by this NSF proposal. There will soon be new research groups that Ph.D. students interested in the Safety problem could gravitate towards!
akash
1mo

The hallmark experiences of undiagnosed ADHD seem to be saying “I just need to try harder” over and over for years, or kicking yourself for intending to start work and then not getting much done...

Extremely relatable.

Thank you very much for writing this. I am in the process of getting a diagnosis, and this helped me overcome some of the totally made-up mental barriers regarding ADHD medication.

akash
2mo

I downvoted and want to explain my reasoning briefly: the conclusions presented are too strong, and the justifications don't necessarily support them. 

We simply don't have enough experience or data points to say what the "central problem" in a utilitarian community will be. The one study cited seems suggestive at best. People on the spectrum are, well, on a spectrum, and so is their behavior; how they react will not be as monolithic as suggested.

All that being said, I softly agree with the conclusion (because I think this would be true for any community).

All of this suggests that, as you recommend, in communities with lots of consequentialists, there needs to be very large emphasis on virtues and common sense norms.

akash
2mo

It's maybe worth clarifying that I'm most concerned about people who have a combination of high confidence in utilitarianism and a lack of qualms about putting it into practice.

Thank you, that makes more sense + I largely agree.

However, I also wonder if all this could be better gauged by watching out for key psychological traits/features instead of probing someone's ethical view. For instance, a person low in openness showing high-risk behavior who happens to be a deontologist could cause as much trouble as a naive utilitarian optimizer. In either case, it would be the high-risk behavior that would potentially cause problems rather than how they ethically make decisions. 

akash
2mo

...for example, I think we should be less welcoming to proudly self-identified utilitarians, since they’re more likely to have these traits.

Ouch. Could you elaborate on this and back it up? The statement makes it sound like an obvious fact, and I don't see why this would be true.

akash
2mo

ML systems act when put in foreign situations wrt the training data. 


Could you elaborate on this more? My guess is that they could be working on the ML ethics side of things, which is great, but different from the Safety problem.

akash
2mo

Disagree-voting for the following reasons:

  1. 700 people — that is a substantial sample size (see the rough margin-of-error sketch after this list).
  2. I think sampling bias will likely be small/inconsequential, given the sample size. 
    1. Ideally, yes, not having any logos would be great, but if I got an email asking me, "fill out this survey, and trust me, I am from a legitimate organization," I probably wouldn't fill out the survey.
    2. "very likely it is just the people familiar with lesswrong etc who has heard of these organizations" 
      1. Hard to say if this is true
      2. It is wrong to assume that familiarity with LW would lead people to answer a certain way; conversely, I can imagine people who dislike LW and similar sites being just as keen to complete the survey.
      3. The survey questions don't seem to prime respondents one way or the other.
      4. These 700 people did publish in ICML/NeurIPS; they have some degree of legitimacy.
  3. Nitpicky, but "a proportion of ML researchers who have published in top conferences such as ICML and NeurIPS think AI research could be bad" is probably a more accurate statement. I agree that this doesn't make the statements they are commenting on true; however, I think their opinion carries a lot of value.
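
To put the sample size in perspective, here is a quick back-of-the-envelope check. This is my own sketch, not something from the survey itself; it assumes simple random sampling, which a voluntary survey of course does not guarantee, and it only uses the n = 700 figure from point 1.

```python
import math

# Worst-case 95% margin of error for a proportion estimated from n respondents.
# Illustrative sketch only: assumes simple random sampling from the population.
n = 700      # survey respondents (ICML/NeurIPS authors)
p = 0.5      # p = 0.5 maximizes the standard error of a proportion
z = 1.96     # z-score for ~95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error with n={n}: about ±{margin:.1%}")  # ≈ ±3.7 percentage points
```

Even in the worst case, the sampling error on any single reported proportion is only a few percentage points, which is why 700 respondents seems like a substantial sample to me.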

I think Metaculus does a decent job at forecasting, and their forecasts are updated based on recent advances, but yes, predicting the future is hard.

akash
3mo

There are two statements in this summary that I find somewhat confusing:

  1. "He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the 'hinge of history' claim holds true."
  2. "Overall, MacAskill thinks that these arguments provide evidence that our time may be the most influential."

I haven't dived into the working paper yet, but these two statements seem contradictory. Is it possible to not be at the hinge of history but live during the most influential time? I thought being at the hinge of history ≈ being at the most influential time. What am I missing?
