technicalities

https://www.gleech.org/

Background in philosophy, international development, and statistics. Doing a technical AI PhD at Bristol.

Financial conflict of interest: technically the British government, via the funding council.


Comments

[Creative Writing Contest] [Fiction] [Referral] A Common Sense Guide to Doing the Most Good, by Alexander Wales

I'm actually pretty happy for this warning to spread; it's not a big problem now(?), but it will be if growth continues. Vigilance is the way to make the critique untrue.

OTOH you don't necessarily want to foreground it as the first theme of EA, or even the main thing to worry about.

My first PhD year

Looks like a great year, Jaime!

Strongly agree that freedom to take side projects is a huge upside to PhDs. What other job lets you drop everything to work full-time for a month, on something with no connection to your job description?

Frank Feedback Given To Very Junior Researchers

I think this is your best post this year, because it's rarely said despite these failure modes seeming omnipresent. (I fall into 'em all the time!)

Some longtermist fiction

Yep, skip Phlebas at first - but do come back to it later, because despite being silly and railroading, it's the clearest depiction of the series' main theme: people's need for Taylorian strong evaluation, the dissatisfaction of unlimited pleasure and freedom, and liberalism as an unstoppable, unanswerable assimilator.

I wrote a longtermist critique of the Culture here.

Surface Detail is about desperately trying to prevent an s-risk. Excession is the best on most axes.

Career advice for Australian science undergrad interested in welfare biology

Not a bio guy, but in general: talk to more people! List the people you think are doing good work and ask 'em directly.

Also generically: try to do some real work in as many of those fields as you can. I don't know how common undergrad research assistants are in your fields, or in Australian unis, but it should be doable (if you're handling your course load ok).

PS: Love the username.

Writing about my job: Data Scientist

Big old US >> UK pay gap, imo. Partial explanation for that: 32 days of holiday in the UK vs 10 days in the US.

(My base pay was 85% of total; 100% seems pretty normal in UK tech.)

Other big factor: this was in a sorta sleepy industry that tacitly trades money for working only the contracted 37.5 h week, unlike, say, startups. Per hour it was decent, particularly given 10% study time.

If we say hustling places have a 50 h week (which is what one fancy startup actually told me they expected), then 41 looks fine.
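
A rough back-of-envelope with these numbers (equal annual pay across the two jobs is an assumption of mine, not something I checked): at the same salary, the contracted 37.5 h week pays

$$\frac{50}{37.5} \approx 1.33$$

times as much per hour as a 50 h hustling week, and

$$\frac{50}{37.5 \times 0.9} \approx 1.48$$

times as much per effective working hour, counting the 10% study time as time that's yours.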

The case against “EA cause areas”

Agree with the spirit - there is too much herding, and I would love for Schubert's distinctions to be core concepts. However, I think the problem you describe appears in the gap between the core orgs and the community, and might be pretty hard to fix as a result.

What material implies that EA is only about ~4 things?

  • the Funds
  • semi-official intro talks and Fellowship syllabi
  • the landing page has 3 main causes and mentions 6 more
  • the revealed preferences of what people say they're working on, and the distribution of object-level post tags

What emphasises cause divergence and personal fit?

  • 80k have their top 7 of course, but their full list of recommended causes has 23
  • Personal fit is the second thing they raise, after importance
  • New causes, independent thinking, outreach, cause X, and 'question > ideology' are major themes at every EAG and (by eye) in about a fifth of the top-voted Forum posts.

So maybe there's limited room for improvements to communication, since the message is already pretty clear.

Intro material has to mention some examples, and only a couple in any depth. How should we pick examples? Impact has to come first. But it could be better not to always use the same 4 examples: instead, pick the top 3 by your own lights and then draw randomly from the top 20.

Also, I've always thought of cause neutrality as conditional - "if you're able to pivot, and if you want to do the most good, what should you do?" - and this is emphasised in plenty of places (i.e. personal fit, and meeting people where they are, by default). But if people are taking it as an unconditional imperative, then that needs attention.

How to explain AI risk/EA concepts to family and friends?

Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term concerns. He's done tons of talks and podcasts - not sure which is best, but if 3 hours of heavy content isn't a problem, the 80k one is good.

There's already a completely mainstream x-risk: nuclear weapons (and, popularly, climate change). It could be good to compare AI to these accepted handles. The second-species argument can be made pretty intuitive too.

Bonus: here's what I told my mum.

AIs are getting better quite fast, and we will probably eventually get a really powerful one, much faster and better at solving problems than people. It seems really important to make sure that they share our values; otherwise, they might do crazy things that we won't be able to fix. We don't know exactly how hard it is to give them our actual values, and to make sure they got them right, but it seems very hard. So it's important to start now, even though we don't know when it will happen or how dangerous it will be.

Undergraduate Making Life-Altering Choices While Sober, Please Advise

[I don't know you, so please feel free to completely ignore any of the following.]

I personally know three EAs who simply aren't constituted to put up with the fake work and weak authoritarianism of college. I expect each of them to do great things. Two other brilliant ones are Chris Olah and Kelsey Piper. (I highly recommend Piper's writing on the topic, both for its deep practical insights and as a way of shifting the balance of responsibility partially off yourself and onto the ruinous rigid bureaucracy you're in. She had many of the same problems as you, and things changed enormously once she found a working environment that actually suited her. Actually, just read the whole blog; she is one of the greats.)

80k have some notes on effective alternatives to a degree. kbog also wrote a little guide.

In the UK a good number of professions have a non-college "apprenticeship" track, including software development and government! I don't know about the US.

This is not to say that you should not do college, just that there are first-class precedents and alternatives.

More immediately: I highly recommend coworking as a solution to ugh fields. Here's the best kind, Brauner-style, or here are nice group rooms on Focusmate or Complice.

You're a good writer and extremely self-aware. This is a really good start.

If you'd like to speak to some other EAs in this situation (including one in the US), DM me.

What is an example of recent, tangible progress in AI safety research?

Not recent-recent, but I also really like Carey's 2017 work on CIRL (cooperative inverse reinforcement learning). It picks a small, well-defined problem and hammers it flush into the ground: "When exactly does this toy system go bad?"
