technicalities

https://www.gleech.org/

Co-founder at Arb, an AI / forecasting / pandemics / economics consultancy.

Background in philosophy, international development, statistics. Doing a technical AI PhD at Bristol.

Financial conflicts of interest: technically the British government (through the funding council), and technically the Mercatus Center (through Emergent Ventures). I contract for OpenPhil.

Comments

Off Road: support for EAs struggling at uni

Some should, some shouldn't, and I find it hard to tell which is which.

Off Road: support for EAs struggling at uni

We sent it to student groups and to a few extremely well-connected people we know through ESPR. 

We got the universities incidentally, from the .edu and .ac.uk email addresses. There was no involvement question; self-definition is fine for now.

Off Road: support for EAs struggling at uni

Thanks Chana! I appreciate you thinking hard about this, and hope it'll make us more careful and good.

Price. My EA coach is $60 an hour (with student discount), which is my only datum. Happy to amend given more data.

Retention. Yeah, you capture what I was thinking about with (3): not being a naive optimiser, not squeezing as many people into EA as you can despite their misery and lack of fit. The self-care link is pointing at the same vague spirit: don't routinely crush feelings (in that case, your own). Both my and Damon's instincts run pretty heavily against indoctrination, so we should be able to spot it in others. I don't think we'll set any policy about continuing to help people after they leave EA; that's clearly a matter of conscience and context.

I take (1) and (2) pretty seriously, but Free Support Booking, the current leading idea, is designed to mitigate them ~completely: the idea is we book "external" (non-EA) support people. I just forgot to say this at any point. The only trouble is the money.

Parts. I'll let Damon respond in full, but my take is: I don't think that sentence is meant as a strong claim nor mission statement. Parts stuff is a mental model: often useful, always extremely unclear metaphysically. Taken metaphorically ("as if I had several subagents, several utility functions, internal conflict"), it seems fine. We haven't designed the coaching yet, but it won't involve intense IFS or whatnot. 

I find it hard to think about the baseline risk of all psychological intervention (indeed, of all intervention), which is what I take your concerned friends to be pointing at. Going to a standard psychodynamic therapist seems similarly risky to me (i.e. not very).

Shoulds. Happy to flag it. (I personally get a lot out of shoulds, so we're not the anti-should movement.)

Off Road: support for EAs struggling at uni

Point someone raised offline: The above talks as if executive dysfunction or school suffering is an EA problem, or disproportionately represented in EA. Neither is true.

The explanation is that I find myself automatically thinking in terms of multipliers (helping someone who could do great things vs helping someone else), and then the whole project quietly conditions on that. I notice that I really don't want to do this automatically.

Creative Writing Contest: The Winning Entries

There's the "prize" tag. Any user can tag posts (or suggest new tags actually).

2021 AI Alignment Literature Review and Charity Comparison

Not at Ought, but I can try: 

In engineering, there are many horrendous conceptual issues that just don't come up in practice. (I have in mind stuff like finite element analysis, a method which works really well despite its assumptions being constantly violated.)

Similarly, there are things which are conceptually fine but practically intractable once you try to do them.

The idea with Elicit seems to be to try a difficult but tractable alignment problem, and so work out which problems we're overblowing and which we're overlooking.

The academic contribution to AI safety seems large

This post is really approximate and lightly sketched, but at least it says so. Overall I think the numbers are wrong but the argument is sound.

Synthesising responses:

  • Industry is going to be a bigger player in safety, just as it's a bigger player in capabilities.

  • My model could be extremely useful if anyone could replace the headcount with any proxy of productivity on the real problem. Any proxy at all.

  • Doing the bottom-up model was one of the most informative parts for me. You can cradle the whole field in the palm of your mind. It is a small and precious thing. At the same time, many organisations you would assume are doing lots of AGI existential risk reduction are not.

  • Half of the juice of this post is in the caveats and the caveat overflow gdoc.

  • I continue to be confused by how little attention people pay to the agendas, in particular the lovely friendly CHAI bibliography.

  • Todd notes that you could say the same about most causes; everything is in fact connected. If this degree of indirect effect were uniform, then the ranking of causes would be unchanged. But there's something very important about the absolute level, and not just for timelines and replaceability. Safety people need gears, and "is a giant wave of thousands of smart people going to come?" is a big gear.

  • A lot can change in 3 years, at MIRI.

What are some resources (articles, videos) that show off what the current state of the art in AI is? (for a layperson who doesn't know much about AI)

The Codex demo is pretty stunning for anyone who can program.

The Library tour could impress anyone (even knowing the cherry-picking).

Gwern’s curation of robo creativity.

I like watching AlphaZero fight Stockfish. I can only hope to have such a champion IRL.

What is most confusing to you about AI stuff?

Here's a great post about this, which I would summarise as "not worried yet, but it's really hard to tell when we should worry".

The academic contribution to AI safety seems large

Thanks Dan. I agree that industry is more significant (despite the part of it which publishes being ~10x smaller than academic AI research). If you have some insight into the size and quality of the non-publishing part, that would be useful.

Do language models default to racism? As I understand the Tay case, it took thousands of adversarial messages to make it racist.

Agree that the elision of trust and trustworthiness is very toxic. I tend to ignore XAI.
