Buck

Chief Technology Officer @ Redwood Research
Karma: 4778 · Berkeley, CA, USA · Joined Sep 2014

Bio

I'm Buck Shlegeris. I am the CTO of Redwood Research, a nonprofit focused on applied alignment research. Read more about us here: https://www.redwoodresearch.org/

I'm also a fund manager on the EA Infrastructure Fund.

Comments (252)

How would a language model become goal-directed?

As long as you imitate someone aligned, it doesn't pose much safety risk.

Also, this kind of imitation doesn't result in the model taking superhumanly clever actions, even if you imitate someone unaligned.

Senior EA 'ops' roles: if you want to undo the bottleneck, hire differently

I don't normally think you should select for speaking fluent LessWrong jargon, and I have advocated for hiring senior ops staff who have read relatively little LessWrong.

Senior EA 'ops' roles: if you want to undo the bottleneck, hire differently

I think we might have fundamental disagreements about 'the value of outside perspectives' vs. 'the need for context to add value'; or put another way, 'the risk of an echo chamber from too-like-minded people' vs. 'the risk of fracture and bad decision-making from not-like-minded-enough people'.

I agree that this is probably the crux.

EA for dumb people?

(I'm flattered by the inclusion in the list but would fwiw describe myself as "hoping to accomplish great things eventually after much more hard work", rather than "accomplished".)

FWIW I went to the Australian National University, which is about as good as universities in Australia get. In Australia there's way less stratification of students into different qualities of universities--university admissions are determined almost entirely by high school grades, and if you graduate in the top 10% of high school graduates (which I barely did) you can attend basically any university you want to. So it's pretty different from eg America, where you have to do pretty well in high school to get into top universities. I believe that Europe is more like Australia in this regard.

EA for dumb people?

This is correct: she graduated, but had a hard time doing so due to health problems. (I hear that Stanford makes it really hard to fail to graduate, because university rankings care about completion rates.)

Note that Kelsey is absurdly smart though, and struggled with school for reasons other than inherently having trouble learning or thinking about things.

Senior EA 'ops' roles: if you want to undo the bottleneck, hire differently

(Writing quickly, sorry if I'm unclear)

Since you asked, here are my agreements and disagreements, mostly presented without argument:

  • As someone who is roughly in the target audience (I am involved in hiring for senior ops roles, though it's someone else's core responsibility), I think I disagree with much of this post (eg I think this isn't as big a problem as you think, and the arguments around hiring from outside EA are weak), but in my experience it's somewhat costly and quite low value to publicly disagree with posts like this, so I didn't write anything.
    • It's costly because people get annoyed at me.
    • It's low value because inasmuch as I think your advice is bad, I don't really need to persuade you you're wrong, I just need to persuade the people who this article is aimed at that you're wrong. It's generally much easier to persuade third parties than people who already have a strong opinion. And I don't think that it's that useful for the counterarguments to be provided publicly.
      • And if someone was running an org and strongly agreed with you, I'd probably shrug and say "to each their own" rather than trying that hard to talk them out of it: if a leader really feels passionate about shaping org culture a particular way, that's a reasonable argument for them making the culture be that way.
  • For some of the things you talk about in this post (e.g. "The priority tasks are often mundane, not challenging", 'The role is mostly positioned as "enabling the existing leadership team" to the extent that it seems like "do all the tasks that we don't like"') I agree that it is bad inasmuch as EA orgs do this as egregiously as you're describing. I've never seen this happen in an EA org as blatantly as you're describing, but find it easy to believe that it happens.
    • However, if we talked through the details I think there's a reasonable chance that I'd end up thinking that you were being unfair in your description.
    • I think one factor here is that some candidates are actually IMO pretty unreasonably opposed to ever doing grunt work. Sometimes jobs involve doing repetitive things for a while when they're important. For example, I spoke to 60 people or so when I was picking applicants for the first MLAB, which was kind of repetitive but also seemed crucial. It's extremely costly to accidentally hire someone who isn't willing to do this kind of work, and it's tricky to correctly communicate both "we'd like you to not mostly do repetitive work" and "we need you to sometimes do repetitive work, as we all do, because the most important tasks are sometimes repetitive".
  • I think our main disagreement is that you're more optimistic about getting people who "aren't signed up to all EA/long-termist ideas" to help out with high level strategy decisions than I am. In my experience, people who don't have a lot of the LTist context often have strong opinions about what orgs should do that don't really make sense given more context.
    • For example, some strategic decisions I currently face are:
      • Should I try to hire more junior vs more senior researchers?
      • Who is the audience of our research?
      • Should I implicitly encourage or discourage working on weekends?
    • I think that people who don't have experience in a highly analogous setting will often not have the context required to assess this, because these decisions are based on idiosyncrasies of our context and our goals. Senior people without relevant experience will have various potentially analogous experience, and I really appreciate the advice that I get from senior people who don't have the context, but I definitely have to assess all of their advice for myself rather than just following their best practices (except on really obvious things).
    • If I was considering hiring a senior person who didn't have analogous experience and also wanted to have a lot of input into org strategy, I'd be pretty scared if they didn't seem really on board with the org leadership sometimes going against their advice, and I would want to communicate this extremely clearly to the candidate, to prevent mismatched expectations.
    • I think that the decisions that LTist orgs make are often predicated on LTist beliefs (obviously), and people who don't agree with LTist beliefs are going to systematically disagree about what to do, and so if the org hires such a person, they need that person to be okay with getting overruled a bunch on high level strategy. I don't really see how you could avoid this.
  • In general, I think that a lot of your concerns might be a result of orgs trying to underpromise and overdeliver: the orgs are afraid that you will come in expecting to have a bunch more strategic input than they feel comfortable promising you, and much less mundane work than you might occasionally have. (But probably some also comes from orgs making bad decisions.)

(Even) More Early-Career EAs Should Try AI Safety Technical Research

I agree with others that these numbers were way too high two years ago and are still way too high.

Power dynamics between people in EA

Unfortunately, reciprocity.io is currently down (as of a few hours ago). I think it will hopefully be back in <24 hours.

EDIT: now back up.

Some unfun lessons I learned as a junior grantmaker

If you come across as insulting, someone might spend the next five years telling everyone they talk to that you're an asshole, which might make it harder for you to do other things you'd hoped to do.

Some unfun lessons I learned as a junior grantmaker

The problem with saying things like this isn't that they're time-consuming to say, but that they open you up to some risk of the applicant getting really mad at you, and carry various other risks like this. These costs can be mitigated by being careful (eg picking phrasings very intentionally, running your proposed feedback by other people), but being careful is time-consuming.
