All of Gavin's Comments + Replies

Critiques of EA that I want to read

I think that changed after Aleks commented

6Lizka3d
The issue was that we were letting people upload files as submissions. If you uploaded a file, your email or name would be shared (and we had a note explaining this in the description of the question that offered the upload option). Nearly no one was using the upload option, and if you didn't upload anything, your information wasn't shared.

Unfortunately, Google's super confusing UI says: "The name and photo associated with your Google account will be recorded when you upload files and submit this form. Your email is not part of your response," which makes it seem like the form is never anonymous. (See below.) I removed the upload option today to reduce confusion, and hope that people who want to anonymously share something that isn't publicly accessible on the internet via link will just create a pseudonym or fake Google account.

What the form looked like: I don't remember what the wording of the description actually was, but it was along these lines. Here's what the settings for the test form look like:
On Deference and Yudkowsky's AI Risk Estimates

This isn't much independent evidence I think: seems unlikely that you could become director of MIRI unless you agreed. (I know that there's a lot of internal disagreement at other levels.)

9Verden4d
My point has little to do with him being the director of MIRI per se. I suppose I could be wrong about this, but my impression is that Nate Soares is among the top 10 most talented/insightful people with elaborate inside view and years of research experience in AI alignment. He also seems to agree with Yudkowsky on a whole lot of issues and predicts about the same p(doom) for about the same reasons. And I feel that many people don't give enough thought to the fact that while e.g. Paul Christiano has interacted a lot with Yudkowsky and disagreed with him on many key issues (while agreeing on many others [https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer] ), there's also Nate Soares, who broadly agrees with Yudkowsky's models that predict very high p(doom). Another, more minor point: if someone is bringing up Yudkowsky's track record in the context of his extreme views on AI risk, it seems helpful to talk about Soares' track record as well.
Transcript of a talk on The non-identity problem by Derek Parfit at EAGxOxford 2016

In the Q&A after this talk, Sandberg asked "What is the moral relevance of Apple laptops booting half a second slower?" (since on Parfit's simple view of aggregation, with millions of devices, this is equivalent to a massive loss of life). I always thought Parfit was being rude by ignoring the question, but your comment makes it seem more like joshing.

2Ben Pace5d
Heeheehee. Sounds like Anders poking fun at his friend live.
The Case for Reading Books

Synthesis: Reading is overrated in normal intellectual circles and slightly underrated among our gingered-up maximisers.

On fiction, I seem to remember Rob Wiblin saying "Fiction is a non-rational means of persuasion: beware." But I can't find the tweet.

On nonfiction, I remember my shock the first time I saw a false claim in a pop science book. They just don't check very hard. They probably check less than newspapers, which are famously untrustworthy. Arguably I never recovered.

In my day the philistine/maximiser move was to read textbooks, and while this mostly... (read more)

1martin_glusker5d
Yeah I remember Rob's thought--I think it might be a fb post?
The Case for Reading Books

(So as not to be mystical, here's something which sketches what the Tractarian move is. But trust me, it isn't the same.)

The Case for Reading Books

There are nonfiction books which lose a lot in summarisation. This is almost the definition of a great book. Take Wittgenstein's Tractatus: its central rhetorical move, which is also one of its main points about metaphysics, will simply not happen unless you make an effort to read it.

The question assumes that books are just baggy vehicles for schematic bullet-point arguments. Julia Galef has a wonderful list of the many other ways books can update you.

5Gavin5d
(So as not to be mystical, here's [https://absoluteirony.wordpress.com/2014/09/17/nagarjuna-nietzsche-rorty-and-their-strange-looping-trick/] something which sketches what the Tractarian move is. But trust me, it isn't the same.)
Open Thread

Check this out: https://www.eaforchristians.org/

What’s the theory of change of “Come to the bay over the summer!”?

We should praise the class of worker in general but leave the individuals alone.

Critiques of EA that I want to read

I'm confused about whether I should note my disagreements here or just wait for someone to write the proper versions.

So I'll just note one that I really want to see: I was unpersuaded by this

Alternative models for distributing funding are probably better and are definitely under-explored in EA. 

until I saw

Grantmakers brain-drain organizations — is this good?

Alternate funding models as a solution to the grantmaking bottleneck could be great!

It's not anonymous, it records the name associated with your google account. (Of course you can just create a google account with a fake name, but then you can also just make an EA forum account with a fake name and post here.)

How to become more agentic, by GPT-EA-Forum-v1

Nice work, but I wonder at the consequences. Sure, it's inevitable that forums like ours will soon be bot-ridden and will need to bring in clunky authentication features, but aren't you speeding that up?

Are you trying to spook people, or just playing, or trying to give us a tool to make writing easier? The last two of those seem not worth it to me in expectation.

7JoyOptimizer6d
One goal is to make it easier to understand Effective Altruism through an interactive model. I'm sick with COVID right now. I might respond in greater depth when I'm not sick.
On Deference and Yudkowsky's AI Risk Estimates

Charles, consider going for that walk now if you're able to. (Maybe I'm missing it, but the rhetorical moves in this thread seem equally bad, and not very bad at that.)

2Charles He7d
You are right, I don't think my comments are helping.
Don't Over-Optimize Things

Reminds me of the result in queueing theory, where (in the simplest queue model) going above ~80% utilisation of your capacity leads to massive increases in waiting time.
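For concreteness, here is a minimal sketch of the result being alluded to, assuming the simplest M/M/1 model with unit service rate (the function name and printed numbers are illustrative, not from any particular source):

```python
# Toy illustration of the M/M/1 queueing result: mean time in system
# W = 1 / (mu - lambda) blows up as utilisation rho = lambda / mu approaches 1.
# Assumes Poisson arrivals and exponential service with rate mu = 1.

def mean_time_in_system(utilisation: float, service_rate: float = 1.0) -> float:
    """Mean sojourn time for an M/M/1 queue at the given utilisation (must be < 1)."""
    arrival_rate = utilisation * service_rate
    return 1.0 / (service_rate - arrival_rate)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilisation {rho:.2f} -> {mean_time_in_system(rho):.0f}x the bare service time")
# 0.50 -> 2x, 0.80 -> 5x, 0.90 -> 10x, 0.95 -> 20x, 0.99 -> 100x
```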

Seven ways to become unstoppably agentic

From memory:

As an occasional antidote to forced-march life: consider yourself as a homeostatic organism with a particular trajectory. Like a plant in a pot.

What does a plant need? Water, light, space, soil, nitrogen, pest defence, pollinators. What are the potted human equivalents? What would an environment which gave you this without striving look like? What do you need to become yourself?

(You can reshape a plant, like bonsai, but really not too much or you'll kill it or stunt it.)

Seven ways to become unstoppably agentic

[Was this title written by an inner optimiser?]

More seriously, this is a very powerful set of ideas and attitudes and I wish I had known them about 15 years earlier. (For contrast, during my school work experience I painted lines on country roads.)

You know my views about high schoolers being systematically underestimated and fully capable of greatness, so well done for bucking the trend. That said, there is such a thing as too much agency (e.g. starting a company without checking the competition or without knowing what the market fit is; e.g. starting a bi... (read more)

2Evie Cottrell8d
[On the title -- you gotta have fun with these things haha] Thanks Gavin!

Yes, the laws of equal and opposite advice defo apply here. I also wonder whether this sort of thing becomes zero sum within a small enough environment (e.g. if everyone starts lowering their bar for asking for help, people will raise their bar for saying yes, because they will be inundated with requests). Could lead to competitor dynamics (discussed in the comments of this post [https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the]), which seems unfortunate.

I really like the point of spending years 'becoming yourself'. Like, I probs just want my younger siblings to chill out and spend a lot of time with their friends and doing stuff that feels hedonically good to them.

I like the point about groundedness. I felt ungrounded and uncertain when I was first immersed in EA, and I think this could (?) have been less if I was older. I'm kinda unsure, and think it's maybe inevitable to feel unsettled when you are introduced to and immersed in a very new culture/worldview in a short space of time.

Where is Elizabeth's post on being a potted plant? Could you send it?
What is the overhead of grantmaking?

This could be a good submission for the criticism contest. Clean, tightly reasoned, not going in with the bottom line written.

Who's hiring? (May-September 2022)

Also a question about seniority:

I found it hard to interpret from the post and context what level of seniority the role required.

Pardon: asks that it not be your first rodeo, that you can handle founding (especially hiring and culture-setting). But we don't need VP or C-level.

Who's hiring? (May-September 2022)

More questions from offline:

How are reservists incentivized to prioritize ALERT over their other work when activated?

It's a good question. I doubt that binding contracts are the right way to do this. We will probably do peacetime stipends and emergency pay. But I suppose it's a matter of self-selection: we will be responding to the current most important thing in the world and, in this crowd, that should be enough.

After the director, what is the org most limited by?

Org strategy > org ops (existence, banking, authorisation, etc) > hiring > fundraising > wargame planning. Reservists and forecasters are fairly ready to go.

2Gavin12d
Also a question about seniority [https://twitter.com/ChrisPainterYup/status/1536504758755786757]: Pardon: asks that it not be your first rodeo, that you can handle founding (especially hiring and culture-setting). But we don't need VP or C-level.
Who's hiring? (May-September 2022)

Great questions. Most of the below could be reshaped by the director:

  • No physical location - director has discretion. (One cool idea we had was to enlist existing teams at EA orgs, so that they're constantly building readiness and already colocated or functionally remote.)
  • Role reports to the board, currently Jan Kulveit, Vishal Maini, Ales Flidr and me but likely to need more.
  • Some unvetted examples we came up with: COVID in late December 2019; Russia-Ukraine (minor investigation from December 2021, expanding to active nuclear risk forecasting by end January
... (read more)
3Alex D12d
One cool idea would be embedding a physical EOC [https://en.wikipedia.org/wiki/Emergency_operations_center] into refuges, and calling reservists in once some crisis threshold was crossed.
Nick Bostrom - Sommar i P1 Radio Show

https://open.spotify.com/playlist/3OY4Q9y8AOjUOyIsC9NKR4?si=c10b8777ea6a4e94

4Stefan_Schubert13d
Thanks, very helpful! (For other readers: Gavin compiled all those songs on Spotify.)
Who's hiring? (May-September 2022)
Answer by Gavin · Jun 13, 2022 · 45

ALERT (the active longtermist emergency response team) is looking for a Director to lead the project.

The role is fully funded and we've organised fiscal sponsorship from a UK registered charity. We have a longlist of reservists and interest from people at Rethink Priorities, Our World in Data, Bluedot, ALLFED, and CEA.

Some good characteristics for the job:

  • Resilience. It's possible that your decisions will have major consequences, in a relatively short time.
  • Ability to function well in crisis situations. Our impression is that this is a relatively stable tra
... (read more)
4Gavin12d
More questions from offline: It's a good question. I doubt that binding contracts are the right way to do this. We will probably do peacetime stipends and emergency pay. But I suppose it's a matter of self-selection: we will be responding to the current most important thing in the world and, in this crowd, that should be enough. Org strategy > org ops (existence, banking, authorisation, etc) > hiring > fundraising > wargame planning. Reservists and forecasters are fairly ready to go.

Is there a physical location or office? Whom does the role report to? What are example emergencies where reservists would be activated? What would they do when activated? Are there comparable orgs in other domains I should index to when thinking about ALERT? How many hours / week, roughly? What does it mean that the role would not be on duty most days? Is there an existing staff or would one need to be hired? When the National Guard is activated they are called away to a physical space to work with others, is it like that?

Nick Bostrom - Sommar i P1 Radio Show

https://docdrop.org/video/58RMpLkmITg/

Thanks! I still think the translation should be posted to the EA Forum, to make this content more discoverable.

Responsible/fair AI vs. beneficial/safe AI?

That's the average online vibe maybe, but plenty of AGI risk people are going for detente.

Bruce Kent (1929–2022)

Naively, UK disarmament in 1980 would have done one of two things: 1) given the Soviets under Brezhnev much more power, lengthening the Cold War and producing unknown effects on reform; or 2) forced even more US nuclear deployment in European bases, forcing a response from the Soviets, and so destabilising the world. (As I say, there's a chance that it could instead have led to a better equilibrium, but I can't see why anyone would think this was the most likely outcome.)

If the locations of your post-nuke decentralised government are known, then they can be targeted by nukes.

AI Twitter accounts to follow?

https://twitter.com/i/lists/1185207859728076800

I don't think MIRI has tried this much; we were unusually excited about Edward Kmett.

5Greg_Colbourn18d
He's just one person, so I wouldn't say that's significant empirical evidence. Unless a bunch of other people they approached turned them down (and if they did, it would be interesting to know why).
Idea: Pay experts to provide detailed critiques / analyses of EA ideas

I want to run this at some point. Pro statistical consultants could handle many posts, and philosophy grad students many of the rest. I was thinking in terms of one review per important post here.

Riffing further: 

  • Academic peer review generally doesn't run code, so this is one way we could surpass them. 
  • Also publishing the reviews, 
  • and doing them within a month, rather than within a year.
2Chris Leong19d
"And doing them within a month, rather than within a year." - Might be hard to determine what posts are important after just a month?
technicalities's Shortform

TIL about the Utilitarian Fandom.

(Derives from old Felicifia, and so I guess Pablo wrote a lot of it.)

Announcing a contest: EA Criticism and Red Teaming

Don't see why not, as long as it's not salami sliced.

1Ryan Beck19d
Makes sense, thanks!
What is the state of the art in EA cost-effectiveness modelling?

See also

https://forum.effectivealtruism.org/posts/kyJtzRHd6hLzfxshd/announcing-the-legal-priorities-project-writing-competition

What's the causal effect of a PhD?

Yeah, I agree for Berkeley. I think I was silently assuming a domain of world top 200 or something.

One better bar could be "improves taste more than 4+ years of independent work fit around some other full time job". 

What is the state of the art in EA cost-effectiveness modelling?

Couldn't find any public OP analyses on a cursory look

4Stefan_Schubert22d
I guess that if one wants to red team effective altruist cost-effectiveness analyses that inform, e.g. giving decisions, non-public analyses may be relevant.
What is the state of the art in EA cost-effectiveness modelling?

I'd say Michael Dickens and Sam Nolan are probably peak performance. 

This is a classic, devastating methodological piece too.

2Stefan_Schubert22d
I would guess that other orgs besides GiveWell also have cost-effectiveness models/analyses [https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism?commentId=6yFEBSgDiAfGHHKTD] .
1Froolow22d
Thank you - these are really helpful for my understanding. On the Sam Nolan piece especially: quantifying uncertainty was one of the biggest critiques I had of the GiveWell model, so I'm glad this has already been considered!
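For concreteness, a minimal sketch of what "quantifying uncertainty" in a cost-effectiveness model can look like: propagate distributions over the inputs instead of point estimates and report an interval. All numbers and distributions below are made up for illustration, not GiveWell's or Nolan's:

```python
# Toy Monte Carlo propagation of parameter uncertainty through a
# cost-effectiveness estimate. Illustrative numbers only; requires numpy.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

cost_per_net = rng.normal(5.0, 0.5, n)                     # USD per net, uncertain
nets_per_life_saved = rng.lognormal(np.log(900), 0.4, n)   # effectiveness, uncertain

cost_per_life_saved = cost_per_net * nets_per_life_saved
lo, med, hi = np.percentile(cost_per_life_saved, [5, 50, 95])
print(f"cost per life saved: median ${med:,.0f}, 90% interval ${lo:,.0f} to ${hi:,.0f}")
```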
Responsible/fair AI vs. beneficial/safe AI?

Good luck!

(BTW there's been a big spurt of alignment jobs lately, including serious spots in academia. e.g. here, here, here. probably not quite up to demand, but it's better than you'd think.)

Who's hiring? (May-September 2022)
Answer by Gavin · Jun 04, 2022 · 29

The Alignment of Complex Systems Group is a new lab hosted by Charles University Prague. They work on formal theories to help with neglected AGI scenarios.

They're looking for postdocs, predocs, fellows, technical writers, research engineers, and students at MSc or PhD level. Also, critically: a project manager with research experience. They are fully funded by the SFF among others. 

Apply here.

The principals, Jan and Tomáš, are two of my favourite researchers. This will be a wonderful place to learn an extremely important field, or to push it on.

Announcing a contest: EA Criticism and Red Teaming

I sympathise with this and generally think that EA should take conflicts of interest more seriously.

That said, I think this is subtly the wrong question: what we really want is, "how rational are the judges?" How often did they change their mind in response to arguments of various kinds from various places of various tones?

Can we say anything to convince you of that? Maybe.

Anyway: Most days I feel like more of a "holy shit x-risk" guy than a strong longtermist. I briefly worked in international development, was a socialist, a feminist, a vegan, an e2g, etc... (read more)

Announcing a contest: EA Criticism and Red Teaming

Random personal examples:

  • This won the community's award for post of the decade. Its disagreement with EA feels half-fundamental; a sweeping change to implementation details and some methods. 
  • This was much-needed and pretty damning. About twice as long as it needed to be though.
  • This old debate looks good in hindsight
  • The initial patient longtermist posts shook me up a lot.
  • Robbie's anons were really good
  • This is on the small end of important, but still rich and additive.
  • This added momentum to the great intangibles vibe shift of 2016-8 
  • This was influe
... (read more)
Just Say No to Utilitarianism

Your chosen method - refuting a rule with a counterexample - throws out all moral rules, since every moral theory has counterexamples. This includes common sense ethics - recall the protracted cross-cultural justification of slavery, one of thousands of instances. (Here construing "go with your gut dude" as a rule.)

If we were nihilists, we could sigh in relief and stop here. But we're not - so what next? Clearly something not so rigid as rules.

You're also underselling the mathematical results: as a nonconsequentialist, you will make incoherent action... (read more)

1Arjun Panickssery23d
I'm not sure what exactly you mean by a moral rule, e.g. "Courage is better than cowardice, all else equal" doesn't have any counterexamples. But for certain definitions of moral rule you should reject all moral rules as incorrect. Looking at the post, I'll deny "My choices shouldn't be focused on ... how to pay down imagined debts I have to particular people, to society." You have real debts to particular people. I don't see how this makes ethics inappropriately "about my own self-actualization or self-image."
7Daniel Kirmani23d
This sounds a lot like "every hypothesis can be eventually falsified with evidence, therefore, trying to falsify hypotheses rules out every hypothesis. So we shouldn't try to falsify hypotheses." But we are Bayesians, are we not? If we are, we should update away from ethical principles when novel counterexamples are brought to our attention, with the magnitude of the update proportional to the unpleasantness of the counterexample.
Announcing a contest: EA Criticism and Red Teaming

Best I can think of is looking for the announcement posts inside each of these tags

https://forum.effectivealtruism.org/topics/all

Announcing a contest: EA Criticism and Red Teaming

Back in my day my enemies did instrumental harm like a rational person.

Responsible/fair AI vs. beneficial/safe AI?
Answer by Gavin · Jun 03, 2022 · 11

I would call the cluster "AI ethics". But there's no hard cutoff, no sufficient philosophical difference: it's mostly just social clustering. Here's my short diplomatic piece about the gap

We should do our best to resist forming explicit competing factions; as Prunkl and Whittlestone note, it's all one space. Here's a principled argument for doing this.

 

(Though it is hard to avoid being factional when one group are being extremely factional at you. And we don't need to think that each point in the space is equally worrying.)

I like Jon Kleinberg,... (read more)

5howdoyousay?22d
To add to the other papers coming from the "AI safety / AGI" cluster calling for a synthesis in these views... https://www.repository.cam.ac.uk/handle/1810/293033 [https://www.repository.cam.ac.uk/handle/1810/293033] https://arxiv.org/abs/2101.06110 [https://arxiv.org/abs/2101.06110]
Announcing a contest: EA Criticism and Red Teaming

Sounds right

The problem is, we're not an agent and so no one makes The decision to shift and so no one is noticeably responsible for acknowledging credit and blame. But it's still fair to want it.

4Gavin23d
One traditional solution [https://en.wikipedia.org/wiki/Sin-eater#In_Wales_and_the_Welsh_Marches]
Announcing a contest: EA Criticism and Red Teaming

Maybe the lesson is: "even if you don't win, you might shape the movement"
