
Habryka

17800 karma · Joined Sep 2014

Bio

Project lead of LessWrong 2.0, often helping the EA Forum team with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments (1105)

Can't think of any way off the top of my head.

A bunch of projects by Lightcone Infrastructure that likely qualify: 

  • We run LessWrong.com and the AI Alignment Forum (alignmentforum.org)
  • We also built and continue to maintain the codebase that runs LessWrong and the EA Forum (together with the EA Forum team), which is now also being used by a bunch of other forums (like the Progress Forum, the recently launched Animal Advocacy Forum, and the Sam Harris "Waking Up" community)
  • We also run Lighthaven, a large event and office space in Downtown Berkeley, which provides heavily subsidized event space for various EA-aligned programs and events (currently hosting the MATS program)

Yeah, makes sense. It seemed to me you were in a kind of tight spot, having scheduled and framed this specific protest around a thing that you ended up realizing had some important errors in it. 

I think it was important to reframe the whole thing more fully when that happened, but man, running protests is hard and requires a kind of courage and defiance that I think is cognitively hard to combine with reframing things like this. I still think it was a mistake, but I also feel sympathetic to how it happened, at least as it played out in my mind (I don't want to claim I'm confident about what actually happened; I might still be misunderstanding important components of how things came to pass).

Yeah, though I don't think it's super egregious. I do think that even after correcting the "charter" mistake, you continued to frame OpenAI's usage policies as some kind of contractual commitment that OpenAI walked back.

But that seems backwards to me: a ToS is a commitment by users of OpenAI towards OpenAI, not a commitment by OpenAI to its users (in the vast majority of cases). For LessWrong, for example, our ToS includes very few commitments by us, and I definitely don't see myself as having committed to never changing them. If we have a clause in our ToS that asks users not to make too many API requests in quick succession, I have definitely not committed to refusing service to people who nevertheless make that many requests (indeed, in many cases, such as search engines or users asking us for rate-limiting exceptions to build things like greaterwrong.com, I have completely changed how we treat users who make too many requests).

Framing it as having gone back on a commitment seems kind of deceptive to me.

I also think there is something broader that is off about organizing "Pause AI" protests that then advocate for things that seem mostly unrelated to pausing AI (and instead lean into other controversial topics). Like, I now have a sense that if I attend future Pause AI events, my attendance at those events will be seen and used as social proof that OpenAI should give in to pressure on some other random controversy (like making contracts with the military), and that feels like it has some deceptive components to it.

And then, at a higher level, I feel like there was a rhetorical trick in the event messaging: the protest seemed organized around a "military bad because weapons bad" affect, without recognizing that the kind of relationship OpenAI actually seems to have with the military is pretty non-central to that concern (it's working on cybersecurity stuff, which I think by most people's lights is quite different).

(I also roughly agree with Jason's analysis here.)

This analysis roughly aligns with mine and is also why I didn't go to this protest (though I did go to a previous protest organized by Pause AI). This protest seemed to me to communicate pretty deceptively about how OpenAI was handling its military relations, and I also don't really see any reason to think that engaging with the military increases existential risk very much (at least I don't see the recent changes as an update towards OpenAI causing more risk, and wouldn't see reversing those changes as progress towards reducing existential risk).

I think Apple is very exceptional here, and it does come at great cost, as many Apple employees have complained about over the past few years.

I think larger organizations are obviously worse than this, though I agree that some succeed nevertheless. I was mostly just making an argument about relative cost (and I think that, unless you put a lot of effort into it, it usually becomes prohibitively expensive at 200+ people, though it of course depends on the exact policy). See Google and OpenAI for organizations that I think are more representative here (and are more what I was thinking about).

Long-run growth rates cannot be exponential. This is easy to prove: even mild, steady exponential growth would exhaust all available matter and energy in the universe within a few million years (see Holden's post "This Can't Go On" for more details).
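To make the arithmetic concrete (a rough back-of-the-envelope on my part, assuming 2% annual growth and the commonly cited figure of roughly $10^{80}$ atoms in the observable universe):

$$1.02^{10{,}000} \approx 10^{86}$$

So after only 10,000 years of 2% growth, total output would have to be about a million times today's entire economy for every atom in the observable universe.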

So a model that tries to adjust for the marginal utility of resources should also quickly switch to something other than assumed exponential growth within a few thousand years.

Separately, the expected lifetime of the universe is finite, as is the space we can affect, so I don't see why you need discount rates (see a bunch of Bostrom's work for how much life the energy in the reachable universe can support).

But even if things were infinite, the right response isn't to discount the future to essentially nothing within a few thousand years just because we don't know how to deal with infinite ethics. Choosing an exponential discount rate in time does not strike me as very principled in the face of the ethical problems we would be facing in that case.

I agree that in short-term contexts a discount rate can be a reasonable pragmatic choice to model things like epistemic uncertainty, but this somewhat obviously falls apart on the scale of tens of thousands of years. If you introduce space travel, uploaded minds, and a world where even traveling between different parts of your civilization might take hundreds of years, you of course have much better bounds on how your actions might influence the future.

I think something like a decaying exponential wouldn't seem crazy to me, where you do something like 1% for the next few years, then 0.1% for the next few hundred years, then 0.01% for the next few thousand years, and so on. But anything that is assumed to stay exponential when modeling the distant future doesn't seem to survive sanity checks.
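As a rough illustration of the contrast (these are my own numbers, just to show the orders of magnitude involved): a constant 1% annual discount rate gives year 10,000 a weight of

$$0.99^{10{,}000} \approx e^{-100} \approx 10^{-44},$$

which is effectively zero, whereas a rate that has decayed to 0.01% per year by then leaves a weight on the order of $0.9999^{10{,}000} \approx e^{-1} \approx 0.4$ (before the heavier discounting of the early years). Any constant exponential rate eventually drives the weight of the far future to essentially nothing; a decaying schedule need not.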

Edit: To clarify further: this bites particularly hard when dealing with extinction risks. The whole point of talking about extinction is that it is an event which we are very confident will have very long-lasting effects on the degree to which our values are fulfilled. If humanity goes extinct, it seems like we can be reasonably confident (though not totally confident) that this implies a large reduction in human welfare billions of years into the future (since there are no humans around anymore). So especially in the context of extinction risk, an exponential discount rate seems inappropriate for modeling the relevant epistemic uncertainty.

I meant in the sense that humans were alive 10,000 years ago, and could have caused the extinction of humanity then (and in that decision, by the logic of the OP, they would have assigned zero weight to us existing).
