It feels fairly alarming to me that this post didn't get more pushback here and is so highly upvoted.
I think it makes a couple interesting points, but then makes extremely crazy sounding claims, taking the Rethink Priorities 7 - 15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them. This is a pretty crazy sounding number that needs way better argumentation than "a poll of people said so", and here it's just asserted without much commentary at all.
(In addition to things other people have mentioned here,...
taking the Rethink Priorities 7 - 15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them. This is a pretty crazy sounding number that needs way better argumentation than "a poll of people said so", and here it's just asserted without much commentary at all.
I'm confused by this statement. The welfare range estimates aren't based on a "poll"; they're based on numerous "particular models."
taking the Rethink Priorities 7 - 15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them
I'm interested to hear what you think the relevant difference is between the epistemic grounding of (1) these figures vs. (2) people's P(doom)s, which are super common in LW discourse. I can imagine some differences, but the P(doom)s of alignment experts still seem very largely ass-pulled and yet also largely deferred-to.
I basically disagree with this take on the discussion.
Most clearly: this post did generate a lot of pushback. It has more disagree votes than agree votes, the top comment by karma argues against some of its claims and is heavily upvoted and agree-voted, and it led to multiple response posts including one that reaches the opposite conclusion and got more karma & agree votes than this one.
Focusing on the post itself: I think that the post does a decent job of laying out the reasoning for its claims, and contains insights that are relevant and not widely ...
On the 7-15% figure: I don't actually see where the idea comes from that it's common sense that smaller, less intelligent animals suffer less when they are in physical pain. People almost never cite a source for it being common sense, and I don't recall having had any opinion about it before I encountered academic philosophy. I think it is almost certainly true that people don't care very much about small dumb animals, but there are a variety of reasons why that is only moderate evidence for the claim that ordinary people think they experience less intense...
A few DC and EU people tell me that, in private, Anthropic (and others) are more unequivocally anti-regulation than their public statements would suggest.
I've tried to get this on the record—person X says that Anthropic said Y at meeting Z, or just Y and Z—but my sources have declined.
Following up my other comment:
To try to be a bit more helpful rather than just complaining and arguing: when I model your current worldview, and try to imagine a disclaimer that helps a bit more with my concerns but seems like it might work for you given your current views, here's a stab. Changes bolded.
...OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We recommend specific opportunities at OpenAI that we think may be high impact. We recommend applicants pay attention to the details of individual r
Thanks.
Fwiw, while writing the above, I did also think "hmm, I should also have some cruxes for what would update me towards 'these jobs are more real than I currently think.'" I'm mulling that over and will write up some thoughts soon.
It sounds like you basically trust their statements about their roles. I appreciate you stating your position clearly, but, I do think this position doesn't make sense:
I think EAs vary wildly. I think most EAs do not have those skills – they are very difficult skills. Merely caring about the world is not enough.
I think most EAs do not, by default, prioritize epistemics that highly, unless they came in through the rationalist scene, and even then, I think holding onto your epistemics while navigating social pressure is a very difficult skill that even rationalists who specialize in it tend to fail at. (Getting into details here is tricky because it involves judgment calls about individuals, in social situation...
Surely this isn't the typical EA though?
I think job ads in particular are a filter for "being more typical."
I expect the people who have a chance of doing a good job to be well connected to previous people who worked at OpenAI, with some experience under their belt navigating organizational social scenes while holding onto their own epistemics. I expect such a person to basically not need to see the job ad.
I do want to acknowledge:
I refer to Jan Leike's and Daniel Kokotajlo's comments about why they left, and reference other people leaving the company.
I do think this is important evidence.
I want to acknowledge I wouldn't actually bet that Jan and Daniel would endorse everyone else leaving OpenAI, and would only weakly bet that they'd endorse not leaving up the current 80k ads as written.
I am grateful to them for having spoken up publicly, but I know that a reason people hesitate to speak publicly about this sort of thing is that it's easier for soundbyt...
I have slightly complex thoughts about the "is 80k endorsing OpenAI?" question.
I'm generally on the side of "let people make individual statements without treating it as a blanket endorsement."
In practice, I think the job postings will be read as an endorsement by many (most?) people. But I think the overall policy of "social-pressure people to stop making statements that could be read as endorsements" is net harmful.
I think you should at least be acknowledging the implication-of-endorsement as a cost you are paying.
I'm a bit confused about how...
I attempted to address this in the "Isn't it better to have alignment researchers working there, than not? Are you sure you're not running afoul of misguided purity instincts?" FAQ section.
I think the evidence we have from OpenAI is that it isn't very helpful to "be a safety conscious person there." (i.e. a combo of people leaving who did not find it tractable to be helpful there, NDAs making it hard to reason about, and IMO it being better to default to assuming bad things rather than good things given the NDAs)
I think it's especially not helpful if you're a low-contex...
I do basically agree we don't have bargaining power, and that they most likely don't care about having a good relationship with us.
The reason for the diplomatic "line of retreat" in the OP is more because:
...I'd probably imagine no-one much at OpenAI reall
Nod, thanks for the reply.
I won't argue more for removing infosec roles at the moment. As noted in the post, I think this is at least a reasonable position to hold. I (weakly) disagree, but for reasons that don't seem worth getting into here.
The things I'd argue here:
FYI, "fag" is a pretty central example of a slur in America imo.
It gets used and normalized in some edgy cultures but I think that’s sort of like how the n-word gets used in some subcultures. (When I was growing up at least it was probably in the top 5 ‘worst’ words to say, at least weighted by ‘anyone ever actually said them’)
There’s also a thing where ‘retarded’ went from ‘not that bad’ to ‘particularly bad in some circles’, although I’m not sure how that played out since it was ‘after my time’.
All of this is sort of anti-inductive and evolving, and it makes sense that it wouldn't be very obvious to a foreigner.
Eh, I've been living in the U.S. for a full decade, so I think the "foreigner excuse" doesn't really work here, I think I was mostly just wrong in a kind of boring way.
My guess is I just happened to have not heard this specific term used very much where I could see people's social reaction to it, which I guess is a weird attribute of slurs. Reading more about it in other contexts definitely made me convinced it qualifies as a slur (but also, relatedly, I would honestly be quite surprised if people used it in any kind of real way during Manifest).
I work for Habryka, so my opinion here should be discounted. (For what it's worth, I think I have disagreed with some of his other comments this week, and I think your post did update me on some other things, which I'm planning to write up.) But re:
incorrectly predicted what journalists would think of your investigative process, after which we collaborated on a hypothetical to ask journalists, all of whom disagreed with your decision.
this seems egregiously inaccurate to me. Two of the three journalists said some flavor of "it's complicated" on the topic of ...
I think it's worth pointing to the specifics of each, because I really don't think it's unreasonable to gloss as "all of whom disagreed."
I would delay publication.
This goes without saying.
...I think it depends a lot on the group's ability to provide evidence the investigators' claims are wrong. In a situation like that I would really press them on the specifics. They should be able to provide evidence fairly quickly. You don't want a libel suit but you also don't want to let them indefinitely delay the publication of an article that will be damaging to
What’s wrong with “make a specific targeted suggestion for a specific person to do the thing, with an argument for why this is better than whatever else the person is doing?”, like Linch suggests?
This can still be hard, but I think the difficulty lives in the territory, and is an achievable goal for someone who follows EA Forum and pays attention to what organizations do what.
It seemed useful to dig into "what actually are the useful takeaways here?", to try and prompt some more action-oriented discussion.
The particular problems Elizabeth is arguing for avoiding:
I left off "Taxing Facebook" because it feels like the wrong name (since it's not really p...
The comments/arguments about the community health team mostly make me think something more like "it should change its name" than "it should be disbanded." I think it's good to have a default whisper network to report things to and surreptitiously check in with, even if they don't really enforce/police things. If the problem is that people have a false sense of security, I think there are better ways to avoid that problem.
Just maintaining the network is probably a fair chunk of work.
That said – I think one problem is that the comm-health team has multiple roles. I'm ho...
But a glum aphorism comes to mind: the frame control you can expose is not the true frame control.
I think it's true that frame control (or, manipulation in general) tends to be designed to make it hard to expose, but, I think the actual issue here is more like "manipulation is generally harder to expose than it is to execute, so, people trying to expose manipulation have to do a lot of disproportionate work."
Part of the reason I think it was worth Ben/Lightcone prioritizing this investigation is as a retroactive version of "evaluations."
Like, it is pretty expensive to "vet" things.
But, if an org has practices that lead to people getting hurt (whether intentionally or not), and it's reasonably likely that those will eventually come to light, the org is more likely to proactively put effort into avoiding that sort of outcome.
This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process of aggregating information, allowing individuals to integrate it, and update, and decide what to do with it.
I think a lot of disagreements in the comments here and on LW stem from people having an implicit assumption that the conversation here is about "should [any particular person in this article] be socially punished?". In my preferred world, before you get to that phase there should be at least some period f...
I do wanna note, I thought the experience of using the google campus was much worse than many other EAGs I've been at – having to walk 5-10 minutes over to another part of the campus and hope that anyone else had shown up to the event I wanted to go to (which they often hadn't) eventually left me with a learned helplessness about trying to do anything.
I think there's a reasonable case that, from a health perspective, many people should eat less meat. But "less meat" !== "no meat".
Elizabeth was pretty clear on her take being:
Most people’s optimal diet includes small amounts of animal products, but people eat sub-optimally for lots of reasons and that’s their right.
i.e. yes, the optimal diet includes small amounts of meat (which is less than most people eat, but more than vegans eat).
The article notes:
...It’s true that I am paying more attention to veganism than I am to, say, the trad carnivore idiots, even
The argument isn’t about that at all, and I think most people would agree that nutrition is important.
It sounds like you're misreading the point of the article.
The entire point of this article is that there are vegan EA leaders who downplay or dismiss the idea that veganism requires extra attention and effort. It doesn't at all say "there are some tradeoffs, therefore don't be vegan." (It goes out of its way to say almost the opposite.)
Whether costs are worth discussing doesn't depend on how large one cost is vs the other – it depends on whether the h...
Another angle on this (I think this is implied by the OP but not quite stated outright?)
All the community-norm posts are an input into effective altruism. The gritty technical posts are an output. If you sit around having really good community norms, but you never push forward the frontier of human knowledge relevant to optimizing the world, I think you're not really succeeding at effective altruism.
It is possible that frontier-of-human-knowledge posts should be paid for with money rather than karma, since karma just isn't well suited for rewarding it. But, yeah it seems like it distorts the onboarding experience of what people learn to do on the forum.
The Rose Garden Inn is even something at a comparable price point to pressure-test against. As in, it is in the same general ballpark of distance to most of the potential users, roughly the same price, within a factor of 2 in room count, etc., but way more run down, and, as recent break-ins have shown, perhaps way more vulnerable to people just walking onto the premises and stealing construction materials while they work to fix it up.
I do think the Lightcone example is a large part of why I'm not up in arms about this. They've demonstrated in their existing somewhat s...
I'm not sure what you're imagining, in terms of overall infrastructural update here. But, here's a post that is in some sense a followup post to this:
A sort of central paradox of EA as a movement/community is "you'd think, writing up cost-benefit analysis of donation targets would be like a core community activity", but, also, there's big professional orgs evalua...