
Miles_Brundage



I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here--there are lots of considerations at play. 

One view I hold, though, is something like: "the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you're considering the [personal/community-level] social implications thereof, is non-zero." We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition, in all such cases you will think you are right and others wrong, so there is a cost. But I don't think it is automatically/definitionally bad for people to do that to some extent, and indeed much of the progress on issues like civil rights, gay rights, etc. in the US has resulted in large part from actions getting ahead of beliefs among people who didn't "get it" yet, with cultural/ideological change gradually following via generational replacement, pop culture changes, etc. Obviously people rarely think that they are in the wrong, but it's hard to be sure, and I don't think we [the world, EA] should be aiming for a culture where there are never repercussions for expressing beliefs that, in the speaker's view, are true. Again, that's consistent with people disagreeing about particular cases; I'm just sharing my general view here.

This shouldn't only work in one ideological "direction" of course, which may be a crux in how people react to the above. Some may see the philosophy above as (exclusively) an endorsement of wokism/cancel culture etc. in its entirety/current form [insofar as that were a coherent thing, which I'm not sure it is]. While I am probably less averse to some of those things than some LW/EAF readers, especially on the rationalist side, I also think that people should remember that restraint can be positive in many contexts. For example, in my efforts to engage here and in my social media activity lately, I am trying to be careful to be respectful to people who identify strongly with the communities I am critiquing, and have held back some spicy jokes (e.g. playing on the "I like this statement and think it is true" line, which just begs for memes), precisely because I want to avoid alienating people who might be receptive to the object-level points I'm making, and because I don't want to unduly egg on critiques by other folks on social media who I think sometimes go too far in attacking EAs, etc.

(meta note: I don't check the forum super consistently, so I may miss replies)

I think there's probably some subtle subtext that I'm missing in your surprise, or some other way in which we are coming at this from different angles (besides institutional affiliations, or maybe just that), since this doesn't feel out of distribution to me--like, large corporations are super powerful/capable. Saying that "computers" could soon be similarly capable sounds pretty crazy to most people (I think--I am pretty immersed in the AI world, ofc, which is part of the issue I am pointing at re: iteration/uncertainty on optimal comms), and loudly likening something you're building to nuclear weapons does not feel particularly downplay-y to me. In any case, I don't think it's unreasonable for you/others to be skeptical re: industry folks' motivations etc., to be clear--it seems good to critically analyze stuff like this since it's important to get right--but just sharing my 2c.

(disclosure: gave feedback on the post/work at OAI) 

I don't personally love the corporation analogy/don't really lean on it myself, but would just note that IMO there is nothing euphemistic going on here--the authors are just trying one among many possible ways of conveying the gravity of the stakes, which they individually and OAI as a company have done in various ways at different times. It's not 100% clear which ways are the "correct" ones, both accuracy-wise and effective-communication-wise. I mix things up myself depending on the audience/context/my current thinking on the issue at the time, and don't think euphemism is the right way to think about that or this.

Meta-note re: my commenting non-anonymously: 

To be clear, I'd say the same thing to Nick if asked, and mean what I said re: "in everyone's interest" - I assume he wants FHI to succeed. I suspect/hope Nick would want friends/colleagues "to disagree with [him] both publicly and privately; ... who will admonish [him], gently but firmly, with whatever grain of truth there is in any accusations against me." (from this article: https://www.nytimes.com/2022/06/21/opinion/cancel-culture-friendship.html - while I don't like the term cancel culture and, like OP here, don't think it's an apt description of this situation, some of the points are relevant here).

Also, I felt very conflicted about posting this, both because I've benefited in the past from Nick's work and from him hiring me, and because posting here stresses me out a lot (I expect lots of downvotes in absolute terms, though I'm unsure how things will net out). But I went ahead because I want FHI to be able to move on and thrive.

Edit: note I tweaked this comment a fair bit after reflecting on it some more, especially the last part, and I also wanted to signal-boost one of the comments in the thread from Jonas referenced above, here: https://twitter.com/JonasSandbrink/status/1631677091996393472?s=20 The point about mentorship is a big part of what I was gesturing at re: management effectiveness.

(context: worked at FHI for 2 years, no longer affiliated with it but still in touch with some people who are)

I'd probably frame/emphasize things a bit differently myself but agree with the general thrust of this, and think it'd be both overdue and in everyone's interest.

The obvious lack of vetting of the apology was pretty disqualifying w.r.t. judgment for someone in such a prominent institutional and community position, even before getting to the content (on which I've commented elsewhere).

I'd add, re: pre-existing issues, that FHI as an institution has failed at doing super basic things like at least semi-regularly updating key components of its website*; the org's shortcomings re: diversity have been obvious from the beginning, and the apology was the last nail in the coffin re: chances for improving on that front as long as he's in charge; and I don't think I know anyone who thinks he adds net positive value as a manager** (vs. as a researcher, where I agree he has made important contributions, but that could continue without him wasting a critical leadership position, and as a founder, where his work is done).

*e.g. the news banner thing displays 6-year-old news; no publications at all were added to the publications page in all of last year, despite there definitely being several publications; etc.

**Edit: Sean's comment above suggests he maybe thinks Nick added value as a manager during a period that didn't overlap with mine, and I know Sean, so maybe I spoke too strongly here :) It's unclear whether he meant as manager, founder, research visionary, or what, though. In any case, I think it is fair to say, from what I know, that many people who have been at FHI don't think he's a super active or effective manager.

FWIW despite having pretty diametrically opposed views on a lot of these things, I agree that there is something to the issue/divide you reference. It seems correlated with the "normie-EA vs. rationalist-EA" divide I mentioned elsewhere on this page, and I think there are potential tradeoffs from naively responding to the (IMO) real issues at stake on the other side of the ledger. How to non-naively navigate all this seems non-obvious.

I think it's a bit more nuanced than that + added some more detail on my views below.

Happy to comment on this, though I'll add a few caveats first:

- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists ("many of my best friends/colleagues are rationalists"), or to imply that there aren't some upsides to the communities' overlap
- I am not sure what "institutional EA" should be doing about all this
- Since some of these are complex topics and ideally I'd want to cite lots of sources etc. in a detailed positive statement on them, I am using the "things to think about" framing. But hopefully this gives some flavor of my actual perspective while also pointing in fruitful directions for open-ended reflection. 
- I may be able to follow up on specific clarifying Qs, though I'm also not sure how closely I'll follow replies, so try to get in touch with me offline if you're interested in further discussion.
- The upvoted comment is pretty long and I don't really want to get into line-by-line discussion of specific agreements/disagreements, so will focus on sharing my own model.

Those caveats aside, here are some things I think EA-rationalists might want to think about in light of recent events:

- Different senses of the word racism (~the "believing/stating that race is a 'real thing'/that there are non-trivial differences between races (especially cognitive ones) that anyone should care about" definition, and the "consciously or unconsciously treating people better/worse given their race" definition), why some people think the former is bad/should be treated with extreme levels of skepticism and not just the latter, and whether there might be a finer line between them in practice than some think.
- Why the rationalist community seems to treat race/IQ as an area where one should defer to "the scientific consensus" but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
- Whether the purported consensus folks often refer to actually exists + what kinds of interpretations/takeaways one might draw from specific results/papers other than literal racism in the first sense above (I recommend The Genetic Lottery's section on this).
- What the information value of "more accurate [in the red pill/blackpill sense] views on race" would even be "if true," given that one never interacts with a distribution but with specific people.
- How Black people and other folks underrepresented in EA/rationalist communities, who often face multiple types of racism in the senses above, might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.

(will vaguely follow-up on this in my response to ESRogs's parallel comment) 

Note that there is now at least one post on the LW front page that is at least indirectly about the Bostrom stuff. I am not sure if it was there before and I missed it, or what.

And others' comments have updated me a bit towards the forum vs. forum difference being less surprising. 

I still think there is something like the above going on, though, as shown by the kinds of views being expressed + who's expressing them just on EA Forum, and on social media. 

But I probably should have left LW out of my "argument" since I'm less familiar with typical patterns/norms there.
