
cata
87 karma · Joined Nov 2022

Comments (8)

Appreciate the reply. I don't have a well-informed opinion about Hanania in particular, and I really don't care to read enough of his writing to try to get one, so I think I've said everything I can say on the topic (e.g. I can't really speak to whether Hanania's views are specifically worse than the examples that come to mind when I think of EA views people may find outrageous).

Why move from "wrong or heartless" to "unusual people with unusual views"?

 

I believe these two things:

A) People don't have very objective moral intuitions, so there isn't widespread agreement on what views are seriously wrong.

B) Unusual people typically come by their unusual views by thinking in some direction that is not socially typical, and then drawing conclusions that make sense to them.

So if you are a person who does B, you probably can't be confident, and shouldn't be, that many other people won't find your views seriously wrong. So a productive intellectual community that wants to hear what you have to say should be prepared to tolerate views that seem seriously wrong, perhaps with some caveats (e.g. that they are the sort of view a person might honestly come by, as opposed to something invented purely maliciously).

None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it).

I think this is absolutely false. A fairly obvious example (to many people, since, as above, people do not unanimously agree on what is hateful) is the famous Nick Bostrom email about racial differences. Another example, to many, is the similar correspondence from Scott Alexander. Another is Zack Davis's writing on transgender identity. Another is Peter Singer's writing on disability. Another is this post arguing in favor of altruistic eugenics. These are all views that many people who are culturally very close to the authors (e.g. modern Western intellectuals) would consider hateful and wrong.

Of course, having views that substantially different cultures would consider hateful and wrong is so commonplace that I hardly need to give examples. Many of my extended family members consider the idea that abortion is permissible to be hateful and wrong. I consider their view, along with many of their other religious views, to be hateful and wrong. And I don't believe that either of us has come by our views particularly unreasonably.

What would be wrong is implicitly conveying that the person you're platforming is vetted/normal/harmless, when they actually seem dangerous.

Perhaps this is an important crux. If a big conference brings in a bunch of people to give talks that the speakers are individually responsible for, I personally would infer ~zero vetting or endorsement, and I would judge each talk with an open mind. (I think I am correct to do this, because little vetting is in fact done; the large conferences I have been familiar with hunt for speakers based on who they think will draw crowds, e.g. celebrities and people with knowledge and power, not based on whether the organizers agree with the contents of the talks.) So if this is culturally ambiguous, it would seem fine to clarify.

cata · 11d

I have been extremely unimpressed with Richard Hanania and I don't understand why people find his writing interesting. But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.

Whether someone speaks at Manifest (or is on a blogroll, or whatever) should depend on whether they are going to give an interesting talk to Manifest, not on their general moral character. Especially not on the moral character of their beliefs rather than their actions. And really especially not on the moral character of things they used to believe.

cata · 17d

I think you are placing far too little faith in the power of the truth. None of the events you list above are bad. It's implied that they are bad because they will cause someone to unfairly judge Open Phil poorly. But why presume that more information will lead to worse judgment? It may lead to better judgment.

As an example, GiveWell publishes detailed cost-effectiveness spreadsheets and analyses, which definitely make me take their judgment far more seriously than I otherwise would. They also provide fertile ground for criticism (a recent popular magazine article and an essay did just that, nitpicking various elements of the analyses they considered insufficient). The idea that GiveWell's audience would, in the end, think worse of them because such criticism exists is not credible to me.

I don't know if this is a fair assessment, but it's hard for me to expect anything else as long as many EAs are being sourced from elite universities, since those are basically the world's focal point for the production and consumption of inflated credentials.

I think you are incorrectly conflating being mistaken with being "actively harmful" (what does "actively" mean here?). I think most things that are well written and contain interesting, true information or perspectives are helpful, your examples included.

Truth-seeking is a long game that is mostly about people exploring ideas, not about people trying to minimize false beliefs at each individual moment.

I disagree completely. The kinds of things they could have done to avoid being subject to this bug would have been, e.g.:

  • Basically be expert maintainers of all their dependencies, working full-time to fuzz-test them or prove them correct (if they did this they would never have been able to release their website)
  • Magically pick higher-quality dependencies than they already do, without doing the above (there is no reason to believe this is possible, since the bug was in redis-py, an old, first-party, actively maintained client library for a project widely respected for its quality, Redis)
  • Run some kind of single-tenant-per-stack setup where each user has a database/server all to themselves serving their connections to GPT (AFAIK totally ridiculous -- what would be the cost/benefit of running their API and website like that?)

Since, to my eyes, every single software organization in the world that has ever produced a public website would have been about equally likely to get hit by this bug, I totally disagree that it's useful evidence of anything about OpenAI's culture, other than "I guess their culture is not run by superintelligent aliens who run at 100x human speed and proved all of their code correct before releasing the website." I agree, it's too bad that OpenAI is not that.

What is the thing that you thought they might have done differently, such that you are updating on them not having done that thing?

For reference, the bug: https://github.com/redis/redis-py/issues/2624
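
For intuition about why this class of bug is so easy to fall into, here is a toy sketch in plain asyncio. It is not redis-py's actual code; the names (FakeConnection, execute) are made up for illustration, and it assumes a server that answers commands strictly in order on each pooled connection, as Redis does. The point it demonstrates is the failure mode described in the linked issue: a cancellation that lands between "send command" and "read reply" leaves a stale reply buffered on a shared connection, so the next request served by that connection gets the previous request's data.

```python
import asyncio
from collections import deque


class FakeConnection:
    """Stands in for one pooled connection to a server that answers commands in order."""

    def __init__(self) -> None:
        self._pending_replies: deque[str] = deque()

    async def send_command(self, command: str) -> None:
        # The server will eventually answer every command it receives, in order.
        self._pending_replies.append(f"reply to {command!r}")

    async def read_reply(self) -> str:
        await asyncio.sleep(0.01)  # simulate waiting on the network
        return self._pending_replies.popleft()


async def execute(conn: FakeConnection, command: str) -> str:
    await conn.send_command(command)
    # If this coroutine is cancelled right here, the reply stays buffered on
    # the connection, which the pool will happily hand to the next caller.
    return await conn.read_reply()


async def main() -> None:
    conn = FakeConnection()  # one shared connection from the "pool"

    # Request A (one user's call) is cancelled mid-flight, e.g. by a client
    # timeout, after its command was already sent.
    task_a = asyncio.create_task(execute(conn, "GET user_a_secret"))
    await asyncio.sleep(0)  # let the command get sent
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # Request B reuses the connection and receives request A's data.
    print(await execute(conn, "GET user_b_profile"))  # -> reply to 'GET user_a_secret'


asyncio.run(main())
```

Nothing in this sketch is exotic: an ordinary connection pool plus ordinary request cancellation is enough to produce it, which is why I don't think avoiding it required anything special about OpenAI's engineering culture.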

cata · 1y

Thanks for this post. I am a software engineer recently trying to do specifically altruistic work. By nature, I am kind of disdainful of PR and of most existing bureaucracies and authorities, so your emphasis on how important interoperating with those systems was for your work is very useful input to help me switch into "actually trying to be altruistic" mode.