Computational Physicist
6274 karma · Joined


I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.


AI ethicists often imply that certain people in AI are just using x-risk as a form of tech hype/PR. The idea is that signaling concern for AI x-risk makes you look concerned and responsible, while simultaneously advertising how powerful your product is and implying it will be much more powerful soon, so that people invest in your company. As a bonus, you can signal trustworthiness and caring while ignoring the current-day harms caused by your product, and you can still find justifications for going full steam ahead on development (such as claiming the moral people have to be at the head of the race).

I think this theory would explain the observed behavior here pretty well. 

Generally considered by whom? If you polled the general population, I think the position that rich lives are more important to save than poor lives would be highly unpopular. I certainly don't believe it.

One factor is that rich people have more resources and more ability to save themselves. If a millionaire and an impoverished person both need a ten-thousand-dollar treatment to live, obviously you should donate to the impoverished person!

Answer by titotal

EA is a fairly small and weird social movement in the grand scheme of things. A protest movement consisting only of EAs will produce pathetically small protests, which might get some curious media write-ups but will be unlikely to scare or influence anybody.

If you actually want a big protest movement, you have to be willing to form coalitions with other groups. And that means playing nice with people like AI ethicists, rather than mocking and attacking them, as has unfortunately been common here.

For people who are confused by the title, there is a nice overview of the paper on Computerphile, titled "Has Generative AI Already Peaked?".

If I'm interpreting it right, the authors' experiments seem to indicate that shoving in more compute and data, as OpenAI is doing with GPT, will hit diminishing returns and not lead to significant general reasoning outside of training-set conditions. Or in EA terms: AGI is unlikely to arrive without significant algorithmic breakthroughs.


EA, being a fallible movement, is wrong about a lot of things. A lot of people who are not aligned with EA have completely valid reasons for not being so. If you excessively filter for people who already agree with you on everything, you risk creating a groupthink atmosphere where alternative ideas have no real way to enter the discourse. Take this to the extreme and you pretty much end up with a cult.

Also, EA is not representative of the general public, and thus will have a hard time knowing how its ideas and policies are received by, or impact, broader demographics. Having normal people around to provide sanity checks is a useful byproduct of hiring more generally, rather than only recruiting people already adjacent to this very small and odd subculture.

What is the best practice for dealing with biased sources? For example, if I'm writing an article critical of EA and cite a claim made by Émile Torres, would it be misleading not to mention that they have an axe to grind?

In most of the cases you cited, I think being more honest is a good goal.

However, echoing Ulrik's concern here, the potential downsides of "deep honesty" are not limited to the "deeply honest" person. For example, a boss being "deeply honest" about being sexually attracted to a subordinate is not generally virtuous; it could just make the subordinate uncomfortable, and could easily constitute sexual harassment. This isn't a hypothetical: a high-up EA cited the similar concept of "radical openness" as a contributing factor to his sexual harassment.

White lies exist for a reason; there are plenty of cases where people are not looking for "radical honesty". Say you turn someone down for a date because they have a large, disfiguring facial scar that makes them unattractive to you. Some people might want to know that this is the reason; others might find it depressing to be told that a thing they have no control over makes them ugly. I think this is a clear case where the recipient should be the one asking. Don't be "deeply honest" with someone about potentially sensitive subjects unprompted.

As another example, you mention being honest when people ask "how are you?". Generally, it's a good idea to open up to your friends, and to have them open up to you. But if your cashier asks "how are you?", they are just being polite; don't trauma-dump your struggles onto them.

Answer by titotal

I was part of a hobby group that successfully addressed its sexual harassment problem. I wrote up my experiences here.

I guess this does count as "lots of people leaving", as we kicked out the sexual harassers, and some of their friends left as well. This is why I don't think one should avoid conflict, or people leaving, at all costs: if you try to change the culture for the better, it's inevitable that some people who were comfortable with the status quo will take issue or leave. Of course, if you don't change, then a different set of people will be uncomfortable and leave, or never join in the first place. It's a trade-off, and in the case of sexual harassers, a rather easy one.

In my experience, this forum seems kinda hostile to attempts at humour (outside of April Fools' Day). This might be a contributing factor to the relatively low population here!
