I would define race science as the field that tries to prove the superiority of one race over another, for the purpose of supporting a racial hierarchy.

So IQ differences between races = race science

Susceptibility to different diseases != race science

Differences in 100M dash times != race science (countries don't choose their leaders based on sprint times).

Sure, I provided David Reich as an example of a population geneticist doing work that I believe is worthwhile.

I disagree, I don't think there is value in race science at all, since race isn't a particularly good way of categorizing people. At the moment, there are plenty of good scholars working in population genetics (David Reich at Harvard is a good example). None of the scholars I'm aware of use race as a primary grouping variable, since it's not particularly precise.

To be clear, the "one topic" is race science, not general intelligence.

I think this is the best steelman of a certain position that prioritizes epistemic integrity. I also think this position is wrong.

The only acceptable approach to race science is to clearly and vigorously denounce assertions that one race is somehow superior or inferior, and to state that it is a priority to address any apparent disparities between races. Responding to inquiries on this subject with some version of "I'm not an expert in intelligence research, etc." comes across as "mealy-mouthed," to use Rohit's words. Bostrom himself used a version of this argument in his apology, and it just doesn't fly.

This doesn't require sacrificing epistemic integrity. Rohit's suggested apology is pretty good in this regard:

"We still have IQ gaps between races, which doesn't make sense. It's closing, but not fast enough. We should work harder on fixing this."

EDIT: Overall, my main point is that Rohit is broadly correct in asserting that it's a huge problem if the EA community ends up somehow having a position on the IQ and race question. It's obviously a massive PR problem; how do you recruit people to join an organization that has been branded as racist? Even more importantly, though, if the question of IQ and race plays a non-trivial role in your determination of how to do the most good, then you have massively screwed up somewhere in your thought process.

EDIT 2: Removed some comments that prompted a discussion on topics that really just aren't relevant in my opinion. I think we should avoid getting caught up arguing about the specifics of Bostrom's claims, but part of my comment seems to have prompted discussion in that direction so I've removed it.

Agree with your post and want to add one thing. Ultimately, this was a failure of EA ideas more than of the EA community. SBF used EA ideas as a justification for his actions. Very few EAs would condone his amoral stance w.r.t. business ethics, but business ethics isn't really a central part of EA ideas. I think the main failure was EAs failing to adequately condemn naive utilitarianism.

I think back to the old Scott Alexander post about the rationalist community: Yes, We Have Noticed The Skulls | Slate Star Codex. He makes a valid point: the rationalist community has tried to address the obvious failure modes of rationalism. This is also true of the EA community, in that there has absolutely been some criticism of galaxy-brained naive utilitarianism. However, there is a certain defensiveness in Scott's post, an annoyance that people keep bringing up past failure modes even though rationalists try really hard not to fail that way again. I suspect this same defensiveness may have played a role in EA culture. Utilitarianism has always been criticized for the potential that it could be used to justify...well, SBF-style behavior. EAs can argue that we have newer and better formulations of utilitarianism / moral theory that don't run into that problem, and this is true (in theory). However, I do suspect that this topic was undervalued in the EA community, simply because we were super annoyed at critics who kept harping on the risks of naive utilitarianism even though clearly no real EA actually endorses naive utilitarianism.

I appreciate the response here and want to clarify my argument a bit. I totally understand that currently available SUT (single-use technology) isn't sufficient to make cultured meat cost-effective. I'm mostly arguing against the notion that these problems are intractable. To your point about the difficulties with gamma irradiation, it seems likely that there could be a reasonable alternative to gamma irradiation for SUT sterilization. At the moment, pharma companies get by just fine using stainless steel for processes >2 kL, so there isn't much pressure to improve from that angle. If single-use is truly enabling for cultured meat, then that provides an impetus for more investment in improved sterilization technologies.

The purported economic and environmental benefits of SUT are related to the elimination of sterilization steam (because everything is gamma-sterilized before shipping) and the elimination of cleaning chemicals (because the bags are not cleaned for reuse). 

The major cost savings I see for a cultured meat plant would be a reduction in air quality requirements. A fully single-use plant with completely aseptic connections can (in theory) be run aseptically without a clean room; there would just be a small clean room for media and solution prep. Existing pharma plants using SUT tend to still need high-quality air, as some steps in the process require manual manipulation. I've seen it suggested, though, that future biopharma processes using fully integrated and continuous systems could be run in clean rooms with drastically lower air quality requirements than existing plants.

Combined with 100% manual unpacking, setting, connect/disconnect, and teardown of bags, the single-use idea seemed very much at odds with the fully automatic plant that many propose.

Moving to long-duration perfusion (>30 days) reduces the need for unpacking/teardown. Biopharma companies have demonstrated the ability to run stable perfusion for up to 60 days; for the most part, they haven't gone longer than that because it's not really necessary.

What's missing for me is an explanation of exactly how your suggestions would prevent a future SBF situation; it's not clear to me that they would. The crux of your argument seems to come from this paragraph:

The community was trusting - in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split that happened early on at Alameda, and evidently more recently frontrunning. But the people who raised flags were evidently ignored, or in other cases feared being pariahs for speaking out more publicly.

Would this have been any different if EA consisted of an archipelago of affiliated groups? If anything, whistleblowing is easier in a large group, since you have a network of people you can contact to raise the alarm. Without a global EA group, who exactly do the ex-Alameda folks complain to? I guess they could talk to a journalist or something, but "trading firm CEO is kind of an amoral dick" isn't really newsworthy (I'd say that's probably the default assumption).

I also generally disagree that making EA more low-trust is a good idea. It's pretty well established that low-trust societies have more crime and corruption than high-trust societies. In that sense, making EA more low-trust seems counterproductive as a way to prevent SBF v2.0. In a low-trust society, trust is typically reserved for your immediate community. This has obvious problems, though! Making trust community-based (i.e. only trusting people in my immediate EA community) seems worse than making trust idea-based (i.e. trusting anyone who espouses shared EA values). People are more likely to defend bad actors if they consider them to be part of their in-group.

To be honest, I'd recommend the exact opposite course of action: make EA even more high-trust. High-trust societies succeed by binding members to a common consensus on ethics and morality. EAs need to be clearer about what our expectations are with regard to ethics. It was apparently not clear to SBF that being a part of the EA community means adhering to a set of norms outside of naive utilitarian calculus. The EA community should emphatically state our norms and expectations. The corollary is that members who break the rules must be called out and potentially even banished from the group.

Coming to this pretty late, but I'm curious - does the success of Paxlovid for COVID change your views on this? It took ~21 months from the start of the program to have the drug approved under an EUA. So not as fast as the vaccines, but still relatively fast. Efficacy is pretty amazing at ~90% reduction in severe illness and death (in unvaccinated populations). 

Makes sense, thanks for the context!
