All of ZachWeems's Comments + Replies

Regarding the last paragraph, in the edit:

I think the comments here are ignoring a perfectly sufficient reason to not, e.g., invite him to speak at an EA-adjacent conference. If I understand correctly, he consistently endorsed white supremacy for several years as a pseudonymous blogger.

Effective Altruism has grown fairly popular. We do not have a shortage of people who have heard of us and are willing to speak at conferences. We can afford to apply a few filtering criteria that exclude otherwise acceptable speakers. 

"Zero articles endorsing white suprem... (read more)

The agree-votes have pretty directly proven you correct.

I think people are accidentally down-voting instead of disagree-voting, which makes the comment hidden.

 The up/down vote is on the left, agree/disagree is on the right.

[This comment is no longer endorsed by its author]
Nathan Young, 1y:
No, I think it's deliberate.

|I meant to link to Gottfredson's statement. Do you think that black people and other racial groups scored equally on IQ tests in 1996? I don't.

My disagreement was with the characterization of Gottfredson's statement as mainstream when this is disputed by mainstream sources. 

It is true that there was a difference in IQ scores, so I suggested a less disputed source saying so.

|People don't object as often to arguments about race in this way in other contexts. For example, "black people are abused by the police more" doesn't get the response of "wh... (read more)

Separate from my other comment:

|people of the white race, black race, and Asian race

I'm assuming this was completely unintended, but terms like "the X race" have very negative connotations in American English, especially if X is "white". Better terms are "X people" or "people categorized as X".

"Blacks" also has somewhat negative connotations. "Black people" is better.

(I apologize on behalf of America for our extremely complicated rules about phrasing)

I hard-disagreed for two reasons:

  • The mainstream-ness of the linked statement is heavily disputed. A person in 1996 could reasonably have been unaware of this, of course. (You may have intended to link to the 1996 APA report Intelligence: Knowns and Unknowns instead?)
  • Accuracy about genetics and race is unusually important in charged conversations like this, and your first paragraph seems to miss an important point: categories like "black", "white" and "Asian" are poor choices of genetic clusters. This is part of why population geneticists will call race a social con
... (read more)

I meant to link to Gottfredson's statement. Do you think that black people and other racial groups scored equally on IQ tests in 1996? I don't. My point is that there were a good number of people who held this belief, and if Bostrom formulated a true belief, it seems odd that he should face criticism for it. If you think it is false, we can discuss more.

I don't know exactly whether it is a "poor choice", but the reason people talk about genetics and race is that they believe the social categories have different gene variant frequencies, resulting in p... (read more)

I disagree with the first and last sentences of the last paragraph: while Bostrom's statements were compatible with a belief in genetically-influenced IQ differences, he did not clearly say so.

That said, it isn't to his credit that he hedged about it in the apology.

Yes, when it comes to judging people for what they said, it's useful to focus on what they actually said.

Generally, if you have to focus on things that a person didn't say in order to fuel your own outrage, that should be taken as a sign that what they actually said isn't as problematic as your first instinctual response suggests.

Tangent: Out of curiosity, did you / does your friend typically refer to (belief in meaningful genetically influenced racial IQ differences) as "HBD", as "part of/under HBD", or neither?

My impression was that the term was mostly used by genetics nerds, with a small number of racists using the term as a fig leaf, causing the internet to think it was a motte-and-bailey in all uses. If people who mostly cared about the IQ thing used it regularly, I suppose I was wrong.

(And to be clear since I'm commenting under my own name, meaningful genetically influenced racial IQ differences aren't plausible. My interest is the old internet drama.)

FAQ item 5) reads oddly.

|5) Was nepotism involved? In particular, would FLI's president's brother have profited in any way had the grant been awarded?

|No. He published some articles in the newspaper, but the understanding from the very beginning was that this was pro-bono, and he was never paid and never planned to get paid by the newspaper of the foundation. The grant proposal requested no funds for him. He is a journalist with many years of experience working for Swedish public radio and television, and runs his own free and non-commercial podcast. The... (read more)

Consider that perhaps the reason most of us on the forum aren't agreeing with you (and the reason Bostrom himself repudiated his words) isn't the taboo around the belief, but rather that most of us think the evidence is unconvincing at best.

But most of us are not willing to have the object-level debate here for reasons such as politics being the mind-killer, not wanting this to become a forum for debating taboo positions, and yes, internal and public perception of the community.

(I don't have survey data or anything, but I'd bet this is the case.)

If so, to the extent the majority of EAs tend to be right about things, you should update in that direction even without hearing the thoughtful critiques of your position.

Agreed. 

My model is, he has a number of frustrations with EA. That on its own isn't a big deal. There are plenty of valid, invalid, and arguable gripes with various aspects of EA. 

But he also has a major bucket error where the concept of "far-right" is applied to a much bigger Category of bad stuff. Since some aspects of EA & longtermism seem to be X to him, and X goes in the Category, and stuff in the Category is far-right, EA must have far-right aspects. To inform people of the problem, he writes articles claiming they're far-right. 

If... (read more)

Commenting from five months into the future, when this is topically relevant:

I disagree. I read Torres' arguments as not merely flawed, but as attempts to link longtermism to the far right in US culture wars. In such environments people are inclined to be uncharitable, and to spread the word to others who will also be uncharitable. With enough bad press it's possible to get a Common Knowledge effect, where even people who are inclined to be openminded are worried about being seen doing so. That could be bad for recruiting, funding, cooperative endeavors, &... (read more)

mikbp, 2y:
It was a long time ago now, but I don't remember having the feeling that he linked longtermism to the far right in that text. I don't know about other places.

I have read and reread this comment and am honestly not sure whether this was a reply to my answer or to something else.

On point 1, I think the past week is a fair indication that the coronavirus is a big problem, and we can let this point pass.

On point 2, as of my answer, there seemed to be no academic talk of human challenge trials to shorten vaccine timelines, regardless of how many were working on vaccines. The problem I see is that if a human challenge trial would shorten timelines, authorities and researchers might still hesitate to run one due to p... (read more)

ishi, 4y:
https://sciencehouse.wordpress.com has a more recent study and a discussion of two other studies at Imperial College London and Oxford. Science Magazine (AAAS) also has a whole issue (March 27) on the topic. COVID-19 appears to be a real problem, but time will tell. (My area has many scientists, but also many poor and uneducated people, so there are lots of 'conspiracy theories' floating around, 'viruses of the mind'; there are academic papers on these as well, mostly written by physicists.) My point 4 I actually view as the main one, unless you are actually developing vaccines in a laboratory or testing them in the field. (I have done a tiny bit of lab biology and field biology as a student, but it's not my area.) In that sense my comment was 'off topic': I was talking about prevention, not cures. A term commonly used now is to avoid 'hot spots'. The temperature or incidence of the virus is not the same everywhere, so while it may seem biased, avoid the hot spots. You can say hi to your neighbor, but you can't hug them. https://johncarlosbaez.wordpress.com may have more discussion that is more relevant to your post.

Medicine isn't my area, but I'd guess the timelines for vaccine trial completion might be significantly accelerated if some trial participants agreed to be deliberately exposed to SARS-CoV-2, rather than getting data by waiting for participants to get exposed on their own. This practice is known as a "human challenge trial" (HCT), and is occasionally used to get rapid proof-of-concept on vaccines. Using live, wild-type SARS-CoV-2 on fully informed volunteers could possibly provide valuable enough data to reduce the expected development ... (read more)

ishi, 4y:
I have mixed feelings about this idea because:

1) It's still fairly early to know how big a problem this is (and I have heard or read expert opinions on both sides: some say it may be a big problem, while others say it most likely is not).

2) Using the EA INT (Impact-Neglectedness-Tractability) framework (though some use SNT(U), where S = Scale = Impact and (U) is 'urgency', a time-discounting or triage factor, i.e. there's no point in setting up a research program to find a cure if it is going to arrive too late), I am not sure this issue is Neglected. (I think the US government just allocated half a billion dollars to work on this.) There are also already many people (epidemiologists, virologists, health care workers, and health management people) working on this, internationally. Of course that doesn't mean they couldn't use some help, or that what they are doing is the most 'effective'. I wouldn't dismiss what these professional people are doing (health departments, CDC, etc.) as ineffective or in need of help or input any more than I would dismiss the efforts of the fire department for fires around here and just try to put the fire out myself. But it's possible they could use help, even in a variety of ways: average people can just call the fire department if they can't put a fire out, and help any people displaced by the fire.

3) I have already seen a few theoretical epidemiological papers on this subject. https://infomullet.com has one which is not peer reviewed and less theoretical; it's more a 'Fermi' or 'back of an envelope' set of calculations (though done on a computer) than a fully fledged theoretical model. (I think it's based on the standard SIR model in epidemiology, or an updated, more complex modification of that.) If one is doing a theoretical model, I think one has to try to review what people are doing or have done, though one can at the same time develop one's own models and compare; there is no reason to reinvent the wheel. (While I have done a little labwork as an
RoboTeddy, 4y:
Software engineers could help conduct real-time outbreak response in Seattle: https://twitter.com/trvrb/status/1234931579702538240

Meta:

It might be worthwhile to have some sort of flag or content warning for potentially controversial posts like this.

On the other hand, this could be misused by people who dislike the EA movement, who could use it as a search parameter to find and "signal-boost" content that looks bad when taken out of context.

Evan_Gaensbauer, 6y:
I agree with kbog: while this is unusual for discourse on the EA Forum, it is still far above the bar where I think it's practical to be worried about controversy. If someone thinks the content of a post on the EA Forum might trigger some reader(s), I don't see anything wrong with including content warnings on posts. I'm unsure what you mean by "flagging" potentially controversial content.

What are the benefits of this suggestion?

kbog, 6y:

This is a romp through meadows of daisies and sunflowers compared to what real Internet drama looks like. It's perfectly healthy for a bunch of people to report on their negative experiences and debate the effectiveness of an organization. It will only look controversial if you frame it as controversial; people will only think it is a big deal if you act like it is a big deal.

|...having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism...

I had to go back and double-check that this comment was written before Asilomar 2017. It describes some of the talks very well.

I would also like to be added to the crazy EAs' investing group. Could you send an invite to me on here?

kbog, 7y:
I left already; there wasn't much of interest.

The 'Stache is great! He's actually how I heard about Effective Altruism.

Right, I'm accounting for my own selfish desires here. An optimally moral me-like person would only save enough to maximize his career potential.

| It just seems rather implausible, to me, that retirement money is anywhere close to being a cost-effective intervention, relative to other likely EA options.

I don't think that "Give 70-year-old Zach a passive income stream" is an effective cause area. It is a selfish maneuver. But the majority of EAs seem to form some sort of boundary, where they only feel obligated to donate up to a certain point (whether that is due to partially selfish "utility functions" or a calculated move to prevent burnout). I've considered choosing some arbit... (read more)

Linch, 7y:
Apologies, rereading it again, I think my first comment was rude. :/ I do a lot of selfish and suboptimal things as well, and it will be inefficient/stressful if each of us has to always defend any deviation from universal impartiality in all conversations.

I think on the strategic level, some "arbitrariness" is fine, and perhaps even better than mostly illusory non-arbitrariness. We're all human, and I'm not certain it's even possible to really cleanly delineate how much you value satisfying different urges for a meaningful and productive life.

On the tactical level, I think general advice on frugality, increasing your income, and maximizing investment returns is applicable. Off the top of my head, I can't think of any special information specific to the retirement/EA charity dichotomy. (Maybe the other commenters can think of useful resources?)

(Well, one thing that you might already be aware of is that retirement funds and charity donations are two categories that are often tax-exempt, at least in the US. Also, many companies "match" your investment into retirement accounts up to a certain %, and some match your donations. Optimizing either of those categories can probably save you (tens of) thousands of dollars a year.)

Sorry I can't be more helpful!
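To make the rough size of that benefit concrete, here is a toy back-of-the-envelope sketch. Every number in it (salary, match rate, contribution, tax rate) is hypothetical, and the actual rules vary by employer and jurisdiction; it only illustrates the kind of arithmetic involved, not anyone's real situation.

```python
# Illustrative back-of-the-envelope numbers only (all values are hypothetical,
# not financial advice): what an employer match plus tax-deferred retirement
# contributions might be worth in a single year.

salary = 100_000            # hypothetical gross salary (USD/year)
match_rate = 0.05           # employer matches contributions up to 5% of salary
contribution = 20_000       # hypothetical annual retirement-account contribution
marginal_tax_rate = 0.24    # hypothetical marginal income tax rate

employer_match = salary * match_rate             # "free money" from the match
tax_deferred = contribution * marginal_tax_rate  # tax not owed this year on the contribution

print(f"Employer match:       ${employer_match:,.0f}")
print(f"Tax deferred:         ${tax_deferred:,.0f}")
print(f"Rough annual benefit: ${employer_match + tax_deferred:,.0f}")
```

Even with these made-up inputs, the match plus the deferred tax comes to roughly $10k for the year, which is in line with the "(tens of) thousands of dollars" estimate above.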

Question 2: Suppose tomorrow MIRI creates a friendly AGI that can learn a value system, make it consistent with minimal alteration, and extrapolate it in an agreeable way. Whose values would it be taught?

I've heard the idea of averaging all humans' values together and working from there. Given that ISIS members are human and that many other humans believe that the existence of extreme physical and emotional suffering is good, I find that idea pretty repellent. Are there alternatives that have been considered?

RobBensinger, 8y:
Right now, we're trying to ensure that people down the road can build AGI systems that it's technologically possible to align with operators' interests at all. We expect that early systems should be punting on those moral hazards and diffusing them as much as possible, rather than trying to lock in answers to tough philosophical questions on the first go.

That said, we've thought about this some. One proposal by Eliezer years ago was coherent extrapolated volition (CEV), which (roughly) deals with this problem by basing decisions on what we'd do "if counterfactually we knew everything the AI knew; we could think as fast as the AI and consider all the arguments; [and] we knew ourselves perfectly and had better self-control or self-modification ability." We aren't shooting for a CEV-based system right now, but that sounds like a plausible guess about what we'd want researchers to eventually develop, when our institutions and technical knowledge are much more mature.

It's clear that we want to take the interests and preferences of religious extremists into account in making decisions, since they're people too and their welfare matters. (The welfare of non-human sentient beings should be taken into account too.) You might argue that their welfare matters, but they aren't good sources of moral insight: "it's bad to torture people on a whim, even religious militants" is a moral insight you can already get without consulting a religious militant, and perhaps adding the religious militant's insights is harmful (or just unhelpful).

The idea behind CEV might help here if we can find some reasonable way to aggregate extrapolated preferences. Rather than relying on what people want in today's world, you simulate what people would want if they knew more, were more reflectively consistent, etc. A nice feature of this idea is that ISIS-ish problems might go away, as more knowledge causes more irreligion. A second nice feature of this idea is that many religious extremists' rep

It seems like people in academia tend to avoid mentioning MIRI. Has this changed in magnitude during the past few years, and do you expect it to change any more? Do you think there is a significant number of public intellectuals who believe in MIRI's cause in private while avoiding mention of it in public?

So8res, 8y:
I think this has been changing in recent years, yes. A number of AI researchers (some of them quite prominent) have told me that they have largely agreed with AI safety concerns for some time, but have felt uncomfortable expressing those concerns until very recently. I do think that the tides are changing here, with the Concrete Problems in AI Safety paper (by Amodei, Olah, et al.) perhaps marking the inflection point. I think that the 2015 FLI conference also helped quite a bit.