All of jsteinhardt's Comments + Replies

Where is a good place to start learning about Forecasting?

I'm teaching a class on forecasting this semester! The notes will all be online: http://www.stat157.com/

Democratising Risk - or how EA deals with critics

It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like "instructions" than "arguments", and Rubi was calling for suppressing arguments on the danger that they would be believed.

Davidmanheim: The claim was a general one - I certainly don't think that the paper was an infohazard, but the idea that this implies there is no reason for funders to be careful about what they fund seems obviously wrong. The original question was: "If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?" And I think we need to be far more nuanced about the question than a binary response about all responsibility for funding.
Democratising Risk - or how EA deals with critics

At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.

 

I don't think "the work got published, so the censorship couldn't have been that bad" really makes sense as a reaction to claims of censorship. You won't see... (read more)

Democratising Risk - or how EA deals with critics

I also agree with you. I would find it very problematic if anyone was trying to "ensure harmful and wrong ideas are not widely circulated". Ideas should be argued against, not suppressed.

Ideas should be argued against, not suppressed.


All ideas? Instructions for how to make contact poisons that aren't traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals' command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed. 

You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.

Bayesian Mindset

Re: Bayesian thinking helping one to communicate more clearly. I agree that this is a benefit, but I don't think it's the fastest route or the one with the highest marginal value. For instance, when you write:

A lot of expressed beliefs are “fake beliefs”: things people say to express solidarity with some group (“America is the greatest country in the world”), to emphasize some value (“We must do this fairly”), to let the listener hear what they want to hear (“Make America great again”), or simply to sound reasonable (“we will balance costs and benefits”)

... (read more)
EA Debate Championship & Lecture Series

I just don't think this is very relevant to whether outreach to debaters is good. A better metric would be to look at life outcomes of top debaters in high school. I don't have hard statistics on this but the two very successful debaters I know personally are both now researchers at the top of their respective fields, and certainly well above average in truth-seeking.

I also think the above arguments are common tropes in the "maths vs fuzzies" culture war, and given EA's current dispositions I suspect we're systematically more likely to hear and be receptiv... (read more)

Please stand with the Asian diaspora

Thanks, and sorry for not responding to this earlier (was on vacation at the time). I really appreciated this and agree with willbradshaw's comment below :).

Please stand with the Asian diaspora

I think we just disagree about what a downvote means, but I'm not really that excited to argue about something that meta :).

As another data point, I appreciated Dicentra's comment elsewhere in the thread. I haven't decided whether I agree with it, but I thought it demonstrated empathy for all sides of a difficult issue even while disagreeing with the OP, and articulated an important perspective.

Please stand with the Asian diaspora

I think your characterization of my thought process is completely false for what it's worth. I went out of my way multiple times to say that I was not expressing disapproval of Dale's comment.

Edit: Maybe it's helpful for me to clarify that I think it's both good for Dale to write his comment, and for Khorton to write hers.

I think your characterization of my thought process is completely false for what it’s worth. I went out of my way multiple times to say that I was not expressing disapproval of Dale’s comment.

That's certainly better news than the alternative, but I hope you find it understandable that I don't update to 100% believing your claim, given that you may not have full introspective access to all of your own cognitive processes, and what appears to me to be a series of anomalies that is otherwise hard to explain. But I'm certainly willing to grant this for the ... (read more)

Please stand with the Asian diaspora

I didn't downvote Dale, nor do I wish to express social disapproval of his post (I worry that the length of this thread might lead Dale to feel otherwise, so I want to be explicit that I don't feel that way).

To your question, if I were writing a post similar to Dale, what I would do differently is be more careful to make sure I was responding to the actual content of the post. The OP asked people to support Asian community members who were upset, while at least the last paragraph of Dale's post seemed to assume that OP was arguing that we should be searchi... (read more)

The OP asked people to support Asian community members who were upset, while at least the last paragraph of Dale’s post seemed to assume that OP was arguing that we should be searching for ways to reduce violence against Asians.

It seems totally reasonable to interpret the OP as arguing for the latter as well as the former:

  1. The title of the post references "the Asian diaspora" instead of just "Asian community members"
  2. The OP also wrote "As a community, we should stand against the intolerance and unnecessary suffering caused by these hate crimes" and a r
... (read more)
JackM: Thanks for this, I think that all makes a lot of sense. FWIW I wasn't necessarily asking you to provide this feedback to Dale. I was just noting that such feedback hadn't yet been provided. I interpreted your earlier comment as implying that it had.
Khorton: Thanks for this jsteinhardt, I agree with the above.
Please stand with the Asian diaspora

I think it's good for people to point out ways that criticism can be phrased more sympathetically, and even aligned with your goal of encouraging more critical discussion (which I am also in favor of). As someone who often gives criticism, sometimes unpopular criticism, I both appreciate when people point out ways I could phrase it better but also strongly desire people to be forgiving when I fail to do so. If no one took the time to point these out to me, I would be less capable of offering effective criticism.

Along these lines, my guess is that you and K... (read more)

As someone who often gives criticism, sometimes unpopular criticism, I both appreciate when people point out ways I could phrase it better

Neither you nor Khorton appear to have done this for Dale, at least not very clearly.

Wei_Dai: I have been assuming that EAF follows the same norm as LW with regard to downvotes, namely that it means "I'd like to see fewer comments like this one." Just in case EAF follows a different norm, I did a quick search and happened across a comment [https://forum.effectivealtruism.org/posts/KZkDBm6vuPMFALjQx/why-do-you-downvote-ea-forum-posts-and-comments?commentId=5QqDDhGrSxc3QaseD] by Khorton herself (which was highly upvoted, so I think is likely representative of typical understanding of downvotes on EAF). So it seems basically the same, i.e., a downvote means that on net the voter would prefer not to see a comment like it on the forum. Given that some people may not be very good at or very motivated to express sympathy in connection with stating an alternative hypothesis, this seems equivalent to saying that she would prefer such people not post such alternative hypotheses on the forum.

Sure, this seems to be the current norm, but as Khorton's comment had garnered substantial upvotes before I disagreed with it (I don't remember exactly but I think it was comparable to Dale's initial comment at that point), I was worried about her convincing others to her position and thereby moving the forum towards a new norm. Anyway, I do agree with "I think it's good for people to point out ways that criticism can be phrased more sympathetically" and would have no objections to any comments along those lines. I note however that is not what Khorton did, and her comment in fact did not point out any specific ways that Dale's comment could have been phrased more sympathetically.
Please stand with the Asian diaspora

They being Laaunch? I agree they do a lot of different things. Hate is a Virus seemed to be doing even more scattered things, some of which didn't make sense to me. Everything Laaunch was doing seemed at least plausibly reasonable to me, and some, like the studies and movement-building, seemed pretty exciting.

 

My guess is that even within Asian advocacy, Laaunch is not going to look as mission-focused and impact-driven as say AMF. But my guess is no such organization exists--it's a niche cause compared to global poverty, so there's less professionalization--though I wouldn't be surprised if I found a better organization with more searching. I'm definitely in the market for that if you have ideas.

Though I wouldn't be surprised if I found a better organization with more searching. I'm definitely in the market for that if you have ideas.

I don't have direct ideas for the stated goal, but some brainstorming on the purpose of why you are interested in Asian advocacy might be fruitful? If you are interested in things that help Asian diaspora have better lives, have a wildly flourishing future, etc, I'd bet that the same general (human-focused) cause areas that EAs are interested in (scientific advancement, reducing existential synthetic biology and AI r... (read more)

Please stand with the Asian diaspora

Thanks. I'm currently planning to donate to Laaunch as they seem the most disciplined and organized of the groups. I couldn't actually tell what Hate is a Virus wants to do from their website--for instance a lot of it seems to be about getting Asians to advocate for other racial minorities, but I'm specifically looking for something that will help Asians. Laaunch seems more focused on this while still trying to build alliances with other racial advocacy groups.

They (EDIT: Laaunch) seem to be doing a lot of different things and I'm confused as to what their theory of change is. 

(Tbc I only had a cursory look at their website so it's possible I missed it).

HaukeHillebrandt: I agree that LAAUNCH seems quite high upside because they do research, which I feel is often more neglected and can be quite high impact (e.g. they conduct "A comprehensive, national assessment of attitudes and stereotypes towards Asian Americans in the US – one of the few such studies in the last 20 years").
Please stand with the Asian diaspora

For me personally, it's symbolically important to make some sort of donation as a form of solidarity. It's not coming out of my EA budget, but I'd still rather spend the money as effectively as possible. It seems to me that practicing the virtue of effective spending in one domain will only help in other domains.

JPAL had some links to some orgs here:

Asian Americans Advancing Justice--Atlanta   
stopAAPIhate.org
hateisavirus.org
laaunch.org 

Edit: I also found Asian Americans Advancing Justice - this seems to be one of the biggest civil rights charities focusing on low income Asian Americans. They seem to have a good track record.  One can donate without paying any fees via PayPal Giving Fund here. 

Might also be worth asking @chloecockburn, who had some BLM recommendations.

Please stand with the Asian diaspora

I think one concrete action people could take is to try to listen to the experiences of their Asian friends and colleagues. There is a lot of discrimination that isn't violence. Understanding and solidarity can go a long way, and can also help reduce discrimination.

For Chinese immigrants in particular there are also a lot of issues related to immigration and to U.S.-China tensions.

Neither of these is directly related to the Atlanta shootings, but I think it can be a good symbolic moment to better understand others, especially since discrimination agains... (read more)

There is a lot of discrimination that isn't violence. 

This is a good point, and definitely true. One example is the massive discrimination that Asians face in college admissions. During the Harvard admissions trial, both sides agreed that Asian applicants had generally superior academic and extracurricular credentials to white applicants, and much higher than black applicants, and yet were admitted at significantly lower rates. The university's defence was that on average Asians had inferior personalities, a finding which to my knowledge is not supported... (read more)

Please stand with the Asian diaspora

Thanks for this. I have been trying to think about what organizations I can support that would be most effective here. I'm still thinking through it myself, but if you have particular thoughts, let me know.

PBS Newshour created this list of ways people in the US can fight racism and violence against Asian Americans. (I'll add it to the post.)

I also think that solidarity with Asians around the world includes opposing the human rights violations occurring in Asian countries, such as Myanmar, China, and India.

What's the argument for supporting organizations in this cause area? If you're just trying to purchase fuzzies for yourself or other community members, that seems fine, but it's hard for me to see it making sense to prioritize anti-Asian violence as a cause area by the usual EA metrics.

But maybe there are other related causes that are more promising from an EA perspective, like lowering US-China tensions, or otherwise reducing the risks of a US-China war...

(Autistic) visionaries are not natural-born leaders

I think I'd just note that the post, in my opinion, helps combat some of these issues. For instance it suggests that autistic people are able to learn how to interact with neurotypical people successfully, given sufficient effort, i.e., the "mask".

TAI Safety Bibliographic Database

Thanks, that's helpful. If you're saying that the stricter criterion would also apply to DM/CHAI/etc. papers then I'm not as worried about bias against younger researchers.

Regarding your 4 criteria, I think they don't really delineate how to make the sort of judgment calls we're discussing here, so it really seems like it should be about a 5th criterion that does delineate that. I'm not sure yet how to formulate one that is time-efficient, so I'm going to bracket that for now (recognizing that might be less useful for you), since I think we actually disagr... (read more)

Jess_Riedel: Sorry, I was unclear. Those were just 4 desiderata that the criteria need to satisfy; the desiderata weren't intended to fully specify the criteria.

Certainly possible, but I think this would partly be because MIRI would explicitly talk in their paper about the (putative) connection to TAI safety, which makes it a lot easier for me to see. (Alternative interpretation: it would be tricking me, a non-expert, into thinking there was more of a substantive connection to TAI safety than actually is there.) I am trying not to penalize researchers for failing to talk explicitly about TAI, but I am limited. I think it's more likely the database has inconsistencies of the kind you're pointing at from CHAI, Open AI, and (as you've mentioned) DeepMind, since these organizations have self-described (partial) safety focus while still doing lots of non-safety and near-term-safety research. When confronted with such inconsistencies, I will lean heavily toward not including any of them, since this seems like the only feasible choice given my resources. In other words, I select your final option: "The hypothetical MIRI work shouldn't have made the cut".

Here I understand you to be suggesting that we use a notability criterion that can make up for the connection to TAI safety being less direct. I am very open to this suggestion, and indeed I think an ideal database would use criteria like this. (It would make the database more useful to both researchers and donors.) My chief concern is just that I have no way to do this right now because I am not in a position to judge the notability. Even after looking at the abstracts of the work by Raghunathan et al. and Wong & Kolter, I, as a layman, am unable to tell that they are quite notable. Now, I could certainly infer notability by (1) talking to people like you and/or (2) looking at a citation trail. (Note that a citation count is insufficient because I'd need to know it's well cited by TAI safety papers specifically.) But this is
TAI Safety Bibliographic Database

Also in terms of alternatives, I'm not sure how time-expensive this is, but some ideas for discovering additional work:

-Following citation trails (esp. to highly-cited papers)

-Going to the personal webpages of authors of relevant papers, to see if there's more (also similarly for faculty webpages)

Jess_Riedel: Sure, sure, we tried doing both of these. But they were just taking way too long in terms of new papers surfaced per hour worked. (Hence me asking for things that are more efficient than looking at reference lists from review articles and emailing the orgs.) Following the correct (promising) citation trail also relies more heavily on technical expertise, which neither Angelica nor I have. I would love to have some collaborators with expertise in the field to assist on the next version. As mentioned, I think it would make a good side project for a grad student, so feel free to nudge yours to contact us!
TAI Safety Bibliographic Database

Well, it's biased toward safety organizations, not large organizations.

Yeah, good point. I agree it's more about organizations (although I do think that DeepMind is benefiting a lot here, e.g. you're including a fairly comprehensive list of their adversarial robustness work while explicitly ignoring that work at large--it's not super-clear on what grounds, for instance if you think Wong and Cohen should be dropped then about half of the DeepMind papers should be too since they're on almost identical topics and some are even follow-ups to the Wong pap... (read more)

Jess_Riedel: Yeah, I'm saying I would drop most of those too. I agree this can contribute to organizational bias. Just to be clear: I'm using "motivation" here in the technical sense of "What distinguishes this topic for further examination out of the space of all possible topics?", i.e., is the topic unusually likely to lead to TAI safety results down the line? (It's not anything to do with the author's altruism or whatever.)

I think what would best advance this conversation would be for you to propose alternative practical inclusion criteria which could be contrasted with the ones we've given. Here is how I arrived at ours. The initial desiderata are:

1. Criteria are not based on the importance/quality of the paper. (Too hard for us to assess.)
2. Papers that are explicitly about TAI safety are included.
3. Papers are not automatically included merely for being relevant to TAI safety. (There are way too many.)
4. Criteria don't exclude papers merely for failure to mention TAI safety explicitly. (We want to find and support researchers working in institutions where that would be considered too weird.)

(The only desiderata that we could potentially drop are #2 or #4. #1 and #3 are absolutely crucial for keeping the workload manageable.)

So besides papers explicitly about TAI safety, what else can we include, given the fact that we can't include everything relevant to safety? Papers that TAI safety researchers are unusually likely (relative to other researchers) to want to read, and papers that TAI safety donors will want to fund. To me, that means the papers that are building toward TAI safety results more than most papers are. That's what I'm trying to get across by "motivated". Perhaps that is still too vague. I'm very interested in your alternative suggestions!
TAI Safety Bibliographic Database

Thanks for curating this! You sort of acknowledge this already, but one bias in this list is that it's very tilted towards large organizations like DeepMind, CHAI, etc. One way to see this is that you have AugMix by Hendrycks et al., but not the Common Corruptions and Perturbations paper, which has the same first author and publication year and 4x the number of citations (in fact it would top the 2019 list by a wide margin). The main difference is that AugMix had DeepMind co-authors while Common Corruptions did not.

I mainly bring this up because this bias ... (read more)

Thanks Jacob.  That last link is broken for me, but I think you mean this?

 You sort of acknowledge this already, but one bias in this list is that it's very tilted towards large organizations like DeepMind, CHAI, etc.

Well, it's biased toward safety organizations, not large organizations. (Indeed, it seems to be biased toward small safety organizations over large ones, since they tend to reply to our emails!) We get good coverage of small orgs like Ought, but you're right we don't have a way to easily track individual unaffiliate... (read more)

Ask Rethink Priorities Anything (AMA)

I didn't mean to imply that laziness was the main part of your reply, I was more pointing to "high personal costs of public posting" as an important dynamic that was left out of your list. I'd guess that we probably disagree about how high those are / how much effort it takes to mitigate them, and about how reasonable it is to expect people to be selfless in this regard, but I don't think we disagree on the overall list of considerations.

Ask Rethink Priorities Anything (AMA)

I think the reasons people don't post stuff publicly isn't out of laziness, but because there's lots of downside risk, e.g. of someone misinterpreting you and getting upset, and not much upside relative to sharing in smaller circles.

MichaelA: (Just speaking for myself, as always) I definitely agree that there are many cases where it does make sense not to post stuff publicly. I myself have a decent amount of work which I haven't posted publicly. (I also wrote a small series of posts [https://www.lesswrong.com/s/r3dKPwpkkMnJPbjZE] earlier this year on handling downside risks and information hazards, which I mention as an indication of my stance on this sort of thing.) I also agree that laziness will probably rarely be a major reason why people don't post things publicly (at least in cases where the thing is mostly written up already).

I definitely didn't mean to imply that I believe that laziness is the main reason people don't post things publicly, or that there are no good reasons to not post things publicly. But I can see how parts of my comment were ambiguous and could've been interpreted that way. I've now made one edit to slightly reduce ambiguity. So you and I might actually have pretty similar stances here. But I also think that decent portions of cases in which a person doesn't post publicly may fit one of the following descriptions:

- The person sincerely believes there are good reasons to not post publicly, but they're mistaken.
  - But I also think there are times when people sincerely believe they should post something publicly, and then do, even though really they shouldn't have (e.g., for reasons related to infohazards or the unilateralist's curse).
  - I'm not sure if people err in one direction more often than the other, and it's probably more useful to think about things case by case.
- The person overestimates the risks posting publicly poses to their own reputation, or (considered from a purely altruistic perspective) overweights risks to their own reputation relative to potential benefits to others/the world (basically because the benefits are mostly externalities while the risks aren't).
- That
80k hrs #88 - Response to criticism

Thanks for writing this and for your research in this area. Based on my own read of the literature, it seems broadly correct to me, and I wish that more people had an accurate impression of polarization on social media vs mainstream news and their relative effects.

While I think your position is much more correct than the conventional one, I did want to point to an interesting paper by Ro'ee Levy, which has some very good descriptive and causal statistics on polarization on Facebook: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3653388. It suggests (... (read more)

How to give advice

You also sort of touch on this but I think it's also helpful to convey when you have genuine uncertainty (not at the cost of needless hedging and underconfidence) and also say when you think someone else (who they have access to) would be likely to have more informed advice on a particular question.

SebastianSchmidt: Especially with career decisions. I actually think that it can be good to start out with some noticeable gesture that makes people realize this clearly, and use language like "this is my impression" or "my tentative judgment call is".
How to give advice

I like your guidelines. Some others that come to mind:

-Some people are not just looking for advice but to avoid the responsibility of choosing for themselves (they want someone else to tell them what the right answer is). I think it's important to resist this and remind people that ultimately it's their responsibility to make the decision.

-If someone seems to be making a decision out of fear or anxiety, I try to address this and de-dramatize the different options. People rarely make their best decisions if they're afraid of the outcomes.

-I try to show my w... (read more)

SebastianSchmidt:

- I agree. We live in a time where people are seeking a lot of validation and just want to be told what to do. It's super important to encourage them to take agency and not just defer completely to others.

- Excellent point. If people are in a challenged state, I also see the priority as changing their state. E.g., increase their hope, agency, and light-heartedness.

- Reasoning transparency is great. Especially because people will otherwise be inclined to over-anchor on the specific suggestion instead of the considerations that led to it.
Are there any other pro athlete aspiring EAs?

Thanks! 1 seems believable to me, at least for EA as it currently presents. 2 seems believable on average but I'd expect a lot of heterogeneity (I personally know athletes who have gone on to be very good researchers). It also seems like donations are pretty accessible to everyone, as you can piggyback on other people's research.

Are there any other pro athlete aspiring EAs?

I personally wouldn't pay that much attention to the particular language people use--it's more highly correlated with their local culture than with abilities or interests. I'd personally be extra excited to talk to someone with a strong track record of handling uncertainty well who had a completely different vocabulary than me, although I'd also expect it to take more effort to get to the payoff.

Are there any other pro athlete aspiring EAs?

This is a bit tangential, but I expect that pro athletes would be able to provide a lot of valuable mentorship to ambitious younger people in EA--my general experience has been that about 30% of the most valuable growth habits I have are imported from sports (and also not commonly found elsewhere). E.g. "The Inner Game of Tennis" was gold and I encourage all my PhD students to read it.

Are there any other pro athlete aspiring EAs?

I didn't downvote, but the analysis seems incorrect to me: most pro athletes are highly intelligent, and in terms of single attributes that predict success in subsequent difficult endeavors I can't think of much better; I'd probably take it over successful startup CEO even. It also seems like the sort of error that's particularly costly to make for reasons of overall social dynamics and biases.

RyanCarey: Good point. I think I'd rather clarify/revise my claims to: 1) pro athletes will be somewhat less interested in EA than poker players, mostly due to different thinking styles, and 2) many/most pro athletes are highly generally capable, but their comparative advantage won't usually be donating tournament winnings or doing research. Something like promoting disarmament, or entering politics, could be. But it needs way more thought.

Niceness and honesty are both things that take work, and can be especially hard when trying to achieve both at once. I think it's often possible to achieve both, but this often requires either substantial emotional labor or unusual skill on the part of the person giving feedback. Under realistic constraints on time and opportunity cost, niceness and honesty do trade off against each other.

This isn't an argument to not care about niceness, but I think it's important to realize that there is an actual trade-off. I personally prefer people to err strongly on the honesty side when giving me feedback. In the most blunt cases it can ruin my day but I still prefer overall to get the feedback even then.

Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team

Okay, thanks for the clarification. I now see where the list comes from, although I personally am bearish on this type of weighting. For one, it ignores many people who are motivated to make AI beneficial for society but don't happen to frequent certain web forums or communities. Secondly, in my opinion it underrates the benefit of extremely competent peers and overrates the benefit of like-minded peers.

While it's hard to give generic advice, I would advocate for going to the school that is best at the research topic one is interested in pursuing... (read more)

Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team

I'm not sure what the metric for the "good schools" list is but the ranking seemed off to me. Berkeley, Stanford, MIT, CMU, and UW are generally considered the top CS (and ML) schools. Toronto is also top-10 in CS and particularly strong in ML. All of these rankings are of course a bit silly but I still find it hard to justify the given list unless being located in the UK is somehow considered a large bonus.

Yep, I'd actually just asked to clarify this. I'm listing schools that are good for doing safety work in particular. They may also be biased toward places I know about. If people are trying to become professors, or are not interested in doing safety work in their PhD then I agree they should look at a usual CS university ranking, which would look like what you describe.

That said, at Oxford there are ~10 CS PhD students interested in safety, and a few researchers, and FHI scholarships, which is why it makes it to the Amazing tier. At Imperial, there are 2 students and one professor. But happy to see this list improved.

Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team

I intended the document to be broader than a research agenda. For instance, I describe many topics that I'm not personally excited about but that other people are, and where the excitement seems defensible. I also go into a lot of detail on the reasons that people are interested in different directions. It's not a literature review in the sense that the references are far from exhaustive, but I personally don't know of any better resource for learning about what's going on in the field. Of course, as the author I'm biased.

The EA Community and Long-Term Future Funds Lack Transparency and Accountability

Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.

Disclosure: I am working at OpenPhil over the summer. (I don't have any particular private information, both of the above facts are publicly available.)

EDIT: I don't intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.

[-2] Dunja (3y): Thanks for the input! But I didn't claim that Nick is biased against academia - I just find the lack of clarity on this point and his explanation of why university grants were disqualified simply unsatisfactory. As for your point that it is unlikely for people with PhDs to be biased, I think ex-academics can easily hold negative attitudes towards academia, especially after exiting the system. Nevertheless, I am not concluding from this that Nick is biased (nor that he isn't) - we just don't have evidence for either of these claims, and at the end of the day, this shouldn't matter. The procedure for awarding grants should be robust enough to prevent such biases from kicking in. I am not sure whether any such measures have been undertaken in this case, though, which is why I am raising this point.
Comparative advantage in the talent market

If we think of the community as needing one ops person and one research person, the marginal value in each area drops to zero once that role is filled.

Yes, but these effects only show up when the number of jobs is small. In particular: If there are already 99 ops people and we are looking at having 99 vs. 100 ops people, the marginal value isn't going to drop to zero. Going from 99 to 100 ops people means that mission-critical ops tasks will be done slightly better, and that some non-critical tasks will get done that wouldn't have otherwise. Going from... (read more)

Comparative advantage in the talent market

I'm worried that you're mis-applying the concept of comparative advantage here. In particular, if agents A and B both have the same values and are pursuing altruistic ends, comparative advantage should not play a role---both agents should just do whatever they have an absolute advantage at (taking into account marginal effects, but in a large population this should often not matter).

For example: suppose that EA has a "shortage of operations people" but person A determines that they would have higher impact doing direct research rather than doing ... (read more)

Talent gaps from the perspective of a talent limited organization.

FWIW, 50k seems really low to me (but I live in the U.S. in a major city, so maybe it's different elsewhere?). Specifically, I would be hesitant to take a job at that salary, if for no other reason than that I would think the organization was either dramatically undervaluing my skills, or so cash-constrained that I would be pretty unsure whether it would exist in a couple of years.

A rough comparison: if I were doing a commissioned project for a non-profit that I felt was well-run and value-aligned, my rate would be in the vicinity of $50USD/hour. I'd currently be wi... (read more)

My current thoughts on MIRI's "highly reliable agent design" work

(Speaking for myself, not OpenPhil, who I wouldn't be able to speak for anyways.)

For what it's worth, I'm pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can't really think of anyone more familiar with MIRI's work than Paul who isn't already at MIRI (note that Paul started out pursuing MIRI's approach and shifted in an ML direction over time).

That being said, I agree that the public write-up on the OpenAI grant doesn't reflect that well on OpenPhil, and it seems... (read more)

[+6] Wei_Dai (5y): The Agent Foundations Forum [https://agentfoundations.org/members] would have been a good place to look for more people familiar with MIRI's work. Aside from Paul, I see Stuart Armstrong, Abram Demski, Vadim Kosoy, Tsvi Benson-Tilsen, Sam Eisenstat, Vladimir Slepnev, Janos Kramar, Alex Mennen, and many others. (Abram, Tsvi, and Sam have since joined MIRI, but weren't employees of it at the time of the Open Phil grant.) I had previously seen some complaints about the way the OpenAI grant was made, but until your comment, hadn't thought of a possible group blind spot due to a common ML perspective. If you have any further insights on this and related issues (like why you're critical of deep learning but still think the grant to OpenAI was a pretty good idea, what are your objections to Paul's AI alignment approach, how could Open Phil have done better), would you please write them down somewhere?
My current thoughts on MIRI's "highly reliable agent design" work

I think the argument along these lines that I'm most sympathetic to is that Paul's agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people's collective blind spot (because we're all blinded by the same paradigm).

That actually didn't cross my mind before, so thanks for pointing it out. After reading your comment, I decided to look into Open Phil's recent grants to MIRI and OpenAI, and noticed that of the 4 technical advisors Open Phil used for the MIRI grant investigation (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei), all either have a ML background or currently advocate a ML-based approach to AI alignment. For the OpenAI grant however, Open Phil didn't seem to have similarly engaged technical advisors who might be predisposed to be critic... (read more)

My current thoughts on MIRI's "highly reliable agent design" work

This doesn't match my experience of why I find Paul's justifications easier to understand. In particular, I've been following MIRI since 2011, and my experience has been that I didn't find MIRI's arguments (about specific research directions) convincing in 2011*, and since then have had a lot of people try to convince me from a lot of different angles. I think pretty much all of the objections I have are ones I generated myself, or would have generated myself. Although, the one major objection I didn't generate myself is the one that I feel most applies to... (read more)

[+6] jsteinhardt (5y): I think the argument along these lines that I'm most sympathetic to is that Paul's agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people's collective blind spot (because we're all blinded by the same paradigm).
My current thoughts on MIRI's "highly reliable agent design" work

Shouldn't this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.

Personally, I feel like I understand Paul's approach better than I understand MIRI's approach, despite having spent more time on the latter. I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.

Shouldn't this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.

The fact that Paul hasn't had a chance to hear from many of his (would-be) critics and answer them means we don't have a lot of information about how promising his approach is, hence my "too early to call it more promising than HRAD" conclusion.

I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.

Have you written down... (read more)

What Should the Average EA Do About AI Alignment?

I already mention this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise, with a reputation for being dominated by outsiders who don't understand much about AI. I think outreach by non-experts could end up being net-negative.

[+1] kbog (5y): It is very different for 1-on-1 engagement with highly relevant audiences than it is for general online discourse.
[0] Raemon (5y): I agree with this concern, thanks. When I rewrite this post in a more finalized form I'll include reasoning like this.
What Should the Average EA Do About AI Alignment?

In general I think this sort of activism has a high potential for being net negative --- AI safety already has a reputation as something mainly being pushed by outsiders who don't understand much about AI. Since I assume this advice is targeted at the "average EA" (who presumably doesn't know much about AI), this would only exacerbate the issue.

[0] kbog (5y): It depends on the context. In many places there are people who really don't know what they're talking about and have easily corrected, false beliefs. Plus, most places on the Internet protect anonymity. If you are careful it is very easy to avoid having an effect that is net negative on the whole, in my experience.
80,000 Hours: EA and Highly Political Causes

Thanks for clarifying; your position seems reasonable to me.

80,000 Hours: EA and Highly Political Causes

OpenPhil made an extensive write-up on their decision to hire Chloe here: http://blog.givewell.org/2015/09/03/the-process-of-hiring-our-first-cause-specific-program-officer/. Presumably after reading that you have enough information to decide whether to trust her recommendations (taking into account also whatever degree of trust you have in OpenPhil). If you decide you don't trust it then that's fine, but I don't think that can function as an argument that the recommendation shouldn't have been made in the first place (many people such as myself do trust i... (read more)

[0] the_jaded_one (5y): I agree, and I didn't mention that document or my degree of trust in it. I suppose it depends what you want to produce. If debates were predictably productive I presume people would just update without even having to have a debate. What counterarguments is one supposed to make, other than the ones one thinks of? I suppose the alternative is to not make a counterargument, or start a debate with all possible lines of play fully worked out and prepared? A high standard, to be sure. Sometimes one doesn't correctly anticipate the actual responses. Is there some tax on number of comments or responses? I mean this is valid to an extent, if someone is making really dumb arguments, but then again sometimes one has to ask the emperor why he isn't wearing any clothes.
80,000 Hours: EA and Highly Political Causes

Instead of writing this like some kind of expose, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.

I don't agree with the_jaded_one's conclusions or think his post is particularly well-thought-out, but I don't think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one's criticism is bad criticism, then I think it makes... (read more)

[+7] Kerry_Vaughan (5y): I agree with this and wasn't trying to say something to the contrary. What I was trying to do is note that the post makes a relatively minor issue into an expose on EA and on 80K. I think this is unnecessary and unwarranted by the issue. What I was trying to do is note one way of handling the issue if your goal is merely to gain more information or see that a problem gets fixed. I think public criticism is fine. I think a good, but not required, practice is to show the criticism to the organization ahead of publishing it so that they can correct factual inaccuracies. I think that would have improved the criticism substantially in this case.
[+1] Robert_Wiblin (5y): The original post is partly based on a misconception about how we produced the list and our motivations. That's the kind of thing that could have been clarified if the author had contacted us before publishing (or indeed, after publishing).
Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

In my post, I said

anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentis... (read more)

Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.

My impression is that criticism has traditionally been a strong part of Jewish culture, but I'm not culturally Jewish so can't speak directly.

I heard that B... (read more)

[+3] Daniel_Dewey (5y): Thanks! One guess is that ritualization in academia helps with this -- if you say something in a talk or paper, you ritually invite criticism, whereas I'd be surprised to see people apply the same norms to e.g. a prominent researcher posting on Facebook. (Maybe they should apply those norms, but I'd guess they don't.) Unfortunately, it's not obvious how to get the same benefits in EA.
Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a n... (read more)

[+5] Brian_Tomasik (5y): I'm surprised to hear that people see criticizing EA as incurring social costs. My impression was that many past criticisms of EA have been met with significant praise (e.g., Ben Kuhn's [http://www.benkuhn.net/ea-critique]). One approach for dealing with this could be to provide a forum for anonymous posts + comments.
[+5] Daniel_Dewey (5y): This is a great point -- thanks, Jacob! I think I tend to expect more from people when they are critical -- i.e. I'm fine with a compliment/agreement that someone spent 2 minutes on, but expect critics to "do their homework", and if a complimenter and a critic were equally underinformed/unthoughtful, I'd judge the critic more harshly. This seems bad!

One response is "poorly thought-through criticism can spread through networks; even if it's responded to in one place, people cache and repeat it other places where it's not responded to, and that's harmful." This applies equally well to poorly thought-through compliments; maybe the unchallenged-compliment problem is even worse, because I have warm feelings about this community and its people and orgs!

Proposed responses (for me, though others could adopt them if they thought they're good ideas):

* For now, assume that all critics are in good faith. (If we have / end up with a bad-critic problem, these responses need to be revised; I'll assume for now that the asymmetry of critique is a bigger problem.)
* When responding to critiques, thank the critic in a sincere, non-fake way, especially when I disagree with the critique (e.g. "Though I'm about to respond with how I disagree, I appreciate you taking the critic's risk [http://effective-altruism.com/ea/169/a_response_to_ea_has_a_lying_problem/9rv] to help the community. Thank you! [response to critique]")
* Agree or disagree with critiques in a straightforward way, instead of saying e.g. "you should have thought about this harder".
* Couch compliments the way I would couch critiques.
* Try to notice my disagreements with compliments, and comment on them if I disagree.

Thoughts?

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

This is completely true.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the

... (read more)
[+8] Benjamin_Todd (5y): Interesting. Which groups could we learn the most from?