Thanks for this! I think we still disagree though. I'll elaborate on my position below, but don't feel obligated to update the post unless you want to.
* The adversarial training project had two ambitious goals: an unrestricted threat model, and a human-defined threat model (in contrast to the synthetic L-infinity threat models that are usually considered).
* I think both of these were pretty interesting goals to aim for and at roughly the right point on the ambition-tractability scale (at least a priori). Most research projects ...
I'll briefly comment on a few parts of this post since my name was mentioned (lack of comment on other parts does not imply any particular position on them). Also, thanks to the authors for their time writing this (and future posts)! I think criticism is valuable, and having written criticism myself in the past, I know how time-consuming it can be.
I'm worried that your method for evaluating research output would make any ambitious research program look bad, especially early on. Specifically:
...The failure of Redwood's adversarial training project is unfortuna
My personal judgment is that Buck is a stronger researcher than most people with ML PhDs. He is weaker at empirical ML than this baseline, but very strong conceptually in ways that translate well to machine learning. I do think Buck will do best in a setting where he's either paired with a good empirical ML researcher or gains more experience there himself (he's already gotten a lot better in the past year). But overall I view Buck as on par with a research scientist at a top ML university.
Thank you for this comment; some of the contributors to this post have updated their views of Buck as a researcher as a result.
Thanks for this detailed comment, Jacob. We're in agreement with your first point, but on re-reading the post we can see why it might seem like we also think the problem selection was wrong - we don't believe this. We will clarify the distinction between problem selection and execution in the main post soon.
Our main concern was that we think it is important, when working on a problem where a lot of prior research has been done, to come into it with a novel approach or insight. We think it's possible the team could have done this via a more thorough ...
Thanks for this thoughtful and excellently written post. I agree with the large majority of what you had to say, especially regarding collective vs. individual epistemics (and more generally on the importance of good institutions vs. individual behavior), as well as concerns about insularity, conflicts of interest, and underrating expertise and overrating "value alignment". I have similarly been concerned about these issues for a long time, but especially concerned over the past year.
I am personally fairly disappointed by the extent to which many commenter...
This is kind of tangential, but anyone who is FODMAP-sensitive would be unable to eat any of Soylent, Huel, or Mealsquares as far as I'm aware.
To continue the tangent, I'm pretty sure this is not true for Huel. From their UK website:
All ingredients listed in Huel Powder v3.0 and Black Edition Huel are low FODMAP and for this reason they can be used alongside the Low FODMAP Diet and as part of your dietary routine if you have IBS. If you have IBS and an extremely busy lifestyle where convenience is a top priority, Huel could form a part of your eating routine to ensure you are achieving a regular meal pattern and optimal nutrition.
Thanks for writing this! One thing that might help would be more examples of Phase 2 work. For instance, I think that most of my work is Phase 2 by your definition (see here for a recent round-up). But I am not entirely sure, especially given the claim that very little Phase 2 work is happening. Other stuff in the "I think this counts but not sure" category would be work done by Redwood Research, Chris Olah at Anthropic, or Rohin Shah at DeepMind (apologies to any other people who I've unintentionally left out).
Another advantage of examples is it could help highlight what you want to see more of.
I'm teaching a class on forecasting this semester! The notes will all be online: http://www.stat157.com/
It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like "instructions" than "arguments", and Rubi was calling for suppressing arguments on the danger that they would be believed.
At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.
I don't think "the work got published, so the censorship couldn't have been that bad" really makes sense as a reaction to claims of censorship. You won't see...
I also agree with you. I would find it very problematic if anyone was trying to "ensure harmful and wrong ideas are not widely circulated". Ideas should be argued against, not suppressed.
Ideas should be argued against, not suppressed.
All ideas? Instructions for how to make contact poisons that aren't traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals' command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed.
You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.
Re: Bayesian thinking helping one to communicate more clearly. I agree that this is a benefit, but I don't think it's the fastest route or the one with the highest marginal value. For instance, when you write:
...A lot of expressed beliefs are “fake beliefs”: things people say to express solidarity with some group (“America is the greatest country in the world”), to emphasize some value (“We must do this fairly”), to let the listener hear what they want to hear (“Make America great again”), or simply to sound reasonable (“we will balance costs and benefits”) o
I just don't think this is very relevant to whether outreach to debaters is good. A better metric would be to look at life outcomes of top debaters in high school. I don't have hard statistics on this but the two very successful debaters I know personally are both now researchers at the top of their respective fields, and certainly well above average in truth-seeking.
I also think the above arguments are common tropes in the "maths vs fuzzies" culture war, and given EA's current dispositions I suspect we're systematically more likely to hear and be receptiv...
Thanks, and sorry for not responding to this earlier (was on vacation at the time). I really appreciated this and agree with willbradshaw's comment below :).
I think we just disagree about what a downvote means, but I'm not really that excited to argue about something that meta :).
As another data point, I appreciated Dicentra's comment elsewhere in the thread. I haven't decided whether I agree with it, but I thought it demonstrated empathy for all sides of a difficult issue even while disagreeing with the OP, and articulated an important perspective.
I think your characterization of my thought process is completely false for what it's worth. I went out of my way multiple times to say that I was not expressing disapproval of Dale's comment.
Edit: Maybe it's helpful for me to clarify that I think it's both good for Dale to write his comment, and for Khorton to write hers.
I think your characterization of my thought process is completely false for what it’s worth. I went out of my way multiple times to say that I was not expressing disapproval of Dale’s comment.
That's certainly better news than the alternative, but I hope you find it understandable that I don't update to 100% believing your claim, given that you may not have full introspective access to all of your own cognitive processes, and what appears to me to be a series of anomalies that is otherwise hard to explain. But I'm certainly willing to grant this for the ...
I didn't downvote Dale, nor do I wish to express social disapproval of his post (I worry that the length of this thread might lead Dale to feel otherwise, so I want to be explicit that I don't feel that way).
To your question, if I were writing a post similar to Dale, what I would do differently is be more careful to make sure I was responding to the actual content of the post. The OP asked people to support Asian community members who were upset, while at least the last paragraph of Dale's post seemed to assume that OP was arguing that we should be searchi...
The OP asked people to support Asian community members who were upset, while at least the last paragraph of Dale’s post seemed to assume that OP was arguing that we should be searching for ways to reduce violence against Asians.
It seems totally reasonable to interpret the OP as arguing for the latter as well as the former:
I think it's good for people to point out ways that criticism can be phrased more sympathetically, and even aligned with your goal of encouraging more critical discussion (which I am also in favor of). As someone who often gives criticism, sometimes unpopular criticism, I both appreciate when people point out ways I could phrase it better but also strongly desire people to be forgiving when I fail to do so. If no one took the time to point these out to me, I would be less capable of offering effective criticism.
Along these lines, my guess is that you and K...
As someone who often gives criticism, sometimes unpopular criticism, I both appreciate when people point out ways I could phrase it better
Neither you nor Khorton appear to have done this for Dale, at least not very clearly.
They being Laaunch? I agree they do a lot of different things. Hate is a Virus seemed to be doing even more scattered things, some of which didn't make sense to me. Everything Laaunch was doing seemed at least plausibly reasonable to me, and some, like the studies and movement-building, seemed pretty exciting.
My guess is that even within Asian advocacy, Laaunch is not going to look as mission-focused and impact-driven as, say, AMF. But my guess is no such organization exists--it's a niche cause compared to global poverty, so there's less professionalization--though I wouldn't be surprised if I found a better organization with more searching. I'm definitely in the market for that if you have ideas.
Though I wouldn't be surprised if I found a better organization with more searching. I'm definitely in the market for that if you have ideas.
I don't have direct ideas for the stated goal, but some brainstorming on why you are interested in Asian advocacy might be fruitful? If you are interested in things that help the Asian diaspora have better lives, have a wildly flourishing future, etc., I'd bet that the same general (human-focused) cause areas that EAs are interested in (scientific advancement, reducing existential synthetic biology and AI r...
Thanks. I'm currently planning to donate to Laaunch as they seem the most disciplined and organized of the groups. I couldn't actually tell what Hate is a Virus wants to do from their website--for instance a lot of it seems to be about getting Asians to advocate for other racial minorities, but I'm specifically looking for something that will help Asians. Laaunch seems more focused on this while still trying to build alliances with other racial advocacy groups.
They (EDIT: Laaunch) seem to be doing a lot of different things and I'm confused as to what their theory of change is.
(Tbc I only had a cursory look at their website so it's possible I missed it).
For me personally, it's symbolically important to make some sort of donation as a form of solidarity. It's not coming out of my EA budget, but I'd still rather spend the money as effectively as possible. It seems to me that practicing the virtue of effective spending in one domain will only help in other domains.
JPAL had some links to some orgs here:
Asian Americans Advancing Justice--Atlanta
stopAAPIhate.org
hateisavirus.org
laaunch.org
Edit: I also found Asian Americans Advancing Justice - this seems to be one of the biggest civil rights charities focusing on low income Asian Americans. They seem to have a good track record. One can donate without paying any fees via PayPal Giving Fund here.
Might also be worth asking @chloecockburn, who had some BLM recommendations.
I think one concrete action people could take is to try to listen to the experiences of their Asian friends and colleagues. There is a lot of discrimination that isn't violence. Understanding and solidarity can go a long way, and can also help prevent and reduce discrimination.
For Chinese immigrants in particular there are also a lot of issues related to immigration and to U.S.-China tensions.
Neither of these is directly related to the Atlanta shootings, but I think it can be a good symbolic moment to better understand others, especially since discrimination agains...
There is a lot of discrimination that isn't violence.
This is a good point, and definitely true. One example is the massive discrimination that Asians face in college admissions. During the Harvard admissions trial, both sides agreed that Asian applicants had generally superior academic and extracurricular credentials to white applicants, and much higher than black applicants, and yet were admitted at significantly lower rates. The university's defence was that on average Asians had inferior personalities, a finding which to my knowledge was not supported...
Thanks for this. I have been trying to think about what organizations I can support that would be most effective here. I'm still thinking through it myself, but if you have particular thoughts, let me know.
PBS Newshour created this list of ways people in the US can fight racism and violence against Asian Americans. (I'll add it to the post.)
I also think that solidarity with Asians around the world includes opposing the human rights violations occurring in Asian countries, such as Myanmar, China, and India.
What's the argument for supporting organizations in this cause area? If you're just trying to purchase fuzzies for yourself or other community members, that seems fine, but it's hard for me to see it making sense to prioritize anti-Asian violence as a cause area by the usual EA metrics.
But maybe there are other related causes that are more promising from an EA perspective, like lowering US-China tensions, or otherwise reducing the risks of a US-China war...
I think I'd just note that the post, in my opinion, helps combat some of these issues. For instance, it suggests that autistic people are able to learn how to interact with neurotypical people successfully, given sufficient effort--i.e., the "mask".
Thanks, that's helpful. If you're saying that the stricter criterion would also apply to DM/CHAI/etc. papers then I'm not as worried about bias against younger researchers.
Regarding your 4 criteria, I think they don't really delineate how to make the sort of judgment calls we're discussing here, so it really seems like it should be about a 5th criterion that does delineate that. I'm not sure yet how to formulate one that is time-efficient, so I'm going to bracket that for now (recognizing that might be less useful for you), since I think we actually disagr...
Also in terms of alternatives, I'm not sure how time-expensive this is, but some ideas for discovering additional work:
-Following citation trails (esp. to highly-cited papers)
-Going to the personal webpages of authors of relevant papers, to see if there's more (also similarly for faculty webpages)
Well, it's biased toward safety organizations, not large organizations.
Yeah, good point. I agree it's more about organizations (although I do think that DeepMind is benefiting a lot here, e.g. you're including a fairly comprehensive list of their adversarial robustness work while explicitly ignoring that work at large--it's not super-clear on what grounds, for instance if you think Wong and Cohen should be dropped then about half of the DeepMind papers should be too since they're on almost identical topics and some are even follow-ups to the Wong pap...
Thanks for curating this! You sort of acknowledge this already, but one bias in this list is that it's very tilted towards large organizations like DeepMind, CHAI, etc. One way to see this is that you have AugMix by Hendrycks et al., but not the Common Corruptions and Perturbations paper, which has the same first author and publication year and 4x the number of citations (in fact it would top the 2019 list by a wide margin). The main difference is that AugMix had DeepMind co-authors while Common Corruptions did not.
I mainly bring this up because this bias ...
Thanks Jacob. That last link is broken for me, but I think you mean this?
You sort of acknowledge this already, but one bias in this list is that it's very tilted towards large organizations like DeepMind, CHAI, etc.
Well, it's biased toward safety organizations, not large organizations. (Indeed, it seems to be biased toward small safety organizations over large ones, since they tend to reply to our emails!) We get good coverage of small orgs like Ought, but you're right we don't have a way to easily track individual unaffiliate...
I didn't mean to imply that laziness was the main part of your reply, I was more pointing to "high personal costs of public posting" as an important dynamic that was left out of your list. I'd guess that we probably disagree about how high those are / how much effort it takes to mitigate them, and about how reasonable it is to expect people to be selfless in this regard, but I don't think we disagree on the overall list of considerations.
I think the reason people don't post stuff publicly isn't laziness, but that there's lots of downside risk, e.g. of someone misinterpreting you and getting upset, and not much upside relative to sharing in smaller circles.
Thanks for writing this and for your research in this area. Based on my own read of the literature, it seems broadly correct to me, and I wish that more people had an accurate impression of polarization on social media vs mainstream news and their relative effects.
While I think your position is much more correct than the conventional one, I did want to point to an interesting paper by Ro'ee Levy, which has some very good descriptive and causal statistics on polarization on Facebook: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3653388. It suggests (...
You also sort of touch on this, but I think it's also helpful to convey when you have genuine uncertainty (without tipping into needless hedging and underconfidence), and also to say when you think someone else (whom they have access to) would be likely to have more informed advice on a particular question.
I like your guidelines. Some others that come to mind:
-Some people are not just looking for advice but to avoid the responsibility of choosing for themselves (they want someone else to tell them what the right answer is). I think it's important to resist this and remind people that ultimately it's their responsibility to make the decision.
-If someone seems to be making a decision out of fear or anxiety, I try to address this and de-dramatize the different options. People rarely make their best decisions if they're afraid of the outcomes.
-I try to show my w...
Thanks! 1 seems believable to me, at least for EA as it currently presents. 2 seems believable on average but I'd expect a lot of heterogeneity (I personally know athletes who have gone on to be very good researchers). It also seems like donations are pretty accessible to everyone, as you can piggyback on other people's research.
I personally wouldn't pay that much attention to the particular language people use--it's more highly correlated with their local culture than with abilities or interests. I'd personally be extra excited to talk to someone with a strong track record of handling uncertainty well who had a completely different vocabulary than me, although I'd also expect it to take more effort to get to the payoff.
This is a bit tangential, but I expect that pro athletes would be able to provide a lot of valuable mentorship to ambitious younger people in EA--my general experience has been that about 30% of the most valuable growth habits I have are imported from sports (and also not commonly found elsewhere). E.g. "The Inner Game of Tennis" was gold and I encourage all my PhD students to read it.
I didn't downvote, but the analysis seems incorrect to me: most pro athletes are highly intelligent, and in terms of single attributes that predict success in subsequent difficult endeavors I can't think of much better; I'd probably take it over successful startup CEO even. It also seems like the sort of error that's particularly costly to make for reasons of overall social dynamics and biases.
Niceness and honesty are both things that take work, and can be especially hard when trying to achieve both at once. I think it's often possible to achieve both, but this often requires either substantial emotional labor or unusual skill on the part of the person giving feedback. Under realistic constraints on time and opportunity cost, niceness and honesty do trade off against each other.
This isn't an argument to not care about niceness, but I think it's important to realize that there is an actual trade-off. I personally prefer people to err strongly on the honesty side when giving me feedback. In the most blunt cases it can ruin my day but I still prefer overall to get the feedback even then.
Okay, thanks for the clarification. I now see where the list comes from, although I personally am bearish on this type of weighting. For one, it ignores many people who are motivated to make AI beneficial for society but don't happen to frequent certain web forums or communities. Secondly, in my opinion it underrates the benefit of extremely competent peers and overrates the benefit of like-minded peers.
While it's hard to give generic advice, I would advocate for going to the school that is best at the research topic one is interested in pursuing...
I'm not sure what the metric for the "good schools" list is but the ranking seemed off to me. Berkeley, Stanford, MIT, CMU, and UW are generally considered the top CS (and ML) schools. Toronto is also top-10 in CS and particularly strong in ML. All of these rankings are of course a bit silly but I still find it hard to justify the given list unless being located in the UK is somehow considered a large bonus.
Yep, I'd actually just asked to clarify this. I'm listing schools that are good for doing safety work in particular. They may also be biased toward places I know about. If people are trying to become professors, or are not interested in doing safety work in their PhD then I agree they should look at a usual CS university ranking, which would look like what you describe.
That said, at Oxford there are ~10 CS PhD students interested in safety, and a few researchers, and FHI scholarships, which is why it makes it to the Amazing tier. At Imperial, there are 2 students and one professor. But happy to see this list improved.
I intended the document to be broader than a research agenda. For instance I describe many topics that I'm not personally excited about but that other people are and where the excitement seems defensible. I also go into a lot of detail on the reasons that people are interested in different directions. It's not a literature review in the sense that the references are far from exhaustive but I personally don't know of any better resource for learning about what's going on in the field. Of course as the author I'm biased.
Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.
Disclosure: I am working at OpenPhil over the summer. (I don't have any particular private information, both of the above facts are publicly available.)
EDIT: I don't intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.
If we think of the community as needing one ops person and one research person, the marginal value in each area drops to zero once that role is filled.
Yes, but these effects only show up when the number of jobs is small. In particular: If there are already 99 ops people and we are looking at having 99 vs. 100 ops people, the marginal value isn't going to drop to zero. Going from 99 to 100 ops people means that mission-critical ops tasks will be done slightly better, and that some non-critical tasks will get done that wouldn't have otherwise. Going from...
I'm worried that you're mis-applying the concept of comparative advantage here. In particular, if agents A and B both have the same values and are pursuing altruistic ends, comparative advantage should not play a role---both agents should just do whatever they have an absolute advantage at (taking into account marginal effects, but in a large population this should often not matter).
For example: suppose that EA has a "shortage of operations people" but person A determines that they would have higher impact doing direct research rather than doing ...
FWIW, 50k seems really low to me (but I live in the U.S. in a major city, so maybe it's different elsewhere?). Specifically, I would be hesitant to take a job at that salary, if for no other reason than I thought that the organization was either dramatically undervaluing my skills, or so cash-constrained that I would be pretty unsure if they would exist in a couple years.
A rough comparison: if I were doing a commissioned project for a non-profit that I felt was well-run and value-aligned, my rate would be in the vicinity of $50USD/hour. I'd currently be wi...
(Speaking for myself, not OpenPhil, who I wouldn't be able to speak for anyways.)
For what it's worth, I'm pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can't really think of anyone more familiar with MIRI's work than Paul who isn't already at MIRI (note that Paul started out pursuing MIRI's approach and shifted in an ML direction over time).
That being said, I agree that the public write-up on the OpenAI grant doesn't reflect that well on OpenPhil, and it seems...
I think the argument along these lines that I'm most sympathetic to is that Paul's agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people's collective blind spot (because we're all blinded by the same paradigm).
That actually didn't cross my mind before, so thanks for pointing it out. After reading your comment, I decided to look into Open Phil's recent grants to MIRI and OpenAI, and noticed that of the 4 technical advisors Open Phil used for the MIRI grant investigation (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei), all either have a ML background or currently advocate a ML-based approach to AI alignment. For the OpenAI grant however, Open Phil didn't seem to have similarly engaged technical advisors who might be predisposed to be critic...
This doesn't match my experience of why I find Paul's justifications easier to understand. In particular, I've been following MIRI since 2011, and my experience has been that I didn't find MIRI's arguments (about specific research directions) convincing in 2011*, and since then have had a lot of people try to convince me from a lot of different angles. I think pretty much all of the objections I have are ones I generated myself, or would have generated myself. Although, the one major objection I didn't generate myself is the one that I feel most applies to...
Shouldn't this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.
Personally, I feel like I understand Paul's approach better than I understand MIRI's approach, despite having spent more time on the latter. I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.
Shouldn't this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.
The fact that Paul hasn't had a chance to hear from many of his (would-be) critics and answer them means we don't have a lot of information about how promising his approach is, hence my "too early to call it more promising than HRAD" conclusion.
I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.
Have you written down...
To push back on this point, presumably even if grantmaker time is the binding resource and not money, Redwood also took up grantmaker time from OP (indeed I'd guess that OP's grantmaker time on RR is much higher than for most other grants given the board member relationship). So I don't think this really negates Omega's argument--it is indeed relevant to ask how Redwood looks compared to grants that OP hasn't made.
Personally, I am pretty glad Redwood exists and think their research so far is promising. But I am also pretty disappointed that OP hasn't funde...
My prior is that people who Jacob thinks are slam-dunks should basically always be getting funding, so I'm pretty surprised by this anecdote. (In general I also expect that there are a lot of complex details in cases like these, so it doesn't seem implausible that it was the right call, but it seemed worth registering the surprise.)