All of AnnaSalamon's Comments + Replies

From accounts I heard later (I was not at the camp, but did hear a lot about it from folks who were), I'm basically certain CFAR would have interfered with the minor going even if the minor had agreed.  Multiple CFAR staff members stepped in to attempt to prevent the minor from going (as mentioned in e.g. https://www.rationality.org/resources/updates/2019/cfars-mistakes-regarding-brent, and as I also remember from closer to the time); much fuss was correctly made at the time, etc.  I agree that many bad mistakes were made, then and previously and afte... (read more)

I disagree. It seems to me that the EA community's strength, goodness, and power lie almost entirely in our ability to reason well (so as to actually be "effective", rather than merely tribal/random). It lies in our ability to trust in the integrity of one another's speech and reasoning, and to talk together to figure out what's true.

Finding the real leverage points in the world is probably worth orders of magnitude in our impact. Our ability to think honestly and speak accurately and openly with each other seems to me to be ... (read more)

I received this as a private message:

Hi, this is meant to be a reply to your reply to Anna. Please post it for me. [...]
Agreed that Anna seems to be misinterpreting you or not addressing your main point. The biggest question in my mind is whether EA will be on the wrong side of the revolution anyway, because we're an ideological competitor and a bundle of resources that can be expropriated. Even if that's the case though, maybe we still have to play the odds and just hope to fly under the radar somehow.
Seems like hiring some history professors as
... (read more)

First of all, thanks so much for taking the time to provide an insightful (and poetic!) comment.

It seems to me that the EA community's strength, goodness, and power lie almost entirely in our ability to reason well

Mostly agreed. I think "reasoning well" hides a lot of details though, e.g. a lot of the time people reason poorly due to specific incentives rather than because of a general inability to reason.

Finding the real leverage points in the world is probably worth orders of magnitude in our impact.

Agreed.

Our ability to think honestly and speak
... (read more)

I feel that 1-2 such posts per organization per year is appropriate and useful, especially since organizations often have year-end reviews or other orienting documents timed near their annual fundraiser, and reading these allows me to get oriented about what the organizations are up to.

5 · MichaelDickens · 7y
1-2 posts per year seems arguably reasonable; one post per month (as CEA has been doing) is excessive.

Seeing this comment from you makes me feel good about Open Phil's internal culture; it seems like evidence that folks who work there feel free to think independently and to voice their thoughts even when they disagree. I hope we manage to build a culture that makes this sort of thing possible at CFAR and in general.

1 · Owen Cotton-Barratt · 7y
Interestingly I don't think there is a big gap between my position (hence also Luke's?) and Open Phil's position.

Gotcha. Your phrasing distinction makes sense; I'll adopt it. I agree now that I shouldn't have included "clarity" in my sentence about "attempts to be clear/explainable/respectable".

The thing that confused me is that it is hard to incentivize clarity but not explainability; the easiest observable is just "does the person's research make sense to me?", which one can then choose how to interpret, and how to incentivize.

It's easy enough to invest in clarity / Motion A without investing in explainability / Motion B, though. ... (read more)

4 · Owen Cotton-Barratt · 7y
My suspicion is that MIRI significantly underinvests/misinvests in Motion A, although of course this is a bit hard to assess from outside. I think that they're not that good at clearly explaining their thoughts, but that this is a learnable (and to some extent teachable) skill, and I'm not sure their researchers have put significant effort into trying to learn it. I suspect that they don't put enough time into trying to clearly explain the foundations for what they're doing, relative to trying to clearly explain their new results (though I'm less confident about this, because so much is unobserved).

I think they also sometimes indulge in a motion where they write to try to persuade the reader that what they're doing is the correct approach and helpful on the problem at hand, rather than trying to give the reader the best picture of the ways in which their work might or might not actually be applicable. I think at a first pass this is trying to substitute for Motion B, but it actively pushes against Motion A. I'd like to see explanations which trend more towards:

  • Clearly separating out the motivation for the formalisation from the parts using the formalisation. Then these can be assessed separately. (I think they've got better at this recently.)

  • Putting their cards on the table and giving their true justification for different assumptions. In some cases this might be "slightly incoherent intuition". If that's what they have, that's what they should write. This would make it easier for other people to evaluate, and to work out which bits to dive in on and try to shore up.

I feel as though building a good culture is really quite important, and like this sort of specific proposal & discussion is how, bit by bit, one does that. It seems to me that the default for large groups of would-be collaborators is to waste almost all the available resources due basically to an "insufficiently ethical/principled social fabric".

(My thoughts here are perhaps redundant with Owen's reply to your comment, but it seems important enough that I wanted to add a separate voice and take.)

Re: how much this matters (or how much is wasted ... (read more)

Not sure how much this is a response to you, but:

In considering whether incentives toward clarity (e.g., via being able to explain one’s work to potential funders) are likely to pull in good or bad directions, I think it’s important to distinguish between two different motions that might be used as a researcher (or research institution) responds to those incentives.

  • Motion A: Taking the research they were already doing, and putting a decent fraction of effort into figuring out how to explain it, figuring out how to get it onto firm foundations, etc.

  • Moti

... (read more)
4 · Owen Cotton-Barratt · 7y
I agree with all this. I read your original "attempts to be clear" as Motion A (which I was taking a stance in favour of), and your original "attempts to be explainable" as Motion B (which I wasn't sure about).

Relatedly, it seems to me that in general, preparadigm fields probably develop faster if:

  1. Different research approaches can compete freely for researchers (e.g., if researchers have secure, institution-independent funding, and can work on whatever approach pleases them). (The reason: there is a strong relationship between what problems can grab a researcher’s interest, and what problems may go somewhere. Also, researchers are exactly the people who have leisure to form a detailed view of the field and what may work. cf also the role of play in research

... (read more)
9 · Owen Cotton-Barratt · 7y
I generally agree with both of these comments. I think they're valuable points which express more clearly than I did some of what I was getting at with wanting a variety of approaches and thinking I should have some epistemic humility.

One point where I think I disagree: I don't want to defend pulls towards being respectable, and I'm not sure about pulls towards being explainable, but I think that attempts to be clear are extremely valuable and likely to improve work. I think that clarity is a useful thing to achieve, as it helps others to recognise the value in what you're doing and build on the ideas where appropriate (I imagine that you agree with this part).

I also think that putting a decent fraction of total effort into aiming for clarity is likely to improve research directions. This is based on research experience: I think that putting work into trying to explain things very clearly is hard and often a bit aversive (because it can take you from an internal sense of "I understand all of this" to a realisation that actually you don't). But I also think it's useful for making progress purely internally, and that getting a crisper idea of the foundations can allow for better work building on this (or a realisation that this set of foundations isn't quite going to work).

I suspect it’s worth forming an explicit model of how much work “should” be understandable by what kinds of parties at what stage in scientific research.

To summarize my own take:

It seems to me that research moves down a pathway from (1) "totally inarticulate glimmer in the mind of a single researcher" to (2) "half-verbal intuition one can share with a few officemates, or others with very similar prejudices" to (3) "thingy that many in a field bother to read, and most find somewhat interesting, but that there's still no agreement ... (read more)

1 · Girish_Sastry · 7y
I agree that this makes sense in the "ideal" world, where potential donors have better mental models of this sort of research pathway, and I have found this sort of thinking useful as a potential donor.

From an organizational perspective, I think MIRI should put more effort into producing visible explanations of their work (well, depending on their strategy to get funding). As worries about AI risk become more widely known, there will be a larger pool of potential donations to research in the area. MIRI risks being out-competed by others who are better at explaining how their work decreases risk from advanced AI (I think this concern applies both to talent and money, but here I'm specifically talking about money).

High-touch, extremely large donors will probably get better explanations, reports on progress, etc. from organizations, but the pool of potential $ from donors who just read what's available online may be very large, and very influenced by clear explanations about the work. This pool of donors is also more subject to network effects, cultural norms, and memes. Given that MIRI is running public fundraisers to close funding gaps, it seems that they do rely on these sorts of donors for essential funding. Ideally, they'd just have a bunch of unrestricted funding to keep them secure forever (including allaying the risk of potential geopolitical crises and macroeconomic downturns).
4 · Paul_Crowley · 7y
I went to a MIRI workshop on decision theory last year. I came away with an understanding of a lot of points of how MIRI approaches these things that I'd have a very hard time writing up. In particular, at the end of the workshop I promised to write up the "Pi-maximising agent" idea and how it plays into MIRI's thinking. I can describe this at a party fairly easily, but I get completely lost trying to turn it into a writeup. I don't remember other things quite as well (e.g. "playing chicken with the Universe") but they have the same feel. An awful lot of what MIRI knows seems to me to be folklore like this.

Folks who haven't started college yet and who are no more than 19 years old are eligible for EuroSPARC; so, yes, your person (you?) should apply :)

1 · Vidur Kapur · 8y
Thanks for the info! Yes, I'll give it a shot.