Jess_Whittlestone


Comments

"Big tent" effective altruism is very important (particularly right now)

I also interpreted this comment as quite dismissive, but I think most of that comes from the fact that Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).

I think I naturally interpret a downvote as meaning "I think this post/comment isn't helpful and I generally want to discourage posts/comments like it." That seems pretty harsh in this case, and at odds with the fact that Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently, especially when they are reasonable and constructive like this one.

This is a minor point in some ways, but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community). I haven't spent a lot of time on this forum recently, so I'm wondering if other people think the norms around up/downvoting are different to my interpretation, and in particular whether you, Max, meant to use it differently?

[EDIT: I checked the norms on up/downvoting, which say to downvote if either "There’s an error", or "The comment or post didn’t add to the conversation, and maybe actually distracted." I personally think this post added something useful to the conversation about the scope and focus of EA, and it seems harsh to downvote it because it conflated a few different dimensions - and that's why Max's comment seemed a bit harsh/dismissive to me]

Long-Term Future Fund: August 2019 grant recommendations

Firstly, I very much appreciate the grant made by the LTF Fund! On the discussion of the paper by Stephen Cave & Seán Ó hÉigeartaigh in the addenda, I just wanted to briefly say that I’d be happy to talk further about both: (a) the specific ideas/approaches in the paper mentioned, and also (b) broader questions about CFI and CSER’s work. While there are probably some fundamental differences in approach here, I also think a lot may come down to misunderstanding/lack of communication. I recognise that both CFI and CSER could probably do more to explain their goals and priorities to the EA community, and I think several others beyond myself would also be happy to engage in discussion.

I don’t think this is the right place to get into that discussion (since this is a writeup of many grants beyond my own), but I do think it could be productive to discuss elsewhere. I may well end up posting something separate on the question of how useful it is to try and “bridge” near-term and long-term AI policy issues, responding to some of Oli’s critique - I think engaging with more sceptical perspectives on this could help clarify my thinking. Anyone who would like to talk/ask questions about the goals and priorities of CFI/CSER more broadly is welcome to reach out to me about that. I think those conversations may be better had offline, but if there's enough interest maybe we could do an AMA or something.

Long-Term Future Fund: April 2019 grant recommendations

I'd be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were applicants interviewed? Were references collected? Were there general criteria used for all applications? Sharing the reasoning behind specific decisions is great, but on its own it risks giving the impression that the grants were made based on the opinions of just one person, and that different applications might have gone through somewhat different processes.

Long-Term Future Fund: April 2019 grant recommendations

Thanks for your detailed response, Ollie. I appreciate there are tradeoffs here, but based on what you've said I do think that more time needs to go into these grant reviews.

I don't think it's unreasonable to suggest that it should require two people working full time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future. (Though I recognise that this decision isn't your responsibility personally!) Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that's the case, then I think there's a bigger problem (the job isn't being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA Funds distributing so much money.

Long-Term Future Fund: April 2019 grant recommendations

The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

I'm pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA Funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know - being able to do this should be a requirement of the job, IMO. Seconding Peter's question below, I'd be keen to hear if there are any plans to make progress on this.

If you really don't have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.

Effective Altruism Grants project update

This may be a bit late, but: I'd like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund - especially when some of the amounts are pretty big, and there's a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I'm struggling to imagine what that's being spent on.

EA Survey 2017 Series: How do People Get Into EA?

Did SlateStarCodex even exist before 2009? I'm sceptical - the post archives only go back to 2013: http://slatestarcodex.com/archives/. Maybe not a big deal, but it does suggest that at least some of your sample were just choosing options randomly/dishonestly.

Anonymous EA comments

If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques cost spoons, and when a person is bleeding spoons in front of you, you need a different strategy.

I strongly agree with this, and I hadn't heard anyone articulate it quite this explicitly - thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn't always "use this CFAR technique."

(I think CFAR are great and a lot of their techniques are really useful. But I've also spent a bunch of time feeling bad about the fact that I don't seem able to learn and implement these techniques in the way many other people seem to, and it's taken me a long time to realise that trying to 'figure out' how to fix my problems in a very analytical way is very often not what I need.)

Use "care" with care.

Thanks for writing this, Roxanne. I agree that this is a risk - and I've also cringed sometimes when I've heard EAs say they "don't care" about certain things. I think it's good to highlight this as a thing we should be wary of.

It reminds me a bit of how in academia people often say, "I'm interested in x", where x is some very specific, niche subfield, implying that they're not interested in anything else - whereas what they really mean is, "x is the focus of my research." I've found myself saying this wrt my own research, and then often caveating, "actually, I'm interested in a tonne of wider stuff, this is just what I'm thinking about at the moment!" So I'd like it if the norm in EA were more towards saying things like, "I'm currently focusing on/working on/thinking about x" rather than "I care about x".

Should I be vegan?

If you haven't tried just avoiding eggs, it seems worth at least trying.

Yeah, that seems right!

I don't understand the "completely trivial difference" line. How do you think it compares to the quality of life lost by eating somewhat cheaper food? For me, the cheaper food is much more cost-effective, in terms of world-bettering per unit of foregone joy.

I think this is probably just a personal thing - for me, eating somewhat cheaper food would be worse in terms of enjoyment than cutting out dairy. The reason I say it's a basically trivial difference is that, while I enjoy dairy products, I don't think I enjoy them more than I enjoy various other foods - they're just another thing that I enjoy. So given that I can basically replace all the non-vegan meals I would normally have with vegan meals that I like as much (which requires some planning, of course), I don't think there will be much, if any, difference in my enjoyment of food over time. I also think that even a very small difference in the pleasure I get from eating dairy vs vegan food would be trivial in terms of my happiness/enjoyment over my life as a whole, or even any day as a whole - I don't think I'd ever look back on a day and think "Oh, my enjoyment of that day would have been so much greater if I'd eaten cheese." I enjoy food, but it's not that big a deal relative to a lot of other more important things.
