Thank you for writing this! It helped me understand my negative feelings towards long-termist arguments so much better. In talking to many EA university students and organizers, I've found that many of them have serious reservations about long-termism as a philosophy, but not as a practical project, because long-termism as a practical project usually means "don't die in the next 100 years." That's something we can pretty clearly make progress on, which matters because the usual objection is that maybe we can't influence the long-term future. I've been frustrated that in the intro fellowship and in EA conversations we have to take such a strange path to something so intuitive: let's try to avoid billions of people dying this century.
The case for working on animal welfare over AI / X-risk
https://docs.google.com/document/d/1gk2vVgp6NJf15rGr9R_H68DGwpKgIPUvhkk7DCqUbL4/edit?usp=sharing
Sorry about that, and thanks for pointing this out :) Akash will update this soon!
Thanks for writing this post. :) I like how you accept that a low-commitment reading group is sometimes the best option.
I think one of the ways reading groups go wrong is when you don't put in the intentional effort or accountability to ensure everyone actually reads, yet you still expect them to, even though you're unsurprised when they don't. Then, because you wish they had read, you run the discussion as if they're prepared. You end up in the awkward situation you described, where people stay quiet because they don't want to blatantly reveal they haven't done the reading.
I love and appreciate these suggestions! I'll be stealing the idea about copying readings into Google Docs and am super excited to try it.
Thanks for writing this post :)
It seems like two of the main factors leading to your mistakes were the way ideas get twisted as they echo through the community, and the way epistemic humility turns into deference to experts. I especially resonated with this:
"I did plenty of things just because they were 'EA' without actually evaluating how much impact I would be having or how much I would learn."
As a university organizer, I see that nearly all of my experience with EA so far is not “doing” EA, but only learning about it. Not making impact estimates myself and then comparing to experts, but being anchored to experts’ answers from the start. It’s very much like university. You learn the common arguments and “right” answers, and even though you’re encouraged to discuss and disagree, everyone pretty much knows what the teacher or facilitator wants you to say.
I like your plans to think further about how to best help others and about your own cause prioritization. That's what I'm trying to do right now too :) But I'm curious why neither of us did this earlier. EAs often say they want you to figure things out for yourself, but there is also so much deference and respect towards experts that I think it becomes scary to say what you actually think, when everyone has a pretty good idea of what you're supposed to think and how epistemically humble you're supposed to be. Do you have any thoughts on how to better encourage people to build their own views in EA? Or what would have made your past self do that?
Hey Ozzie, I've thought about this a little before and wrote about it here if you're interested! :)
This is really exciting! You could try reaching out to Coinbase to get listed as an organization on this page: https://www.coinbase.com/learn/crypto-basics/how-to-donate-crypto
Just wanted to let you know that this is extraordinarily helpful for me right now as I plan my first retreat. Thanks, Jessica!
That's great! I have to decide by Thursday, so I'll let you know what we're working on :) Definitely nothing larger than a few gigabytes, I'd say. I'm pretty new to data science and we're using pretty simple methods in this project, so I'm guessing we'll also want to do a relatively simple regression or classification analysis on a relatively simple (and maybe small) dataset.
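For concreteness, here's a minimal sketch of the kind of simple analysis I have in mind, assuming a scikit-learn workflow (the toy dataset and the choice of logistic regression are just placeholders until we pick the actual project and data):

```python
# A minimal sketch of a simple classification analysis with scikit-learn.
# load_iris is a stand-in dataset; the real project would load its own
# small CSV instead.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a small toy dataset (placeholder for the project's real data).
X, y = load_iris(return_X_y=True)

# Hold out a test set so we can check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a simple baseline classifier.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate on the held-out data.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

For a regression task instead, the same structure would apply with something like LinearRegression and a metric such as mean squared error.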