# All Posts

Sorted by Magic (New & Upvoted)

# Saturday, September 18th 2021

Shortform
**1 · jwilson016 · 3d**

Hi, I am working on a non-profit to help animals in another country by creating a sanctuary for them. I already know how to set up a corporation, convert it to a non-profit, and operate it with a board of directors. For this project, I will be opening a US non-profit and using its funds to help animals in other countries. I am looking for guides on how to establish a non-profit organization in another country, and I would like to know whether running a US non-profit that does its work abroad is any different from running one that operates only in the US.

# Thursday, September 16th 2021

Shortform
**12 · Ozzie Gooen · 5d**

A few junior/summer effective altruism related research fellowships are ending, and I'm getting to see some of the research pitches. Lots of confident-looking pictures of people with fancy and impressive-sounding projects.

I want to flag that many of the most senior people I know around longtermism are really confused about stuff, and I'm personally often pretty skeptical of those who don't seem confused. So I think a good proposal isn't something like, "What should the EU do about X-risks?" It's much more like, "A light summary of what a few people so far think about this, and a few considerations that they haven't yet flagged; but note that I'm really unsure about all of this." Many of these problems seem way harder than we'd like them to be, and much harder than many people seem to assume at first. (Perhaps this is due to unreasonable demands for rigor, but establishing an alternative would itself be a research effort.)

I imagine a lot of researchers assume they won't stand out unless they seem to make bold claims. I think this isn't true for many key EA orgs, though it might be the case that it helps for some other programs (university roles, perhaps?).

Not sure how to finish this post here. Part of me wants to encourage junior researchers to lean on humility, but at the same time, I don't want to shame those who don't feel they can do so for reasons of not-being-homeless (or simply of having to leave research). I think the easier thing is to slowly spread common knowledge and encourage a culture where proper calibration is just naturally incentivized.

Facebook Thread [https://www.facebook.com/ozzie.gooen/posts/10165443096615363]
**9 · Puggy · 5d**

Here's the problem: some charities are not just multiple times better than others; some are thousands of times better. But as far as I can tell, we don't have a good way of signaling to others what this means.

Think about when Ed Sheeran sells an album. It's "certified platinum," then "double platinum," peaking at "certified diamond." When people hear this, it makes them sit back and say, "Wow, Ed Sheeran is on a different level." When a football player is about to announce his college, he says, "I'm going D1." You become a "grandmaster" at chess. Ah, that restaurant must be good; it has won two Michelin stars. That economist writing about the tragedy of the commons is great; she won a Nobel Prize.

We need nomenclature that goes beyond "high-impact" charity. "Cost-effective," "high-impact," and "effective" are all good descriptions, but we need to come up with a rating system or some other method of conferring high status on the best charities (possibly based on how much $ it costs to save one life). It has to be something we can bring into the popular consciousness, and it can't be something we just narrowly assign to all of our own EA meta charities.

We need journalists popularizing the term and recognizing the 3-5 super charities that save lives like no one's business. We should work with marketing teams and carefully plan what the name would be. But it has to confer status on the charity, so that people like Jay-Z can gain status by donating to it, just as he gains status by eating at Michelin-starred restaurants. (Excuse me if I'm not the first to outline this idea.)

# Tuesday, September 14th 2021

Shortform
**4 · Fergus McCormack · 6d**

I wrote a very rough draft of an idea I had. It was just a stream of consciousness and I didn't really edit it. I'm not sure what the standards are like on the EA Forum: I would like to invest time in developing this further, as well as other posts I could possibly write, but as I mention at the bottom of this post, I'm at a critical juncture in my career and need to invest my time and energy elsewhere. If I can produce forum posts with some potentially interesting ideas, but of a relatively low standard, is it better for me to post them without worrying about polish, or to wait until I have more time to invest in writing better posts (writing less and editing more)?

Epistemic status: medium-low. (I have some conviction in this idea, but I'm posting it without much rework in an attempt to overcome perfectionism, partially inspired by Neel Nanda. This likely means that some of the things in this post will be wrong, but I hope it can be useful nonetheless.)

Summary: we should systematically proceduralise more activities. Learning how to proceduralise tasks requires a lot of up-front investment. This could be reduced by centralised, shared "checklists" for different tasks, designed to increase effectiveness at those tasks.

I've always had a preference for proceduralising (approaching tasks in a systematised manner), but my exposure to EA and the rationalist community has inundated me with high-quality information about how to do effective research, learn effectively, prioritise effectively, make better decisions, etc. The bottleneck holding back my progress is now the implementation of ideas that I have high conviction in. To effectively implement these ideas and integrate them into my system 2 would require a lot of deliberate practice and highly engaged, focused effort. I want to somehow outsource or expedite this process, and this seems like an important issue that is likely affecting other EAs as well.