Today I learned about Simpol, an org working on solving global coordination problems: https://simpol.org

Their approach is to encourage governments to work together to enact "simultaneous policy" across multiple issues, taking action while avoiding first-mover challenges or a race to the bottom. By negotiating across multiple issues, concessions can be made to entities that might lose out in some areas in order to keep the entire negotiation net-positive for all.

So far my learning has been from this podcast episode, although it took a while to really explain the not-very-complex solution: https://www.jimruttshow.com/john-bunzl/
A new-to-me take on the Amazon. Claims deforestation would lead to changing rain patterns "from Argentina up to the American midwest", "which means that Amazon dieback would disrupt/destroy water and food supplies across much of the western hemisphere". The article talks about the state of journalism, climate strategy, and climate science.

https://savingjournalism.substack.com/p/revisiting-the-amazon-fires
Longtermist book that I missed: "The Good Ancestor: How To Think Long Term in a Short-Term World" by Roman Krznaric, released in October 2020. (The US version has a different subtitle.) There is also a TED Talk.

I just started listening to the audiobook. I'm about 8 minutes in and he's already mentioning existential risk from AI and pandemics. In the next lines he mentions Bostrom and alludes to Ord. He draws a parallel between colonialism's disregard for indigenous populations and the practice of ignoring future people.

Edit: after the intro we start to veer away from EA and talk about e.g. Extinction Rebellion as role models, and about a 100+ year horizon as long-term thinking.

Searching on this forum, I can see that I missed references to it in https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other and https://forum.effectivealtruism.org/posts/ySwarSKFzxKLhCyo8/introduction-to-longtermism
Go for it!
This might be a few years old now, but here's a collection of EA elevator pitches: https://docs.google.com/document/d/1vsQdWIcL1nWdTTdQtB4uH1f_rIjDo27-CwaZUnfqEG4
In his Notes on hiring a copyeditor for CEA, Aaron writes:
... I’ve also kept a record of the other applicants who most impressed me, so that I can let them know if I hear about promising opportunities. I’ve already referred a few candidates for different part-time roles at other EA orgs, and I anticipate more chances to come.
Their career review for ML PhDs says:
Want to use an ML PhD to make the world a better place? We want to help.
We’ve coached dozens of people considering PhDs, and can often put you in touch with relevant experts for more guidance. Apply for our free coaching service, particularly if you want to work on AI safety
I'd say go ahead and apply. They can decide whether they want to speak to you (I'd guess that they would). If they don't, no big loss, and if they do, it only takes about 45 minutes of 1-1 time.
You can now vote in Project 4 Awesome to help EA charities win grants of around $25,000 USD each, judging by past years.
You can vote once for each charity's video, and every vote counts. Click on a thumbnail to access the voting page for that video.
Give Directly: http://projectforawesome.com/?charity=2rRk4r7S
Clean Air Taskforce: http://www.projectforawesome.com/?charity=YBH2SiFJ
It's probably best to open one tab, do the CAPTCHA, and then open the rest of the tabs so you don't have to repeat the CAPTCHA. (Credit to Michael; I have no idea how to link to users.)