I work at 80,000 Hours, leading the team that has conversations with people about their social impact and career.
My wife and I are currently allocating 10% of my income to "giving later", investing the funds 100% in stocks in the interim. We will likely make our regular donation to the donor lottery this year, which will come out of these funds. I would consider giving more to the donor lottery, but at first glance I am less excited about needing to put money into a DAF or equivalent if we win, because it is less flexible than money in an investment account. If users have thoughts on the ideal vehicle for "giving later" funds, I would be interested to hear them. I currently feel good about it being fairly flexible, such that it could be spent on things that are not charities or 501(c)(3)s. For now, I am keeping it in a fairly standard investment account.
Hey Jia, I haven't done many online courses, but one that I did and enjoyed was the Coursera Deep Learning course with Andrew Ng: https://www.coursera.org/specializations/deep-learning. I think if you will be working on multi-agent RL and haven't played around with deep learning models, you will likely find it helpful. You code up a Python model that gets increasingly complicated until it does things like attempting to identify a cat (if I'm remembering it correctly). It's fairly 'hands on' but also somewhat accessible to people without a technical background. Friends of mine starting out at both CSET and OpenAI worked through it and found it helpful for getting context as they moved into their new roles.
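To give a flavour of the kind of exercise the course walks you through, here is a minimal sketch in NumPy: logistic regression treated as a one-neuron "network", trained with gradient descent. The synthetic data and labels here are made up for illustration (the course itself uses real cat images), so treat this as a rough sketch rather than the course's actual assignment code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, steps=1000):
    """Train logistic regression with gradient descent.

    X: (n_features, m) column-per-example matrix; y: (1, m) binary labels.
    """
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(steps):
        a = sigmoid(w.T @ X + b)    # forward pass: predicted probabilities
        dz = a - y                  # gradient of the cross-entropy loss
        w -= lr * (X @ dz.T) / m    # update weights
        b -= lr * dz.sum() / m      # update bias
    return w, b

# Synthetic "cat vs not-cat" stand-in: class 1 has larger feature values.
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(-1, 1, (4, 50)), rng.normal(1, 1, (4, 50))])
y = np.hstack([np.zeros((1, 50)), np.ones((1, 50))])

w, b = train_logistic(X, y)
preds = (sigmoid(w.T @ X + b) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

The later parts of the course stack many such units into deep networks, but the forward-pass/gradient/update loop above is the core pattern throughout.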
This post is extremely helpful, and I have referred to it multiple times as I plan my finances. Thanks again for putting it together.
The importance of this and related topics is premised on humanity's ability to achieve interstellar travel and settle other solar systems. Nick Beckstead did a shallow investigation into this question back in 2014, which didn't find any knockdown arguments against it. Posting this here mainly because I haven't seen some of these arguments discussed much in the wider community.
[Spitballing] I'm wondering if Angry Birds has just not been attempted by a major lab with sufficient compute resources? If you trained an agent like Agent57 or MuZero on Angry Birds, I am curious whether the agent would outperform humans.
Louis Dixon has written a helpful summary of this talk here. It also has some interesting discussion in the comments: https://forum.effectivealtruism.org/posts/NLJpMEST6pJhyq99S/notes-could-climate-change-make-earth-uninhabitable-for
This is one of the most thought-provoking (for me) posts that I've seen on the forum for a while. Thanks to you both for taking the time to put this together!
Thanks for flagging this. I think estimating temperature rise after burning all available fossil fuels is mostly educated guesswork: both estimating the total amount of available fossil fuels and estimating the climate response to burning them are hard.
However, I hadn't seen this Winkelmann et al. paper, which makes a valuable contribution. It suggests that the climate response is substantially sub-linear at higher levels of warming.
The notes currently posted above about how warm it would get if we burned all the fossil fuels were back-of-the-envelope calculations that I did in the slides' notes, and I wouldn't trust them much. They assume a linear model, which isn't reliable at these temperatures. I didn't end up including them in the talk as I didn't think they were robust enough. I'll ask Louis about removing them.
Thanks for flagging this Linch!
Great question. I'm afraid I only have a vague answer: I would guess that the chance of climate change directly making Earth uninhabitable in the next few centuries is much smaller than 1 in 10,000. (That's ignoring the contribution of climate change to other risks.) I don't know how likely the LHC is to cause a black hole, but I would speculate with little knowledge that the climate habitability risk is greater than that.
As I mentioned in the talk, I think there are other emerging tech risks that are more likely and more pressing than this. But I would also encourage more folks with a background in climate science to focus on these tail risks if they are excited by questions in this space.
What is your high-level take on social justice in relation to EA?