More EA success stories:
Pandemics. We have now had the first truly global pandemic in decades, perhaps ever.
Nuclear war. Thanks to recent events, the world is closer than ever to a nuclear catastrophe.
It's not all good news, though. Unfortunately, poverty seems to be trending down, there's less lead in the paint, and some say AI could solve most problems despite the risks.
Summaries of papers on the nature of consciousness (focusing on artificial consciousness in particular).
A post on how EA research differs from academic research, why people who like one distrust the other, and how in the longterm academic research may be more impactful.
A post explaining what I take to be the best reply to Thorstad's skeptical paper On the Singularity Hypothesis.
Very personal and unconventional research advice that no one told me, and that I would have found helpful in my first two years of academic research. Plus what I would change about this advice after taking a break and then starting a PhD.
I feel like these actions and attitudes embody many of the virtues of effective altruism. You really genuinely wanted to help somebody, and you took personally costly actions to do so. I feel great about having people like you in the EA Community. My advice is to keep the feeling of how important you were to Tlalok's life as you do good effectively with other parts of your time and effort, knowing you are perhaps making a profound difference in many lives.
Are these fellowships open to applicants outside of computer science/engineering etc. doing relevant work?
I really like Timeshifter, but honestly the following has worked better for me:
Fast for ~16 hours before 7 am in my new time zone.
Take melatonin, usually around 10 pm in my new time zone, and again if I wake up and stop feeling sleepy before around 5 am. (I have no idea if this second dose is optimal, but it seems to work.)
I highly recommend getting a good neck pillow, earplugs, and an eye mask if you travel often or take long trips (e.g. if you are Australian and go overseas almost anywhere).
Thanks to Chris Watkins for suggesting the fasting routine.
A quick clarification: I mean that "maximize expected utility" is what both CDT and EDT do, so saying "In other words, this would be the kind of decision theory that recommends decisions that maximize expected utility" is perhaps misleading.
I quite like this post. I think, though, that your conclusion (use CDT when probabilities aren't affected by your choice, and EDT when they are) is slightly strange. As you note, CDT gives the same recommendations as EDT in cases where your decision affects the probabilities, so it sounds to me like you would actually follow CDT in all situations (and only trivially follow EDT in the special cases where EDT and CDT make the same recommendations).
I think there's something to pointing out that CDT in fact recommends one boxing wherever your action ...
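For what it's worth, the standard Newcomb setup makes the CDT/EDT contrast concrete. A minimal sketch (the 99% predictor accuracy and dollar payoffs are the usual illustrative assumptions, not from the post):

```python
# Toy Newcomb's problem: a predictor fills the opaque box with $1,000,000
# iff it predicted you will one-box; the transparent box always holds $1,000.
ACCURACY = 0.99  # assumed predictor accuracy (illustrative)

def edt_value(action):
    """EDT conditions on the action: choosing to one-box is evidence
    that the opaque box is full."""
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    base = p_full * 1_000_000
    return base if action == "one-box" else base + 1_000

def cdt_value(action, p_full):
    """CDT holds the box's contents fixed: p_full does not depend on the
    action, so two-boxing dominates for every value of p_full."""
    base = p_full * 1_000_000
    return base if action == "one-box" else base + 1_000

# EDT favors one-boxing (~990,000 vs ~11,000); under CDT, two-boxing
# beats one-boxing by exactly $1,000 at any fixed p_full.
print(edt_value("one-box"), edt_value("two-box"))
print(cdt_value("two-box", 0.5) - cdt_value("one-box", 0.5))
```

If instead your action genuinely changed `p_full` (the case the post reserves for EDT), a causal decision theorist would plug those same action-dependent probabilities into the calculation and agree with EDT, which is the convergence the comment above is pointing at.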
David Thorstad (Reflective Altruism/GPI/Vanderbilt)
Tyler John (Longview)
Rory Stewart (GiveDirectly)
+1 on Rory Stewart: as well as being the President of GD, he was Secretary of State for International Development in the UK, has started and run his own charity (I believe with his wife) in the developing world, has mentioned EA previously, is known to be an enjoyable person to listen to (judging by the success of his podcast), and has just released a book, and therefore might be more likely than usual to engage with popular media.
Thanks for posting. I have a few quick comments:
I recently got into a top program in philosophy despite having a clear association with EA (I didn't cite "EA sources" in my writing sample, though, only published papers and OUP books). I agree that you should be careful, especially about relying on "EA sources" that are not widely viewed as credible.
Totally agree that prospects are very bad outside of the top 10, and I lean towards "even outside the top 5, seriously consider other options".
On the other hand, if you really would be okay with fail...
My understanding is that, at a high level, this effect is counterbalanced by the fact that a high rate of extinction risk means the expected value of the future is lower. In this example, we only reduce the risk this century to 10%; next century it will be back at 20%, and so on for every century after that. So the ongoing risk is 10x higher than in the 2%-to-1% scenario, and in general, higher risk lowers the expected value of the future.
In this simple model, these two effects perfectly counterbalance each other for proportional reductions of existenti...
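The counterbalancing claim can be checked numerically with a toy model (a sketch under simple assumptions I'm adding here, not the paper's exact setup: each century survived is worth a constant v, and risk returns to its background level after the intervention century):

```python
def future_value(risk_this_century, risk_later, v=1.0, horizon=10_000):
    """Expected value of the future: each century humanity survives
    contributes v. Risk is risk_this_century in century 1 and
    risk_later in every century after that."""
    total, p_alive = 0.0, 1.0
    for t in range(horizon):
        r = risk_this_century if t == 0 else risk_later
        p_alive *= 1 - r          # probability of surviving through century t
        total += p_alive * v      # value contributed by century t if survived
    return total

# Halving a 20% risk vs halving a 2% risk: the absolute gain is the same,
# because the higher background risk makes the whole future worth 10x less.
gain_high_risk = future_value(0.10, 0.20) - future_value(0.20, 0.20)
gain_low_risk = future_value(0.01, 0.02) - future_value(0.02, 0.02)
print(gain_high_risk, gain_low_risk)  # both approximately 0.5
```

In closed form the gain from cutting this century's risk from r to f·r is v·r·(1-f)/r = v·(1-f), independent of r, which is why the two scenarios come out the same for proportional reductions.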
"There are three main branches of decision theory: descriptive decision theory (how real agents make decisions), prescriptive decision theory (how real agents should make decisions), and normative decision theory (how ideal agents should make outcomes)."
This doesn't seem right to me. I would say: an interesting way you can divide up decision theory is between descriptive decision theory (how people make decisions) and normative decision theory (how we should make decisions).
The last line of your description, "how ideal agents should make outcomes" seems es...
This is a fantastic initiative! I'm not personally vegan, but I believe the "default" for catering should be vegan (or at least meat- and egg-free), with the option for participants to declare special dietary requirements. This would lower consumption of animal products, as most people just go with the default option, and shift the burden of responsibility to the people going out of their way to eat meat.
How should applicants think about grant proposals that are rejected? I find that newer members of the community, especially, can be heavily discouraged by rejections; is there anything you would want to communicate to them?
If a project is partially funded by e.g. Open Philanthropy, would you take that as a strong signal of the project's value (e.g. not worth funding at higher levels)?
My entry is called Project Apep. It's set in a world where alignment is difficult, but a series of high-profile incidents leads to extremely secure and cautious development of AI. It tugs at the tension between how AI can make the future wonderful or terrible.
I'm working on a related distillation project, I'd love to have a chat so we can coordinate our efforts! (riley@wor.land)
I agree that regulation is enormously important, but I'm not sure about the following claim:
"That means that aligning an AGI, while creating lots of value, would not reduce existential risk"
It seems, naively, that an aligned AGI could help us detect and prevent other power seeking AGIs. It doesn't completely eliminate the risk, but I feel even a single aligned AGI makes the world a lot safer against misaligned AGI.
What do you think are the biggest wins in technical safety so far? What do you see as the most promising strategies going forward?
Great to see attempts to measure impact in such difficult areas. I'm wondering if there's a problem of attribution that looks like this (I'm not up to date on this discussion):
Thanks for writing such a thoughtful comment. The post has to reflect the content of the paper, so I'm glad your comment can provide extra context. The post now reflects that the paper was written in 2019, and I plan to address the 30x figure soon.
Thanks, this is really helpful information about trusts and the 4% rule!
On self trust: I feel that a common pattern might be that when you're young, you're 'idealistic' and want to do things like donate. When you're older, you feel like spending your money (if you have it) in ways that might not actually make you particularly happy, or you might even decide you would rather give it all to your kids (if you have some). This makes me think there's a good chance I won't donate later if I haven't pre-committed.
On safety: I am from Australia, and to some extent my c...
Here are some articles I think would make good scripts (I'll also be submitting one script of my own).
Summaries of the following papers:
This is really great to see!
I think economic growth is rated too highly by this framework. It gets a very high rating on the first criterion because many organisations think it's something worth considering, but none of them rate it as their top priority, or even a particularly high priority (to my knowledge). My intuition is that it wouldn't get such a high rating if the criterion were importance rather than consensus that it is one of the issues worth considering, and importance is what matters here?
Ask him about counterfactuals: do his views have any implications for our ideas of counterfactual impact?
Ask him whether relative expectations can help us get out of wagers like this one from Hayden Wilkinson's paper:
Dyson's Wager
You have $2,000 to use for charitable purposes. You can donate it to either of two charities.
The first charity distributes bednets in low-income countries in which malaria is endemic. With an additional $2,000 in their budget this year, they would prevent one additional death from malaria. You are certain of this. ...
Recently, I was reading David Thorstad’s new paper “Existential risk pessimism and the time of perils”. In it, he models the value of reducing existential risk on a range of different assumptions.
The headline results are that 1) on the most plausible assumptions, existential risk reduction is not overwhelmingly valuable (it may still be quite valuable, but it probably doesn't swamp all other cause areas), and 2) thinking that extinction is more likely tends to weaken the case for existential risk reduction rather than strengthen it.
It struck me that one of the results is part...
Thanks for the post - this seems like a really important contribution!
[Caveat: I am not at all an expert on this and just spent some time googling.] Producing snake antivenom actually requires milking venom from snakes, and I wonder how much this is contributing to the high cost ($55–$640) of antivenom [1]. I wonder if R&D would be a better investment, especially given the potentially high storage and transport costs for antivenom (see below). It would be interesting to see someone investigate this more thoroughly.
Storage costs ...
Thanks, it looks like you've put a lot of effort into summarising this information (it actually looks better and higher effort than my original post, oop).
I'm all for pricing in carbon and sensible policy that regulates in proportion to our best estimate of the risk!
Digging into this a bit, I may have gotten the original argument for nuclear wrong: it does seem like some countries would struggle to source their energy from renewables due to space constraints (arguably less of a problem in Australia).
"I’m not even sure it’s physically possible with 100% renewables... if you were to try and just replace oil in a country like Korea or Japan, so a densely populated country without huge amounts of spare land, you have to take up a significant proportion of the entire nation with solar panels... In the UK... if you ...
If someone was looking to work for OPP, would an honours* or master's program be more beneficial than an undergraduate degree?
Are there particular questions or areas that an honours/master's research project could work on that would be directly helpful, or that develop the right kinds of skills for OPP? (especially in economics, philosophy, or cognitive science)
("Honours" in Australia is a 1 year research/coursework program)
I actually think this is a pretty reasonable division now, so I've removed the automatic upvote on my comment.