MathiasKB · 1770 karma · Joined Jul 2018

Comments (106)

To me it seems they understood longtermism just fine and simply disagree with strong longtermism's conclusions. We have limited resources, and if you are a longtermist you think some, or all, of those resources should be spent ensuring the far future goes well. That means not spending those resources on pressing neartermist issues.

If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending should come from. If it comes from our development aid budgets, that potentially means removing funding from humanitarian projects that benefit the world's poorest.

This might be the correct call, but I think it's a reasonable thing to disagree with.

Thank you, this is an excellent post. This style of transparent writing can often come across as very 'EA' and gets made fun of for its idiosyncrasies, but I think it's a tremendous strength of our community.

I would advise you to shorten the total application to around a quarter of its current length. Focus on your strong points (running a growing business, a strong animal welfare profile) and leave out the rest. The weaker parts of your application water down the strongest ones.

Admissions are always a messy process, and good people get rejected often. A friend of mine, who I'm sure will go on to become a top-tier AI safety engineer, got rejected from EAG because there isn't a great way to convey that information through an application form. Vetting people at scale is just really difficult.

Thanks for writing this, Jonas. As someone well below the LessWrong average at math, I would be grateful for a clarification of this sentence:

Provided [...] and [...] are independent when [...]

What do [...] and [...] refer to here? Moreover, is it a reasonable assumption that the uncertainties of existential risks are independent? It seems to me that many uncertainties cut across risk types, such as the chance of recovery after civilisational collapse.
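To illustrate why the independence assumption matters, here is a toy sketch of my own (not from the post; all variable names and numbers are invented). If two risks partly share an underlying uncertainty, the chance that at least one materialises can be noticeably lower than the independence assumption suggests:

```python
import random

random.seed(0)
N = 100_000

def p_at_least_one(rho: float, p: float = 0.1) -> float:
    """Estimate P(risk A or risk B) when each risk, with probability rho,
    is driven by a shared underlying uncertainty. rho=0 means fully
    independent; each risk keeps the same marginal probability p either way."""
    hits = 0
    for _ in range(N):
        u_shared = random.random()
        u_a = u_shared if random.random() < rho else random.random()
        u_b = u_shared if random.random() < rho else random.random()
        hits += (u_a < p) or (u_b < p)
    return hits / N

print(p_at_least_one(0.0))  # independent: about 0.19 (= 1 - 0.9**2)
print(p_at_least_one(1.0))  # fully shared uncertainty: about 0.10
```

If that's roughly the structure of the argument, then whether the risks share underlying uncertainties changes the combined probability quite a bit, which is why I'm asking about the assumption.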

For anyone interested in pursuing this further, Charity Entrepreneurship is looking to incubate a charity working on road traffic safety.

Their report on the topic can be found here: https://www.charityentrepreneurship.com/research

Thanks for giving everyone the opportunity to provide feedback!

I'm unsure how I feel about the section on global poverty and wellbeing. As it stands, the section mostly repeats the same claim, that some charities are more effective than others, without much rigorous discussion of why that might be.

There's a ton of great material under the final 'differences in impact' post that I would love to see as part of the main sequence. Right now, I'm worried that people new to global health and development will leave this section feeling way overconfident about how sure we are about all of this charity stuff. If I were a person with experience working in the aid sector and went through the curriculum as it is, I think I would be left thinking that EAs are way overconfident despite barely knowing a thing about global poverty.

Here is an example of a potential exercise you could include that I think might go a long way to convey just how difficult it is to gain certainty about this stuff:

Read and evaluate two RCTs on vaccine distribution in two southern Indian states. What might these RCTs tell us about vaccine distribution in India as a whole? Have the reader try to assess which aspects of these RCTs will generalise to the rest of India and which won't. They could, for example, make predictions (practising another relevant EA skill!) about the results of an RCT in a northern Indian state.
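To make the exercise concrete, here is a minimal sketch (all effect sizes and standard errors invented for illustration, and the pooling method is just one standard choice) of how a reader might combine the two hypothetical RCTs' estimates and check whether they disagree more than sampling error allows:

```python
# Hypothetical numbers, purely for illustration: effect estimates
# (percentage-point increases in vaccination coverage) and standard
# errors from two imagined state-level RCTs.
estimates = {"State A": (6.0, 1.5), "State B": (1.0, 1.2)}

# Inverse-variance (fixed-effect) pooling.
weights = {s: 1 / se**2 for s, (_, se) in estimates.items()}
pooled = sum(weights[s] * est for s, (est, _) in estimates.items()) / sum(weights.values())

# Cochran's Q: how much the two studies disagree beyond sampling error.
q = sum(weights[s] * (est - pooled) ** 2 for s, (est, _) in estimates.items())

print(f"pooled estimate: {pooled:.2f} percentage points")
print(f"Cochran's Q: {q:.2f} (vs. chi-square with 1 df; > 3.84 suggests real heterogeneity)")
```

With these made-up numbers the heterogeneity test fires, which is exactly the lesson: a pooled number can hide the fact that the two states genuinely differ, and predicting a third state requires judgment about which differences matter.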

You only have to do one deep dive on a topic to gain an appreciation for how little we know.

Words cannot express how much I appreciate your presence Nuno.

Sorry for being off-topic, but I just can't help myself. This comment is such a perfect example of the attitude that made me fall in with this community.

That puts EA in an even better light!

"While the rest of the global health community imposes its values on how trade-offs should be made, the most prominent global health organisation in EA actually surveys and asks what the recipients prefer."

[This comment is no longer endorsed by its author]

I think the meta-point might be the crux of our disagreement.

I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I'm often a bit perplexed at how quickly people jump from 'nearly everyone dies' to 'literally everyone dies'. Similarly, I'm sympathetic to the point that it's difficult to imagine particularly compelling scenarios in which AI doesn't radically alter the world in some way.

But we should be immensely uncertain about the assumptions we make, and I would argue that by far the most likely first-order determinant of future value is something our inside-view models didn't predict. My issue is not with your reasoning, but with how much trust we place in our models in general. My critique is absolutely not that you shouldn't have an inside view, but that a well-developed inside view is one of many tools we use to gather evidence. Over-reliance on a single type of evidence leads to worse decision-making.
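A toy numerical sketch of that last point (entirely made-up forecasts and outcomes, not anyone's actual views): averaging probability forecasts from several imperfect 'models' tends to score better than relying on a randomly chosen single one.

```python
# Each 'model' issues probability forecasts for the same binary outcomes;
# lower Brier score is better. By convexity, the averaged forecast always
# scores at least as well as the average of the individual scores.
outcomes  = [1, 0, 1, 1, 0, 0, 1, 0]
forecasts = {
    "inside view": [0.9, 0.4, 0.8, 0.5, 0.3, 0.5, 0.6, 0.4],
    "base rates":  [0.6, 0.2, 0.5, 0.7, 0.1, 0.2, 0.7, 0.3],
    "expert poll": [0.7, 0.5, 0.9, 0.4, 0.4, 0.1, 0.8, 0.2],
}

def brier(ps, ys):
    return sum((p - y) ** 2 for p, y in zip(ps, ys)) / len(ys)

for name, ps in forecasts.items():
    print(f"{name:12s} Brier = {brier(ps, outcomes):.3f}")

avg = [sum(ps) / len(forecasts) for ps in zip(*forecasts.values())]
print(f"{'average':12s} Brier = {brier(avg, outcomes):.3f}")
```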
