I'm a doctor working towards the dream that every human will have access to high-quality healthcare. I'm a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda
Global health knowledge
Thanks @mal_graham🔸 this is super helpful and makes more sense now. I think it would make your argument far more complete if you put something like your third and fourth paragraphs here in your main article.
And no I'm personally not worried about interventions being ecologically inert.
As a side note, it's interesting that you aren't putting much effort into making interventions happen yet. My loose advice would be to get started trying some things. I get that you're trying to build a field, but to have real-world proof of this tractability it might be better to try something sooner rather than later; otherwise it will remain theory. I'm not too fussed about arguing whether an intervention will be difficult or not - in general I think we are likely to underestimate how difficult an intervention might be.
Show me a couple of relatively easy wins (even small-ish ones) and I'll be right on board :).
I found this super helpful, thank you - probably the best thing I've read about AI timelines in the last year actually. So well communicated, with small words and minimal jargon!
I know you're mainly talking about the best thinking approach here, but how does this translate to communication about AI timelines? Distributions make a lot of sense to me but are very hard for most people to think in. This wouldn't be useful for communicating with most of my friends, unless I maybe had an hour and a large napkin... I wonder if there is a way to communicate in a "distributy" way with people who just aren't statistically minded?
If some regular person asks me when I think the AI apocalypse is coming, what's a good way to communicate? I don't want to just guess a year for all the reasons you've stated, but a distribution won't be understood either. In the past I've said something like "I really don't know, but it could well be between 2030 and 2040", but my impression has been this seems pathetically vague and unhelpful to most people. Any ideas on communicating AI timelines with integrity to non-statsy folks?
As a side note, it seems strange that the 50 percent point of the guy who wrote the AI 2027 story is at about 2031ish? Why wasn't the story then AI 2031?
Talk to @David Nas and @Karthik Tadepalli ha. There's increasing work within EA on development directly. There are big questions around how tractable it is and how much EA influence can actually move the needle, with huge money injectors like the IMF and World Bank active, and market forces as well.
And yeah, like @Evan LaForge said, to some extent development needs good health and education to happen (a bit of chicken and egg).
"think the problem is that it’s hard to establish expert “baselines” via which to measure uplift"
If you could find enough experts (say 100) then randomisation is probably enough to solve this problem even if they have a wide range of capabilities. I agree though that a category such as "2-5 years post-doc" would be even nicer. Maybe you could find a couple of large PhD or post-doc cohorts.
This is one of the most inspiring things I've read in months. It's such a good example to have someone with an illustrious tech background like you involved in a protest like this. It might jolt some into action, or at least make us think a bit harder about whether we are really morally courageous enough to do the best that we can.
I agree it's fantastic, not only for wellbeing itself, but also for disrupting the status quo. I hardly think that even the problem of "DALYs" is solved though. Even the moral weights issue which plays into it will never be solved as such; GiveWell's piecemeal approach (which I absolutely love and think is a great way to do it) shows how tricky it is.
Yep, this is a legitimate concern - it's hard for new projects that aren't being incubated through CE for sure. I think there are decent arguments for bigger funders not funding new initiatives though. I think it's not the worst for friends/family/non-EA funds to help start new initiatives before official funders get involved. Also (I could be wrong) if you made a very strong argument here on the forum there might be people willing to help.
The Global Health Funding circle is another EA avenue for newer ventures :). Also, Scott Alexander's yearly giveaway is open to new ideas, and they fund a bunch of GHD stuff.
Thanks for the update, and the reasons for the name change make a lot of sense.
Instinctively I don't love the new name. The word "coefficient" sounds mathsy/nerdy/complicated, and most people don't know what the word actually means. The reasoning behind the name does resonate though, and I can understand the appeal.
But my instincts are probably wrong if you've been working with an agency and the team likes it too.
All the best for the future Coefficient Giving!