NickLaing

CEO and Co-Founder @ OneDay Health
13216 karma · Working (6-15 years) · Gulu, Uganda · onedayhealth.org

Bio

I'm a doctor working towards the dream that every human will have access to high quality healthcare. I'm a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.

How I can help others

Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda
Global health knowledge
 

Comments
1687

Thanks for the update, and the reasons for the name change make a lot of sense.

Instinctively I don't love the new name. The word "coefficient" sounds mathsy/nerdy/complicated, and most people don't know what the word coefficient actually means. The reasoning behind the name does resonate, though, and I can understand the appeal.

But my instincts are probably wrong if you've been working with an agency and the team likes it too.

All the best for the future Coefficient Giving!

Thanks @mal_graham🔸, this is super helpful and makes more sense now. I think it would make your argument far more complete if you put something like your third and fourth paragraphs here in your main article.

And no, I'm personally not worried about interventions being ecologically inert.

As a side note it's interesting that you aren't putting much effort into making interventions happen yet - my loose advice would be to get started trying some things. I get that you're trying to build a field, but to have real-world proof of this tractability it might be better to try something sooner rather than later? Otherwise it will remain theory. I'm not too fussed about arguing whether an intervention will be difficult or not - in general I think we are likely to underestimate how difficult an intervention might be.

Show me a couple of relatively easy wins (even small-ish ones) and I'll be right on board :).

This is a brilliant summary of the situation. I actually find a straightforward list of bullets like this more compelling and easier to understand than something like Yudkowsky's book.

Thanks, appreciate that a lot :)

For the record, my vote is for cG.

But you might struggle to control "the people" on this one; there has been a lot of "CoGi" and other variations floating around. When said out loud, starting with "co" is catchier than starting with the letter "c". Also, isn't there a strong association between CG and computer generated? There are like 3 separate threads in the replies to your renaming post discussing possible shortenings, and I think all suggestions start with "co" lol.

https://forum.effectivealtruism.org/posts/vkvtu6xbvfkHPhJkC/open-philanthropy-is-now-coefficient-giving

These are the important things which define organizations.

As for me, I will respect cG's wishes ;).

Yeah I think that's something like the approach Toby and I were discussing!

I'm not sure I can get away with that? I would say for over 90% of people, 3 numbers would add even more confusion than 2. The SAT example is encouraging, although Americans make up a small proportion of my friends and acquaintances.

The concreteness is fine and makes sense for sure.

Isn't somewhere between 2028 and 2031 then really "things go roughly as expected", while 2027 is "things go faster than expected because every AI improvement rolls out without roadblocks"? I feel like if you're going to put something out there in the public sphere as a leader in AI, a bit of timeline conservatism might be prudent. Not the biggest deal though I suppose.

Thanks Toby, interesting one on the communication. For policy makers I think that communication style can work OK, less so with my friends haha.

I'm still confused by why they picked 2027, even in 2025. Back when they made it, Daniel's median forecast was 2028 and Eli's 2031. Surely you then pick 2029 or 2030 for your scenario? Picking the "most likely year for it to happen" still feels a bit disingenuous to me.

I found this super helpful, thank you - probably the best thing I've read about AI timelines in the last year actually. So, so well communicated, with small words and minimal jargon. Thank you!

I know you're mainly talking about the best thinking approach here, but how does this translate to communication about AI timelines? Distributions make a lot of sense to me but are very hard for most people to think in. This wouldn't be useful for communicating with most of my friends, unless I maybe had an hour and a large napkin... I wonder if there is a way to communicate in a "distributy" kind of way with people who just aren't statistically minded?

If some regular person asks me when I think the AI apocalypse is coming, what's a good way to communicate? I don't want to just guess a year for all the reasons you've stated, but a distribution won't be understood either. In the past I've said something like "I really don't know, but it could well be between 2030 and 2040", but my impression has been this seems pathetically vague and unhelpful to most people. Any ideas on communicating AI timelines with integrity to non-statsy folks?

As a side note, it seems strange that the guy who wrote the AI 2027 story has his 50 percent point at about 2031ish? Why wasn't the story then AI 2031?

I would say in general major funds = money goes to major orgs; is there evidence against this? GiveWell, for example, gives most of its money to very big orgs. Even if the major orgs give some donations to smaller orgs, that's usually a small percent of what they do.
