I'm a doctor working towards the dream that every human will have access to high quality healthcare. I'm a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda
Global health knowledge
Thanks @mal_graham🔸 this is super helpful and makes more sense now. I think it would make your argument far more complete if you put something like your third and fourth paragraphs here in your main article.
And no I'm personally not worried about interventions being ecologically inert.
As a side note, it's interesting that you aren't putting much effort into making interventions happen yet - my loose advice would be to get started trying some things. I get that you're trying to build a field, but to show real-world proof of tractability it might be better to try something sooner rather than later? Otherwise it will remain theory. I'm not too fussed about arguing whether an intervention will be difficult or not - in general I think we are likely to underestimate how difficult an intervention might be.
Show me a couple of relatively easy wins (even small-ish ones) and I'll be right on board :).
From what I can see, the main issue here is who writes the words, rather than how much LLMs are used in the process.
If most of the brainstorming, research and structuring was done by the LLM but you wrote the words yourself, from my perspective that wouldn't require any caveat at all. But if LLMs wrote half of the words then I would definitely want to know at the top of the post (and personally I probably wouldn't read it).
That's why it's so important that we get clear labelling. On this forum we should be able to choose whether or not to read something not written by a human. I would hope that only a minority of posts will have heavy LLM writing, so most posts won't need any disclosure at all.
I completely agree with @Austin that people shouldn't need to write anything if they use LLMs only for feedback and copy editing - like he said, they shouldn't have to under this policy. I have seen people adding disclosures for that anyway, but hopefully it will settle down when they realise it isn't necessary.
In the AI frame I remember reading about three situations on the forum (one of which was Mechanize). I also see this, to a lesser extent, around animal sentience arguments from those deep in the animal welfare world.
The most pertinent example for me would be Anthropic's top leadership ditching their solid safety plan with clear red lines for a vague and practically useless one, and the justifications by @Holden Karnofsky (whose wife owns part of the company), which felt strange to me. He usually makes such compelling arguments, and that one seemed less so. I'm not the most rational person, but Habryka's arguments on LessWrong against the safety plan change were compelling to me.
I'm not saying we shouldn't argue the object-level point, just that we should consider people's incentives and weight the opinions of those with power/money conflicts of interest somewhat less heavily than those without.
"I think EA’s failure to grapple with the corrupting influence of power is among its greatest failures."
This has been the feature of forum discussions that has disturbed me possibly the most since joining. People don't like to put any weight on conflicts of interest, even when the person arguing a point has a huge amount to gain. "Just argue the object-level point," people say; don't bring up the conflict of interest...
People seem surprised and bewildered when AI folks defect away from AI safety towards capabilities. People trust that as AI companies grow, those gaining power and money from shares will not be adversely influenced by that power and money.
Even as I have gained a teeny weeny bit of power in a teeny weeny corner of the global health world, I have felt a little of the corrupting influence. Living far away in Uganda, I'm not part of this at all, and like you it's very unclear to me what can be done to help, but talking about it a bit could be better than nothing. I loved this post, thank you!
I think it's an overreaching claim. I actually agree with you as an overall statement, but many here would not. Many on the forum think that because humans drastically reduce wild animal populations, that may well help animals by reducing overall suffering.
Also, it depends how general you are trying to be. I would personally argue that in New Zealand, humans have brought sheep more joy than suffering, through treating their diseases well and ensuring good nutrition. Because most farmed animals are factory farmed, humans overall bring more suffering than joy to farmed animals. But in places where animals are well looked after and not factory farmed, the inverse could be true.
Thanks for the update, and the reasons for the name change make a lot of sense.
Instinctively I don't love the new name. The word "coefficient" sounds mathsy/nerdy/complicated, and most people don't know what "coefficient" actually means. The reasoning behind the name does resonate though, and I can understand the appeal.
My instincts are probably wrong, though, if you've been working with an agency and the team likes it too.
All the best for the future Coefficient Giving!