
NickLaing

Country Director @ OneDay Health
2882 karma · Joined Oct 2018 · Working (6-15 years) · Gulu, Uganda · onedayhealth.org

Bio


I'm a doctor working towards the dream that every human will have access to high-quality healthcare. I'm a medic and director of OneDay Health, which has launched 35 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.

How I can help others

Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda 
Global health knowledge
 

Comments (461)

"hesitate to pay for ChatGPT because it feels like they're contributing to the problem"

Yep, that's me right now, and I would hardly call myself a Luddite (maybe I am though?)

Can you explain why you frame this as an obviously bad thing to do? Refusing to help fund the most cutting-edge AI company, which multiple people have credited with spurring on the AI race and attracting billions of dollars to AI capabilities, seems not-unreasonable at the very least, even if that approach does happen to be wrong.

Sure, there are decent arguments against not paying for ChatGPT, like the LLM not being dangerous in and of itself, and the small amount of money we pay not making a significant difference, but it doesn't seem to be prima-facie-obviously-net-bad Luddite behavior, which is what you seem to paint it as in the post.

Good call, the strategy of protest is about far more than numbers. I hope you are in contact with climate change and animal rights activists too, as they have a lot of experience in this area.

Thanks Lukas, I agree. I just quickly made a list of potential positives and negatives to illustrate the point that the situation was complex and that it wasn't obvious to me that the public investigation here was net negative. I didn't mean to say that was a "key takeaway".

Thanks for the interesting article - very easy to understand, which I appreciated.

"Even if AIs end up not caring much for humans, it is dubious that they would decide to kill all of us."

If you really don't think unchecked AI will kill everyone, then I probably agree that the argument for a pause becomes weak and possibly untenable. 

Although it's probably not possible, for readers like me it would be easier if these pause arguments were all made under the assumption that AGI = doom. Otherwise some of these posts argue from different assumptions, which makes them difficult to compare.
 

One comment though: I found this point about safety striking.

"if we require that AI companies “prove” that their systems are safe before they are released, I do not think that this standard will be met in six months, and I am doubtful that it could be met in decades – or perhaps even centuries."

I would have thought that if a decades-long pause gave us even something low like a 20% chance of being 80% sure of AI safety, then that would be pretty good EV...
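To spell out that back-of-envelope EV (a rough sketch, taking the 20% and 80% figures above as illustrative assumptions, and assuming unchecked AGI means near-certain doom):

P(good outcome with pause) ≈ P(pause buys enough time) × P(safety given that time) = 0.2 × 0.8 = 0.16

So even on these fairly pessimistic numbers, the pause would move the chance of a good outcome from roughly 0 to about 16%, which seems like a lot of expected value for the cost of a delay.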

What a fantastic post, thanks!

Personally I think farmed animal numbers will rise faster than you projected in other African countries too - it all depends on how quickly they develop. I'm guessing Uganda is one of your lower lines, but I wouldn't be surprised if it is already increasing faster than that.

Here in Gulu, Northern Uganda, even in the last 10 years we've gone from a situation where most people ate almost exclusively animals reared on personal homesteads or farms (either home-killed or bought locally at market) to the advent of factory farming, especially of chickens. When I first came here there were maybe 5 stalls selling fried chicken in town; now there are over 50.

Of all animals in Uganda, I'm fairly certain layers have the highest degree of suffering. Many "broilers" are raised in barns, which are still bad but not as bad as battery cages.

My very uncertain personal opinion is that most home-reared animals had net positive lives and were good for the nutrition of the family, so that was actually a pretty good situation. Now we are quickly descending from perhaps a slight net good to a massive net harm, which is awful to see in front of my eyes.

Good point, I agree that second-order effects like this make the situation even more complex and can even make a seemingly negative effect net positive in the long run.

I agree that it takes a lot of time (I'll take your figure of 500 hours).

I just don't weight one person spending 500 hours (although it's significant, at about three months' work) as highly as other potential positives/negatives. I don't think it's the crux for me of whether a public investigation is net positive or negative. I think it's one factor, but not necessarily the most important.

Factors I would potentially rate as more important in the discussion of whether this public investigation is worth it:

- Potential positives for multiple EA orgs improving practices and reducing harm in future.
- Potential negatives for Nonlinear, the org in question: for their work and for the people in it.

I agree with you that it could be asymmetrical, but it's not the crux for me.

Personally, in this case I would weight "time spent on the investigation" as a pretty low downside/upside compared to many of the other positives/negatives I listed, but this is subjective and/or hard to measure.

I'm interested to hear why you think the public investigation is "obviously" net negative. You can make a strong argument for net negativity, but I'm not sure it would meet the "obvious" bar in this kind of complex situation. There are plenty of potential positives and negatives with varying weightings IMO. Just a quick list I made up in a couple of minutes (I've missed heaps):

Potential Positives
- Posts like Rockwell's with good discussions about shifting EA norms
- Individual EA orgs looking at their own policies and making positive changes
- Likelihood of higher level institutional change to help prevent these kinds of issues
- Encouragement for other whistleblowers
- Increased confidence from the community that EA is more serious about addressing these kinds of workplace issues.
- Sense of "public justice" for potential victims

Potential Negatives
- More negative press for EA (which I haven't seen yet)
- Reducing morale of EA people in general, causing lower productivity or even people leaving the movement.
- Shame and "cancelling" potential within EA for Nonlinear staff (even those who may not have done much wrong) and even potential complainants
- Risks of fast public "justice" being less fair than a proper investigative process.
- Lightcone time (although even if it wasn't public, someone would have had to put in this kind of time counterfactually anyway)

Just a few, like I said, and not even necessarily the most important.

Thanks for the reply

I was interpreting your comment as saying they had separate advisory roles for orgs like yours outside of the community health sphere, which would be much more problematic.

If their advisory role is around community health issues, that makes more sense. It is still a potentially problematic COI, as there is potential to breach confidentiality in that role. For example, I hope they have permission to share info like "we would advise against them doing in-person community building" from the people who gave them that info. By default, everything shared with community health should (I imagine) be confidential unless the person who shares it explicitly gives permission to pass the info on.

But I agree with you that it's not as much of a concern, although it requires some care.
