
NickLaing

CEO and Co-Founder @ OneDay Health
12560 karma · Working (6-15 years) · Gulu, Uganda · onedayhealth.org

Bio


I'm a doctor working towards the dream that every human will have access to high-quality healthcare. I'm a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.

How I can help others

Understanding the NGO industrial complex, and how aid really works (or doesn't) in Northern Uganda 
Global health knowledge
 

Comments
1625

Thanks for the update, and the reasons for the name change make a lot of sense.

Instinctively I don't love the new name. The word "coefficient" sounds mathsy/nerdy/complicated, and most people don't know what the word coefficient actually means. The reasoning behind the name does resonate, though, and I can understand the appeal.

But my instincts are probably wrong if you've been working with an agency and the team likes it too.

All the best for the future Coefficient Giving!

Thanks @mal_graham🔸, this is super helpful and makes more sense now. I think it would make your argument far more complete if you put something like your third and fourth paragraphs here in your main article.

And no, I'm personally not worried about interventions being ecologically inert.

As a side note, it's interesting that you aren't putting much effort into making interventions happen yet - my loose advice would be to get started trying some things. I get that you're trying to build a field, but to have real-world proof of this tractability it might be better to try something sooner rather than later? Otherwise it will remain theory. I'm not too fussed about arguing whether an intervention will be difficult or not - in general I think we are likely to underestimate how difficult an intervention might be.

Show me a couple of relatively easy wins (even small-ish ones) and I'll be right on board :).

That is true. 

Even when they do become better than us at writing, I might be keen to keep discourse spaces separate: some spaces where humans can talk just with humans, and others where it's everyone together. Obviously the AIs will be talking to each other on a scale hard to fathom.

I think if we get too used to mixing, it might be difficult to separate down the line too.

This is absurdly speculative, though.

It wasn't a very well-written comment - it was a bit benign and generic, which is maybe why it got flagged. Here it is below. To their credit, though, they reinstated it.


"This seems to be a nice observational study which analyses already available data, with an interesting and potentially important finding.

They didn't do "controlling" in the technical sense of the word; they matched cases and controls on 40 baseline variables in the cohort, covering "demographics, 15 comorbidities, concomitant cardiometabolic drugs, laboratories, vitals, and health-care utilization".

The big caveat here is that these impressive observational findings often disappear, or become much smaller, when a randomised controlled trial is done. Observational studies can never prove causation. Usually that is because there is some silent feature about the kind of people who use melatonin to sleep that couldn't be matched for, or was missed in the matching. A speculative example here could be that some silent, unknown illnesses could have caused people to have poor sleep, which led to melatonin use. Also, what if poor sleep itself led to poor cardiovascular health, not the melatonin itself?

This might be enough initial data to trigger a randomised placebo-controlled trial of melatonin. It might be hard to sign enough people up to detect an effect on mortality, although a smaller study could still at least pick up whether melatonin caused cardiovascular disease.

I agree with their conclusion, which I think is a great takeaway:

"These findings challenge the perception of melatonin as a benign chronic therapy and underscore the need for randomized trials to clarify its cardiovascular safety profile."

"

This is the Pangram result:
 


This was the LessWrong rejection:


Literally just cranked out a two-minute, average-quality comment and got accused of being a bot lol. Great introduction to the forum. To be fair they followed up well and promptly, but it was a bit annoying because it was days later and by that stage the thread had passed and the comment was irrelevant.

I agree that it may enable you to share ideas a little faster (although I'm not sure by how much). Most individual good ideas could be expressed in a couple of paragraphs if need be.

I don't buy, though, that you "wouldn't be able to share them" otherwise. I'm happy for AI to help with your thoughts and ideas (brainstorming, ideating, research), just not with your final writing. I'm not convinced at all yet that AI is "enabling the proliferation of good thoughts and ideas" in a significant way. Can you share any evidence of that? I've not been very impressed with posts on the forum here that heavily use AI.

I don't think writing the final draft without AI is a huge barrier to sharing thoughts quickly and effectively. Insofar as it might be, I'd take the tradeoff the other way. 

It's interesting that this is so polarising. I'm certainly one of those witch hunters, at the moment at least. A year ago I was more OK with AI writing, but I'm now vehemently against it after seeing LinkedIn, which two years ago was a pretty interesting platform, deteriorate into low-quality discourse full of AI slop in both the posts and the comments. On that platform at least, it has lowered the quality of ideas and discourse, not improved them. I hope Substack doesn't go the same way.

Wow, I'm almost the polar opposite I think - the writing world you envisage feels sad and a bit scary to me. I would want close to zero AI involvement in the final draft. I think there's far, far more value in messy thoughts out there than in bullet points which an AI "expands" into a polished final product. AI in its current state inevitably changes or measures arguments. It also makes writing feel more "voiceless" and samey.

I want to engage with humans here on the forum. Someone's voice is an extension of them. When we type on the keyboard it's coming almost directly from our brains. Brains to hands to brains. Sure, we're not face to face or on Zoom, but when it's just my words and your words I like that the conversation is direct. There is soul there. The more AI gets in the way, the less I feel we have a deep discourse.

I'm very OK with people using AI for brainstorming, researching, and testing arguments. But let every word of your final draft come straight from you. Your heart, your voice with all your quirks and problems.

I might do a poll to see what people think about this. I might be in the minority.

Read above; she changed it.

  1. Excessive colons
  2. "It's not X, it's Y" constructions
  3. Some language which was technically correct but seemed hollow.

But I'll find it hard to describe exactly why sometimes.

Is there any possibility of the forum having an AI-writing detector in the background which perhaps only the admins can see, but could be queried by suspicious users? I really don't like AI writing and have called it out a number of times but have been wrong once. I imagine this has been thought about and there might even be a form of this going on already.

In saying this, my first post on LessWrong was scrapped because they identified it as AI-written, even though I have NEVER used AI in online writing, not even for checking/polishing. So that system obviously isn't perfect.

Hey @Clara Torres Latorre 🔸 your point isn't bad, but some of this feels heavily AI-written to me and I don't love it. I could be wrong again (would not be the first time).
