
July 9, 2022, is International Skinny Dip Day.

Join us (skinnydipday.org) as we 

  1. EXPERIENCE and grow our own body positivity and self/other acceptance.
  2. CURE women who have had their bodies injured and shamed (via Fistula Foundation).

Fistula Foundation 

...is widely regarded as one of the most effective charities in the world

 

Skinny Dip Day 

...is an established (yet scattered / underutilized) concept:

 

🌊 Dippin’ — On a Mission 🌊

We are growing — will you ride the wave with us?

2019: $871 raised across 3 locations
2021: $5,275 raised across 8 locations (6x increase)
2022: 14 locations (and counting) signed up so far.

2021 Results

 

QUESTIONS:

  1. Are publicity-stunt-style events (which are not explicitly linked to EA, and which are not overly dangerous, etc.) an effective method of highlighting effective giving, and EA in general?

     
  2. Is Fistula Foundation a good "gateway drug" to effective giving, and EA in general?
  • pulls at the heartstrings
  • easy to understand quickly
    • Fistula Foundation clearly CURES and EMPOWERS specific people for every x amount of money you raise for them, and builds up local health infrastructure.
    • In contrast to, for example, GiveWell's first listed charity, Malaria Consortium, which gives chemoprevention drugs to children in order to prevent some of them from dying in the future from a disease that few in the developed world are even familiar with. (I'm sure it's all well and good; however, many questions arise for the average non-EA and veteran EA alike.)
       

 

  3. Will you skinny dip (at an event or on your own) and/or donate to support the project? 🌊 😊 🌊
 

Comments

I have some serious issues with the way the information here is presented, which make me think that this is best shared as something other than an EA Forum post. My main issues are:

  1. This announcement is in large part a promotion for the Fistula Foundation, with advertising-esque language. It would be appropriate in an advertising banner of an EA-aligned site but not on the forum, where critical discussion or direct information-sharing is the norm.
  2. It includes the phrase that Fistula Foundation is "widely regarded as one of the most effective charities in the world" (in addition to some other similar phrases). This is weaselly language which should not be used without immediate justification (e.g. "...according to X rating").
  3. In this case, the claim is also misleading/false. I went to the EA impact assessment page for the foundation, and it is actually estimated to be below the cost-effectiveness range of EA programs (while it seems to be considered a reasonable charity).

In general, the language here, together with the fact that the OP is affiliated with this foundation, makes me update toward taking the Fistula Foundation much less seriously and avoiding donating there in the future. I would suggest that the OP remove this post or edit it in a way that is informative rather than pandering (e.g. something like "join me to go skinny-dipping for Fistula Foundation on X day. While it has a mediocre impact assessment, I like the cause and think skinny dipping would be a good way to support it while also becoming less insecure about our bodies").

Fwiw I disagree with this. People often 'advertise' or argue for things on the Forum - e.g. promoting some new EA project, saying 'come work for us at X org!', or arguing strongly that certain cause areas should be considered. The main difference with this post is that the language is more 'advertising-esque' than normal - but this seems to me an aesthetic consideration. I'm not sure what would be gained by the OP rewriting it with more caveats.

Re "one of the most effective charities", OP does immediately justify this in the bullet points below - it's recommended by The Life You Can Save, and Givewell says it 'may be in the range of cost-effectiveness of our top charities'. 

Thank you, Amber!

  1. I am using the same language here that I use when presenting this project to the media and to others. I thought this would be beneficial. You are seeing the same thing that the general public sees, except (I hope) with a lot of background info and links to explain my thinking.
     
  2. My language there is not weaselly, because it links to a page that shows exactly what I'm stating. Fistula Foundation is indeed widely regarded as one of the most effective charities in the world, by (as the linked page shows) The Life You Can Save, CharityWatch, Great Nonprofits, GuideStar, and Charity Navigator.

    Do you have any evidence that Fistula Foundation is NOT widely regarded as one of the most effective charities in the world? Maybe you don't think it is one of the most effective? But it is widely regarded as such, by some prominent and well-regarded third parties.
     
  3. Your link here is exactly the same link I put in my post; I think you missed that. Yes, I agree that it may not be an elite, top-ranked charity, and that's why I linked to the same page. However, within this link that we both posted, they do state: "We think that Fistula Foundation may be in the range of cost-effectiveness of our current top charities. However, this estimate is highly uncertain for a number of reasons." It seems to be well within a good range of high effectiveness. But if you are a stickler for elite effectiveness only, a good case can be made NOT to donate to them, and fair enough. We seem to be in general agreement here. I'm not sure what I'm stating that's false. It did not make the GiveWell cut after they looked into them; I agree.
     

+ I am in no way whatsoever affiliated with Fistula Foundation. Why do you think so? If you are going to donate less to them in the future, based just on the wording of this post from a random person you don't know, and not based on the evidence of the work that they do, I'm not sure I follow your reasoning there.

+ I do hope you join me in skinny dipping on this day. I'm not sure why it's 'pandering' for me to say that. However, if you don't want to, that's all good too!

Thank you for writing back with your thoughts. It helps me understand why I am getting downvotes. I do hope to get some feedback on the project itself, beyond the wording, if anyone has any! Thank you again!
