Camille

Independent @ Effective Disagreements
433 karma · Joined · Working (0-5 years) · 94110 Arcueil, France
www.effectivedisagreement.org

Bio

Participation
4

Background in cognitive science. I run a workshop that teaches methods for managing strong disagreements (open to non-EA people as well). I also do community building.

Interested in cyborgism and AIS via debate.

https://typhoon-salesman-018.notion.site/Date-me-doc-be69be79fb2c42ed8cd4d939b78a6869?pvs=4

How others can help me

I often get a tremendous amount of help from people who know how to program and are enthusiastic about helping out over an evening.

Comments
57

Strong upvoted.

I find this refreshing and a return to what EA is about, and would definitely point to it as "the sort of experience I expect an EA org to allow for". I also appreciate the moral character you're showing. I want more testimonies like this on the forum to help people get [back] in touch with this spirit.

I sometimes feel like I have partially lost my initial ambition, and your post corrected some of that value drift. Thank you.

Note: You're focusing on the English-speaking sphere, yet e.g. the French EA community owes its existence in no small part to two non-EA YouTubers. A third (very popular) one will soon join the list.

Sorry, I understand this is a bit confusing. 
I was hesitant to spell it out, because I'm afraid of building a strawman:
 
My interpretation is that some people have an issue with non-self-oriented wishes or desires, because they can feel like virtue-signalling or guilt-tripping. Expressing things such as "I really want a world without malaria" can be interpreted as condoning the use of suffering as a negotiation tool.

I.e.:
Step 1: People are suffering from malaria
Step 2: This prompts me to fight malaria
Step 3: Someone concludes that suffering causes me to help them
Step 4: They inflict suffering on themselves
Step 5: This prompts me to help them regardless
Step 6: The world is now made up of people who self-inflict suffering as a way to manipulate others, which sucks.

I'm not sure this is an accurate reconstruction, but it's the best I can do.

I'd rather not encourage arguing with this version of the argument, since I'm not a genuine proponent.

Be helpful, considerate, generous, genuine in your belief that (for example) malaria is bad and a world without malaria is a world you want to see.

Admittedly bitter take: you'd be surprised to learn this is far from a consensus view in some EA circles. I got surprising reactions when using this example.

There seems to have been a surge of interest in AI Risk and Safety, culminating on August 14th and far surpassing all previous levels of interest.

I'm not sure what caused this. [Update: the EU Code of Practice?] Google Trends lists seemingly topic-specific points (the "chain of thought as a fragile opportunity" paper from several AI companies, an apparent interest in AI safety in China), while one could intuitively point to events that happened over the summer (several papers, the suicide case, etc.).

Marcus, Austin, thank you so much! This is exactly the sort of tool Effective Giving Initiatives sorely lack whenever they're asked about AI Safety (so far the answer has been "well, we spoke to an evaluator and they supported that org"). @Romain Barbe🔸 hopefully that'll inspire you!

On my side, I'd be happy to compare this to the cost-effectiveness of reaching out to established YouTubers and encouraging them to talk about a specific topic. I guess it can turn out more cost-effective, per intervention, than a full-blown channel. I'm unwilling to discuss it at length, but France has some pretty impressive examples.

Point of confusion:

I sometimes see linkpost or reposts on the forum, where I think to myself:

1-...is the person who's posting standing by and defending what is said? Why are they posting this? Why don't I have more context or commentary? If you do these sorts of posts, what's your rationale?

2-I sometimes disagree, or think the epistemics are bad, or that the content of the post is clearly corroding the forum's norms (often but not always in a hooligan-ish way). I want to downvote them as hard as if they had been genuinely written and endorsed as EA Forum posts with an EA audience in mind, despite the fact that they're presumably subject to softer moderation rules, and despite the fact that even my best criticism won't reach the author if it's a repost. Am I alone in thinking I'll downvote anyway, and in encouraging others to do so in this context? If you disagree, what's your take?

(e.g. DOGE cuts were a total surprise to most but are probably the single biggest event in global health this decade)

I share your take, and would add that this example is even more central than you seem to suggest. Not foreseeing this is a clear, traceable mistake. These decisions caused such a death toll that even moderate (bipartisan) efforts on that front would have been justified in terms of EV.

As a forum user, I think it's possible to discuss politics dispassionately (as long as moderation is stringent about tribal dynamics), and I'd appreciate this being done more often (by the right people).

Hot take: France, Italy and Portugal should do the same thing.

There seems to be some cultural reluctance in those countries toward EA, or analytic thinking, or weirdness, or ideas and arguments from the anglosphere more broadly. Under this view, simple translations aren't enough, and that seems to be your perspective too.

Some people argue this is not the case, however, and that the problem is structural instead: EA found the wrong audience in those countries, but even classical utilitarian EA would find supporters if aimed at the right audience. They'd rather fund a university group in engineering and programming schools (as opposed to generalist universities) than write a whole new book.

What's your take?

I'd be very grateful to have:

1-A precise example of systemic change

2-Explanations for why e.g. ARMoR, Concentric Policies or Kooperation Global (or any other policy-focused org, or conjunction thereof) don't count as systemic-change approaches (despite being clearly EA-aligned)

3-An example of a tried and tested "solve the root cause" intervention, something I can look at and think "Oh, I want that but for GHD!".

Another question: How did we come to the conclusion that a root cause exists for all, or a large chunk, of GHD issues? This sounds like an extremely complex hypothesis to me. What evidence have we observed that is more probable under a root-cause hypothesis than without one?
