I live in Lyon, France. I learned about EA in 2018, loved it, and dug deep into the topic. The idea of "what in the world improves well-being or causes suffering the most, and what can we do about it?" influenced me a great deal - especially combined with meditation, which allowed me to be more active in my life.
I do a lot of personal research on a wide range of topics. I also co-wrote a book in French with recommendations on how to take action for a better world, including a chapter on EA (the title is "Agir pour un Monde Durable"). I've spoken at a few conferences since then; it's a good way to improve public-speaking skills.
One of the most reliable things I have found so far is supporting animal charities: farmed animals are far more numerous than humans (and have much worse living conditions), and there is clear evidence that animal charities are achieving real improvements (especially The Humane League). I've tried to donate a lot there.
Longtermism could also be important, but I think we'll hit energy limits before getting to an extinction event - I wrote an EA Forum post on that here: https://forum.effectivealtruism.org/posts/wXzc75txE5hbHqYug/the-great-energy-descent-short-version-an-important-thing-ea
If I can get a job in EA one day, in a position where I can analyze and synthesize important material, I'd be really happy!
I just take an interest in whatever topic sounds really important, so I have a LOT of data on a lot of subjects. These include energy, the environment, resource depletion, simple ways to understand the economy, limits to growth, why we fail to solve the sustainability issue, and how we got to this very weird, specific point in history.
I also have a lot of material on Buddhism and meditation, and on "what makes us happy" (check out the Waking Up app!).
Hi - thanks for the post; it contained many interesting insights, even though it was quite messy in places (maybe too many elements).
What is your current probability for AI risk now? You said that 5% wasn't enough, and from reading the post, 10% doesn't seem to be enough either. I'm curious what the current number is.
Thanks - advice on how to communicate complex things is really useful. I'm always surprised by how neglected this is.
By the way, if at some point you were to direct people toward a single resource explaining the problem with AI (an article, website, or video), something they could use to understand the issue, what would you provide? I'm looking for a link in English - so far it's not clear what to point to.
For instance, the FLI open letter makes a clear case that many knowledgeable people care about this, but it's not very good at explaining what the risk actually is.
Then again, in hindsight, right now I'm doing my workday in a train station, since I know that if I stay at home, I'll end up way too distracted.
And I've asked someone to check in on me by SMS every day at 10am to make sure I've started working.
And I spent some time making a pretty aggressive to-do list with columns like "URGENT AAAAAHH".
Uh.
Thanks for writing this. It's true that the expectation that "I must post high-level content" can seriously hinder our willingness to post anything, even though writing simple things does add value.
I suppose a simple thing we can do about this, as commenters, is to say thanks on posts we liked, even if we don't have anything to add. So thanks again!
My impression was that philosophers tend to disagree a lot about what moral truths are?
"Consider that philosophy seems to have helped the West become the dominant civilization on Earth, for example by inventing logic and science."
I'd argue that the process by which Western civilization came to dominate the Earth was not a very moral one - it was actually pretty immoral, despite the presence of logic and science. It involved several genocides (in the Americas), colonization, the two World Wars... Some good things definitely happened along the way (progress in medicine, for instance), but mostly for humans. The status of farmed animals seems to have consistently worsened as factory farming took hold.
So I'd argue that the West came to dominate the world because it was more powerful, not because it was more moral (see The End of the Megamachine by Fabian Scheidler for more on this topic).
On that view, science and logic matter because they give you more power. They provide a more truthful picture of how the universe works, which lets you make things like firearms, better boats, antibiotics, and nuclear bombs. But that is the process of "civilizations competing with each other" described above; it's not a comparison of who is acting closer to what is morally good.
This is a very good post that asks a very important question: how differently would I act if I had actually experienced extreme amounts of suffering?
I suppose I would be much more motivated to prevent the worst kinds of suffering (much of that work would probably be in animal welfare).
This topic is neglected, and your post is written in a vivid, clear tone - much better than abstract treatments.
Thank you for this; it's important.
Thanks, I'll be interested in that.