Which metric would you use to compare welfare across species?
I don't think we know enough about consciousness/qualia/etc. to say anything with conviction about what it's like to be a nematode. And operationally, I don't think you'll be able to convince enough people/funders to take real action on soil animals because it's just too epistemically unsound and doesn't fit into people's natural world views.
When I say net negative, I don't mean if you try to help soil animals you somehow hurt more animals on the whole.
I mean that you will turn people aw...
Thank you!
On the code sharing: yes, I thought about it, but it would take us a bit of effort to pull it all together and publish it online, and I didn't want to spend that effort if no one was going to get value from it. So far, no one has mustered the courage and 3 seconds of effort to post a comment asking for the data/code (or, more likely, people just don't want to spend the time wading through it).
On nematodes, I think 169x the total number of neurons compared to humans is a poor/confused way to attempt to measure total welfare. And I think the secon...
From Rob (waiting for his comment to be approved):
Thanks for trying Winnow! My guess is that you were redirected to the homepage after logging in and created a fresh document (no reviewers included by default). Now that you're logged in, try creating a document directly from this page and it should work: https://www.winnow.sh/templates/ea-rationalist-writing-assistant
On the Egregore / religion part
I agree! Egregore is occult so definitely religion-adjacent. But I also believe EA as a concept/community is religion-adjacent (not necessarily in a bad way).
It's a community, ethical belief, there is suggested tithing, sense of purpose/meaning, etc.
Funny - I don't think it reads as written by a critic, but it's definitely a pointed (somewhat neutral?) third-party outsider analysis.
I do expect the Egregore report to trigger some people (in good and bad ways, see the comment below about feeling heard). The purpose is to make things know...
I'll let @alejbo take this question - I think it's a good one
At a high level I somewhat disagree with "I don't think chatbots are very good at epistemology"; my guess is they're better than you think, though I agree they're not perfect or amazing.
But as you admit, most humans aren't either, so it's already a low bar.
I'd ask you to consider, when have you ever taken action due to a 0.1–1 min video?
I think basically no one takes action from any video, even 30 min high quality YouTube videos.
But what you get when you have someone watch your 1 min video is that their feed will steer in that direction and they will see more videos from your channel and other AI Safety-aligned channels.
I think this might be where a lot of the value is.
If you can get 20M people to watch a few 45 second videos, you are making the idea more salient in their minds for future videos/discussions ...
I mean more specifically: what is the additional risk of death per person over the next 10 years if you're lonely vs. not lonely? Is it even 1/1000th the effect of deaths from cars?
How tractable are the interventions? It might take hundreds of hours over many months to solve your loneliness. That's actually pretty hard/costly.
I'm not saying it's not a problem (it definitely is), but I'm trying to understand whether it makes sense to include in this particular guide, which is about serious short-term (yet tractable) risks.
Hmm... but wouldn't the main impact of loneliness be suicide in the short term (the relevant part for this guide)? Which we're already addressing?
I'm sure loneliness impacts your long-term health, but I don't think it's going to raise your likelihood of death in the next 10 years if you're relatively young and healthy
I think this type of thinking and work is useful and important.
It's very surprising (although in some ways not surprising) that this analysis hasn't been done before elsewhere.
Have you searched for previous work on analyzing the AI safety space?
If OpenPhil (and others) are in the process of funding billions of dollars of AI safety work and field building, wouldn't they themselves do some sort of comprehensive analysis or fund / lean on someone else to do it?
Btw, two small suggestions for the chatbot:
before : https://i.imgur.com/AHjaJHD.png
after: https://i.imgur.com/jAO4ozG.png
I like this idea and it looks great!
I had a similar concept in mind that I wanted to build but with more of a questionnaire/survey design rather than solely text articles or an open-ended chatbot. More of a hand-holding guided experience through the concerns/debate points.
How's it going so far? How many daily active users do you have?
I don't really have the time, skills, or contacts to make this happen, if you want to pick up the torch I would gladly pass it to you.
Tyler seems keen although worried about censors: https://twitter.com/tylercowen/status/1614402492518785025
It seems from the podcast that he wanted to release the book only in Chinese (maybe especially at this point, given the West's declining willingness to work with China), but I'm not sure; maybe the book would help Westerners understand China's culture as much as it would help the Chinese understand the West. A lot of great power war-con...
From the way Tyler was talking about the book and its topics, it did not seem to me like a politically controversial book: "it was a book designed to explain America to the Chinese, and make it more explicable, more understandable".
Or at least the controversial parts could be taken out if required and a lot of the value could remain.
He also covered a lot of basic differences across the economies and policies: why are the economies different?
Why is there so little state ownership in America?
Why are so many parts of America so bad at infrastructure?
Why do Americans save less?
How is religion different in America?
Hey Robbert-Jan,
Sorry, somehow I missed your comment but saw it once Simon replied and I got a notification.
We're likely staying in the web2 world for now, but there is a chance we graduate to web3/crypto in the future.
Check out our website here: https://impactmarkets.io/ Join our Discord here: https://discord.gg/7zMNNDSxWv Read (or skim) our long EA post here: https://forum.effectivealtruism.org/posts/7kqL4G5badqjskYQs/toward-impact-markets-1
Hey Simon,
We've been funded by the FTX Future Fund regrantor program!
Check out our website here: https://impactmarkets.io/ Join our Discord here: https://discord.gg/7zMNNDSxWv Read (or skim) our long EA post here: https://forum.effectivealtruism.org/posts/7kqL4G5badqjskYQs/toward-impact-markets-1
I think this is really difficult to truly assess because there is a huge confounder. The more you age the worse your memory gets, your creativity decreases, your ability to focus decreases, etc., etc.
If all of that were fixed with anti-aging, it might not be true that science progresses one funeral at a time, because people at the top of their game could keep producing great work instead of becoming geriatric while still holding status/power in the system.
Also, it could be a subconscious thing: "why bother truly investigating my beliefs at age 70, I'm going t...
This is a good comment. I'd like to respond but it feels like a lot of typing... haha
but that’s not the same as seeing improvements in leaders’ quality
I just mean the world is trending towards democracies and away from totalitarianism.
It’s inherently easier to attain and keep power by any means necessary with zero ethics
Yes, but is it 100x easier? Probably not. And what if the great minds have 100x the numbers and resources? Network effects are strong.
There’s another asymmetry where it’s often easier to destroy/attack/kill than build something.
Same ...
I agree, it feels like a stakesy decision! And I'm pretty aligned with longtermist thinking, I just think that "entire future at risk due to totalitarianism lock-in due to removing death from aging" seems really unlikely to me. But I haven't really thought about it too much so I guess I'm really uncertain here as we all seem to be.
"what year you guess it would first have been good to grant people immortality?"
I kind of reject the question due to 'immortality' as that isn't the decision we're currently faced with. (unless you're only interested in this spec...
The thing that's hard to internalize (at least I think) is that by waiting 200 years to start anti-aging efforts you are condemning billions of people to an early death with a lifespan of ~80 years.
You'd have to convince me that waiting 200 years would reduce the risk of totalitarian lock-in so much that it offsets billions of lives that would be guaranteed to "prematurely end".
Totalitarian lock-in is scary to think about, while billions of people's lives ending prematurely is just text on a screen. I would assume the human brain can fairly easily simulate the everyday horror of a totalitarian world, but it's impossible for your brain to digest even 100,000,000 premature deaths, let alone billions and billions.
But we're not debating if immortality over the last thousand years would have been better or not, we're looking at current times and then estimating forward, right? (I agree a thousand years ago immortality would have been much much riskier than starting today)
In today's economy/society great minds can instantly coordinate and outnumber the dictators by a large margin. I believe this trend will continue and that if you allow all minds to continue the great minds will outgrow the dictator minds and dominate the equation.
Dictators are much more likely to die...
When thinking about the tail of dictators don't you also have to think of the tail of good people with truly great minds you would be saving from death? (People like John von Neumann, Benjamin Franklin, etc.)
Overall, dictators are in a very tough environment with power struggles and backstabbing, lots of defecting, etc. while great minds tend to cooperate, share resources, and build upon each other.
Obviously, there are a lot more great minds doing good than 'great minds' wishing to be world dictators. And it seems to trend in the right direction. Com...
Really impressive post, some great deep thinking here. I saw earlier drafts and it has definitely grown and improved. I'm proud to be working with you on this project, thank you for your time and effort! And thanks for 1% of the impact as well, I appreciate that.
For anyone looking for a simple real-world example I have this WIP document on how it could have been used to kickstart climate change action in the past. I'm trying to figure out a way to easily convey the concept and benefits of an impact market to a general audience (non EA/Rationalist people)
Thanks for shouting out https://www.impactcerts.com/ and our Discord!
We just hacked together V1 during ETH Denver's hackathon last weekend but we're going to be iterating towards a very serious market over the next few months.
If anyone wants to stay up-to-date on impact certificates or even better, wants to help build it (or support us in any way) then feel free to join our small but growing Discord.
If you're thinking about something about GiveWell a catchy line might be something like:
"The best charities are 100 times more effective than others, GiveWell is a nonprofit that finds these charities and recommends them so your donation goes the furthest (or so your donation can save the most lives)."
Your post/idea reminds me of "Social Impact Bonds": https://www.goldmansachs.com/insights/pages/social-impact-bonds.html
Seems sorta similar, except instead of private investors it's an open market where regular people can profit from their good policy investments.
This is great!
I had a similar idea regarding the crypto community as a great potential source of EA funding but I'm thinking more for-profit than strictly donation. It might be possible to tie crypto profits to EA funding by creating an impact certificate DAO.
I have a working project proposal I've been getting feedback on, if anyone is interested in reading it and giving me their thoughts just message me and I'll send you the link!
I would love to connect with the EffectiveCrypto.org team to see if we can bounce some ideas off or learn from each other.
Oh, good callout! I'll ping Niki to make sure he sees this comment.