NB: I have my own little archive of EA content and I got an alert that several links popped up as dead - typically I would just add it to a task list and move on… but I was surprised to see Joe Rogan’s (full) 2017 interview with Will MacAskill was no longer available on YouTube. So I investigated and found out Rogan recently sold his entire catalog and future episodes to Spotify (for $100 million!). Spotify is currently removing episodes from other platforms like Apple, YouTube and Vimeo. They’ve also decided to not transfer certain episodes that... (read more)
EA (forum/community) and Kialo?
TL;DR: I’m curious why there is so little mention of Kialo as a potential tool for hashing out disagreements in the EA forum/community, when I think it would be at least worth experimenting with. I’m considering writing a post on this topic, but want to get initial thoughts first (e.g., have people already considered it and decided it wouldn’t be effective? initial impressions/concerns? better alternatives to Kialo?).
The forum and broader EA community has lots of competing ideas and even some direct disagreements. Will Bradshaw's ... (read more)
Do you just mean this shortform or do you mean the full post once I finish it? Either way I’d say feel free to post it! I’d love to get feedback on the idea
I regularly see people write arguments like "One day, we'll colonize the galaxy - this shows why working on the far future is so exciting!"
I know the intuition this is trying to trigger is bigger = more impact = exciting opportunity.
The intuition it actually triggers is expansion and colonization = trying to build an empire = I should be suspicious of these people and their plans.
Is there some tension between population ethics + hedonic utilitarianism and the premises people working on wild animal suffering use (e.g., negative utilitarianism, or the negative welfare expectancy of wild animals) to argue against rewilding (and in favor of environmental destruction)?
Does your feeling that the default state is positive also apply to farm animals? Their reward system would have been shaped by artificial selection for the past few generations, but it is not immediately clear to me whether you think that would make a difference.
Sometimes the concern is raised that caring about wild animal welfare is seen as unintuitive and will bring conflict with the environmental movement. I do not think large-scale efforts to help wild animals should be an EA cause at the moment, but in the long term I don't think environmentalist concerns will be a limiting factor. Rather, I think environmentalist concerns are partially taken as seriously as they are because people see them as helping wild animals as well. (In some perhaps not fully thought out way.) I do not think it is a coincidence that the ext... (read more)
On the difference between x-risks and x-risk factors
I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:
I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.
It seems natural to talk about a force causing a mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can't capture causality, basically because the equality sign in F⃗ = ma⃗ lacks direction. Similarly, it's hard to capture causality in probability spaces.
Following Pearl, I have come to think that causality arises from the manipulator/manipulated distinction.
So I think it's fair to speak about fac... (read more)
"Quality-adjusted civilization years"
We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.
For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 0.5 × 2 = 1 QACY.
Another example: suppose climate change will reduce the ... (read more)
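The arithmetic behind QACYs can be sketched in a few lines. This is a hypothetical illustration of the definition above (the helper name and interface are my own, not from the original post):

```python
def qacy_burden(quality_reduction: float, years: float) -> float:
    """Quality-adjusted civilization years lost: the fraction by which
    civilization's quality is reduced, multiplied by the duration (in years)
    of that reduction."""
    return quality_reduction * years

# The COVID-19 example above: a 50% reduction in quality for 2 years.
print(qacy_burden(0.5, 2))  # 1.0 QACY
```

A catastrophe with several phases of different severity could then be scored by summing the burden of each phase.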
Scott Aaronson just published a post announcing that he has won the ACM Prize in Computing and the $250k that comes with it, and is asking for donation recommendations. He is particularly interested "in weird [charities] that I wouldn’t have heard of otherwise. If I support their values, I’ll make a small donation from my prize winnings. Or a larger donation, especially if you donate yourself and challenge me to match." An extremely rough and oversimplified back-of-the-envelope calculation suggests that a charity recommendation will cause, in expectation, ~$500 in donations to the recommended charity (~$70–2,800, 90% CI).
Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Armed with this concept, I try to s... (read more)
The definition of existential risk as ‘humanity losing its long-term potential’ in Toby Ord’s The Precipice could be specified further. Without (perhaps) loss of generality, assuming finite total value in our universe, one could divide existential risks into two broad categories:
I was planning to donate some money to a climate cause a few months ago, and I decided to give some money to Giving Green (this was after the post here recommending GG). There were some problems with the money going through (unrelated to GG), but in any case I can still decide to send the money elsewhere. I'm considering doing so because of the big post criticizing GG. However, I still think it's probably a good giving opportunity, given that it's at an important stage of its growth and seems to have gotten a lot of public... (read more)
Bottom line up front: I think it'd be best for longtermists to default to the more inclusive term "authoritarianism" rather than "totalitarianism", except when a person has a specific reason to focus on totalitarianism.
I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.)
But my understanding is that political scientists typically consider to... (read more)
A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.
It feels like making donations easy should be a core concern of both GiveWell and EA Funds, and to be honest my experience left me a little embarrassed.
The O*NET database includes a list of about 20,000 different tasks that American workers currently need to perform as part of their jobs. I’ve found it pretty interesting to scroll through the list, sorted in random order, to get a sense of the different bits of work that add up to the US economy. I think anyone who thinks a lot about AI-driven automation might find it useful to spend five minutes scrolling around: it’s a way of jumping yourself down to a lower level of abstraction. I think the list is also a little bit mesmerizing, in its own right.
One up... (read more)
I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.
Two books I recommend on structural causes of, and solutions to, global poverty. The Bottom Billion by Paul Collier focuses on the question of how you can get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war. It also looks at some solutions and considers the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how you can get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental state model and compares it with neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.
Why is there such a big disparity in focus areas between grassroots groups and NGOs/think-tanks?
I’m thinking primarily of the two cause areas I’m most involved in: animal welfare and climate change. Animal welfare NGOs focus a lot on corporate cage-free reforms (the EA ones anyway), whilst most grassroots groups are talking about ending factory farming, fur, or individual vegan outreach. For climate, it’s even worse: think tanks recommend clean energy R&D and innovation, whilst most grassroots groups often reject nuclear and other tech-focused... (read more)
I'm not very familiar with the grassroots, so maybe I'm way off.
I think some of the big effective animal advocacy groups started as grassroots, and then, because they were judged to be cost-effective, they were recommended by ACE or funded by Open Phil until they became big and weren't really grassroots anymore.
Random, time-sensitive charity idea: start a pledge drive for people who have received their COVID vaccine to contribute the cost of at least one vaccine to the COVAX facility. Unfortunately, Americans can’t directly donate to COVAX, but people from the UK can.
How much money is required to raise a family? A big part of many people's motivation for earning a high income seems to be the perception that it is a necessity for raising a family. Many EA-aligned jobs are in the public or NGO sector and pay less than what people could earn in the private sector, and since close to 80% of people have children, this could be a big factor in people giving up on an EA-aligned career. I am wondering whether this reasoning is valid, and where the extra cost of children comes from. In most western countries, there ... (read more)
When I said I feel like I'd need a higher salary to be able to afford living in London with children, I was thinking of a salary in the mid-£40k range, as that is my salary as a civil servant. :-) That is significantly above the median and average UK salary, and still ~20% above the median London salary, though I struggled to quickly find figures for the average London salary.
I think if you have two people earning £40k+ each having kids in London is pretty doable even if both are GWWC pledgers. I think I'd feel uncomfortable if both parents brought in less than £30k, t... (read more)
~140,000 people from Hong Kong might move to the UK this year (~322k in total over the next 5 years [source]).
Are they particularly well placed to work on Sino-Western relations? (Because they're better at bridging the cultural (and linguistic) gap, and are likely highly determined.) Should we prioritize helping them somehow?
Working on human rights was just an example, raised because of the comparison you made; it could also be CSET-type work.
Maybe this isn't something people on the forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.