Shortform Content [Beta]

anonysaurus30k's Shortform

NB: I have my own little archive of EA content and I got an alert that several links popped up as dead - typically I would just add it to a task list and move on… but I was surprised to see Joe Rogan's (full) interview with Will MacAskill in 2017 was no longer available on YouTube. So I investigated and found out Rogan recently sold his entire catalog and future episodes to Spotify (for $100 million!). Currently Spotify is removing episodes from other platforms like Apple, YouTube and Vimeo. They've also decided to not transfer certain episodes that... (read more)

Harrison D's Shortform

EA (forum/community) and Kialo?

TL;DR: I’m curious why there is so little mention of Kialo as a potential tool for hashing out disagreements in the EA forum/community, whereas I think it would be at least worth experimenting with. I’m considering writing a post on this topic, but want to get initial thoughts (e.g., have people already considered it and decided it wouldn’t be effective, initial impressions/concerns, better alternatives to Kialo)

The forum and broader EA community has lots of competing ideas and even some direct disagreements. Will Bradshaw's ... (read more)

2casebash12hHow would you feel about reposting this in EAs for Political Tolerance (https://www.facebook.com/groups/159388659401670)? I'd also be happy to repost it for you if you'd prefer.

Do you just mean this shortform or do you mean the full post once I finish it? Either way I’d say feel free to post it! I’d love to get feedback on the idea

Khorton's Shortform

I regularly see people write arguments like "One day, we'll colonize the galaxy - this shows why working on the far future is so exciting!"

I know the intuition this is trying to trigger is bigger = more impact = exciting opportunity.

The intuition it actually triggers is expansion and colonization = trying to build an empire = I should be suspicious of these people and their plans.

Ramiro's Shortform

Is there some tension between population ethics + hedonic utilitarianism and the premises people in wild animal suffering use (e.g., negative utilitarianism, or the negative welfare expectancy of wild animals) to argue against rewilding (and in favor of environmental destruction)?


Does your feeling that the default state is positive also apply to farm animals? Their reward system would be shaped by artificial selection for the past few generations, but it is not immediately clear to me if you think that would make a difference.

1Ramiro7dGood point, thanks. However, even if EE and Wild animals welfare advocates do not conflict in their intermediary goals, their ultimate goals do collide, right? For the former, habitat destruction is an evil, and habitat restoration is good - even if it's not immediately effective.
RogerAckroyd's Shortform

Sometimes the concern is raised that caring about wild animal welfare is seen as unintuitive and will bring conflict with the environmental movement. I do not think large-scale efforts to help wild animals should be an EA cause at the moment, but in the long term I don't think environmentalist concerns will be a limiting factor. Rather, I think environmentalist concerns are partially taken as seriously as they are because people see it as helping wild animals as well. (In some perhaps not fully thought out way.) I do not think it is a coincidence that the ext... (read more)

evelynciara's Shortform

On the difference between x-risks and x-risk factors

I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:

  1. We can treat them the same in terms of probability theory. For example, if X is an "x-risk" and Y is a "risk factor" for X, then P(X|Y) > P(X|¬Y). But we can also say that P(Y|X) > P(Y|¬X), because both statements are equivalent to P(X, Y) > P(X)P(Y). We can similarly speak of the total probability of an x-risk factor becaus
... (read more)
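The symmetry in the first point can be checked numerically. Here is a minimal sketch using a made-up joint distribution over X ("the x-risk occurs") and Y ("the risk factor is present") — the numbers are purely illustrative:

```python
# Toy joint distribution P(X, Y); the four probabilities sum to 1.
# These values are invented solely to illustrate the symmetry claim.
p = {(True, True): 0.10, (True, False): 0.05,
     (False, True): 0.20, (False, False): 0.65}

def prob(pred):
    """Total probability of the event picked out by pred(x, y)."""
    return sum(v for (x, y), v in p.items() if pred(x, y))

p_x_given_y     = prob(lambda x, y: x and y) / prob(lambda x, y: y)
p_x_given_not_y = prob(lambda x, y: x and not y) / prob(lambda x, y: not y)
p_y_given_x     = prob(lambda x, y: x and y) / prob(lambda x, y: x)
p_y_given_not_x = prob(lambda x, y: y and not x) / prob(lambda x, y: not x)

# Y raises the probability of X exactly when X raises the probability of Y:
# both conditions are equivalent to P(X, Y) > P(X)P(Y).
assert (p_x_given_y > p_x_given_not_y) == (p_y_given_x > p_y_given_not_x)
```

Swapping in any other joint distribution leaves the assertion true, which is the point: the "risk factor" relation is symmetric as a matter of probability theory alone.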

I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.

It seems natural to talk about a force causing a mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can't capture causality, basically because the equality sign in F = ma lacks direction. Similarly, it's hard to capture causality in probability spaces.

Following Pearl, I come to think that causality arises from manipulator/manipulated distinction.

So I think it's fair to speak about fac... (read more)

evelynciara's Shortform

"Quality-adjusted civilization years"

We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.

For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 50% × 2 = 1 QACYs.
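The definition reduces to a one-line calculation. A minimal sketch, using the example's numbers (the helper name is mine, not from the post):

```python
def qacys_lost(quality_reduction: float, years: float) -> float:
    """QACYs lost = fractional reduction in civilization quality x duration in years."""
    return quality_reduction * years

# The COVID-19 example above: civilization quality down 50% for 2 years.
covid_burden = qacys_lost(0.50, 2)  # 1.0 QACYs
```

A catastrophe's total burden would then be the sum (or integral) of such terms over time, weighting each period by how degraded civilization is during it.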

Another example: suppose climate change will reduce the ... (read more)

Pablo_Stafforini's Shortform

Scott Aaronson just published a post announcing that he has won the ACM Prize in Computing and the $250k that comes with it, and is asking for donation recommendations. He is particularly interested "in weird [charities] that I wouldn’t have heard of otherwise. If I support their values, I’ll make a small donation from my prize winnings. Or a larger donation, especially if you donate yourself and challenge me to match." An extremely rough and oversimplified back-of-the-envelope calculation suggests that a charity recommendation will cause, in expectation, ~$500 in donations to the recommended charity (~$70–2800 90% CI).

MichaelA's Shortform

Independent impressions

Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.

Armed with this concept, I try to s... (read more)

Tankrede's Shortform

The definition of existential risk as ‘humanity losing its long-term potential’ in Toby Ord's The Precipice could be specified further. Without (perhaps) loss of generality, assuming finite total value in our universe, one could divide existential risks into two broad categories:

  • Extinction risks (X-risks): Human share of total value goes to zero. Examples could be extinction from pandemics, extreme climate change or some natural event.
  • Agential risks (A-risks): Human share of total value could be  greater than in the X-risks scenarios but k
... (read more)
ag4000's Shortform

I was planning to donate some money to a climate cause a few months ago, and I decided to give some money to Giving Green (this was after the post here recommending GG). There were some problems with the money going through (unrelated to GG), but now I can still decide to send the money elsewhere. I'm thinking about giving the money elsewhere due to the big post criticizing GG. However, I still think it's probably a good giving opportunity, given that it's at an important stage of its growth and seems to have gotten a lot of public... (read more)

MichaelA's Shortform

Bottom line up front: I think it'd be best for longtermists to default to using the more inclusive term "authoritarianism" rather than "totalitarianism", except when one has a specific reason to focus on totalitarianism.

I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.) 

But my understanding is that political scientists typically consider to... (read more)

Nathan Young's Shortform

A friend asked about effective places to give. He wanted to donate through his payroll in the UK. He was enthusiastic about it, but that process was not easy.

  1.  It wasn't particularly clear whether GiveWell or EA Development Fund was better and each seemed to direct to the other in a way that felt at times sketchy.
  2. It wasn't clear if payroll giving was an option
  3. He found it hard to find GiveWell's spreadsheet of effectiveness
     

Making donations easy feels like it should be a core concern of both GiveWell and EA Funds, and to be honest my experience made me a little embarrassed.

Ben Garfinkel's Shortform

The O*NET database includes a list of about 20,000 different tasks that American workers currently need to perform as part of their jobs. I’ve found it pretty interesting to scroll through the list, sorted in random order, to get a sense of the different bits of work that add up to the US economy. I think anyone who thinks a lot about AI-driven automation might find it useful to spend five minutes scrolling around: it’s a way of jumping yourself down to a lower level of abstraction. I think the list is also a little bit mesmerizing, in its own right.

One up... (read more)

I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.

Nathan_Barnard's Shortform

Two books I recommend on structural causes of and solutions to global poverty. The Bottom Billion by Paul Collier focuses on the question of how to get failed and failing states in very poor countries to middle-income status, with a particular focus on civil war. It also looks at some solutions and considers the second-order effects of aid. How Asia Works by Joe Studwell focuses on the question of how to get poor countries with high-quality (or potentially high-quality) governance and reasonably good political economy to become high-income countries. It focuses exclusively on the Asian developmental-state model and compares it with the neoliberal-ish models in other parts of Asia that are now mostly middle-income countries.

JamesOz's Shortform

Why is there such a big disparity in focus areas between grassroots groups and NGOs/think-tanks? 


I’m thinking primarily in the two cause areas I’m most involved in: animal welfare and climate change. Animal Welfare NGOs focus a lot on corporate cage-free reforms (the EA ones anyway) whilst most grassroots groups are talking about ending factory farming, fur or individual vegan outreach. For climate, it’s even worse: Think-tanks recommend clean energy R&D and innovation whilst most grassroots groups often reject nuclear and other tech-focused... (read more)

I'm not very familiar with the grassroots, so maybe I'm way off.

I think some of the big effective animal advocacy groups started as grassroots, and then, because they were judged to be cost-effective, they were recommended by ACE or funded by Open Phil until they became big and weren't really grassroots anymore.

  1. Maybe it's primarily because big funders don't value a lot of grassroots work (rightly or wrongly), and if they did, those orgs would professionalize and scale up.
  2. Or, some grassroots work is necessarily too low-scale (even if cost-effective) and it's
... (read more)
Cullen_OKeefe's Shortform

Random, time-sensitive charity idea: start a pledge drive for people who have received their COVID vaccine to contribute the cost of at least one vaccine to the COVAX facility. Unfortunately, Americans can’t directly donate to COVAX, but people from the UK can.

Jakob_J's Shortform

How much money is required to raise a family?

A big part of many people's motivation for earning a high income seems to be the perception that it is a necessity for raising a family. Many EA-aligned jobs are in the public or NGO sector and pay less than what people could earn in the private sector, and since close to 80% of people have children, this could be a big factor in people giving up on an EA-aligned career.

I am wondering whether this reasoning is valid, and where the extra cost for children comes from. In most western countries, there ... (read more)

1Jakob_J13dThanks for sharing your perspective! It seems like having a family in major metropolitan areas is especially challenging due to the much higher housing costs. I am wondering if you have any examples of the types of jobs you think would make it difficult to afford raising a family in London (alternatively, what salary)? For example, it seems that a civil servant could earn £40,000 per year after a few years of experience, and I suspect other sectors where EAs would want to work might pay a similar amount (academia, NGOs etc). Regarding having lots of time, it is true that being a stay-at-home parent leads to substantial loss of income. What I was wondering was more along the lines of: is it worth trying to earn say £80,000+ per year working in finance just to be able to afford a larger house, but working 80+ hours/week, when a civil servant would have fixed 40 working hours per week, free weekends, but earn half as much? In terms of income vs time, my intuition is that time is more valuable than income when having children, even if it means saving on housing costs.

I was thinking of a salary in the mid £40k range when I said that I feel like I need a higher salary to be able to afford living in London with children as it is my salary as a civil servant. :-) That is significantly above median and average UK salary. And still ~20% above median London salary, though I struggled to quickly find numbers for average London salary.

I think if you have two people earning £40k+ each having kids in London is pretty doable even if both are GWWC pledgers. I think I'd feel uncomfortable if both parents brought in less than £30k, t... (read more)

5Larks14dChildcare is a very big cost: you are trying to spend a portion of your income on childcare, but for the childcare worker this is their entire income, and the number of children one person can look after is limited (both by practicalities and by regulation), so you can see why it tends to be expensive. I would however keep in mind that most people who work in the public or NGO sector do manage to raise families!
HaukeHillebrandt's Shortform

~140,000 people from Hong Kong might move to the UK this year (~322k in total over the next 5 years [source]).  

Are they particularly well placed to work on Sino-Western relations? (Because they're better at bridging the cultural (and linguistic) gap and are likely highly determined.) Should we prioritize helping them somehow?

4HaukeHillebrandt12dThat was precisely my point actually—just like Hirsi Ali might be well-placed to advocate for women's rights within Islam, people from Hong Kong might be well placed to highlight e.g. human rights issues in China.
12Larks12dAhh, in that case I agree that HKers, or even better Uighurs, would be well placed. But my impression was that 80k etc.'s concerns about China mainly revolved around things like improving Western-Chinese coordination to reduce the risk of war, AI race or climate change, rather than human rights. I would think that putting pressure on them for human rights abuses would be likely to make this worse, as the CCP views such activism as an attack on their system. It is hard to cooperate with someone if they are denouncing you as evil and funding your dissidents.

Working on human rights was just an example, because of the comparison you raised; it could also be CSET-type work.

Nathan_Barnard's Shortform

Maybe this isn't something people on the forum do, but it is something I've heard some EAs suggest. People often have a problem when they become EAs: they now believe this really strange thing that is potentially quite core to their identity, and that can feel quite isolating. A suggestion I've heard is that people should find new, EA friends to solve this problem. It is extremely important that this does not come off as saying that people should cut ties with friends and family who aren't EAs. It is extremely important that this is not what you mean. It would be deeply unhealthy for us as a community if this became common.
