Shortform Content [Beta]

Khorton's Shortform

Reducing procrastination on altruistic projects:

I have often struggled to get started on projects that are particularly important to me, so I thought I'd jot down a couple of ways I handle procrastination.

  1. Check if I actually want to do the project. Sometimes I like the idea of the project but don't actually want to do it (maybe I can post the idea here instead), or I'm conflicted because working on it would clash with my other values (can I change the plan so it meets my needs more fully?).
  2. Check if I have an actually realistic plan. My subconsciou
... (read more)
vaidehi_agarwalla's Shortform

CGD launched a Global Skills Partnership program to reduce brain drain and improve migration (https://gsp.cgdev.org/)

It would be interesting to think about this from the perspective of EA groups, where brain drain is quite common. Part of their solution is to offer training and recognized certifications to a broader group of people in the home country to increase the overall pool of talent.

I will probably add more thoughts in the coming days when I have time to read the case studies in more depth.

Ben_Snodin's Shortform

Takeaways from some reading about economic effects of human-level AI

I spent some time reading things that you might categorise as “EA articles on the impact of human-level AI on economic growth”. Here are some takeaways from reading these (apologies for not always providing a lot of context / for not defining terms; hopefully clicking the links will provide decent context).

... (read more)
vaidehi_agarwalla's Shortform

Quick BOTEC of person-hours spent on EA Job Applications per annum.

I created a Guesstimate model estimating that ~14,000 to 100,000 person-hours (~7 to 51 FTE) are spent per year (90% CI). This comes to an estimated USD $320,000 to $3,200,000 of unpaid labour time.
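
As a rough sanity check on these headline numbers, here is a minimal sketch of the conversion (the hours-per-FTE figure and the hourly rate are my own assumptions for illustration, not the Guesstimate's actual inputs):

```python
# Back-of-the-envelope conversion from person-hours to FTE and USD.
HOURS_PER_FTE = 2_000    # assumed: ~40 hours/week * 50 weeks
HOURLY_RATE_USD = 30     # assumed opportunity cost of applicant time

for hours in (14_000, 100_000):  # bounds of the 90% CI from the model
    fte = hours / HOURS_PER_FTE
    usd = hours * HOURLY_RATE_USD
    print(f"{hours:,} person-hours ~= {fte:.0f} FTE ~= ${usd:,.0f}")
```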

  • All assumptions for my calculations are in the Guesstimate
  • The distribution of effort spent by candidates is heavy-tailed; a small percentage of candidates may spend 3 to 10x more time than the median candidate.
  • I am not very good at interpreting the guesstimate, so if someone can
... (read more)
Showing 3 of 5 replies
Josh Jacobson: I did not review the model, but only 75% of hours being unpaid seems much too low based on my experience having gone through the job hiring process (including later stages) with 10-15 EA orgs.
vaidehi_agarwalla: Okay, so I used a different method to estimate the total person-hours, and my new estimate is something like 60%. I basically assumed that 50% of Rounds 2-4 in the application process are paid, and 100% of the work trial. I expect that established / longtermist orgs are disproportionately likely to pay for work tests, compared to new or animal / GH&D orgs.

I think Josh was claiming that 75% was "too low", as in the total % of unpaid hours being more like 90% or something.

When I applied to a bunch of jobs, I was paid for ~30 of the ~80 hours I spent (not counting a long CEA work trial — if you include that, it's more like 80 out of 130 hours). If you average Josh and me, maybe you get back to an average of 75%?

*****

This isn't part of your calculation, but I wonder what fraction of unique applicants to EA jobs have any connection to the EA community beyond applying for one job?

In my experience trying to h... (read more)

Arne's Shortform

Dear forum,

I was wondering if the repugnant conclusion could be answered by an argument of the following form:

Consider planet Earth and a given happiness distribution over its inhabitants, with total happiness h. There is simply not enough space, resources, or anything else to let an arbitrarily large number of people n live with an average happiness epsilon such that n * epsilon > h. Even at larger scales, the observable universe is finite, and thus, for the same reason, the required n cannot exist.
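
A minimal formalization of that thought (the bound $N_{\max}$ is a label I'm introducing, not Arne's notation): the repugnant conclusion requires

$$n \cdot \varepsilon > h \quad\Longleftrightarrow\quad n > \frac{h}{\varepsilon},$$

while physical constraints impose $n \le N_{\max}$. So for small enough $\varepsilon$, the required population $n$ exceeds any finite bound $N_{\max}$ and cannot be physically realized.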

What do you think of such an argument?

I am not sure whether the nature of the repugnant conclusion is really affected by such an argument. Can you help me understand?

Showing 3 of 5 replies
Arne: And thank you as well for the short but helpful answer. The relevance of this thought to philosophy also gives me confidence in that thinking. Btw, we have some friends in common that I'm aware of: EdoArad -> (Shay ben moshe) -> Amit -> Arne ^^

Cool! Through data science I guess? 

Arne: Thank you very much; you put into words what I could not. Your answer gave me not only the assurance that my doubts were justified, but also some confidence to ask more questions of this kind. Thank you.
david_reinstein's Shortform

Variant of the Chinese room argument? This seems ironclad to me; what am I missing?

My claims:

Claim: AI feelings are unknowable. Maybe an advanced AI can have positive and negative sensations. But how would we ever know which ones are which (or how extreme they are)?

Corollary: If we cannot know which are which, we can do nothing that we know will improve/worsen the "AI feelings"; so it's not decision-relevant.

Justification I: As we ourselves are bio-based living things, we can infer from the apparent sensations and expressions of bio-based living things that they... (read more)

anoni's Shortform

TL;DR: silly criticisms of longtermism; can you convince me to keep donating to EA Funds?

See positive arguments and introduction below. The following are supposed to be naive criticisms/questions:

  1. Criticisms of utilitarianism. Even with the newer formulation of longtermism,
    1. Why should I care about people who won't exist? Say we go extinct, then what? (Like in the movie "Her", this could be the smart choice.) I'm more on the pro-abortion side of the discussion here. Why should X-risks be costly because of their opportunity cost and not because of the immediate su
... (read more)

1.

1.1.: You might want to have a look at a group of positions in metaethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing/decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That's compatible with longtermism - not all longtermism is about extinction risk. (See trajectory change and s-risk.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disva... (read more)

AbigailT: Hi - my intuitions fall in the other direction here, so I'm keen to explain why. Implicit IMOs in front of everything here.

1:
1.1: I have a younger brother. My parents could have stopped at one, and my family would broadly still be happy, but my brother is generally happy and leads a good life. Similarly, if they'd had a third child they probably would have been happy and great too, and I would have loved them. All else being equal I wish that youngest sibling could have existed. IMO these two sentiments aren't meaningfully distinct.
1.2: We don't only care about humans. Sure, the argument for making more humans would apply to insects or something as well. However, most of the things that would kill all the humans would also kill everything else, so for me not letting that happen is still much more of a priority.
1.3: True on the specifics, false more generally. I don't know exactly what the world should look like, but I'm pretty sure people being happy is good, more people being happy is better, and everything being unrecoverably dead is neutral at most.

2:
2.1: If we weren't potentially about to all die I'd be more willing to think about this, but we have to survive the next century or two first. Whether capitalism makes things better or worse for now depends much more on whether it makes us more or less likely to all die, than on anything else (again, for now).
2.2: I'm pretty sure non-privileged people also want to be alive and happy.
2.3: Possibly, and I'm ok with that. I'd rather live a worse life if it means my grandkids are more likely to survive and have happy ones. Although it's definitely better for everyone to be happier now, I feel like it doesn't amount to much if we all die in the next century.
2.4: If I can choose between a surviving but stable society, and a growing one, I would choose the growing one. But both are better than an empty rock, so the priority now is not dying either way.

3:
3.1: I'm pretty sure we'll continue to want to
AbigailT: I should clarify 3.3. For me, longtermism is partly the acknowledgement of much vaster moral stakes - so long as there are things we can do to help, they're no less important than short-termist interventions. (The usual arguments about it not being helpful to demand too much of people still apply, though.)
Linch's Shortform

I know this is a really mainstream opinion, but I recently watched a recording of the musical Hamilton and I really liked it. 

I think Hamilton (the character, not the historical figure which I know very little about) has many key flaws (most notably selfishness, pride, and misogyny(?)) but also virtues/attitudes that are useful to emulate.

I especially found the song "Non-Stop" (lyrics) highly relatable/aspirational, at least for a subset of EA research that looks more like "reading lots and synthesizing many thoughts quickly" and less like "think ver... (read more)

Miranda_Zhang: I love Hamilton! I wrote my IB Extended Essay on it! I also really love and relate to Non-Stop but in the obsessive, perfectionist way. I like + appreciate your view on it, which seems quite different in that it is more focused on how Hamilton's brain works rather than on how hard he works.

Hello fellow Zhang!

Thanks! I don't see nearly as much perfectionism (like none of the lyrics I can think of talks about rewriting things over and over), but I do think there's an important element of obsession to Hamilton/Non-Stop, which I relate pretty hard to. Since I generate a lot of my expected impact from writing, and it's quite hard to predict which of my ideas are the most useful/promising in advance, I do sometimes feel a bunch of internal pressure to write/think faster, produce more, etc., like a bit of a race against the clock to produce as... (read more)

taoroalin@gmail.com's Shortform

Near-term AI risk: simulated social status. Lost social status is a big downside of playing video games all day / abusing drugs. GPT3 and friends are probably capable of giving you believable simulated social status, far beyond what current video games provide. This could be a big therapeutic boon, as well as a threat to people's happiness and contribution to society.

Miranda_Zhang's Shortform

I'm (still!!!) thinking about my BA thesis research question and I think my main uncertainty/decision point is what specific policy debate to investigate. I've narrowed it down to two so far - hopefully I don't expand - and really welcome thoughts.

Context: I am examining the relationship between narratives deployed by experts on Twitter and the Biden Administration's policymaking process re: COVID-19 vaccine diplomacy. Specifically, I want to examine a debate on an issue wherein EA-aligned experts have generally coalesced around one stance.

Motivating... (read more)

Showing 3 of 6 replies
Miranda_Zhang: Hmm! Yes, that's interesting - and aligns with the fact that many different policy influencers weighed in, ranging from former to current policymakers. Thank you very much for this!

I think something I'm worried about is how I can conceptualize [inside experts] vs. [outside experts] ... It seems like a potentially arbitrary divide and/or a very complex undertaking given the lack of transparency into the policy process (i.e. who actually wields influence and access to Biden and Katherine Tai on this specific issue?). It also complicates the investigation by adding in the element of access as a factor, rather than purely thinking about narrative strategies - and I very much want to focus on narratives. On one hand, I think that could be interesting - e.g. looking at narrative strategies across levels of access. On the other, I'm uncertain that looking at narrative strategies would add much compared to just analyzing the stances of actors within the sphere of influence.

What do you think of this alternate RQ: "How did pro/anti-waiver coalitions use evidence in their narratives?" It moves away from the focus on experts but still gets to the scientific/epistemic component.

(I'm also wondering whether I am being overly concerned with theoretically justifying things!)
IanDavidMoss: I think I would agree with this. It seems like you're trying to demonstrate your knowledge of a particular framework or set of frameworks through this exercise, and you're letting that constrain your choices a lot. Maybe that will be a good choice if you're definitely going into academia as a political scientist after this, but otherwise, I would structure the approach around how research happens most naturally in the real world: you have a research question that would have concrete practical value if it were answered, and then you set out to answer it using whatever combination of theories and methods makes sense for the question.

Thanks! I'll take a break from thinking about the theory - ironically, I am fairly confident I don't want to go into academia.

Again, appreciate your thoughts on this. Hope I'll hear from you again if I post another Shortform about my thesis!

Buck's Shortform

[This is an excerpt from a longer post I'm writing]

Suppose someone’s utility function is

U = f(C) + D

where U is what they're optimizing, C is their personal consumption, f is their selfish welfare as a function of consumption (log is a classic choice for f), and D is their amount of donations.

Suppose that they have diminishing utility wrt (“with respect to”) consumption (that is, df(C)/dC is strictly monotonically decreasing). Their marginal utility wrt donations is a constant, and their marginal utility wrt consumption is a decreasing function. There has t... (read more)
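
A sketch of where this argument seems to be heading (the income variable $Y$ and budget constraint $D = Y - C$ are my additions; the excerpt doesn't define them):

$$U(C) = f(C) + (Y - C), \qquad \frac{dU}{dC} = f'(C) - 1,$$

so utility is maximized at the $C^*$ where $f'(C^*) = 1$: consume until the marginal selfish utility of a dollar falls to the (constant) marginal utility of donating it, then donate everything above $C^*$. With $f = \log$, that gives $C^* = 1$ in the units implied by the functional form.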

Showing 3 of 5 replies
Linch: I thought you were making an empirical claim with the quoted sentence, not a normative claim.

Ah, fair.

Stefan_Schubert: The GWWC pledge is akin to a flat tax, as opposed to a progressive tax, which gives you a higher tax rate when you earn more. I agree that there are some arguments in favour of "progressive donations". One consideration is that extremely high "donation rates" - e.g. donating 100% of your income above a certain amount - may adversely affect incentives to earn more, depending on your motivations. But in a progressive donation rate system with a more moderate maximum donation rate, that would probably not be as much of a problem.
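
To make the flat-vs-progressive distinction concrete, here is a minimal sketch (the brackets and rates are made up for illustration; only the 10% flat rate corresponds to the GWWC pledge):

```python
# Flat vs. progressive donation schedules (illustrative numbers only).

def flat_donation(income: float, rate: float = 0.10) -> float:
    """GWWC-style flat pledge: a constant fraction of income."""
    return income * rate

def progressive_donation(income: float) -> float:
    """Marginal brackets, like a progressive income tax:
    0% below 30k, 10% from 30k to 100k, 50% above 100k (made-up numbers)."""
    brackets = [(30_000, 0.00), (100_000, 0.10), (float("inf"), 0.50)]
    donation, lower = 0.0, 0.0
    for upper, rate in brackets:
        donation += max(0.0, min(income, upper) - lower) * rate
        lower = upper
        if income <= upper:
            break
    return donation

for income in (50_000, 200_000):
    print(income, flat_donation(income), progressive_donation(income))
# 50,000  -> flat 5,000  vs. progressive 2,000
# 200,000 -> flat 20,000 vs. progressive 57,000
```
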
Buck's Shortform

[epistemic status: I'm like 80% sure I'm right here. Will probably post as a main post if no-one points out big holes in this argument, and people seem to think I phrased my points comprehensibly. Feel free to leave comments on the google doc here if that's easier.]

I think a lot of EAs are pretty confused about Shapley values and what they can do for you. In particular, Shapley values are basically irrelevant to problems related to coordination between a bunch of people who all have the same values. I want to talk about why.

So Shapley values are a sol... (read more)

But if you already have this coalition value function, you've already solved the coordination problem and there’s no reason to actually calculate the Shapley value! If you know how much total value would be produced if everyone worked together, in realistic situations you must also know an optimal allocation of everyone’s effort. And so everyone can just do what that optimal allocation recommended.
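
To make the object under discussion concrete, here is a minimal sketch of computing Shapley values for a toy three-player game (the coalition values are made up). Note that the whole computation starts from the coalition value function v, which is exactly the ingredient that already encodes the solution to the coordination problem:

```python
# Shapley values for a toy 3-player game, given a coalition value function v.
from itertools import permutations
from math import factorial

PLAYERS = ("A", "B", "C")

# Hypothetical coalition values (made up for illustration).
VALUES = {
    frozenset(): 0,
    frozenset("A"): 1,
    frozenset("B"): 2,
    frozenset("C"): 0,
    frozenset("AB"): 4,
    frozenset("AC"): 3,
    frozenset("BC"): 3,
    frozenset("ABC"): 6,
}

def v(coalition):
    """Total value produced if exactly these players work together."""
    return VALUES[frozenset(coalition)]

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    n_orders = factorial(len(players))
    return {p: total / n_orders for p, total in totals.items()}

print(shapley_values(PLAYERS, v))
# ~{'A': 2.17, 'B': 2.67, 'C': 1.17}; the three values sum to v("ABC") = 6.
```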

This seems correct.


A related claim is that the Shapley value is no better than any other solution to the bargaining problem. For example, instead of allocat

... (read more)
NunoSempere's Shortform

Notes on: A Sequence Against Strong Longtermism

Summary for myself. Note: Pretty stream-of-thought.

Proving too much

  • The set of all possible futures is infinite, which somehow breaks some important assumptions longtermists are apparently making.
    • Somehow this fails to actually bother me
  • ...the methodological error of equating made up numbers with real data
    • This seems like a cheap/unjustified shot. In the world where we can calculate the expected values, it would seem fine to compare (wide, uncertain) speculative interventions with hardcore GiveWell data (note that
... (read more)
Khorton's Shortform

I'm becoming concerned that the title "EA-aligned organisation" is doing more harm than good. Obviously it's pointing at something real and you can expect your colleagues to be familiar with certain concepts, but there's no barrier to calling yourself an EA-aligned organisation, and in my view some are low or even negative impact. The fact that people can say "I do ops at an EA org" and be warmly greeted as high status even if they could do much more good outside EA rubs me the wrong way. If people talked about working at a "high-impact organisation" instead, that would push community incentives in a good way I think.

Aaron Gertler: I have exactly the opposite intuition (which is why I've been using the term "EA-aligned organization" throughout my writing for CEA and probably making it more popular in the process).

"EA-aligned organization" isn't supposed to mean "high-impact organization". It's supposed to mean "organization which has some connection to the EA community through its staff, or being connected to EA funding networks, etc." This is a useful concept because it's legible in a way impact often isn't. It's easy to tell whether an org has a grant from EA Funds/Open Phil, and while this doesn't guarantee their impact, it does stand in for "some people in the community vouch for their doing interesting work related to EA goals".

I really don't like the term "high-impact organization" because it does the same sneaky work as "effective altruist" (another term I dislike). You're defining yourself as being "good" without anyone getting a chance to push back, and in many cases, there's no obvious way to check whether you're telling the truth. Consider questions like these:

  • Is Amazon a high-impact organization? (80K lists jobs at Amazon on their job board, so... maybe? I guess certain jobs at Amazon are "high-impact", but which ones? Only the ones 80K posts?)
  • Is MIRI a high-impact organization? (God knows how much digital ink has been spilled on this one)
  • Is SCI a high-impact organization? [https://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/]
  • Is the Sunrise Movement a high-impact organization? [https://www.givinggreen.earth/post/the-sunrise-movement]

It seems like there's an important difference between MIRI and SCI on the one hand, and Amazon and Sunrise on the other. The first two have a long history of getting support, funding, and interest from people in the EA movement; they've given talks at EA Global. This doesn't necessarily make them more impactful than Amazon and Sunrise, but it does mean that working at one of those orgs puts you in the c

Terms that seem to have some of the good properties of "EA-aligned" without running into the "assuming your own virtue" problem:

  • "Longtermist" (obviously not synonymous with "EA-aligned", but it accurately describes a subset of orgs within the movement)
  • "Impact-driven" or something like that (indicating a focus on impact without insisting that the focus has led to more impact) 
  • "High-potential" or "promising" (indicating that they're pursuing a cause area that looks good by standard EA lights, without trying to assume success — still a bit self-promotion
... (read more)
jiayang: Oh, I would have thought it's the other way around - sometimes people don't want to be known as EA-aligned because that can have negative connotations (being too focused on numbers, being judgmental of "what's worthy", slightly cult-like, etc.). I think "high-impact organisation" may be a good idea as well.
RyanCarey's Shortform

EA High School Outreach Org (see Catherine's and Buck's posts, and my comment on EA teachers)

Running a literal school would be awesome, but seems too costly in time and organisational resources to do right now. Assuming we did want to do that eventually, what could be a suitable smaller step? Founding an organisation with vetted staff, working full-time on promoting analytical and altruistic thinking to high-schoolers - professionalising in this way increases the safety and reputability of these programs. Its activities should be targeted at top schools... (read more)

nora's Shortform

Below, I briefly discuss some motivating reasons, as I see them, to foster more interdisciplinary thought in EA. This includes ways EA's current set of research topics might have emerged for suboptimal reasons. 


More EA-relevant interdisciplinary research: why?


The ocean of knowledge is vast. But the knowledge commonly referenced within EA and longtermism represents only a tiny fraction of this ocean. 

I argue that EA's knowledge tradition is skewed for reasons including, but not limited to, the epistemic merit of those bodies of knowledge. There are... (read more)

I think RAND is a good case study for interdisciplinary approaches to problem solving, though I'm biased. The key there, as in industry and most places other than academia, but unlike Santa Fe and the ARPAs, is a focus on solving concrete specific problems regardless of the tools used.

Also, big +1 to cybernetics, which is an interesting case study for two reasons: first, because of what worked, and second, because of how it was supplanted / co-opted into narrow disciplines and largely fizzled out as its own thing.

konrad's Shortform

Our World in Data has published two great posts this year, highlighting how the often-proposed dichotomy between economic growth & sustainability is false.

In The economies that are home to the poorest billions of people need to grow if we want global poverty to decline substantially, Max Roser points out that given our current wealth,

the average income in the world is int.-$16 per day

which is far below what we'd think of as the poverty line in developed countries. This means that mere redistribution of what we have is insufficient - we'd all end up poor ... (read more)

RyanCarey's Shortform

Making community-building grants more attractive

An organiser from Stanford EA asked me today how community building grants could be made more attractive. I have two reactions:

  1. Specialised career pathways. To the extent that this can be done without compromising effectiveness, community-builders should be allowed to build field-specialisations, rather than just geographic ones. Currently, community-builders might hope to work at general outreach orgs like CEA and 80k. But general orgs will only offer so many jobs. Casting the net a bit wider, many activities
... (read more)
Miranda_Zhang's Shortform

Would it be useful to compile EA-relevant press?

Inspired by seeing this Vice article on wet-bulb conditions (a seemingly unlikely route for climate change to become an existential risk): Scientists Studying Temperature at Which Humans Spontaneously Die With Increasing Urgency

If so, what/how? I don't think full-time monitoring makes sense (first rule of comms: do everything with a full comms strategy in mind!) but I wonder if a list or Airtable would still be useful for organizations to pull from or something...

My hope is that people who see EA-relevant press will post it here (even in Shortform!). 

I also track a lot of blogs for the EA Newsletter and scan Twitter for any mention of effective altruism, which means I catch a lot of the most directly relevant media. But EA's domain is the entire world, so no one person will catch everything important. That's what the Forum is for :-)

I'm not sure whether you're picturing a project specific to stories about EA or one that covers many other topics. In the case of the former, others at CEA and I know about nearly... (read more)

Misha_Yagudin: I think David Nash [https://forum.effectivealtruism.org/users/davidnash] does something similar with his EA Updates (here is the most recent one [https://forum.effectivealtruism.org/posts/i6X38nx3dDtEiEE5J/ea-updates-for-july-2021]). While most of the links are focused on the EA Forum and posts by EA/EA-adj orgs, he features occasional links from other venues.
Miranda_Zhang: Good flag, thanks!
Aaron Gertler's Shortform

I enjoyed learning about the Henry Spira award. It is given by the Johns Hopkins School of Public Health to "honor animal activists in the animal welfare, protection, or rights movements who work to achieve progress through dialogue and collaboration."

The criteria for the award are based on Peter Singer's summary of the methods Spira used in his own advocacy. Many of them seem like strong guiding principles for EA work in general:

  1. Understands public opinion, and what people outside of the animal rights/welfare movement are thinking.
  2. Selects a course of actio
... (read more)