rodeo_flagellum
Working (0-5 years experience)

Comments

Contest: 250€ for translation of "longtermism" to German

Any updates on this? I'm interested to see your thoughts on all these good responses.

[Linkpost] World Mental Health Report: Transforming Mental Health for All

Thank you for sharing this. For some reason, many of the WHO's reports escape my radar.

What Is Most Important For Your Productivity?

Thank you for posting this.

I want to direct more attention to Decreasing Anxiety. If these observations and pieces of advice were weighted, I would expect reducing one's anxiety to rank at or near the top.

Many environments and activities within the EA sphere (e.g., research or grant-making) are quite stressful, and operating continually in them can lead to burnout or other consequences of chronic anxiety.

It helps to keep a simple reminder of the activities that are fundamental for flourishing (and for reducing anxiety) as a human.

Though seemingly obvious, many people in Western civilization (especially the USA) routinely fail at these things. 

Unflattering reasons why I'm attracted to EA

I admit that some of these apply to me as well. I would be interested in reading further on the phenomenon, for which I can't seem to find a term, of "ugly intentions (such as philanthropy purely for status) that produce a variety of good outcomes for self and others, where the actor knows this variety of good outcomes for others is being produced but is in it for other reasons".

Your post reminds me of some passages from the chapter on charity in the book The Elephant in the Brain (I'm rereading it now to illustrate some points), and it could probably be grouped under some of the categories in the final list. Generally speaking, I would recommend reading this book.

Intro.

What Singer has highlighted with this argument is nothing more than simple, everyday human
hypocrisy—the gap between our stated ideals (wanting to help those who need it most) and our
actual behavior (spending money on ourselves). By doing this, he’s hoping to change his readers’
minds about what’s considered “ethical” behavior. In other words, he’s trying to moralize.

Our goal, in contrast, is simply to investigate what makes human beings tick. But we will still
find it useful to document this kind of hypocrisy, if only to call attention to the elephant. In
particular, what we’ll see in this chapter is that even when we’re trying to be charitable, we
betray some of our uglier, less altruistic motives.

Warm Glow

Instead of acting strictly to improve the well-being of others, Andreoni theorized, we do charity in part because of a selfish psychological motive: it makes us happy. Part of the reason we give to homeless people on the street, for example, is because the act of donating makes us feel good, regardless of the results.


Andreoni calls this the “warm glow” theory. It helps explain why so few of us behave like effective altruists. Consider these two strategies for giving to charity: (1) setting up an automatic monthly payment to the Against Malaria Foundation, or (2) giving a small amount to every panhandler, collection plate, and Girl Scout. Making automatic payments to a single charity may be more efficient at improving the lives of others, but the other strategy—giving more widely, opportunistically, and in smaller amounts—is more efficient at generating those warm fuzzy feelings. When we “diversify” our donations, we get more opportunities to feel good.

...

  • Visibility. We give more when we’re being watched.
  • Peer pressure. Our giving responds strongly to social influences.
  • Proximity. We prefer to help people locally rather than globally.
  • Relatability. We give more when the people we help are identifiable (via faces and/or stories) and give less in response to numbers and facts.
  • Mating motive. We’re more generous when primed with a mating motive.

This list is far from comprehensive, but taken together, these factors help explain why we donate so inefficiently, and also why we feel that warm glow when we donate. Let’s briefly look at each factor in turn.

Simler and Hanson then cover each of the listed factors in greater depth.

Little (& effective) altruism

Thank you Parmest for writing this post. Shared reflections and experiences such as this one seem to occur somewhat infrequently on the EAF, and I appreciate your perspective. 

Some things came to mind when reading this.

A post that you may find enjoyable and insightful is Keeping Absolutes in Mind. Here, Michelle Hutchinson writes about altruistic baselines: 

In cases like those above, it might help to think more about the absolute benefit our actions produce. That might mean simply trying to make the value more salient by thinking about it. The 10% of my income that I donate is far less than that of some of my friends. But thinking through the fact that over my life I’ll be able to do the equivalent of save more than one person from dying of malaria is still absolutely incredible to me. Calculating the effects in more detail can be even more powerful – in this case thinking through specifically how many lives saved equivalent my career donations might amount to. Similarly, when you’re being asked to pay a fee, thinking about how many malaria nets that fee could buy really makes the value lost due to the fee clear. That might be useful if you need to motivate yourself to resist paying unnecessary overheads (though in other cases doing the calculation may be unhelpfully stressful!).

which I believe is in line with your idea that local altruism, the baseline altruism most people unfamiliar with EA imagine when they think of "altruism", is still good in absolute terms even if it's less good relative to other actions, and might support or drive other, more "macro-scale" altruistic action.

After days of reflection, I understood what the problem was with me. The big talks on the forum had overshadowed my modesty. This was a profound and important realization for me. I recognized that a sudden jump to the big things was not making me an altruistic human being. Even if I would have managed to make contributions, I would never have become a part of EA. 

In most instances, I suspect that lowering the bar for noticing, recognizing, or being cognizant of altruistic deeds will not detract significantly from the expected effectiveness of the most effective altruistic deeds. At a minimum, it wouldn't hurt to care more about and help those around you in whatever ways possible, and doing so would likely do some good.

Again, thank you for sharing these thoughts.

Contest: 250€ for translation of "longtermism" to German

Entering "longtermism" into Google Translate produces Langfristigkeit, which has already been stated below. 

To add weight to this suggestion, my grandmother, a native German speaker, believes that "Langfristigkeit" is probably the best or near-best translation for longtermism, after thinking about it for around 10 minutes and reading the other responses, although she is not terribly familiar with the idea of longtermism.

For additional context, the following means "long-term future" in German:

  • langzeitige Zukunft

One problem is properly capturing the "-ism" in the word, as well as the idea within longtermism that actions with high (positive) expected value for the long-term future should be prioritized.

One final phrase for consideration is:

  • Maximierung des zukünftigen Wohlwollens

which means roughly "maximization of future benevolence". Despite not being a single word, this phrase is also endorsed by my grandmother.

Global health is important for the epistemic foundations of EA, even for longtermists

Thank you for contributing this. I enjoyed reading it, and I thought it made explicit a tendency some people in EA may have (which I might be imagining) to "look at other cause-areas with Global Health goggles".

Here are some notes I’ve taken to try to put everything you’ve said together. Please update me if what I’ve written here omits certain things, or presents things inadequately. I’ve also included additional remarks to some of these things.

  • Central Idea: [EA’s claim that some pathways to good are much better than others] is not obvious, but widely believed (why?).
    • Idea Support 1: The expected goodness of available actions in the altruistic market differs (across several orders of magnitude) based on the state of the world, which changes over time.
      • If the altruistic market were made efficient (which EA might achieve), then the available actions with the highest expected goodness, which change with the changing state of the world, could routinely be located in any world state. Some things, however, don't generalize.
    • Idea Support 2: Hindsight bias routinely warps our understanding of which investments, decisions, or beliefs were best made at the time, by having us believe that the best actions were more predictable than they were in actuality. It is plausible that this generalizes to altruism. As such, we run the risk of being overconfident that, despite the changing state of the world, the actions with the highest expected goodness presently will still be the actions with the highest expected goodness in the future, be that the long-term one or the near-term one.
    • (why?): The cause-area of global health has well-defined metrics of goodness, i.e., the subset of the altruistic market that deals with altruism in global health is likely close to being efficient.
      • Idea Support 3: There is little cause to suspect that since altruism within global health is likely close to being efficient, altruism within other cause-areas are close to efficient or can even be made efficient, given their domain-specific uncertainties.
    • Idea Support 4: How well "it's possible to do a lot of good with a relatively small expenditure of resources" generalizes beyond global health is unclear, and it should likely not be a standard belief for other cause-areas. The expected goodness of actions in global health is contingent upon the present world state, which will change (as altruism in global health progresses and becomes more efficient, there will be diminishing returns on the expected goodness of the actions we take today to further global health).
    • Action Update 1: Given the altruistic efficiency and clarity within global health, and given people's support for it, it makes sense to introduce EA's altruistic market in global health to newcomers; however, we should not "trick" them into thinking EA is solely or mostly about altruism in global health. Rather, we should frame EA's altruistic market in global health as an example of what a market close to efficiency can look like.

What YouTube channels do you watch?

Thank you for doing this. 

Even though aggregating the media that forum members learn from and interact with seems obviously useful, I am surprised this hasn't been done more frequently (I have not seen a form of this nature before, though I only see a fraction of what's out there).

I am very interested to see what you find (partially to find some new content to absorb) and hope that many people fill out this form. 

Why You Should Earn to Give in Tulsa, OK, USA

Thank you for sharing this experience. It makes me weigh more heavily the idea of moving to another state, partly on the basis of grant relocation programs.

I remember seeing in the past that Vermont would pay remote workers 10k USD to relocate (here). I can't find much on this now, but I did find that Vermont has a New Relocating Worker Grant (here):

QUALIFYING RELOCATION EXPENSES

Upon successful relocation to Vermont and review of your application, the following qualifying relocation expenses may be reimbursed:  

  • Closing costs for a primary residence or lease deposit and one month rent,
  • Hiring a moving company,
  • Renting moving equipment,
  • Shipping,
  • The cost of moving supplies

Incentives are paid out as a reimbursement grant after you have relocated to Vermont. Grants are limited and available on a first-come, first-served basis. 

There are probably states other than OK or VT that do such a thing.  

Revisiting the karma system

This post has a fair number of downvotes but is also generating, in my mind, a valuable discussion on karma, which heavily guides how content on EAF is disseminated. 

I think it would be good if more people who downvoted shared their contentions (though it may well be that those who have already commented are the ones who downvoted).

rodeoflagellum.github.io