AppliedDivinityStudies

Comments

Thought experiment: If you had 3 months to turn a stressed and unhappy person into a calm and happy one, what meta approach would you take?

Antidepressants do actually seem to work, and I think it's weird that people forget/neglect this. See Scott's review here and a more recent writeup. Those are both on SSRIs; there is also Wellbutrin (see Robert Wiblin's personal experience with it here) and at least a few other fairly promising pharmacological treatments.

I would also read the relevant Lorien Psych articles and classic SSC posts on depression treatments and anxiety treatments.

Since you asked for the meta-approach: I think the key is to stick with each thing long enough to see whether it works, but also to actually move on and try other things when it doesn't.

COVID memorial: 1ppm

Ideas are like investments: you don't just want a well-diversified portfolio, you want to intentionally hedge against your other assets. On this view, the best way to develop a scout mindset for yourself is to read a wide variety of writers, many of whom will be quite dogmatic. The goal shouldn't be to read only other reasonable people, but to read totally unreasonable people across domains and synthesize their claims into something coherent.

As you correctly note, Graeber is a model thinker in a world of incoherent anarchist/Marxist ramblings. I think our options are either to dismiss the perspective altogether (reasonable, but tragic) or to take his factual claims with a grain of salt while acknowledging his works as a fountain of insight.

I would happily accept the criticism if there were any anarchist/Marxist thinker alive today reasoning more clearly than Graeber, but I don't think there is.

EA should taboo "EA should"

Strongly agree on this. It's been a pet peeve of mine to hear exactly these kinds of phrases. You're right that it's nearly a passive formulation, and it frames things in a very low-agency way.

At the same time, I think we should recognize the phrasing as a symptom of some underlying feeling of powerlessness. Tabooing the phrase might help, but won't eradicate the condition. E.g.:
- If someone says "EA should consider funding North Korean refugees"
- You or I might respond "You should write up that analysis! You should make that case!"
- But the corresponding question is: Why didn't they feel like they could do that in the first place? Is it just because people are lazy? Or were they uncertain that their writeup would be taken seriously? Maybe they feel that EA decision making only happens through "official channels" and random EA Forum writers not employed by large EA organizations don't actually have a say?


What is the new EA question?

I would add that we should be trying to increase the pool of resources. This includes broad outreach like Giving What We Can and the 80k podcast, convincing EAs to be more ambitious, direct outreach to very wealthy people, and so on.


Some thoughts on vegetarianism and veganism

It sounds wild, but AFAIK, the cotton gin and maybe some other forms of automation actually made slavery more profitable! 

From Wikipedia:
> Whitney's gin made cotton farming more profitable, so plantation owners expanded their plantations and used more slaves to pick the cotton. Whitney never invented a machine to harvest cotton, it still had to be picked by hand. The invention has thus been identified as an inadvertent contributing factor to the outbreak of the American Civil War.


Future-proof ethics

> across the board the ethical trend has been an extension of rights, franchise, and dignity to widening circles of humans

I have two objections here.
1) If this is the historical backing for wanting to future-proof ethics, shouldn't we just do the extrapolation from there directly, instead of thinking about systematizing ethics? In other words, just extend rights to all humans now and be done with it.
2) The idea that the ethical trend has been a monotonic widening is a bit self-fulfilling, since we no longer consider some agents to be morally important. I.e. the moral circle has narrowed to exclude ancestors, ghosts, animal worship, etc. See Gwern's argument here:
https://www.gwern.net/The-Narrowing-Circle

Idea: Red-teaming fellowships

One really useful way to execute this would be to bring in more outside non-EA experts in relevant disciplines: have people in development econ evaluate GiveWell (great example of this here), engage people like Glen Weyl to see how EA could better incorporate market-based thinking and mechanism design, engage hardcore anti-natalist philosophers (if you can find a credible one), engage anti-capitalist theorists skeptical of welfare and billionaire philanthropy, etc.

One specific pet project I'd love to see funded is more EA history. There are plenty of legitimate expert historians, and we should be commissioning them to write on, for example, the history of philanthropy (Open Phil did a bit here), the causes of past civilizations' ruin, and intellectual and moral history and how ideas have progressed over time. I think there's a ton to dig into here, and history is generally underestimated as a perspective (you can't just read a couple of secondary sources and call it a day).

Idea: Red-teaming fellowships

I agree that it's important to ask the meta question of which pieces of information even have high moral value to begin with. OP gives the moral welfare of shrimp as an example. But who cares? EA already puts so little money and effort into this, even on the assumption that they probably are morally valuable. Even if you demonstrated that they weren't, or forced an update in that direction, the overall amount of funding shifted would be fairly small.

You might worry that all the important questions are already so heavily scrutinized as to bear little low-hanging fruit, but I don't think that's true. EAs are easily nerd-sniped, and there isn't any kind of "efficient market" for prioritizing high-impact questions. There's also an intimidation factor: it feels a bit wrong to challenge someone like MacAskill or Bostrom on really critical philosophical questions. But that's precisely where we should be focusing more attention.
