aman-patel

Comments

High School Seniors React to 80k Advice

This thinking has come up in a few separate intro fellowship cohorts I’ve facilitated. Usually, somebody tries to flesh it out by asking whether it’s “more effective” to save one doctor (who could then be expected to save five more lives) or two mechanics (who wouldn’t save any other lives) in trolley-problem scenarios. This discussion often gets muddled, and many people come away with the impression that “EAs” would think it’s better to save the doctor, even though I doubt that’s a consensus opinion among EAs. I’ve found this to be a surprisingly large sticking point that isn’t discussed much in community-building circles.

I think it would be worth clarifying the difference between intrinsic and instrumental value in career advice/intro fellowships/other first interactions with the EA community, because some people who might agree with other EA ideas find that this line of reasoning undermines our basic principles (as well as the claim that you don’t need to be a utilitarian to be an EA). Maybe we could extend current messaging about ideological diversity within EA.

That said, I read Objection 4 differently. Many people (especially in cultures that glorify work) tie their sense of self-worth to their jobs. I don’t know how universal this is, but at least in my middle-class American upbringing, there was a strong sense that your career choice and achievements are a large part of your value as a person.

As a result, some people feel personally judged when their intended careers aren’t branded as “effective”. If you equate your career value with your personal value, you won’t feel very good if someone tells you that your career isn’t very valuable, and so you’ll resist that judgment.

I don’t think that this feeling precludes people from being EAs. It takes time to separate your sense of self-worth from your current or intended career, and Objection 4 strikes me as a knee-jerk defensive reaction. Students planning to work in shipping logistics won’t immediately like the idea that the job they’ve been working hard to prepare for is “ineffective,” but they might come around to it after some deeper reflection.

I could be misreading Objection 4, though. It could also mean something like “shipping logistics is valuable because the world would grind to a halt if nobody worked in shipping logistics,” but then that’s just a variant of Objection 5.

I’m very curious to know more about the sense in which these students raised Objection 4.

[Creative Writing Contest] The Legend of the Goldseeker

Changed "guilt" to "responsibility," but I'm not sure if that's much better.

[Creative Writing Contest] The Legend of the Goldseeker

Thanks for the feedback! I think this is probably a failure of the story more than a failure of your understanding--after all, a story that's hard to understand isn't fulfilling its purpose very well. Jackson Wagner's comment below is a good summary of the main points I was intending to get across.

Next time I write, I'll try to be more clear about the points I'm trying to convey. 

[Creative Writing Contest] The Legend of the Goldseeker

"As tagged, this story strikes me as a fable intended to explain one of the mechanisms behind so-called "S-risks", hellish scenarios that might be a fate worse than the "death" represented by X-risks."

That's what I was going for, although I'm aware that I didn't make this as clear as I should have.

"Of course it's a little confusing to have the twist with the sentient birds -- I think rather than a literal "farmed animal welfare" thing, this is intended to showcase a situation where two different civilizations have very different values."

Same thing here. This is what I was trying to get at, but I couldn't think of many other scenarios involving suffering agents where one group of people cares and another doesn't.

"I don't really understand why the story is a frame story, or why the main purpose of the ritual is for all the Kunus to feel "collective guilt"... EA is usually trying to steer away from giving the impression that we want everyone to feel guilty all the time."

This is really helpful feedback--I didn't realize that "collective guilt" came across as the point of the story, and I definitely agree that making people feel guilty is counterproductive. I can't remember why I threw in that phrase (probably because I couldn't think of anything else), but I'll change it now. 

"Totally unrelated point, but I thought the economics of this story were a little wacky."

Yup, definitely more than a "little" wacky :) Maybe using another resource like food or water or land would be better--but then it would have been harder to make the point that each country thought they were doing the right thing.

"This is a good part of the parable -- if S-risks ever occur, the civilizations that commit those galactic war crimes will probably be convinced of their righteousness, and indeed probably won't even recognize that they are committing a wrong."

This is the central point that I wanted to get across. Whether we're considering a civilization or an advanced AI, s-risks need not result from intentional malevolence. I'm glad it didn't get too distorted, but it seems like there are better ways to build a story around this point.

Another side-note: a lot of the ideas behind this story are discussed in the Center on Long-Term Risk's research agenda. I don't know whether they would agree with my presentation or conceptualization of those ideas.

Thank you so much for the feedback!

[Creative Writing Contest] The Rise of The Effective Shoppers

Thanks! I'm glad you enjoyed it. The main reason I wrote this was to practice creative writing--and the Forum contest seemed to be a good place to do that. This is the first time I tried writing short stories--the only other creative writing piece I've published anywhere is this one, which I also wrote for the Forum contest: https://forum.effectivealtruism.org/posts/sGTHctACf73gunnk7/creative-writing-contest-the-legend-of-the-goldseeker

I hope that helps!

How to Train Better EAs?

I recently learned about Training for Good, a Charity Entrepreneurship-incubated project, which seems to address some of these problems. They might be worth checking out.

I think this is a great exercise to think about, especially in light of somewhat-recent discussion on how competitive jobs at EA orgs are. There seems to be plenty of room for more people working on EA projects, and I agree that it’s probably good to fill that opportunity. Some loose thoughts:

There seem to be two basic ways of getting skilled people working on EA cause areas:
1. Selectively recruiting people who already have the needed skills.
2. Recruiting promising people who might not yet have the needed skills and training them.

Individual organizations can pursue either option (or both), depending on their level of resources. But if most organizations choose option 1, the EA community might be underutilizing its potential pool of human resources. So we might want the community in general to use option 2, so that everyone who wants to be involved with EA can have a role—even if individual EA organizations still choose option 1. For this to happen, the EA community would probably need a program whereby motivated people can choose a skillset to learn, are taught that skillset, and are matched with a job at the end of the process.

Currently, motivated people who don’t yet possess the relevant skills are left to navigate a jumble of 1-on-1 conversations, 80k advising calls, and fellowship and internship listings. Having those calls and filling out internship and fellowship applications takes a ton of time and mental energy, and might leave people more confused than they were initially. A well-run training program could eliminate many of these inefficiencies and reduce the risk that interested people won’t be able to find a job in EA.

We can roughly rank skill-building methods by the number of people they reach (“scale”) and the depth of training they provide. In the list below, “high depth” means skill development that could lead to being hired for a role one wouldn’t otherwise have gotten, “medium depth” means development that warrants a promotion or an increase in seniority, and “low depth” means an enhancement of knowledge that helps someone perform their job better but probably won’t lead to new positions or higher status.

  • Internal development within organizations, like Aaron Gertler mentioned (small scale, medium depth)
  • Internship/fellowship programs (medium scale, medium depth)
  • One-off workshops and lectures (small scale, low depth)
  • Cause area-specific fellowships, like EA Cambridge's AGI Safety Fellowship (large scale, low depth) 
  • A training program like the one I described above (large scale, high depth)
  • An EA university, as proposed here (large scale, high depth)

If we choose option 2, we probably want large-scale, high-depth ways to train people. I’m interested in hearing people’s thoughts on whether this is a good way to evaluate skill-building methods.

One caveat: there’s a lot more interest in working for the military than there is in working for EA orgs. Since this interest already exists, the military just needs to capitalize on it (although they still spend lots of money on recruitment ads and programs like ROTC). The EA community doesn’t even have great name recognition, so it’s probably premature to assume that we’d have waves of people signing up for such a training program—but it’s possible that we could get to that point with time.

What we learned from a year incubating longtermist entrepreneurship

Thanks for this post! Reading through these lessons has been really informative. I have a few more questions that I'd love to hear your thinking on:

1) Why did you choose to run the fellowship as a part-time rather than full-time program?

2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?

3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?

4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?

5) Was there a reason you didn't have a public online presence, or was it just not a priority?

Should Chronic Pain be a cause area?

Thanks for the post, this is an important and under-researched topic. 

"Examples include some well-known conditions (chronic migraine, fibromyalgia, non-specific low-back pain), as well as many lesser-known ones (trigeminal neuralgia, cluster headache, complex regional pain syndrome)."

Some of these well-known chronic pain conditions can be hard to diagnose, too. Chronic pain conditions like fibromyalgia, ME/CFS, rheumatoid arthritis, and irritable bowel syndrome are frequently comorbid with each other, and may also be related to depression and mental health disorders. This overlap probably makes it harder for doctors to tease out the root cause of patients’ symptoms.

As an anecdote, a close relative spent around a year bouncing around various doctors before she got a useful diagnosis, and even then the recommended therapies didn’t help much. So far, her pain is managed best by a diet she found on the internet herself.

I speculate that conventional medicine’s relative lack of machinery for identifying and treating some of these chronic illnesses may cause some patients to turn to pseudoscience instead—which could be another downstream harm of neglecting chronic pain treatments. (I haven’t tried to look for evidence for/against this conclusion.) 

saulius's Shortform

This is an interesting idea. I'm trying to think of it in terms of analogues: you could feasibly replace "digital minds" with "animals" and achieve a somewhat similar conclusion. It doesn't seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible. 

Perhaps the reason we don't see this happening is that nobody really wants to influence vegans alone. There aren't many strategic reasons to target an unorganized group of people whose sole common characteristic is that they care about animals. There isn't much that an agent could gain from a threat.

I imagine the same might be true of digital minds. If it's anything similar to the animal case, moral circle expansion to digital minds will likely occur in the same haphazard, unorganized way--and so there wouldn't be much of a reason to specifically target people who care about digital minds. That said, if this moral circle expansion caught on predominantly in one country (or maybe within one powerful company), a competitor or opponent might then have a real use for threatening the digital mind-welfarists. Such an unequal distribution of digital mind-welfarists seems quite unlikely, though.

At any rate, this might be a relevant consideration for other types of moral circle expansion, too.
