aman-patel

Comments

What we learned from a year incubating longtermist entrepreneurship

Thanks for this post! Reading through these lessons has been really informative. I have a few more questions that I'd love to hear your thinking on:

1) Why did you choose to run the fellowship as a part-time rather than full-time program?

2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?

3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?

4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?

5) Was there a reason you didn't have a public online presence, or was it just not a priority?

Should Chronic Pain be a cause area?

Thanks for the post, this is an important and under-researched topic. 

Examples include some well-known conditions (chronic migraine, fibromyalgia, non-specific low-back pain), as well as many lesser-known ones (trigeminal neuralgia, cluster headache, complex regional pain syndrome).

Some of these well-known chronic pain conditions can be hard to diagnose, too. Conditions like fibromyalgia, ME/CFS, rheumatoid arthritis, and irritable bowel syndrome are frequently comorbid with each other, and may also be linked to depression and other mental health disorders. This overlap probably makes it harder for doctors to tease out the root cause of a patient's symptoms.

As an anecdote, a close relative spent around a year bouncing between doctors before she got a useful diagnosis, and even then the recommended therapies didn't help much. So far, her pain has been managed best by a diet she found on the internet herself.

I speculate that conventional medicine's relative lack of tools for identifying and treating some of these chronic illnesses may drive some patients toward pseudoscience instead--which could be another downstream harm of neglecting chronic pain treatment. (I haven't tried to look for evidence for or against this conclusion.)

saulius's Shortform

This is an interesting idea. I'm trying to think of it in terms of analogues: you could feasibly replace "digital minds" with "animals" and reach a somewhat similar conclusion. It doesn't seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could plausibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible.

Perhaps the reason we don't see this happening is that nobody really wants to influence vegans in particular. There aren't many strategic reasons to target an unorganized group of people whose sole common characteristic is that they care about animals, so there isn't much an agent could gain from such a threat.

I imagine the same might be true of digital minds. If it's anything like the animal case, moral circle expansion to digital minds will likely occur in the same haphazard, unorganized way--and so there wouldn't be much reason to specifically target people who care about digital minds. That said, if this moral circle expansion caught on predominantly in one country (or perhaps within one powerful company), a competitor or opponent might then have a real use for threatening the digital-mind welfarists. Such an uneven distribution of digital-mind welfarists seems quite unlikely, though.

At any rate, this might be a relevant consideration for other types of moral circle expansion, too.

Introducing a project on accountability in governance, plus a call for volunteers

Thanks for the tip! I'll try contacting him through the website you linked--it would be great to hear more from people who have attempted this sort of project before.

AMA: Owen Cotton-Barratt, RSP Director

How do you think the EA community can improve its interactions and cooperation with the broader global community, especially those who might not be completely comfortable with the underlying philosophy? Do you think it's more of a priority to spread those underlying arguments, or simply to grow the network of people sympathetic to EA causes, even if they disagree with EA's principles?

Open and Welcome Thread: August 2020

Hi everyone! I'm Aman, an undergrad at USC currently majoring in computational neuroscience (though that might change). I'm very new to EA, so I haven't yet had the chance to be involved with any EA groups, but I would love to start participating more with the community. I found EA after spending a few months digging into artificial general intelligence, and it's been great to read everyone's thoughts about how to turn vague moral intuitions into concrete action plans.

I have a soft spot for the standard big-picture philosophy/physics topics, like the nature of intelligence, meta-ethics, epistemology, and theories of everything, but also for wildly unpragmatic questions (like whether we might consider directing ourselves into a time loop once heat death comes around, if that's even possible).

As a career, I tentatively want to focus on improving global governance capacity, since I'm inclined to think that it might ultimately determine how well EA-related research and prioritization can be implemented (and also how well we can handle x- and s-risks and capitalize on safe AI). I realize that this is probably one of the least tractable goals to have, so I might end up working in another area, like international development, mental health, science policy, or something else entirely. Amusingly, all the EA career advice out there has only made me more confused about what I should be doing (but I'm probably approaching it wrong).

Anyway, I'm excited to be here and grateful for the opportunity to start interacting with the EA community!