Aman Patel

Comments

Institution design for exponential technology

Thanks for posting your attempt! It does seem like you ran into some of those issues, and it's useful information to know that this task is very hard. I guess one lesson here is that we probably won't be able to build perfect institutions on the first try, even in safety-critical cases like AGI governance.

Institution design for exponential technology

Just stumbled upon this post--I like the general vein in which you're thinking. Not sure if you're aware of it already, but this post by Paul Christiano addresses the "inevitable dangerous technology" argument as it relates to AI alignment. 

 - "First-principles design is intractable and misses important situation-specific details" - This could easily be true, I don't have a strong opinion on it, just intutions.

I think this objection is pretty compelling. The specific tools that an institution can use to ensure that a technology is deployed safely will ultimately depend on the nature of that technology itself, its accessibility/difficulty of replication, the political/economic systems it's integrated into, and the incentives surrounding its deployment. (Not an exhaustive list.)

Usually, any type of regulation or "responsible power-wielding" comes with tradeoffs (to freedom, efficiency, equitability, etc.), and it'll be hard to assess whether accepting these tradeoffs is prudent without a specific technology in mind.

That said, I think it can still be a worthwhile exercise to think about how we can build governance practices that are robust to worst-case scenarios for all of the above. I can imagine some useful insights coming out of that kind of exercise!

EA culture is special; we should proceed with intentionality

Thanks, great points (and counterpoints)!

If you are a community builder (especially one with a lot of social status), be loudly transparent with what you are building your corner of the movement into and what tradeoffs you are/aren’t willing to make.

I like this suggestion--what do you imagine this transparency looks like? Do you think, e.g., EA groups should have pages outlining their community-building philosophies on their websites? Should university groups write public Forum posts about their plans and reasoning before every semester/quarter or academic year? Would you advocate for more community-building roundtables at EAGs? (These are just a few possible forms of transparency that came to mind; I'm very interested in hearing more.)

A hypothesis for why some people mistake EA for a cult

Yeah, I've had several (non-exchange) students ask me what altruism means--my go-to answer is "selflessly helping others," which I hope makes it clear that it describes a practice rather than a dogma. 

A hypothesis for why some people mistake EA for a cult

Thanks for the comment! I agree with your points--there are definitely elements of EA, whether they're core to EA or just cultural norms within the community, that bear a stronger resemblance to cult characteristics. 

My main point in this post was to explore why someone who hasn't interacted with EA before (and might not be aware of most of the things you mentioned) might still get a cult impression. I didn't mean to claim that the Google search results for "altruism" are the most common reason why people come away with a cult impression. Rather, I think that they might explain a few perplexing cases of cult impressions that occur before people become more familiar with EA. I should have made this distinction clearer, thanks for pointing it out :)

Which Post Idea Is Most Effective?

Hey Jordan! Great to see another USC person here. The best writing advice I've gotten (that I have yet to implement) is to identify a theory of change for each potential piece--something to keep in mind!

6 sounds interesting, if you can make a strong case for it. Aligning humans isn't an easy task (as most parents, employers, governments, and activists know very well), so I'm curious to hear if you have tractable proposals.

7 sounds important given that a decent number of EAs are vegan, and I'm quite surprised I haven't heard of this before. 15 IQ points is a whole standard deviation, so I'd love to see the evidence for that.

8 might be interesting. I suspect most people are already aware of groupthink, but it could be good to be aware of other relevant phenomena that might not be as widely-known (if there are any).

From what I can tell, 11 proposes a somewhat major reconsideration of how we should approach improving the long-term future. If you have a good argument, I'm always in favor of more people challenging the EA community's current approach. I'm interested in 21 for the same reason.

(In my experience, the answer to 19 is no, probably because there isn't a clear, easy-to-calculate metric to use for longtermist projects in the way that GiveWell uses cost-effectiveness estimates.)

Out of all of these, I think you could whip up a draft post for 7 pretty quickly, and I'd be interested to read it!

What are some heuristics for longtermist project evaluation?

Thanks Linch! This list is really helpful. One clarifying question on this point: 

Relatedly, what does the learning/exploration value of this project look like?

  1. To the researcher/entrepreneur?
  2. To the institution? (if they're working in an EA-institutional context)
  3. To the EA or longtermist ecosystem as a whole?

For 1) and 2), I assume you're referring to the skills gained by the person/institution completing the project, which they could then apply to future projects. 

For 3), are you referring to the possibility of "ruling out intervention X as a feasible way to tackle x-risks"? That's what I'm assuming, but I'm just asking to make sure I understand properly.

Thanks again!

High School Seniors React to 80k Advice

This thinking has come up in a few separate intro fellowship cohorts I’ve facilitated. Usually, somebody tries to flesh it out by asking whether it’s “more effective” to save one doctor (who could then be expected to save five more lives) or two mechanics (who wouldn’t save any other lives) in trolley-problem scenarios. This discussion often gets muddled, and many people have the impression that “EAs” would think it’s better to save the doctor, even though I doubt that’s a consensus opinion among EAs. I’ve found this to be a surprisingly large snag point that isn’t discussed much in community-building circles.

I think it would be worth it to clarify the difference between intrinsic and instrumental value in career advice/intro fellowships/other first interactions with the EA community, because there are some people who might agree with other EA ideas but find that this argument undermines our basic principles (as well as the claim that you don’t need to be utilitarian to be an EA). Maybe we could extend current messaging about ideological diversity within EA.

That said, I read Objection 4 differently. Many people (especially in cultures that glorify work) tie their sense of self-worth to their jobs. I don't know how universal this is, but at least in my middle-class American upbringing, there was a strong sense that your career choice and achievement are a large part of your value as a person. 

As a result, some people feel personally judged when their intended careers aren’t branded as “effective”. If you equate your career value with your personal value, you won’t feel very good if someone tells you that your career isn’t very valuable, and so you’ll resist that judgment.

I don’t think that this feeling precludes people from being EAs. It takes time to separate yourself from your current or intended career, and Objection 4 strikes me as a knee-jerk defensive reaction. Students planning to work in shipping logistics won’t immediately like the idea that the job they’ve been working hard to prepare for is “ineffective,” but they might come around to it after some deeper reflection. 

I could be misreading Objection 4, though. It could also mean something like “shipping logistics is valuable because the world would grind to a halt if nobody worked in shipping logistics,” but then that’s just a variant of Objection 5.

I’m very curious to know more about the sense in which these students gave Objection 4. 

[Creative Writing Contest] The Legend of the Goldseeker

Changed "guilt" to "responsibility," but I'm not sure if that's much better.

[Creative Writing Contest] The Legend of the Goldseeker

Thanks for the feedback! I think this is probably a failure of the story more than a failure of your understanding--after all, a story that's hard to understand isn't fulfilling its purpose very well. Jackson Wagner's comment below is a good summary of the main points I was intending to get across.

Next time I write, I'll try to be more clear about the points I'm trying to convey. 