akash

326 karma · Joined Sep 2022 · Pursuing a graduate degree (e.g. Master's)

Participation
5

  • Organizer of Tucson Effective Altruism
  • Attended an EAGx conference
  • Received career coaching from 80,000 Hours
  • Attended an EA Global conference
  • Completed the ML Safety Scholars Virtual Program

Posts
2


Comments
46

I think that a human being in a constant blissful state might endanger someone's existence or make them non-functional

But if pure suffering elimination were the only thing that mattered, no one would be endangered, right? I am guessing there are some other factors you account for when valuing human lives?

which isn't much of an issue for a farm animal.

I suspect we have very different ethical intuitions about the intrinsic value of non-human lives.

But even from an amoral perspective, this would be an issue because if a substantial number of engineered chickens pecked each other to death (which happens even now), it would reduce profitability and uptake of this method.

The second-order considerations are definitely a problem once there is more widespread adoption. If only 0.001% of the population is using genetic enhancement, there is very little in the way of collective action problems.

I partially agree, but even a couple of malevolent actors who enhance themselves considerably could cause a large amount of trouble. See this section of Reducing long-term risks from malevolent actors.

If it is indeed possible to modify animal minds to such an extent that we would be 100% certain that previously unpleasant experiences are now blissful, then couldn't we extend this logic and "solve" every single problem? Like making starvation, poverty, disease, and extinction blissful as well?

I feel there are crucial moral and practical (e.g., second-order effects) considerations to account for here.

Fascinating. I skimmed his Wikipedia page and this video, and I think he is 100% serious. He even wrote a paper with Sandberg and Roache arguing the same.

I posted this because it is an inside joke at our university group, but I appreciate that some professional philosophers have given it a more serious treatment.

Such rich literature! I think the major flaw in their methodology is lack of coordinated, incremental scaling (which seems to be the reason why the test subject faced quite a bit of trouble). That said, it still reinforces the arguments of the proposal above, so thank you for sharing these!

I was skeptical, and then I saw the menu. 

If Dustin wants to further diversify his investment portfolio, this might be a great choice.


David Nash's Monthly Overload of Effective Altruism seems highly underrated, and you should probably give it a follow.

I don't think any other newsletter captures and highlights EA's cause-neutral impartial beneficence better than the Monthly Overload of EA. For example, this month's newsletter has updates about Conferences, Virtual Events, Meta-EA, Effective Giving, Global Health and Development, Careers, Animal Welfare, Organization updates, Grants, Biosecurity, Emissions & CO2 Removal, Environment, AI Safety, AI Governance, AI in China, Improving Institutions, Progress, Innovation & Metascience, Longtermism, Forecasting, Miscellaneous causes and links, Stories & EA Around the World, Good News, and more. Compiling all this must be hard work!

Until September 2022, the monthly overloads were also posted on the Forum and received higher engagement than the Substack. I find the posts super informative, so I am giving the newsletter a shout-out and putting it back on everyone's radar!

What do you think is the reason behind such major growth? What are they doing differently that GWWC or other EA orgs could adopt?

  1. I think it would have been better if you had distilled your responses; much of the 80K career sheet is meant to guide you towards next steps and clarify your priorities and preferences, so the initial set of questions may be somewhat redundant. The post right now is hard to parse.
    1. If I had to guess, this may be the reason behind the downvotes, although I am unsure.
  2. I see somewhere around 4-6 career directions right now. Since you have a few years of financial runway and since you stated that "Exploration. I don't know what I'm going to do as a career," it might be worth meticulously planning out the next 6-12 months to explore the different options you are considering.
    1. SWE: Do you have prior coding experience? If yes, how did you like programming, and how good were you at it? If not, have you checked out short programs that will help you learn the basics of programming quickly and also gauge whether you enjoy it and are adept at it?
      1. Being a SWE is more than being a programmer, but programming is a necessary first step.
    2. Safety: Are you interested in technical safety? If yes, do you enjoy programming, math, and research to a considerable degree? Are you also open to policy/governance roles? What about being an operations person at a safety org?
    3. Journalism: Do you have prior experience with research and writing, and do you enjoy them? If not, writing some sample pieces and getting feedback from friends/strangers who will be blunt about the quality and depth of your writing would help.
    4. Landlord/personal trainer/psychology: These might be the easiest for you given your financial situation and because you already have relevant work experience. That said, since effective giving will be your primary pathway to impact in this case:
      1. It would be worth spending lots of time learning about effective giving,
      2. Choosing which cause/interventions you want to donate to, and
      3. Maximizing the amount of money you can donate.

How did I miss this update? Either way, thank you for sharing!

What happened to US Policy Careers?

They had several in-depth, informative articles. It would be a shame if they are off the Forum and there is no way to access them.
