guneyulasturker 🔸

Bio

Participation
4

22 years old, studying Philosophy with a double major in Business Administration.

Currently a sustainability consulting intern at PwC.

I prioritize existential risks right now, but I have not yet done my own rigorous career prioritization.

Please have an extremely low bar for reaching out; we can have a super casual 1:1.

Especially if you are in Turkey and just starting to get interested in EA, please write to me!!

My Career Aptitude Interests:

  • Organization building, running, and boosting aptitudes
  • Communicator aptitudes
  • Entrepreneur aptitude
  • Community building aptitudes

Currently building a career airbag so that I can do a moonshot project in a few years.

Comments
11

Are these disagreements representative of the general disagreements between people with long and short AI timelines?

I don't think you misunderstood me.

If you are not assuming long-term effects, the intervals below are only for short-term-ish effects. (This is what I took from your comment; please correct me if it's wrong.)

H=[100,200]×[−300,50]×[−50,50]

Isn't one of the main problems raised by deep moral uncertainty precisely whether an intervention is itself beneficial for the target group or not?

An intervention affects the far future, and far-future effects are generally more important than short-term effects. So even if our short-term analysis shows the intervention is beneficial, that doesn't mean it is net positive in aggregate. Thus we don't know the impact of the intervention.
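
To make that concrete (a hypothetical worked example with made-up numbers, not taken from your post): suppose the short-term effect is known to lie in [100, 200], but the far-future effect could lie anywhere in [−10,000, 10,000]. Then the aggregate effect lies in

[100, 200] + [−10,000, 10,000] = [−9,900, 10,200],

so a clearly positive short-term analysis tells us almost nothing about the sign of the total impact.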

I believe this was one of the main problems of moral uncertainty/cluelessness. What I don't get is how the model can resolve moral uncertainty if it does not take far-future effects into account.

I guess I didn't specify how far these sets of targets go into the future, so you could assume I'm ignoring far future moral patients, or we can extend the example to include them, with interventions positive in expectation for some sets of far future targets.

Is the bolded part even possible? Are there interventions that are highly likely to be positive for the target group in the very far future?

P.S.: Thank you for responding to my comment this fast even though the post is 5 years old :)

In the calculation below, it is assumed that we know at least one intervention for each cause area that has no negatives, so that we can use our funds to offset the negatives from other interventions with this net positive.

However, isn't this a hard assumption to make? I personally can't think of an intervention that is definitely positive in the long term. It feels like there should be question marks instead of numbers, due to the complexity of everything. Am I missing something?
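
To spell out the compensation argument as I understand it (again with made-up numbers, purely as an illustration): if intervention A's effect lies in [50, 100] (no possible negatives) and intervention B's lies in [−30, 80], then funding both gives an aggregate effect in

[50, 100] + [−30, 80] = [20, 180],

which is guaranteed positive, since A's worst case covers B's worst case. But if A's long-term effect could itself be negative, say [−40, 100], the aggregate becomes [−70, 180] and the guarantee disappears. That is exactly what I find hard to assume.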

  • Write 12 Forum posts (one for each month)
  • Get As in all of my classes
  • Find a cool summer internship
  • Be done with my cause prio

Maintain the good habits and positive mental health I’ve been keeping up for some time

This is extremely relatable. Thank you for sharing it.  

Thank you for the inspiring post.

I’d like to introduce projects, buffer time, and graduation to my group as well. Could you share some examples of the projects? I didn’t quite get them from the post, and I’d love to see them in more detail. Here’s my email: gturker21@ku.edu.tr

Also, to avoid losing some people due to the high commitment, projects could be optional: participants choose whether they want to be in a project-based or a reading-based group.

Small note: in my cohorts, a small group size (4-6) was much better for conversational nuance than a bigger one (6-8). When the group is small, it's easier to feel comfortable, and the discussion was much deeper.

Are there any updates on your group? How did you select your BMs, and how did it go in the end?

Thank you for this post; it really resonated with me.

I think people get recruited into EA relatively fast: go to an intro fellowship, attend an EAG/retreat, take the ideas seriously, plan your career…

This process is too fast to actually grasp the complex ideas needed for good cause prioritization. After this 30+ hour EA learning phase, you drop the “learning mindset,” start taking the ideas seriously, and act on the top problems. I was surprised by the amount of deference at my first EA event. Even a lot of “experienced EAs” did not have even a rough understanding of really serious topics like AGI timelines and their implications.

Anyway, I was planning to do a “Prioritization Self-Internship,” where I study prioritization full-time for a month, and this post made me take that idea more seriously.

People get internships to explore really niche and untransferable stuff. Why not do an internship on something more important, neglected, and transferable like working on prioritization?

I’ll just read, write, and get feedback full-time. At the end, I want to have a visual map of arguments showing all of my mental “turns” and how I ended up at my conclusion. I’d also be able to update my conclusion easily by adding new information to the map, and the map would make it easy to get criticism as well.
