mhendric

277 karma · Joined April 2018 · Posts: 1 · Comments: 31

It depends on what the final version looks like. Nick Laing makes some good suggestions; I could also see this going to e.g. a philosophy journal, given that longtermism and intercultural philosophy are both currently popular topics in philosophy. Doing so would require a somewhat less EA-ish framing, moving away from the pragmatic "how do we make people longtermists?" angle. Instead, it would require a careful exposition of longtermism, followed by exegetical work on how Muslim sources do or do not accord with it. Most of this latter work is already done quite well in this post.

If you want to follow up on this, feel free to shoot me a message.

Have you considered polishing this for publication in a peer-reviewed academic journal? You make a very interesting case, and I think it would be valuable to make it citeable in academic publications.

Depends entirely on your interests! They are sorted thematically here: https://ineffectivealtruismblog.com/post-series/

Specific recommendations if your interests overlap with Aaron_mai's: 1(a), on a tension between thinking X-risks are likely and thinking that reducing X-risk has astronomical value; 1(b), on the expected-value calculation in X-risk; and 6(a), a critical review of the Carlsmith report on AI risk.

Re the edit, you should definitely not feel embarrassed. A forum comment will often be a mix of a few sources and intuition rather than a rigorous review of all available studies. I don't think that necessarily gives it low epistemic status, especially when the purpose is to explore an idea rather than, say, to call for funding (which would require a higher standard of evidence). Not all EA discussions are literature reviews; otherwise chatting would be terribly cumbersome!
I'd recommend using your studies to explore these and other ideas! Undergraduate studies are a wonderful time to soak up a ton of knowledge, and I look back fondly on mine - I hope you'll have a similarly inspiring experience. Feel free to shoot me a PM if you ever want to discuss things.

That is interesting. I am not very familiar with Panksepp's work. That being said, I'd be surprised if his model (_these specific basic emotions_; these specific interactions of affect and emotion) were the only plausible option in current cogsci/psych/neuroscience.

 

Re "all values are affective", I am not sure I understand you correctly. There is a sense in which we use value in ethics (e.g. Not helping while persons are starving faraway goes against my values), and a sense in which we use it in psychology (e.g. in a reinforcement learning paradigm). The connection between value and affect may be clearer for the latter than the former. As an illustration, I do get a ton of good feelings out of giving a homeless person some money, so I clearly value it. I get much less of a good feeling out of donating to AMF, so in a sense, I value it less. But in the ethical sense, I value it more - and this is why I give more money to AMF than to homeless persons. You claim that all such ethical sense values ultimately stem from affect, but I think that is implausible - look at e.g. Kantian ethics or Virtue ethics, both of which use principles that are not rooted in affect as their basis.

 

Re: value learning at the fundamental level, it strikes me as a non-obvious question whether we are "born" with all the basic valenced states, with everything else just being a learning history of how states in the world have affected those basic valenced states; or whether there are valenced states that only get unlocked/learned/experienced later. Having a child is sometimes used as an example - maybe that just taps into existing kinds of valenced states, but maybe all those hormones flooding your brain do actually change something in a way that could not be experienced before.

Either way, I do think it may make sense to play around with the idea more!

Welcome to EA! I hope you will find it a welcoming and inspiring community.

 

I don't think the idea is ridiculous at all! However, I am not certain that 2. and 3. are true. It is unclear whether all our human values come from our basic emotions and affects (this would seem to exclude the possibility of value learning at the fundamental level; I take this to be still an open debate, and I know people doing research on it). It is also unclear whether emotions and affects, or something resembling them, are the only way of guaranteeing human values in artificial agents, even if they may be one way to do so.

David Thorstad, who worked at GPI, blogs about the reasons for his AI skepticism (and other EA critiques) here: https://ineffectivealtruismblog.com/

Unless education differs starkly from other disciplines, note that you would apply to all top programs, not just one. Applying to a single program would be unwise, as the chance of getting accepted, even as a top candidate, is rather low.

Yes, if you intend to become an Ivy League professor, you need a degree from a top-5 institution in your field. Note that "becoming an Ivy League professor in my field" is somewhat akin to "becoming a top athlete in my sport", and similarly competitive - just as most folks won't break into the NFL or NBA, most academics (even super smart or diligent ones) will not make it into an Ivy League professorship. That is not meant to discourage you from trying, but you should realize that the chances are slim even if you are really good.

Anyway, if that is the career path you choose, you should try to:
1. Excel in every course (4.0 GPA).

2. Make regular contact with your teachers (go to office hours to discuss the material, ask questions, etc.). Try to find mentors, and heed their advice.

3. Scour the web for resources on how grad applications in your chosen discipline work, and optimize for what they require.

The challenge is to do all this without losing your passion for the subject - which you should explore, love, and enjoy while studying. So I would strongly recommend against this career path unless you feel distinctly passionate about the subject.

I cannot speak to the career of influencers, but if you opt to take a shot at becoming a professor, the #1 priority should be to excel academically and take active steps towards getting into as highly ranked a graduate program as you can.
