I am looking for a mentor or accountability partner so I can do independent EA-aligned research. I would also love it if our relationship went beyond this, such as co-researching, but this is optional.
If posting such requests on the forum is inappropriate, please let me know.
I have spent time looking for mentors privately, but EA is currently fairly mentorship-constrained, and I haven't succeeded.
Preferably, someone with significant exposure to existing EA-aligned research on x-risk and longtermism, including research that is not specifically on AI alignment.
They should be willing to dedicate at least 15 minutes every 2-3 days.
Please contact me even if you have only a little (but non-zero) exposure to EA research on these topics. I would prefer that over my current situation of having no mentor at all.
What should I be looking for?
If you think I would benefit more from looking for something narrower, broader, or otherwise different from what I am currently looking for, please let me know.
I am currently studying engineering at IIT Delhi, completing my fourth year of a 5-year BTech+MTech degree. I discovered EA last year; before that I was involved in the cryptocurrency space for 1.5 years. Please find my CV here.
My ideal job after I graduate would be direct research work at an x-risk/longtermist org. I felt doing independent research work would help me get there.
I also believed it would help me improve at research and research writing. And of course, I care about solving the problems I wish to research.
Why am I looking for mentorship?
I have found it fairly hard to stay motivated on independent research. Nevertheless, I feel I have made noticeable progress since I started out, and hence would much rather get better at it than quit. ("Noticeable progress" is relative to my baseline of zero, rather than relative to people who are already experts on these topics.)
I have especially found writing difficult. I feel I have formed primitive models of various topics, but a) I don't have sufficiently high confidence that they can't be improved on, and b) I generally find writing them down hard.
I generally find writing much easier when writing to someone, so I wonder if that also applies in this circumstance.
If you are willing to help me, please message me on the forum itself, or email me at email@example.com
I realise one problem is that I have not narrowed down my research topic sufficiently.
In the broadest sense, I wish to study:
- the set of all theoretically possible social technologies, and especially those that could be useful. Social technologies here are broadly defined as any means by which people self-organise, including cultures, norms, groups, and bureaucracies. I am especially interested in understanding how bureaucracies function.
I am tentatively a huge fan of Samo Burja's work, and influenced by his writings.
I would also love to read from many other sources.
I may also need to read from sources that people with a background in social sciences consider obviously worth reading, as I do not have a formal social sciences background.
I am sympathetic to applying a Thielian analogue to social technologies - namely that few social technologies are thriving, most people make copies of the ones that thrive, and there is value in zero-to-one inventing new social technologies.
Reasons EA should probably study bureaucracies
I am not certain all these reasons are valid, but I am confident at least some are.
Related to AGI and other existentially powerful tech
- Creating institutions with desirable properties* for a world to ban AI research (#)
- Creating institutions with desirable properties* for a world to use aligned AI (assuming alignment is solved)
- Creating institutions with desirable properties* for a world that wishes to ban other forms of research (for instance, if building AGI or uploads turns out to be fundamentally intractable and other technologies become more concerning).
*Desirable properties can include some or all of: stable values over the long term, democratic or at least non-authoritarian, non-extremist, not prone to accidents, self-limiting to their domain of action, and monopolising (so that bad actors don't get access to the same tech).
Related to totalitarian govts using near-term AI and other near-term technology
- Predicting how existing totalitarian govts, such as those of China and Russia, could snowball into stable totalitarian govts. Knowing how to assess their stability and their likelihood of descending into further stability. (#)
- Knowing how to compare and prioritise interventions that work against this, including but definitely not limited to technological interventions. (#)
- Understanding how the digital world reshapes this landscape (#)
- Predicting how countries with currently democratic govts could yet fall into soft or hard forms of totalitarian control. Knowing how to assess the likelihood and stability of this, and how to compare and prioritise interventions that work against it.
Related to systemic change
- Understanding the different kinds of interventions that indirectly help with systemic change, or that target change in specific bureaucratic roles or functions, such as interventions that promote better epistemics, cognitive enhancements, or ways to reduce or increase the virality of different messages. Understanding cause prioritisation among them.
- Understanding whether EA communities and institutions are optimally designed, and what new communities, institutions, and behaviours they could design
- Understanding principal-agent problems inside the EA community, such as between leaders and followers.
(#) Hash-marked points are the ones I am most keen on studying, although I am generally interested in almost all points here.