
Epistemic Status

Rough and unpolished. I might refine it later.


Introduction

I want to optimise something like "expected positive impact on a brighter world". Probably, the best way I can do this is through direct work rather than earning to give. I think I have a substantial intellectual endowment (more on that later), and I'm quite privileged (more on that later as well). So, I should choose the career plan that maximises the expected utility of my positive contribution to a brighter world, conditional on the space of possible people I can become.

Near term (within the next 30 years), I want to pursue a career trajectory as an AI safety researcher and a "radical transhumanist thinkfluencer". For the AI safety research part, I'm currently learning abstract maths (currently category theory) [maybe I'll write another post motivating that, but I basically want to try my hand at Agent Foundations style research, and I think I have the intellectual endowment for this to have considerable positive value in expectation]. I'll be starting a CS Masters at the end of September (if my student visa is granted), then I'll probably take a gap year (probably for some intensive learning project) before pursuing a PhD in CS/AI/computational neuroscience (not mathematics; I'm under the impression I can autodidact any mathematics I find myself compelled to learn).

I'm 24 now, so I'm hoping to start my career trajectory at 32 (8 years forms a natural/compelling Schelling point: at 24 I'm quite a different person from the person I was at 16, and he is in turn a remarkably different person from the person I was at 8. I can thus expect to be a pretty different person at 32. "Who is the person I want to be at 32? What do I need to do to become that person?").

I quit my job as a web developer at the end of July. I don't plan to return to software development (I found it frustrating, and I think it's neither my absolute nor comparative advantage).

 

Interlude

To disambiguate what I mean by "radical transhumanist thinkfluencer" a bit, I want to help sell the following ideas:

  • The current state of the world is very suboptimal
  • Vastly better world states are possible
  • We can take actions that would make us significantly more likely to reach those vastly better states
  • We should do this
    • I'd like to paint concrete and coherent visions for a much brighter future (not concrete utopias, but general ways that we can make the world much better off)
      • Paretopian outcomes
    • I want to get people excited about such a future as something we should aspire to and work towards.
  • Here are things we can do to reach towards that future

I'd like to convince people positioned to have a large positive influence on the world, or to attain the leverage to have such an influence.

 

Background

I've discovered that I basically can't effectively study maths for more than 4 - 6 hours a day (I've somewhat slacked on this over the past month or two, but I've not abandoned the project of studying maths [it may even be the case that the reason I'm slacking is that 3 - 4 hours is too much for my natural mental stamina for doing maths]). 

My mental stamina for learning non-maths stuff seems to draw from a mostly different reserve and to be 2x - 3x larger (I may have done 8+ hours of audiobooks/podcasts over the past couple of days? This wasn't due to a deliberate target; I basically listen to music all my waking hours and I've decided to swap out my music for informational audio [at least until I exhaust my mental stamina]).

So, I have a lot of time I can use for learning but can't redirect into learning more maths, so I might as well use it to try to build intellectual capital for becoming a radical transhumanist thinkfluencer.

Thus, I decided to start a new project: building a comprehensive, rich, and coherent world model of human civilisation (and of the world we inhabit).


Motivations

I'm quite economically privileged (relative to others in my age range in my country). My parents are pretty well off (and can afford to fund me through a Masters programme at a Russell Group university), and I can afford to leech off them somewhat longer. It is not the case that I need to start a career anytime soon to survive. Leeching off them would be distasteful and annoying, but the costs seem worth it.

I'm very epistemically privileged. I've been in the rationalist community since 2017, I've absorbed very good epistemic memes, and I know what to do to get even better epistemics. I think I can become someone with exceptional epistemics.

I am intellectually privileged. I have high quantitative and verbal aptitude. I was excellent at mathematics in high school (I've let my aptitude rust in the 8 years since, but I've started learning mathematics again, and I think I basically have the ability to learn any mathematics I put my mind to). This is mostly relevant for the agent foundations style AI safety research I want to do, but the quantitative aptitude will also be useful for making sense of the world.

I expect that I can become a prolific writer. I have been a prolific writer at various points in the past (I just don't think such writing was valuable, so I won't link it here; it's probably worth learning enough that such writing would become very valuable).

 

I think there's a chronic undersupply of people who:

  • Have rich and comprehensive world models
  • Have high quantitative aptitude
  • Have exceptionally good epistemics
  • Are prolific writers

 

I believe such people provide considerable value to the world (and specifically to the project of improving the world).

 

I think that I am unusually positioned to become such a person. The main thing that might prevent me is burnout/losing motivation, but posting about it here makes me more likely to follow through (I don't want to disappoint people who believe in me, and their encouragement provides motivation to push forwards [I do have intrinsic motivation, but supplementing it with extrinsic motivation seems good]).


Approach


Topics

This is a non-exhaustive list of topics I'd hope to cover at some point for the purpose of becoming a thinkfluencer. Things I'd be learning primarily to do AI safety research won't be covered here.

Other mathematics/computer science/statistics I'll be learning for other reasons also won't be covered here (I have a pretty extensive list and I think I already know what I need to learn here).

 

Less Quantitative

  • Existential security
  • Moral philosophy
    • Moral uncertainty
    • Longtermism
    • Meta ethics
  • Hinginess
  • Anthropology
  • Macro history
  • Psychology
    • Cognitive
    • Evolutionary
  • Evolutionary biology
  • History and philosophy of science
  • History and philosophy of technological innovation
  • Progress studies more generally
  • Political theory
  • Memetics (in the Dawkins sense of "meme")
    • How ideas spread

 

More Quantitative

  • Epistemics
  • Anthropics
  • Forecasting
  • Decision and game theory
  • Micro and macro economics
  • Behavioural economics
  • Causality
  • Statistical thinking/modeling/analysis
  • Complex systems
  • Chaos theory
  • Physics
  • Chemistry

 

I expect to spend more time on the quantitative topics in general, because quantitative fields require more effort from me. But I doubt I'll spend more than 3 months on any one of them.


Conclusions

I'd appreciate feedback on my general plans/approach and on particular topics that you think I should add to my list (or remove from it).

Comments

I like your overall ambitions! I want to note a couple of things that seemed incongruous to me/things I'd change about your default plan.

I'm 24 now, so I'm hoping to start my career trajectory at 32 (8 years forms a natural/compelling Schelling point

This seems like very much the wrong mindset. You're starting this trajectory now. In order to do great intellectual work, you should be aiming directly at the things you want to understand, and the topics you want to make progress on, as early as you can. A better alternative would be taking the mindset that your career will end in 8 years, and thinking about what you'd need to produce great work by that time. (This is deliberately provocative, and shouldn't be taken fully literally, but I think points in the right direction, especially given that you're aiming to do research where the credentials from a PhD that's successful by mainstream standards don't matter very much, like agent foundations research and more general high-level strategic thinking).

Pick a new important topic each month (or 2 - 3 months)

Again, I'd suggest taking quite a different strategy here. In order to do really well at this, I think you don't want the mindset of shallowly exploring other people's work (although of course it's useful to have that as background knowledge). I think you want to have the mindset of identifying the things which seem most important to you, pushing forward the frontier of knowledge on those topics, following threads which arise from doing so, and learning whatever you need as you go along. What it looks like to be successful here is noticing a bunch of ways in which other people seem like they're missing stuff/overlooking things, digging into those, and finding new ways to understand these topics. (That's true even if your only goal is to popularise existing ideas - in order to be able to popularise them really well, you want the level of knowledge such that, if there were big gaps in those ideas, then you'd notice them.) This is related to the previous point: don't spend all this time preparing to do the thing - just do it!

I think that I am unusually positioned to be able to become such a person.

I think that doing well at this research is sufficiently heavy-tailed that it's very hard to reason your way into thinking you'll be great at it in advance. You'll get far far more feedback on this point by starting to do the work now, getting a bunch of feedback, and iterating fast.

Good luck!

I plan to post my reports on LessWrong and the Effective Altruism forum

Why would posting mainly in these tiny communities be the best approach? First, I think these communities are already far more familiar with the topics you plan to publish on than the average reader. Second, they are – as I said – tiny. If you want to be a public intellectual, I think you should publish where public intellectuals generally publish. This is usually a combination of books, magazines, journals, and your own platforms (e.g. personal website/blog, social media etc.)

You could probably improve on your plan by making a much more in-depth analysis of what your exact goals are and what your exact audiences are. It seems to me a few steps are missing in this statement:

I believe such people provide considerable value to the world (and specifically to the project of improving the world).

What would probably be useful is, in a sense, a theory of change on how doing the things you want to do lead to the outcomes you want.

If you do decide to go ahead with this plan, I would also focus a lot on this part:

In contrast, I am quite below average on conscientiousness and related traits like diligence, perseverance, willpower, "work ethic", etc.

You are going to need those in the massively competitive landscape you aim for.

Why would posting mainly in these tiny communities be the best approach? First, I think these communities are already far more familiar with the topics you plan to publish on than the average reader. Second, they are – as I said – tiny. 

The reports are for my "learning about the world" phase, not attempts at becoming a public intellectual. 

As for why LW/EAF:

  • Feedback from my communities matters more for sustaining my motivation than feedback from randoms
  • I'm more likely to get valuable feedback from these communities than from others, especially because they are more familiar with these ideas and have excellent epistemics
  • I don't want to delay my report writing and such by adding the extra burden of setting up a blog
  • The rationalist/EA communities provide a natural audience for the reports
  • Feedback will likely be faster
  • I intend to start out writing for rats/EAs and the rat/EA-curious
    • At least until I shift from writing about stuff I'm learning to giving more original takes
  • I may want to work for, or apply for funding from, EA organisations, so having a history of useful writing would be helpful
  • Etc.

 

 

Second, they are – as I said – tiny. If you want to be a public intellectual, I think you should publish where public intellectuals generally publish. This is usually a combination of books, magazines, journals, and your own platforms (e.g. personal website/blog, social media etc.)

Eventually, I'll do that. But I'll start out as a rationalist blogger before broadening my audience.

 

You could probably improve on your plan by making a much more in-depth analysis of what your exact goals are and what your exact audiences are. It seems to me a few steps are missing in this statement:

I believe such people provide considerable value to the world (and specifically to the project of improving the world).

What would probably be useful is, in a sense, a theory of change on how doing the things you want to do lead to the outcomes you want.

I think I'd eventually want to write articles/books directed at broader audiences/the intellectual public/people interested in improving the world. Well, I'm hoping to change the minds of important people, I guess.

 

I want to help sell the following ideas:

  • The current state of the world is very suboptimal
  • Vastly better world states are possible
  • We can take actions that would make us significantly more likely to reach those vastly better states
  • We should do this
    • I'd like to paint concrete and coherent visions for a much brighter future (not concrete utopias, but general ways that we can make the world much better off)
      • Paretopian outcomes
    • I want to get people excited about such a future as something we should aspire to and work towards.
  • Here are things we can do to reach towards that future

 

I'd like to convince people positioned to have a large positive influence on the world, or to attain the leverage to have such an influence.

 

You are going to need those in the massively competitive landscape you aim for.

Yeah, probably. But they're needed in general to improve the world, I think.

May I ask why you started by learning category theory?

As far as I've heard, learning category theory makes the most sense if one already knows a lot of mathematics, because it establishes equivalences between different parts of mathematics. I think humans learn somewhat better going from examples to abstract patterns rather than the other way around, so I'd personally have put category theory relatively late in learning mathematics.

But maybe your mind works differently from most humans' in that regard?

May I ask why you started by learning category theory?

I started it on a whim (someone linked a LW post on it on Twitter) and found it engaging enough to stick with. I don't want to quit it, because I'm trying to break my habit of abandoning projects I start. It's also not the case that I'm finding it too difficult to progress. I do think my progress is slow, but a more mathematically literate friend disagreed (from their perspective, I was going pretty fast), so I think I should stick with the project, so that I can form a habit of following my projects through to completion.


I like learning about abstract stuff. I think I would find category theory easier if I knew other abstract maths, but I can always use the concrete example of Set to think about a concept, and then imagine the category-theoretic concept as a generalisation of the appropriate set-theoretic concept.
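
(As a concrete illustration of that strategy — my own example, not something from the original exchange — take the categorical product. In Set it is just the familiar cartesian product with its two projections; the general definition throws away the elements and keeps only the arrows and the universal property:)

\[
\text{A product of } A, B \in \mathcal{C} \text{ is an object } A \times B \text{ with } \pi_A : A \times B \to A,\ \pi_B : A \times B \to B
\]
\[
\text{such that } \forall\, f : X \to A,\ g : X \to B\ \ \exists!\ \langle f, g\rangle : X \to A \times B \ \text{ with }\ \pi_A \circ \langle f, g\rangle = f,\ \ \pi_B \circ \langle f, g\rangle = g.
\]
\[
\text{In } \mathbf{Set}:\quad A \times B = \{(a,b) \mid a \in A,\ b \in B\},\quad \pi_A(a,b) = a,\quad \pi_B(a,b) = b,\quad \langle f, g\rangle(x) = (f(x), g(x)).
\]

(The Set version is the familiar pairing of elements; the categorical version keeps only the arrows and the uniqueness condition, which is exactly the "generalise from the set-theoretic concept" move described above.)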

Is promotion to the Frontpage automated? Do mods approve by default and then demote stuff they think is not worth promoting?

I'm used to the LessWrong approach, where posts are personal blogposts by default and mods manually approve promotion to the front page. I'm not sure I want to make the decision for the community about whether my post should be on the front page. But I do want it promoted to the front page if the mods approve it.

I feel like promotion to the front page by default will disincentivise demoting a post to personal blog once it has high engagement, irrespective of whether it's otherwise frontpage material.

As I understand it, posts are frontpage by default unless you or a mod decide otherwise.
