

To PA or not to PA?

I reasonably often get asked about the value of executive assistants and other support staff. My estimate is that me + executive assistant is about 110%-200% of the value of me alone. 

The range is so wide because I feel very unsure about increasing vs. diminishing returns. If having an ExA is equivalent to doing (say) 20% more work in a week, does that increase the value of a week by more or less than 20%? My honest guess is that, for many sorts of work we're doing, the "increasing returns" model is closer to the truth, because so many sorts of work have winner-takes-all or rich-get-richer effects. The most widely-read books or articles get read far more than slightly-worse books or articles; the public perception of an academic position at Oxford is much more favourable than of a position at UCL, even though getting the former is not that much harder than getting the latter.

(Of course, there are also diminishing returns, which makes figuring this out so hard. E.g. there are only so many podcasts one can go on, and the listenership drops off rapidly.)
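To make the increasing-vs-diminishing-returns question concrete, here is one toy model (my own illustration, not from the original comment; the power-law form and the exponent α are assumptions):

```latex
% Toy model: value of a week as a power of work done, V(w) = w^\alpha.
% With an ExA, work rises by 20%: w \to 1.2w.
\[
  \frac{V(1.2w)}{V(w)} = 1.2^{\alpha}
  \qquad
  \begin{cases}
    > 1.2 & \text{if } \alpha > 1 \text{ (increasing returns)} \\
    = 1.2 & \text{if } \alpha = 1 \text{ (linear returns)} \\
    < 1.2 & \text{if } \alpha < 1 \text{ (diminishing returns)}
  \end{cases}
\]
% E.g. with \alpha = 2, a 20% increase in work yields 1.44x the value,
% which falls inside the 110%-200% range given above.
```

On this picture, the 110%-200% range corresponds to uncertainty over whether α is below or well above 1 for the kind of work in question.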

I think people normally think of the value of ExAs as just saving you time: doing things like emails, scheduling, and purchasing. In my experience, this is only a small part of the value-add. The bigger value-add comes from: (i) doing things that you just didn't have capacity to do, or helping you do things to a higher level of quality; (ii) qualitative benefits that aren't just about saving or gaining time. On (ii), for me that's: (a) knowing that important emails, tasks, etc., won't get overlooked, which dramatically reduces my stress levels, decreases burnout risk, and means I can do more deep work rather than feeling I need to check my emails and other messages every hour; (b) helping me prioritise (especially advising on when to say no to things, and making it easier to say no). Depending on the person, an ExA can also bring skills that I simply lack, like graphic design, facility with spreadsheets, or mathematical knowledge.

Some caveats:

  • It is notable to me, and an argument against my view, that some of the highest-performing people I know don’t use ExAs. I’m not quite sure what’s going on there. My guess is that if you’re a really super-productive person, the benefits I list above aren’t as great for you.
  • It’s definitely an investment. It’s a short-term cost (to hire the person, think about a new structure for your life and workflow, think about what can be delegated, think about information security and data privacy, etc) for a longer-term gain. 
  • You should in general be cautious about hiring, and that applies here, too: once you've hired someone, you have an ongoing responsibility for them and their wellbeing, and you have to think about things like compensation, performance evaluation, feedback, and so on.
Announcing What The Future Owes Us

I love this, haha.

But, as with many things, J.S. Mill did this meme first!!! 

In the Houses of Parliament on April 17th, 1866, he gave a speech arguing that we should keep coal in the ground (!!). As part of that speech, he said:

I beg permission to press upon the House the duty of taking these things into serious consideration, in the name of that dutiful concern for posterity [...] There are many persons in the world, and there may possibly be some in this House, though I should be sorry to think so, who are not unwilling to ask themselves, in the words of the old jest, "Why should we sacrifice anything for posterity; what has posterity done for us?"

They think that posterity has done nothing for them: but that is a great mistake. Whatever has been done for mankind by the idea of posterity; whatever has been done for mankind by philanthropic concern for posterity, by a conscientious sense of duty to posterity [...] all this we owe to posterity, and all this it is our duty to the best of our limited ability to repay.

all great deeds [and] all [of] culture itself [...] all this is ours because those who preceded us have cared, and have taken thought, for posterity [...] Not owe anything to posterity, Sir! We owe to it Bacon, and Newton, and Locke, and Bentham; aye, and Shakespeare, and Milton, and Wordsworth.

Huge H/T to Tom Moynihan for sending this to me back in December. Interestingly, in the 1860s there seems to have been a bit of a wave of longtermist thought among the utilitarians, though their empirical views about the amount of available coal were way off.

Announcing What We Owe The Future

Yeah, I thought about this a lot, but I strongly prefer audiobooks to be read by the author, and anecdotally other people do, too. I didn't read the DGB audiobook myself (to save time), and regretted that decision.

Democratising Risk - or how EA deals with critics

Hi - thanks so much for writing this. I'm on holiday at the moment, so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that. 
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism, or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a)- or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January.

Towards a Weaker Longtermism

There are definitely some people who are fanatical strong longtermists, but many people who are made out to be such treat it as an important consideration, not one held with certainty or overwhelming dominance over all other moral frames and considerations. In my experience, one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.


I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn't even a consequentialist!) This is also true for me with respect to the CFSL paper, and to an even greater degree for Hilary: she really doesn't know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk with a given unit of resources.

The language-game of 'writing a philosophy article' is very different from that of 'stating your exact views on a topic': the former is more about making a clear and forceful argument for a particular view, or a particular implication of a view someone might have, and much less about conveying every nuance, piece of uncertainty, or in-practice constraint. Once philosophy articles get read more widely, that can cause confusion. Hilary and I didn't expect our paper to get read so widely - it's really targeted at academic philosophers.

Hilary is on holiday, but I've suggested we make some revisions to the language in the paper so that it's a bit clearer to people what's going on. This would mainly be changing phrases like 'defend strong longtermism' to 'explore the case for strong longtermism', which I think more accurately represents what's actually going on in the paper.

Towards a Weaker Longtermism

I'm also not defending or promoting strong longtermism in my next book. I defend (non-strong) longtermism, and the definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism.

(The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)

Gordon Irlam: an effective altruist ahead of his time

I agree that Gordon deserves great praise and recognition! 

One clarification: my discussion of Zhdanov was based on Gordon's work; he volunteered for GWWC in the early days, and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to cite him, which was a major oversight on my part, and I feel really bad about that. (I've apologized to him about this.) So that discussion shouldn't be seen as independent convergence.

Thoughts on whether we're living at the most influential time in history

Thanks Greg - I asked, and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in a direction you'd like, though I'm sure they won't be sufficient to satisfy you.

I'm going to have to get back to other work at this point, but I think your arguments are important, though the 'bait and switch' charge doesn't seem totally fair - e.g. the update towards living in a simulation only works once you appreciate the improbability of living on a single planet.

Thoughts on whether we're living at the most influential time in history

Thanks for this, Greg.

"But what is your posterior? Like Buck, I'm unclear whether your view is the central estimate should be (e.g.) 0.1% or 1 / 1 million."

I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself.  

It's the former, and for the reason of your explanation (2): us being early, being on a single planet, and being at such a high rate of economic growth should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including that we're very early on, and say: "My view is that, in the aggregate, these outside-view arguments should substantially update one from one's prior towards HoH, but not all the way to significant credence in HoH."[3]

[3] "Quantitatively: These considerations push me to put my posterior on HoH into something like the [1%, 0.1%] interval. But this credence interval feels very made-up and very unstable."

I'm going to think more about your claim that in the article I'm 'hiding the ball'. I say in the introduction that "there are some strong arguments for thinking that this century might be unusually influential", discuss the arguments that I think really should massively update us in section 5 of the article, and in that context I say: "We have seen that there are some compelling arguments for thinking that the present time is unusually influential. In particular, we are growing very rapidly, and civilisation today is still small compared to its potential future size, so any given unit of resources is a comparatively large fraction of the whole. I believe these arguments give us reason to think that the most influential people may well live within the next few thousand years." Then in the conclusion I say: "There are some good arguments for thinking that our time is very unusual, if we are at the start of a very long-lived civilisation: the fact that we are so early on, that we live on a single planet, and that we are at a period of rapid economic and technological progress, are all ways in which the current time is very distinctive, and therefore are reasons why we may be highly influential too." That seemed clear to me, but I should judge clarity by how readers interpret what I've written.

Thoughts on whether we're living at the most influential time in history

Actually, rereading my post, I realize I had already made an edit similar to the one you suggest (though not linking to the article, which hadn't been finished) back in March 2020:

"[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.

It's worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas in 'Self-location and objective chance' (ms)): "A rational agent's priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the Self-Indication Assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population - the question of whether we're at the most influential time does not require us to get into debates over anthropics.]"
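The prior stated in the edit above can be written out explicitly (my own formalisation of the comment's stated choice, not text from the original; p(n) denotes one's credences over civilisation's lifespan):

```latex
% Let I = "ours is the most influential century" and N = the number of
% centuries that Earth-originating civilisation lasts.
% The conditional prior stated above (SSA-style uniform self-location):
\[
  \Pr(I \mid N = n) = \frac{1}{n}
\]
% The unconditional prior then averages over one's credences p(n) about
% how long civilisation lasts:
\[
  \Pr(I) = \sum_{n \ge 1} p(n) \cdot \frac{1}{n}
\]
% The conjunctive claim the edit proposes focusing on -- that this is the
% most influential century AND the future is enormous (at least n* centuries)
% -- then gets prior:
\[
  \Pr(I \wedge N \ge n^{*}) = \sum_{n \ge n^{*}} p(n) \cdot \frac{1}{n}
\]
```

This makes explicit why the unconditional prior depends on one's views about civilisation's lifespan, while the conjunctive claim sidesteps that by building the long-future condition into what is being evaluated.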
