All of Holden Karnofsky's Comments + Replies

Has Life Gotten Better?

The third piece in this series is Pre-agriculture gender relations seem bad. I suggest that any comments on it go in this thread.

MugaSofer (6d): I'm pretty sure "man" here means "human", not "male"; and they're referring to the idea that human intelligence evolved primarily for hunting purposes as part of a "get smarter > hunt better > get nutrition from meat to support brain > get smarter still" feedback loop. [This doesn't have much direct implication regarding equality.]
Kenny Easwaran (7d): I think that part of the issue is that people are sometimes mistaking a comparative claim for an absolute claim. Researchers claiming that hunter-gatherer societies had better gender relations than early agricultural ones aren't thereby claiming that hunter-gatherer societies are anywhere near equal - just less unequal than the agricultural societies that followed them. Searching a bit (using "origin of patriarchy" as the search term) I found two relevant books that seem to be the sources of a lot of claims: The Creation of Patriarchy, by Gerda Lerner, from 1986; The Civilization of the Goddess: The World of Old Europe, by Marija Gimbutas, 1991. These both often seem to be described as stating that there was once an equal society, and a later society imposed patriarchy on it some time around 5000 years ago. But the former seems to be more specifically claiming that early Mesopotamian civilization was less unequal than later Mesopotamian civilization, and the latter seems to be more specifically claiming that the Neolithic agricultural inhabitants of Europe had a matrilocal goddess-oriented society that was disrupted by the patrilocal god-oriented nomadic society of the Indo-Europeans that gave rise to the later societies. Neither one of them particularly supports the claim that hunter-gatherer societies are egalitarian and agricultural societies are patriarchal (the latter even seems to reverse this!). But both do give some evidence for a claim that might be more plausible: that there was a period shortly before recorded history in which gender relations were not as bad as they became by the early period of recorded history. If true, this would be one more way in which one might expect pre-agricultural life to have been substantially worse than the present, but also better than much of agricultural history.
AppliedDivinityStudies (7d): You write: Out of curiosity, I wanted to check how many current societies (countries) have female leaders. This Wikipedia page [https://en.wikipedia.org/wiki/List_of_elected_and_appointed_female_heads_of_state_and_government] lists 26, and there are ~195 countries total, which gives us 13%. To weight by population and rule out ceremonial positions, I compiled some data in this Google Sheet [https://docs.google.com/spreadsheets/d/1KPwtg1TTBpEapmsCbt0YgLl1-mqO4T7BohwUJGEZC8I/edit?usp=sharing], which gets us that 5.44% of the world population has a female leader. To be clear, I don't consider this a particularly strong counterpoint. You do go on to mention that even the societies with female leaders had serious gender inequality. Also, many of the countries I've listed have had female leaders in the past, or have laws allowing female leaders, so it's not as if they have "no possibility" as may have been the case in the past. But if I were writing the article "post-agricultural gender relations seem bad", I might say something like "169 out of 195 societies have no female leaders" and "19 out of 20 people don't have a female leader", and it would sound quite bad for the modern world.
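
(For reference, a minimal Python sketch of the arithmetic in the comment above; the 26 and ~195 counts come from the comment, and the 5.44% population-weighted figure is taken as given from the linked Google Sheet:)

```python
# Reproduce the rough figures quoted above.
total_countries = 195
female_led = 26  # from the linked Wikipedia list

print(f"{female_led / total_countries:.0%} of countries have a female leader")  # ~13%
print(f"{total_countries - female_led} out of {total_countries} societies have no female leader")  # 169 of 195

pop_share = 0.0544  # population-weighted share, from the linked Google Sheet
print(f"~{1 - pop_share:.0%} of people (about 19 in 20) don't have a female leader")
```
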
AppliedDivinityStudies (7d): I thought this was a helpful corrective to a largely unchecked popular narrative. That's part of it, but I think the stronger reason is something like "there were female leaders in the past, therefore today's gender inequality is the result of social norms". EDIT: Also FWIW, the Wikipedia page for Sexism [https://en.wikipedia.org/wiki/Sexism#Ancient_world] does note under Ancient world:
Has Life Gotten Better?

The second piece in this series is Has life gotten better?: the post-industrial era. I suggest that any comments on it go in this thread.

Linch (13d): I think I agree with the broad thesis of your post, but I'm less sure about the claim for romantic relationships specifically, as well as the evidence for it. In particular, in addition to the emotional unreliability points you mentioned, I think there's systematic selection bias when you look at existing relationships, when the proportion of people in relationships has systematically changed [https://www.pewresearch.org/social-trends/2011/12/14/barely-half-of-u-s-adults-are-married-a-record-low/] over time (US data). So I wouldn't be surprised if average happiness in relationships has increased (because of better matching, etc.), but average happiness about relationships has decreased. Anecdotally, if I look at my parents' or especially my grandparents' generation, being single is almost unheard of in general if you're in your late 20s/30s, never mind if you're an emotionally stable nice person in a high-status job. (I think the rate of sex-selective abortion in China [https://en.wikipedia.org/wiki/Sex-selective_abortion#China] probably has not helped here.) I think there are similar things in the West, if maybe less extreme for some people and with different causal attribution. To be clear, this is not a refutation of the broad thesis of your post -- I'd much rather be single and lonely in California in 2021 than happily married during the Cultural Revolution in China, and I'm pretty confident this isn't just status quo bias talking -- just contesting a specific subpoint here.
Summary of history (empowerment and well-being lens)

I'd say some of both.

  • If I tried to start noting not just manifestly important changes in empowerment and well-being, but also earlier developments that might have been causally important for them, I think the project would get a lot more unwieldy and more packed with judgment calls, so I chose to mostly just refrain from doing that.

  • I am in fact skeptical by default of claims along the lines of: "Idea X was important for development Y, despite the observation that idea X was around for centuries with little-to-no movement on Y, and then Y changed rapidly a very long time later."

Summary of history (empowerment and well-being lens)

Thanks for the kind words! I agree there are a number of works out there that do a good job presenting history as a "story." My comments were more about an impression I sometimes get from "history people" that this should be avoided.

Call to Vigilance

Thanks! I just used "galaxy" for convenience - it was easy to estimate certain figures for our galaxy (such as how long it would take to reach its outer limits), and I think "galaxy" gives a sufficient picture of the potential scale I'm envisioning. I do think it's possible to keep going beyond the galaxy, though at some point (beyond this galaxy) I'd expect to encounter another spacefaring civilization with a different origin, and getting into that could complicate some of the statements and calculations illustrating that civilization could get very large and last very long.

Call to Vigilance

Not particularly, sorry! There are communities that don't necessarily identify as "effective altruist" but are highly concerned with reducing potential risks from advanced AI, though I'm guessing you're already familiar with these (e.g., some people/organizations connected to or inspired by MIRI).

How to make the best of the most important century?

I'm not sure whether you're asking for academic literature on adversarial examples (I believe there is a lot) or for discussion of the link between adversarial examples and alignment (most topics about the "link between X and alignment" haven't been written about a ton). The latter topic is discussed some in the recent paper Unsolved Problems in ML Safety and in An overview of 11 proposals for building safe advanced AI.

How to make the best of the most important century?

An example would be voting in an election (or donating, volunteering, etc.) for the candidate and/or party that you believe is more likely to act in the best interests of humanity, rather than on other considerations.

Forecasting Transformative AI: Are we "trending toward" transformative AI? (How would we know?)

Thanks for the correction! I've corrected the term in the Cold Takes version. (I'm confining corrections to that version rather than making them there, here, on LessWrong, in the PDF, etc. every time; also, editing posts here can cause bugs.)

Forecasting Transformative AI: Are we "trending toward" transformative AI? (How would we know?)

Hm, I may have simply misread or mis-recalled your piece w/r/t the parenthetical; apologies for that. I skimmed it again and didn't note any strong disagreements, except that "almost zero evidence" likely goes further than I would (it would take me more time to figure out exactly where I stand on this).

kokotajlod (20d): Sounds good!
Forecasting Transformative AI: Are we "trending toward" transformative AI? (How would we know?)

On "transformative AI": I agree that this is quite vague and not as well-defined as it would ideally be, and is not the kind of thing I think we could just hand to superforecasters. But I think it is pointing at something important that I haven't seen a better way of pointing at.

I like the definition given in Bio Anchors (which you link to), which includes a footnote addressing the fact that AI could be transformative without literally causing GDP growth to behave as described. I'm sure there are imperfections remaining, and it remains vague, but I think m...

pseudobison (20d): Sidenote: there has been an argument [https://www.researchgate.net/profile/Jess-Whittlestone/publication/337702892_Defining_and_Unpacking_Transformative_AI/links/5df9fc2a299bf10bc3636ded/Defining-and-Unpacking-Transformative-AI.pdf] that 'radically transformative AI' is a better term for the Industrial Revolution definition, given the semantic bleaching already taking place with 'transformative AI'.
Forecasting transformative AI: what's the burden of proof?

Fair point re: economic trends vs. technological trends, though I would stand by the outline of what I said: your post seems to be arguing that current trends don't suggest a coming explosion, but not that they establish a super-high burden of proof for expecting one.

Re: "For example, the observation that new scientific insights per human have declined rapidly suggests that even getting digital people might not be enough to get us to a growth explosion, as most of the insights may have been plugged already."

Note that the growth modeling analyses I draw on ...

Forecasting transformative AI: what's the burden of proof?

Agreed that we probably disagree about lock-in. I don't want my whole case to ride on it, but I don't want it to be left out as an important possibility either.

With that in mind, I think the page I linked is conveying the details of what I mean pretty well (although I also find the "more change than X" framing interesting), and I think "most important century" is still the best headline version I've thought of.

Forecasting transformative AI: the "biological anchors" method in a nutshell

Thanks for this, I can see how that could be confusing language. I've changed "this would be enough to develop transformative AI" to "transformative AI would (likely) follow" and cut "But in fact" from the next bullet point. (I've only made these changes at the Cold Takes version; editing this version can cause bugs.)

I agree directionally with the points you make about "many transformative tasks" and "point of no return," but I still think AI systems would have to be a great deal more capable than today's - likely with a pretty high degree of generality (or at least far more sample-efficient learning than we see today) - to get us to that point.

kokotajlod (19d): Update: I thought about it a bit more & asked this question [https://www.lesswrong.com/posts/Eg5AEMhGdyyKRWmZW/is-gpt-3-already-sample-efficient] & got some useful feedback, especially from tin482 and vladimir_nesov. I am now confused about what people mean when they say current AI systems are much less sample-efficient than humans. On some interpretations, GPT-3 is already about as sample-efficient as humans. My guess is it's something like: "Sure, GPT-3 can see a name or fact once in its dataset and then remember it later & integrate it with the rest of its knowledge. But that's because it's part of the general skill/task of predicting text. For new skills/tasks, GPT-3 would need huge amounts of fine-tuning data to perform acceptably."
kokotajlod (20d): Excellent, thanks! The sample-efficient learning thing is an interesting crux. I tentatively agree with you that it seems hard for AIs that are as sample-inefficient as today's to be dangerous. However... on my todo list is to interrogate that. In my "median future" story, for example, we have chatbots that are talking to millions of people every day and online-learning from those interactions. Maybe it can make up in quantity what it lacks in quality, so to speak -- maybe it can keep up with world affairs and react to recent developments via seeing millions of data points about it, rather than by seeing one data point and being sample-efficient. Idk.
Forecasting transformative AI: the "biological anchors" method in a nutshell

There are contexts in which I'd want to use the terms as you do, but I think it is often reasonable to associate "conservatism" with being more hesitant to depart from conventional wisdom, the status quo, etc. In general, I have always been sympathetic to the idea that the burden of proof/argumentation is on those who are trying to raise the priority of some particular issue or problem. I think there are good reasons to think this works better (and is more realistic and conducive to clear communication) than putting the burden of proof on those who would ignore some novel issue / continue what they were doing.

All Possible Views About Humanity's Future Are Wild

If humanity simply goes extinct without reaching meaningful space expansion, I agree that that outcome would not be particularly wild.

However, I would find it wild to think this is definitely (or even "overwhelmingly likely") where things are heading. (While I also find it wild to think there's a decent chance that we will reach galaxy scale.)

evelynciara (20d): I agree with that. I think humanity (as a cultural community, not the species) will most likely have the ability to expand across the Solar System this century, and will most likely have settled other star systems by a billion years from now, when Earth is expected to become uninhabitable.
All Possible Views About Humanity's Future Are Wild

(It's on Apple Podcasts now, under Cold Takes Audio.)

Forecasting transformative AI: what's the burden of proof?

Most of your post seems to be arguing that current economic trends don't suggest a coming growth explosion.

If current economic trends were all the information I had, I would think a growth explosion this century is <<50% likely (maybe 5-10%?). My main reason for a higher probability is AI-specific analysis (covered in future posts).

This post is arguing not "Current economic trends suggest a growth explosion is near" but rather "A growth explosion is plausible enough (and not strongly enough contraindicated by current economic trends) that we shouldn't...

MagnusVinding (2mo): Thanks for your reply :-) That's not quite how I'd summarize it: four of the six main points/sections (the last four) are about scientific/technological progress in particular. So I don't think the reasons listed are mostly a matter of economic trends in general. (And I think "reasons listed" is an apt way to put it, since my post mostly lists some reasons to be skeptical of a future growth explosion — and links to some relevant sources — as opposed to making much of an argument.) I get that :-) But again, most of the sections in the cited post were in fact about scientific and technological trends in particular, and I think these trends do support significantly lower credences in a future growth explosion than the ones you hold. For example, the observation that new scientific insights per human have declined rapidly suggests that even getting digital people might not be enough to get us to a growth explosion, as most of the insights may have been plugged already. (I make some similar remarks here [https://magnusvinding.com/2020/06/04/a-deceptive-analogy/].) Additionally, one of the things I had in mind with my remark in the earlier comment relates to the section on economic growth, which says: In relation to this point in particular, I think the observation mentioned in the second section of my post seems both highly relevant and overlooked, namely that if we take a nerd-dive into the data [http://holtz.org/Library/Social%20Science/Economics/Estimating%20World%20GDP%20by%20DeLong/Estimating%20World%20GDP.htm] and look at doublings, we have actually seen an unprecedented deceleration (in terms of how the growth rate has changed across doublings). And while this does not by any means rule out a future growth explosion, I think it is an observation that should be taken into account, and it is perhaps the main reason to be skeptical of a future growth explosion at the level of long-run growth trends. So that would be the kind of reason I think should ideally have...
Forecasting transformative AI: what's the burden of proof?

Thanks! This post is using experimental formatting so I can't fix this myself, but hopefully it will be fixed soon.

Forecasting transformative AI: what's the burden of proof?

Agreed. This is similar in spirit to the "My cause is most important" part.

Forecasting transformative AI: what's the burden of proof?

It seems to me like "transformative AI is coming this century" and "this century is the most important century" are very different claims which you tend to conflate in this sequence.

I agree they're different claims; I've tried not to conflate them. For example, in this section I give different probabilities for transformative AI and two different interpretations of "most important century."

This post contains a few cases where I think the situation is somewhat confusing, because there are "burden of proof" arguments that take the basic form, "If this typ...

richard_ngo (2mo): Thanks for the response, that all makes sense. I missed some of the parts where you disambiguated those two concepts; apologies for that. I suspect I still see the disparity between "extraordinarily important century" and "most important century" as greater than you do, though, perhaps because I consider value lock-in this century less likely than you do - I haven't seen particularly persuasive arguments for it in general (as opposed to in specific scenarios, like AGIs with explicit utility functions or the scenario in your digital people post [https://www.cold-takes.com/how-digital-people-could-change-the-world/]). And relatedly, I'm pretty uncertain about how far away technological completion is - I can imagine transitions to post-human futures in this century which still leave a huge amount of room for progress in subsequent centuries. I agree that "extraordinarily important century" and "transformative century" don't have the same emotional impact as "most important century". I wonder if you could help address this by clarifying that you're talking about "more change this century than since X" (for X = a millennium ago, or since agriculture, or since cavemen, or since we diverged from chimpanzees). "Change" also seems like a slightly more intuitive unit than "importance", especially for non-EAs for whom "importance" is less strongly associated with "our ability to exert influence".
This Can't Go On

Thanks for all the thoughts on this point! I don't think the comparison to currency is fair (the size of today's economy is a real quantity, not a nominal one), but I agree with William Kiely that the "several economies per atom" point is best understood as an intuition pump rather than an airtight argument. I'm going to put a little thought into whether there might be other ways of communicating how astronomically huge some of these numbers are, and how odd it would be to expect 2% annual growth to take us there and beyond.
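
(As a rough illustration of that last point, here is a minimal sketch of the arithmetic, assuming ~10^70 atoms in our galaxy, the estimate used in the original post, and steady 2% annual growth:)

```python
import math

# Years of 2% annual growth needed to multiply today's economy by a factor
# of 10^70, i.e., one "copy" of today's entire world economy per atom in
# the galaxy. (10^70 atoms is a rough estimate; illustrative only.)
atoms_in_galaxy = 1e70
growth_rate = 0.02

years = math.log(atoms_in_galaxy) / math.log(1 + growth_rate)
print(f"{years:,.0f} years")  # roughly 8,000 years, a blink on cosmic timescales
```
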

One thought: it is possible that...

Digital People Would Be An Even Bigger Deal

I think this depends on empirical questions about the returns to more compute for a single mind. If the mind is closely based on a human brain, it might be pretty hard to get much out of more compute, so duplication might have better returns. If the mind is not based on a human brain, it seems hard to say how this shakes out.

All Possible Views About Humanity's Future Are Wild

I'm not sure I'm fully following, but I think the "almost exactly the same time" point is key (and I was getting at something similar with "However, note that this doesn't seem to have happened in ~13.77 billion years so far since the universe began, and according to the above sections, there's only about 1.5 billion years left for it to happen before we spread throughout the galaxy"). The other thing is that I'm not sure the "observation selection effect" does much to make this less "wild": anthropically, it seems much more likely that we'd be in a later-in-time, higher-population civilization than an early-in-time, low-population one.

WilliamKiely (3mo): That's a good point: my hypothesis doesn't help to make reality seem any less wild.
Digital People FAQ

If we have advanced AI that is capable of constructing a digital human simulation, wouldn't it also, by extension, be advanced enough to be conscious on its own, without the need for anything approximating human beings? I can imagine humans wanting to create copies of themselves for various purposes, but isn't it much more likely for completely artificial silicon-first entities to take over the galaxy? Those entities wouldn't have the need for any human pleasures and could thus conquer the universe much more efficiently than any "digital humans" ever could.

It...

myst_05 (3mo): After reading your latest post on temporary copies, I'm thinking that this would quickly become the #1 priority for brain simulation research. In a real-life analogy, humans very quickly abandoned horses in favor of cars, as having a tool that works 24/7 without complaint is much better than a temperamental living being. So the phase of copies being treated with dignity would be relatively short-lived, up until the underlying circuitry could be tweaked to make it morally okay to force simulations to work 24/7 without them "suffering" in any way, as they would be incapable of negative emotion. Now, allowing for unlimited tweaking of brain circuitry does make for bad science fiction (i.e. the MMAcevedo short story breaks down in a world where it's possible), but I suspect it would be the ultimate endpoint for virtual workers.
New blog: Cold Takes

Thanks, I agree it's not ideal, but haven't found a way to change the color of that button between light and dark mode.

New blog: Cold Takes

No need to follow any unusual commenting norms! The "cold" nature of the blog is due to my style and schedule, not a request for others.

All Possible Views About Humanity's Future Are Wild

I'm not sure I follow this. I think if there were extraterrestrials who were going to stop us from spreading, we'd likely see signs of them (e.g., mining the stars for energy, setting up settlements), regardless of what speed they traveled while moving between stars.

WilliamKiely (3mo): Adding to my other reply to your other comment I just made, let me just clarify that the model I'm working with is the "fast colonization" model from 25:20 of this Stuart Armstrong FHI talk [https://youtu.be/zQTfuI-9jIo?t=1518], in which von Neumann probes are sent directly from their origin solar system to each other galaxy, rather than hopping from galaxy to galaxy (as in the "slow colonization" model used by Sagan/Newman/Fogg/Hanson according to Stuart's slide). So if >0.99c probes are possible, then I think the hypothesis I described is at least plausible, since civilizations indeed wouldn't see other expanding civilizations until those civilizations reached them.
WilliamKiely (3mo): To clarify, I am pointing out that if extraterrestrials exist that are mining stars for energy and doing other large-scale things that we'd expect to be visible from other solar systems or galaxies, and if those extraterrestrials are >X light-years away from us and only started doing those large-scale things <X years ago, then we would not expect to see them because the light from their civilization would not yet have had time to reach us. So the speed of expansion of their civilization isn't a necessary aspect of why we can't see them. However, if the nature of our universe is such that extraterrestrials are likely to have arisen elsewhere in our galaxy (meaning <100,000 ly from us), then what's the explanation for why they arose in the last <100,000 years and not in the billions of years before that? That should seem improbable a priori. One (partial) explanation for that coincidence is if we hypothesize that the nature of our universe is such that any civilization that arises and reaches a point of doing large-scale things that would be visible from many light-years away also expands at near the speed of light beginning as soon as it starts having those large-scale effects. If we further assume that such expansion reaching our solar system before now would have prevented us from existing today (e.g. by extinguishing life on Earth and replacing it with something else), then this serves as a (partial) explanation for the above coincidence by introducing an observation selection effect where we only exist in the first place because no other extraterrestrials have arisen within X ly of us in the last X years. Note that I called this ("intelligence expands at (near) light speed once it starts having effects that would be visible from light years away") hypothesis a "partial" explanation above (for lack of a better word) to note that while it could explain why it's not surprising that we don't see signs of extraterrestrials mining stars (even conditional on them exi...
All Possible Views About Humanity's Future Are Wild

I think your last comment is the key point for me - what's wild is how early we are, compared to the full galaxy population across time.

All Possible Views About Humanity's Future Are Wild

I think it's wild if we're living in the century (or even the 100,000 years) that will produce a misaligned AI whose values come to fill the galaxy for billions of years. That would just be quite a remarkable, high-leverage (due to the opportunity to avoid misalignment, or at least have some impact on what the values end up being) time period to be living in.

All Possible Views About Humanity's Future Are Wild

I'm not sure I can totally spell it out - a lot of this piece is about the raw intuition that "something is weird here."

One Bayesian-ish interpretation is given in the post: "The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher." In other words, there is something "suspicious" about a view that implies that we are in an unusually important position - it's the kind of view that seems (by default) more likely to be generated by wi...

All Possible Views About Humanity's Future Are Wild

Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.

I agree with both you and Arden that there is a certain sense in which the "conservative" view seems significantly less "wild" than my view, and that a reasonable person could find the "conservative" view significantly more attractive for this reason. But I still want to highlight that it's an extremely "wild" view in the scheme of things, and I think we shouldn't impose an inordinate burden of proof on updating from that view to mine.

The Duplicator: Instant Cloning Would Make the World Economy Explode

(Response to both AppliedDivinityStudies and branperr)

My aim was to argue that a particular extreme sort of duplication technology would have extreme consequences, which is important because I think technologies that are "extreme" in the relevant way could be developed this century. I don't think the arguments in this piece point to any particular conclusions about biological cloning (which is not "instant"), natalism, etc., which have less extreme consequences.

Digital People Would Be An Even Bigger Deal

It seems very non-obvious to me whether we should think bad outcomes are more likely than good ones. You asked about arguments for why things might go well; a couple that occur to me are (a) as long as large numbers of digital people are committed to protecting human rights and other important values, it seems like there is a good chance they will broadly succeed (even if they don't manage to stop every case of abuse); (b) increased wealth and improved social science might cause human rights and other important values to be prioritized more highly, and might help people coordinate more effectively.

Digital People Would Be An Even Bigger Deal

I broadly agree with this. The point of my post was to convey intuitions for why "a world of [digital people] will be so different from modern nation states just as modern states are from chimps," not to claim that the long-run future will be just as described in Age of Em. I do think despite the likely radical unfamiliarity of such a world, there are properties we can say today it's pretty likely to have, such as the potential for lock-in and space colonization.

My current impressions on career choice for longtermists

Thanks for the thoughtful comments, Linch.

Response on point 1: I didn't mean to send a message that one should amass the most impressive conventional credentials possible in general - only that for many of these aptitudes, conventional success is an important early sign of fit and potential.

I'm generally pretty skeptical by default of advanced degrees unless one has high confidence that one wants to be on a track where the degree is necessary (I briefly give reasons for this skepticism in the "political and bureaucratic aptitudes" section). This piece only...

My current impressions on career choice for longtermists

I like this; I agree with most of what you say about this kind of work.

I've tried to mostly list aptitudes that one can try out early on, stick with if they're going well, and pretty reliably build careers around (though not necessarily direct-work longtermist careers). I think the aptitude you're describing here might be more of a later-career/"secondary" aptitude that often develops as someone moves up along an "organization building/running/boosting" or "political/bureaucratic" track. But I agree it seems like a cluster of skills that can be intentionally developed to some degree and used in a lot of different contexts.

My current impressions on career choice for longtermists

Thanks for the thoughtful comments!

On your first point: I chose to emphasize longtermism because:

  • It's what I've been thinking about the most (note that I am now professionally focused on longtermism, which doesn't mean I don't value other areas, but does mean that that's where my mental energy goes).
  • I think longtermism is probably the thorniest, most frustrating area for career choice, so I wanted to focus my efforts on helping people in that category think through their options.
  • I thought a lot of what I was saying might generalize further, b...
My current impressions on career choice for longtermists

I think a year of full-time work is likely enough to see the sort of "signs of life" I alluded to, but it could take much longer to fulfill one's potential. I'd generally expect a lot of people in this category to see steady progress over time on things like (a) how open-ended and poorly-scoped of a question they can tackle, which in turn affects how important a question they can tackle; (b) how efficiently and thoroughly they can reach a good answer; (c) how well they can communicate their insights; (d) whether they can hire and train other people to do c...

My current impressions on career choice for longtermists

I didn't mean to express a view one way or the other on particular current giving opportunities; I was instead looking for something a bit more general and timeless to say on this point, since especially in longtermism, giving opportunities can sometimes look very appealing at one moment and much less so at another (partly due to room-for-more-funding considerations). I think it's useful for you to have noted these points, though.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

This is still a common practice. The point of it isn't to evaluate employees by # of hours worked; the point is for their manager to have a good understanding of how time is being used, so they can make suggestions about what to go deeper on, what to skip, how to reprioritize tasks, etc.

Several employees simply opt out from this because they prefer not to do it. It's an optional practice for the benefit of employees rather than a required practice used for performance assessment.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

I'm referring to the possibility of supporting academics (e.g. philosophers) to propose and explore different approaches to moral uncertainty and their merits and drawbacks. (E.g., different approaches to operationalizing the considerations listed at https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#Allocating_capital_to_buckets_and_causes , which may have different consequences for how much ought to be allocated to each bucket)

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

Keep in mind that Milan worked for GiveWell, not OP, and that he was giving his own impressions rather than speaking for either organization in that post.

That said:

*His "Flexible working schedule" point sounds pretty consistent with how things are here.

*We continue to encourage time tracking (but we don't require it and not everybody does it).

*We do try to explicitly encourage self-care.

Does that respond to what you had in mind?

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

GiveWell's CEA was produced by multiple people over multiple years - we wouldn't expect a single person to generate the whole thing :)

I do think you should probably be able to imagine yourself engaging in a discussion over some particular parameter or aspect of GiveWell's CEA, and trying to improve that parameter or aspect to better capture what we care about (good accomplished per dollar). Quantitative aptitude is not a hard requirement for this position (there are some ways the role could evolve that would not require it), but it's a major plus.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

The role does include all three of those things, and I think all three things are well served by the job qualifications listed in the posting. A common thread is that all involve trying to deliver an informative, well-calibrated answer to an action-relevant question, largely via discussion with knowledgeable parties and critical assessment of evidence and arguments.

In general, we have a list of the projects that we consider most important to complete, and we look for good matches between high-ranked projects and employees who seem well suited to them. I ex...

Ben Pace (4y): Thanks Holden!
Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

We do formal performance reviews twice per year, and we ask managers to use their regular (~weekly) check-ins with reports to sync up on performance, such that nothing in these reviews should be surprising. There's no unified metric for an employee's output here; we set priorities for the organization, set assignments that serve these priorities, set case-by-case timelines and goals for the assignments (in collaboration with the people who will be working on them), and compare output to the goals we had set.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

All bios here: https://www.openphilanthropy.org/about/team

Grants Associates and Operations Associates are likely to report to Derek or Morgan. Research Analysts are likely to report to people who have been in similar roles for a while, such as Ajeya, Claire, Luke and Nick. None of this is set in stone though.
