All of Mark Xu's Comments + Replies

Increasing Demandingness in EA

I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people. Example: if people work for 40 years, then 10 people donating 10% of their time gives you 10 years at 0 years of experience, 10 at 1 year, 10 at 2 years, and 10 at 3 years; however, if someone does EA work full-time, you get 1 year at 0 years of experience, 1 at 1, 1 at 2, etc. I expect 1 year of work from someone with 20 years of experience to plausibly be as good/useful as 10 years of work from people with 3 years of experience.... (read more)
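[Editor's note: a minimal sketch of the arithmetic above. The productivity function is an assumption invented for illustration (the comment doesn't specify one); it just makes output per EA-year grow with accumulated EA experience.]

```python
# Compare 10 people donating 10% of a 40-year career vs. 1 person working full-time.
# Both scenarios contribute 40 EA-years in total; the difference is how much of
# that time is done at high experience levels.

def productivity(experience_years: float) -> float:
    """Hypothetical assumption: each year of EA experience adds 20% to baseline output."""
    return 1 + 0.2 * experience_years

def total_output(fraction: float, n_people: int, career_years: int = 40) -> float:
    """Total output of n_people who each spend `fraction` of every year on EA work."""
    total = 0.0
    for _ in range(n_people):
        experience = 0.0
        for _ in range(career_years):
            total += fraction * productivity(experience)
            experience += fraction  # experience accrues only for time spent on EA work
    return total

print(total_output(fraction=0.1, n_people=10))  # 10 people at 10%: ~55.6
print(total_output(fraction=1.0, n_people=1))   # 1 person full-time: ~196.0
```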

michaelchen · 24d · 4 karma
My bad, I meant to write "Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work". Fixed.

I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people [emphasis mine]

"benefits of learning" doesn't feel like the only reason, or even the primary reason, why I expect full-time EA work to be much more impactful than part-time EA work, controlling for individual factors. To me, network/coordination costs seem much higher. E.g. it's very hard to manage a team of volunteer researchers or run an org where people volunteer 4h/week on average, and presumably less consistently.

'Dropping out' isn't a Plan

One key difference is that "continuing school" usually has a specific mental image attached, whereas "drop out of school" is much vaguer, making the two difficult to compare.

Ah, I see. I guess I kind of buy this, but I don't think it's nearly as cut-and-dried as you argue. Not sure how much this generalizes, but to me "staying in school" has been an option that conceals approximately as many major suboptions as "leaving school." I'd argue that for many people this is approximately true: people have an idea of where they'd want to work or what they'd want to do if they left school, but "staying in school" could broadly mean anything from staying on ~exactly the status quo to transferring somewhere in a different country, taking a gap year, etc.

My bargain with the EA machine

Many people in EA depart from me here: they see choices that do not maximize impact as personal mistakes. Imagine a button that, if you press it, would cause you to always take the impact-maximizing action for the rest of your life, even if it entails great personal sacrifice. Many (most?) longtermist EAs I talk to say they would press this button – and I believe them. That’s not true of me; I’m partially aligned with EA values (since impact is an important consideration for me), but not fully aligned.

I think there are people (e.g. me) that value thing... (read more)

Denkenberger · 23d · 9 karma
I would be interested in what people think qualifies as "great personal sacrifice." Some would say it would mean things like becoming a priest, volunteering for the military during a war, going to prison for something you believe in, etc. The things that many EAs do, such as giving 10% or 50%, being vegetarian or vegan, choosing a lower pay career, relocating to a less preferred city or country, choosing a somewhat less satisfying/prestigious career, or working or volunteering a total of 60 or 70 hours a week (while maintaining good sleep, nutrition, exercise and stress levels), might be described as "significant sacrifice." But maybe if an EA were doing extreme versions of many of these things, it could be considered great personal sacrifice?
How Many People Are In The Invisible Graveyard?

A title like "How many lives might have been saved given an earlier COVID-19 vaccine rollout?" would have given me much more information about what the post was about than the current title, which I find very vague.

Things I recommend you buy and use.

Kindles are smaller, have backlights, and the Kindle store is a good user experience.

Consider trying the ELK contest (I am)

Note: I work for ARC.

I would consider someone a "pretty good fit" (whatever that means) for alignment research if they started out with a relatively technical background, e.g. an undergrad degree in math/CS, without having really engaged with alignment before, and they were able to come up with a decent proposal after:

  • ~10 hours of engaging with the ELK doc.
  • ~10 hours of thinking about the document and resolving confusions they had, which might involve asking some questions to clarify the rules and the setup.
  • ~10 hours of trying to come up with a proposal.
... (read more)
Consider trying the ELK contest (I am)

Can confirm we would be interested in hearing what you came up with.

Announcing "Naming What We Can"!

Ben Pace, Ben Kuhn, Ben Todd, Ben West, and Ben Garfinkel should all become the same person, to avoid confusion.

Mojmir_Stehlik · 7mo · 2 karma
Ditto for Jona Glade and Joan Gass.

Looks like if this doesn't work out, I should at least update my surname...

I'm open to a legal arrangement of shared nationalities, bank accounts, and professional roles.

Things I recommend you buy and use.

Thanks for writing this up. Just ordered a misto, elastic laces, and a waterpik. My own personal list of recommendations is on https://markxu.com/things, but it lacks justifications. Feel free to ask me about any of the items though.

MaximeCdS · 4mo · 1 karma
Thanks for sharing! Can I ask why you recommend both a Kindle and a reMarkable 2? Do you think there's still a need for a Kindle if one has a reMarkable?
BenSchifman · 1y · 1 karma
Thanks Mark -- I'll take a look at your site!
Money Can't (Easily) Buy Talent

Systematic undervaluing of some fields is not something I considered and slightly undermines my argument.

I still think the main problem would be identifying rising-star historians in advance instead of in retrospect.

jared_m · 1y · 2 karma
You might not have to identify them in advance; you could instead identify them 10+ years into their post-doctoral career. Googling "mid-career grant history" leads to a few links like these [http://legacy.humanities.ufl.edu/funding/faculty-september-fitch.html], where charitable or governmental foundations provide support to experienced scholars. The American Historical Association promoted the same grant here [https://www.historians.org/publications-and-directories/perspectives-on-history/october-2018/grant-of-the-week-fitch-fellowships]. One could imagine a similar grant (perhaps hosted at FHI [https://www.fhi.ox.ac.uk/grant-announcement/], Princeton [https://online.princeton.edu/node/216], another EA-experienced university, or Rethink Priorities) where "architectural history," "preservation-related," and other italicized words below are replaced with EA-aligned project parameters that FHI and its donors would hope to support. One could also structure fewer grants at a higher price point than $15K (say, $50K) to fund more ambitious projects that may absorb 6-9 months of a scholar's time, rather than 2-3 months. As star scholars are identified, their funding could be renewed for multiple years. (Open Phil has certainly followed that model for rising stars and their high-potential projects. See their extension of Jade's grant funding here [https://www.openphilanthropy.org/giving/grants/centre-for-effective-altruism-longtermist-incubator].)
Money Can't (Easily) Buy Talent

Hey Charles! Glad to see that you're still around.

It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA

I don't think OpenPhil or the EA Funds are particularly funding constrained, so this seems to suggest that "people who can do useful things with money" is more of a bottleneck than money itself.

It seems easy to construct EA projects that benefit from monies and purchasable talent

I think I disagree about the quality of execution one is likely to get by purchasing talent. I agree that in areas like global health, ... (read more)

Money Can't (Easily) Buy Talent

I am confused by EA orgs not meeting basic living thresholds. Could you provide some examples?

Josh Jacobson · 1y · 9 karma
I am not trying to claim that EA orgs do not meet basic living thresholds, but rather that "There are many organizations offering amounts that many likely find greatly constraining to living off of." I think it's quite common for EA job offers to be in the $40-$55k range (there are also many well above this range), with multiple instances of being significantly lower than that (e.g. $30k). I believe that there are many that find these potential salaries to be greatly constraining.
Money Can't (Easily) Buy Talent

The purpose of hiring two people isn't just to do twice the amount of work. Two people can complement each other, creating a team that is greater than the sum of its parts. Even two people with the same job title are never doing exactly the same work, and this matters in determining how much value they're adding to the firm. I think this works against the point you're making in this passage. Do you account for this somewhere else in your post, and/or do you think it affects your overall point?

My claim is that having one person with the skill-set of tw... (read more)

Money Can't (Easily) Buy Talent

Rather than "earn to give" or "do direct work," I think it might be "try as hard as you can to become a highly talented person" (maybe by acquiring domain expertise in an important cause area).

"Try and become very talented" is good advice to take from this post. I don't have a particular method in mind, but becoming the Pareto best in the world at some combination of relevant skills might be a good starting point.

The flip side is that if you value money/monetary donations linearly—or more linearly than other talented people—then you’ve got a comparati

... (read more)
Introducing Probably Good: A New Career Guidance Organization

I'm excited about more efficient matching between people who want career advice and people who are not maximally qualified to give it, but can still help nonetheless. For example, when planning my career, I often find it helpful to talk to other students making similar decisions, even though they're not "more qualified" than me. I suspect that other students/people feel similarly, and one doesn't need to be a career coach to be helpful.

omernevo · 2y · 5 karma
That's really interesting! There are probably quite a few different formats for doing this sort of thing (one-on-ones with people facing the same dilemmas or people who have faced them recently, bringing together groups of people in similar situations, etc.). I think some local groups are doing things like this, but it's definitely something we should think about as an option that can potentially be relatively low effort and (hopefully) high impact.
Thoughts on whether we're living at the most influential time in history

I will henceforth consider everything that Carl writes to be in a parenthetical.

EA Forum Prize: Winners for August 2020

This creates weird incentives, e.g. I could construct a plausible-but-false view, make a post about it, then make a big show of changing my mind. I don't think the amounts of money involved make it worth it, but I'm wary of incentivizing things that are so easily gamed. 

AI risk hub in Singapore?

This is an interesting strategic consideration! Thanks for writing it up.

Note that the probability of AsianTAI/AsianAwarenessNeeded depends on whether or not there is an AI risk hub in Asia. In the extreme, if you expect making aligned AI to take much longer than unaligned AI, then making Asia concerned about AI risk might drive the probability of AsianTAI close to 0. Given how rough the model is, I don't think this matters that much.

Delegate a forecast

How many EA forum posts will there be with greater than or equal to 10 karma submitted in August of 2020?

elifland · 2y · 2 karma
Here's my forecast [https://elicit.ought.org/builder/_6_SZ-Jjg]. The past is the best predictor of the future, so I looked at past monthly data [https://forum.effectivealtruism.org/allPosts?timeframe=monthly&sortedBy=top] as the base rate. I first tried to tease out whether which months had more activity was correlated between 2020 and 2019. It seemed there was a weak negative correlation, so I figured my base rate should be based on just the past few months of data. In addition to the past few months of data, I considered that part of the catalyst for record-setting July activity might be Aaron's "Why you should put on the EA Forum" EAGx talk [https://www.youtube.com/watch?v=FdrTMUQdBSg]. Due to this possibility, I gave August a 65% chance of exceeding the base rate of 105 posts with >=10 karma. My numerical analysis is in this sheet [https://docs.google.com/spreadsheets/d/1n9V6TeOyGBqy3ZfuqX8EYLGDXLZW5aIh0GrMMfvE2U4/edit#gid=0].
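[Editor's note: a minimal sketch of one way to turn the two numbers in this reply (a base rate of 105 posts and a 65% chance of exceeding it) into a forecast distribution. This is not elifland's actual Elicit model; the normal shape and the 15-post standard deviation are assumptions made for illustration.]

```python
from scipy.stats import norm

base_rate = 105   # recent monthly count of posts with >=10 karma (from the reply)
p_above = 0.65    # judgmental probability that August exceeds the base rate
spread = 15       # assumed standard deviation of the monthly count (not from the reply)

# Pick the mean so that P(count > base_rate) = p_above under a normal model.
mean = base_rate - spread * norm.ppf(1 - p_above)

low, high = norm.ppf([0.05, 0.95], loc=mean, scale=spread)
print(f"mean ~ {mean:.0f} posts, 90% interval ~ [{low:.0f}, {high:.0f}]")
```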
Will Three Gorges Dam Collapse And Kill Millions?

The Metaculus link is broken.

[This comment is no longer endorsed by its author]
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

In what meaningful ways can forecasting questions be categorized?

This is really broad, but one possible categorization might be questions that have inside view predictions versus questions that have outside view predictions.

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

How optimistic are you about "amplification" forecast schemes, where forecasters answer questions like "will a panel of experts say <answer> when considering <question> in <n> years?"

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

When I look at most forecasting questions, they seem Goodhart-y in a very strong sense. For example, the Goodhart tower for COVID might look something like:

1. How hard should I quarantine?

2. How hard I should quarantine is affected by how "bad" COVID will be.

3. How "bad" COVID should be caches out into something like "how many people", "when vaccine coming", "what is death rate", etc.

By the time something I care about becomes specific enough to be predictable/forecastable, it seems like most of the thing I a... (read more)