I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people. Example: if people work for 40 years, then 10 people donating 10% of their time gives you 10 years with 0 experience, 10 with 1 year, 10 with 2 years, and 10 with 3 years; however, if someone is doing EA work full-time, you get 1 year with 0 exp, 1 with 1, 1 with 2, etc. I expect 1 year with 20 years of experience to plausibly be as good/useful as 10 with 3 years of experience....
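A minimal sketch of the arithmetic, assuming a 40-year career and that EA experience accrues in proportion to the time actually spent on EA work (both assumptions are mine, purely for illustration):

```python
CAREER_YEARS = 40

part_time = {}               # person-years of EA work, keyed by the worker's EA experience in years
for year in range(CAREER_YEARS):
    experience = year // 10  # at 10% time, a part-timer gains ~1 year of experience per 10 calendar years
    part_time[experience] = part_time.get(experience, 0) + 1  # 10 people x 10% time = 1 person-year per calendar year

full_time = {year: 1 for year in range(CAREER_YEARS)}  # one person working full-time

print(part_time)  # {0: 10, 1: 10, 2: 10, 3: 10} -- 40 person-years, all with under 4 years' experience
print(full_time)  # {0: 1, 1: 1, ..., 39: 1}     -- the same 40 person-years, with up to 39 years' experience
```

Both scenarios deliver the same 40 person-years of work; they differ only in how much accumulated experience backs each of those years.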
I expect 10 people donating 10% of their time to be less effective than 1 person using 100% of their time because you don't get to reap the benefits of learning for the 10% people [emphasis mine]
"benefits of learning" doesn't feel like the only reason, or even the primary reason, why I expect full-time EA work to be much more impactful than part-time EA work, controlling for individual factors. To me, network/coordination costs seem much higher. E.g. it's very hard to manage a team of volunteer researchers or run an org where people volunteer 4h/week on average, and presumably less consistently.
One key difference is that "continuing school" usually has a specific mental image attached, whereas "drop out of school" is much vaguer, making the two difficult to compare.
Ah, I see. I guess I kind of buy this, but I don't think it's nearly as cut-and-dried as you argue, or something. Not sure how much this generalizes, but to me "staying in school" has been an option that conceals approximately as many major suboptions as "leaving school." I'd argue that for many people, this is approximately true - that is, people have an idea of where they'd want to work or what they'd want to do given leaving school, but broadly "staying in school" could mean anything from staying on ~exactly the status quo to transferring somewhere in a different country, taking a gap year, etc.
Many people in EA depart from me here: they see choices that do not maximize impact as personal mistakes. Imagine a button that, if you press it, would cause you to always take the impact-maximizing action for the rest of your life, even if it entails great personal sacrifice. Many (most?) longtermist EAs I talk to say they would press this button – and I believe them. That’s not true of me; I’m partially aligned with EA values (since impact is an important consideration for me), but not fully aligned.
I think there are people (e.g. me) that value thing...
A title like "How many lives might have been saved given an earlier COVID-19 vaccine rollout?" would have given me much more information about what the post was about than the current title, which I find very vague.
Kindles are smaller, have backlights, and the Kindle store is a good user experience.
Note: I work for ARC.
I would consider someone a "pretty good fit" (whatever that means) for alignment research if they started out with a relatively technical background, e.g. an undergrad degree in math/CS, but hadn't really engaged with alignment before, and they were able to come up with a decent proposal after:
Can confirm we would be interested in hearing what you came up with.
nit: link on "reasons" was pasted twice. For others it's https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models
Also hadn't seen that paper. Thanks!
Ben Pace, Ben Kuhn, Ben Todd, Ben West, and Ben Garfinkel should all become the same person, to avoid confusion.
Looks like if this doesn't work out, I should at least update my surname...
I'm open to a legal arrangement of shared nationalities, bank accounts, and professional roles.
Thanks for writing this up. Just ordered a misto, elastic laces, and a waterpik. My own personal list of recommendations is on https://markxu.com/things, but it lacks justifications. Feel free to ask me about any of the items though.
Systematic undervaluing of some fields is not something I considered and slightly undermines my argument.
I still think the main problem would be identifying rising-star historians in advance instead of in retrospect.
Hey Charles! Glad to see that you're still around.
It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA
I don't think OpenPhil or the EA Funds are particularly funding constrained, so this seems to suggest that "people who can do useful things with money" is more of a bottleneck than money itself.
It seems easy to construct EA projects that benefit from monies and purchasable talent
I think I disagree about the quality of execution one is likely to get by purchasing talent. I agree that in areas like global health, ...
I am confused by EA orgs not meeting basic living thresholds. Could you provide some examples?
The purpose of hiring two people isn't just to do twice the amount of work. Two people can complement each other, creating a team which is better than the sum of their parts. Even two people with the same job title are never doing exactly the same work, and this matters in determining how much value they're adding to the firm. I think this works against the point you're making in this passage. Do you account for this somewhere else in your post, and/or do you think it affects your overall point?
My claim is that having one person with the skill-set of tw...
Rather than "earn to give" or "do direct work," I think it might be "try as hard as you can to become a highly talented person" (maybe by acquiring domain expertise in an important cause area).
"Try and become very talented" is good advice to take from this post. I don't have a particular method in mind, but becoming the Pareto best in the world at some combination of relevant skills might be a good starting point.
The flip side is that if you value money/monetary donations linearly—or more linearly than other talented people—then you’ve got a comparati
I'm excited about more efficient matching between people who want career advice and people who are not-maximally-qualified to give it, but can still help nonetheless. For example, when planning my career, I often find it helpful to talk to other students making similar decisions, even though they're not "more qualified" than me. I suspect that other students/people feel similarly and one doesn't need to be a career coach to be helpful.
I will now consider everything that Carl writes henceforth to be in a parenthetical.
This creates weird incentives, e.g. I could construct a plausible-but-false view, make a post about it, then make a big show of changing my mind. I don't think the amounts of money involved make it worth it, but I'm wary of incentivizing things that are so easily gamed.
This is an interesting strategic consideration! Thanks for writing it up.
Note that the probability of AsianTAI/AsianAwarenessNeeded depends on whether or not there is an AI risk hub in Asia. In the extreme, if you expect making aligned AI to take much longer than making unaligned AI, then making Asia concerned about AI risk might drive the probability of AsianTAI close to 0. Given how rough the model is, I don't think this matters that much.
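In symbols (my own decomposition, not necessarily how the original model carves things up), letting $H$ = "there is an AI risk hub in Asia":

$$P(\text{AsianTAI}) = P(\text{AsianTAI} \mid H)\,P(H) + P(\text{AsianTAI} \mid \neg H)\,P(\neg H)$$

so if concern about AI risk slows Asian AI development enough that $P(\text{AsianTAI} \mid H)$ is near 0, then raising $P(H)$ also pushes $P(\text{AsianTAI})$ down.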
How many EA forum posts will there be with greater than or equal to 10 karma submitted in August of 2020?
metaculus link is broken
In what meaningful ways can forecasting questions be categorized?
This is really broad, but one possible categorization might be questions that have inside view predictions versus questions that have outside view predictions.
How optimistic are you about "amplification" forecasting schemes, where forecasters answer questions like "will a panel of experts say <answer> when considering <question> in <n> years?"
When I look at most forecasting questions, they seem goodharty in a very strong sense. For example, the goodhart tower for COVID might look something like:
1. How hard should I quarantine?
2. How hard I should quarantine is affected by how "bad" COVID will be.
3. How "bad" COVID should be caches out into something like "how many people", "when vaccine coming", "what is death rate", etc.
By the time something I care about becomes specific enough to be predictable/forecastable, it seems like most of the thing I a...
I think this model is kind of misleading, and that the original astronomical waste argument is still strong. It seems to me that a ton of the work in this model is being done by the assumption of constant risk, even in post-peril worlds. I think this is pretty strange. Here are some brief comments:
- If you're talking about the probability of a universal quantifier, such as "for all humans x, x will die", then it seems really weird to say that this remains constant, even when the thing you're quantifying over grows larger.
- For instance, it seems clear that if