(Maybe obvious point, but) there just aren't that many people doing longtermist EA work, so basically every problem will look understaffed, relative to the scale of the problem.
I think the model setup, or at least the clarifications around it, needs tweaking. Namely, you're assuming that the main reason we may discontinue a researched-to-be-positive intervention is intrinsic time preference. But I think it's much more likely that over enough time there will be distributional shift/generalizability issues with old studies.
For one example, if we're all dead, a lot of studies are kind of useless. For another example, studies on the cost-effectiveness of (e.g.) malaria nets and deworming pills become increasingly out-of-distribution as (thankfully!) malaria and intestinal worm loads decrease worldwide, perhaps in the future approaching zero.
if you have good evidence that your shape rotator abilities aren’t reasonably strong — e.g., you studied reasonably hard for SAT Math but got <700 (90th percentile among all SAT takers).
This is really minor, but I think there's a weird conflation of spatial-visual reasoning and mathematical skills in this post (and related memespaces like roon's). This very much does not fit my own anecdotal experiences*, and I don't think this is broadly justified in psychometrics research.
*FWIW, I've bounced off of AISTR a few times. I usually attribute the d... (read more)
I’d still be interested in hearing how the existing level of COIs affects your judgement of EA epistemics.
I'm confused by this. My inside view guess is that this is just pretty small relative to other factors that can distort epistemics. And for this particular problem, I don't have a strong coherent outside view because it's hard to construct a reasonable reference class for what communities like us with similar levels of CoIs might look like.
Ironically, the person I mentioned in my previous comment is one of the main players at Anthropic, so your second paragraph doesn't give me much comfort.
I don't understand your sentence/reasoning here. Naively this should strengthen ofer's claim, not weaken it.
Here's my general stance on integrity, which I think is a superset of issues with CoI.
As noted by ofer, I also think investments are structurally different from grants.
Speaking for myself, I was interested in a lot of the same things in the LW cluster (Bayes, approaches to uncertainty, human biases, utilitarianism, philosophy, avoiding the news) before I came across LessWrong or EA. The feeling is much more like "I found people who can describe these ideas well" than "oh these are interesting and novel ideas to me." (I had the same realization when I learned about utilitarianism...much more of a feeling that "this is the articulation of clearly correct ideas, believing otherwise seems dumb"). That said, some of the ideas ... (read more)
I think an ideal social norm would be only reviewing an organization after you are no longer working with them and thus can review the full experience.
This implies that a nontrivial fraction of employees would leave, which seems true of some EA orgs and not others (and I think the difference is non-random for pretty obvious reasons).
See also Altruism as a central purpose.
I'm a bit confused by both this post and the comments about questions like the level and timing at which the deference happens. Speaking for myself, if an internet rando wrote a random blog post called "AGI Ruin: A List of Lethalities," I probably would not read it. But I did read Yudkowsky's post carefully and thought about it nontrivially, mostly due to his track record and writing ability (rather than e.g. because the title was engaging or because the first paragraph was really well-argued).
Fair, though many EAs are probably in positions where they can talk to other billionaires (especially with >5 hours of planning), and probably chose not to do so.
In 2015, when I was pretty new to EA, I talked to a billionaire founder of a company I worked at and tried to pitch them on it. They seemed sympathetic, but empirically it's been 7 years and they haven't really done any EA donations or engaged much with the movement. I wouldn't be surprised if my actions made it at least a bit harder for them to be convinced of EA stuff in the future. In 2022, I probably wouldn't do the same thing again, and if I did, I'd almost certainly try to coordinate a bunch more with the relevant professionals first. Certainly the current generation of younger highly engaged EAs seems more deferential (for better or worse), and similar actions wouldn't be in the Overton window.
My understanding is that without altruistic end-buyers, the intrinsic value of impact certificates becomes zero and it's entirely a confidence game.
Thank you, this clarification makes sense to me!
This critique strikes me as about as sensible as digging up someone's old high-school essays and critiquing their stance on communism or the criminal justice system. I want to remind any reader that this is an opinion from 1999, when Eliezer was barely 20 years old. I am confident I can find crazier and worse opinions for every single leadership figure in Effective Altruism, if I am willing to go back to what they thought while they were in high-school. To give some character, here are some things I believed in my early high-school years
This is really mino... (read more)
Oh, hmm, I think this is just me messing up the differences between the U.S. and German education systems (I was 18 and 19 in high school, and enrolled in college when I was 20).
I think the first quote on nanotechnology was actually written in 1996 originally (though it was maybe updated in 1999), which would put Eliezer at ~17 years old when he wrote it.
The second quote was, I think, written closer to 2000, which would put him in his early college years, and I agree that it seems good to clarify that.
Sure, that makes more sense to me. I was previously reading "few" as 2-4 times, and was thinking that's way too few times to be asking for help from coworkers total in a week, but a bit too high to be asking (many) specific senior people for help each year.
My guess is that people should ask their friends/colleagues/acquaintances for help with things a few times a week, and ask senior people they don't know for help with things a few times a year.
Is this a few times each person, or a few times total? It's hard for me to tell because either seems slightly off to me.
I meant like maybe 3-15 times total ("few" was too ambiguous to be a good word choice).
Writing that out, maybe I want to change it to 3-30 (the top end of which doesn't feel quite like "a few"). And I can already feel how I should be giving more precise categories // how taking what I said literally will mean not doing enough asking in some important circumstances, even if I stand by my numbers in some important spiritual sense.
Anyway I'm super interested to get other people's guesses about the right numbers here. (Perhaps with better categories.)
Have you considered that deworming may be a perpetual need, while influencing a decision that motivates sustainable systemic change may be a permanent solution? This could justify spending on advocacy in general.
It's an interesting hypothesis, but I don't think deworming is a perpetual need? I don't think I took deworming pills growing up, and I doubt most Forum readers did.
Framed another way, I don't think we should have a strong prior belief that if we subsidize health interventions for X years, this means they'll need to be continuously subsidized by t... (read more)
Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."
The synthesis position might be something like "some subset of OP made a mistake because they were subconsciously politically or PR motivated and unintentionally made sub-optimal grants."
I think this is a reasonable candidate hypothesis, and should not be that much of a surprise, all things considered. We're all human.
Yeah I mean, no kidding. But it's called Open Philanthropy. It's easy to imagine there exists a niche for a meta-charity with high transparency and visibility. It also seems clear that Open Philanthropy advertises itself as filling this niche as much as possible, and that donors do want this.
I don't understand this point. Can you spell it out? From my perspective, Open Phil's main legible contributions are a) identifying great donation opportunities, b) recommending that Cari Tuna and Dustin Moskovitz donate to such opportunities, and c) building up an ... (read more)
A general policy I've adopted recently as I've gotten more explicit* power/authority than I'm used to is to generally "operate with slightly to moderately more integrity than I project explicit reasoning or cost-benefit analysis would suggest."
This is primarily for epistemics and community epistemics reasons, but secondarily for optics reasons. I think this almost certainly does risk leaving value on the table, but on balance it is better than the potential alternatives:
A lot of people, myself included, had relatively weak priors on the effects of marginal imprisonment on crime, and were subsequently convinced by the Roodman report. It might be valuable for people interested in this or adjacent cause areas to commission a red-teaming of the Roodman report, perhaps by the CityJournal folks?
I'm very excited about this and there's a ~70% chance I will be interested in attending assuming it makes sense for me to do so!
I don't know how much credit/inspiration this should really give people. As you note, the other conditions of EA org work are often better than those of external jobs (though this is far from universal). And as you allude to in your post, there are large quality-of-life improvements from working on something that genuinely aligns with my values. At least naively, for many people (myself included) it is selfishly worth quite a large salary cut to do this. Many people both in and outside of EA also take large salary cuts to work in government and academia, sometimes with less direct alignment with their values, and often with worse direct working conditions.
Thanks for the explanation. My impression is that
Thanks, this makes sense! I do appreciate you (and others) thinking clearly about this, and your interest in safeguarding the future.
One issue here with some of the latter numbers is that a lot of the work is being done by the expected value of the far future being very high, and (to a lesser extent) by us living at the hinge of history. Among the set of potential longtermist projects to work on (e.g. AI alignment vs. technical biosecurity, EA community building, longtermist grantmaking, AI policy, or macrostrategy), I don't think the present analysis of very high ethical value (in absolute terms) should be dispositive in causing someone to choose a career in AI alignment.
Also, you are assuming an erroneous dynamic. Animal welfare is important for AI safety not only because it enables it to acquire a diametrically different impact, but also because it provides a connection to the agriculture industry, a strategic sector in all nations. Once you have the agri lobbies on board, you can speak sincerely with the US and Europe, at least, about safety.
Can you spell out the connection?
EAs have legible achievements in x-risk-adjacent domains (e.g. a highly cited COVID paper in Science, and Reinforcement Learning from Human Feedback, which was used to power stuff like InstructGPT), and illegible achievements in stuff like field-building and disentanglement research.
However, the former doesn't have a clean connection to actually reducing x-risk, and the latter isn't very legible.
So I think it is basically correct that we have not done legible things to reduce object-level x-risk like cause important treaties to be signed, ban gain-of-function res... (read more)
Up until recently, the vast majority of EA donations came from Open Philanthropy, so you can look at their grants database to get a pretty good sense.
needs to grant access.
We might also want to give praise to users who have a high ratio of highly upvoted comments to posts
One thing that confuses me is that the karma metric probably already massively overemphasizes rather than underemphasizes the value of comments relative to posts. Writing 4 comments that have ~25 karma each probably provides much less value (and certainly takes me much less effort) than writing a post that gets ~100 karma.
I think for moderate to high levels of x-risk, another potential divergence is that while both longtermist and non-longtermist axiologies will lead you to believe that large-scale risk prevention and mitigation is important, the specific actions people take may differ. For example:
Some of the... (read more)
You may benefit from joining the Facebook group Bay Area Effective Altruists, which does list some events. It's easier to find events if you're willing to widen your search space to include the East Bay and South Bay, though it's still not amazing. For as long as I've been here, the Bay Area has had substantially fewer publicly accessible EA events than other EA hubs. Most EAs coming to the area find connections via pre-existing personal or professional networks.
This is a Known Issue, but relatively few people try very hard to resolve it. Or more precisely, many pe... (read more)
I think your two comments here are well-argued, internally consistent, and strong. However, I think I disagree with
As, to a first approximation, reality works in first-order terms
in the context of EA career choice writ large, which I think may be enough to flip the bottom-line conclusion.
I think the crux for me is that if the differences in object-level impact across people/projects are high enough, then for anybody whose career or project is not in the small subset of the most impactful careers/projects, their object-level impacts will ... (read more)
I'm curious whether people have thoughts on whether this analysis of problem-level tractability also applies to personal fit. I think many of the arguments here naively seem like they should apply to personal fit as well. Yet many people (myself included) make consequential career- and project-selection decisions based on strong intuitions of personal fit.
This article makes a strong argument that it'd be surprising if tractability (but not importance, or to a lesser degree neglectedness) could differ by >2 OOMs. In a similar vein, I think it'd also ... (read more)
Many readers will be familiar with Peter Singer’s Drowning Child experiment:
Peter Singer's Drowning Child thought experiment.
A "Drowning Child experiment" will be substantially more concerning
You got a lot of flak for this post, and I think many of the dissenting comments were good (I strongly upvoted the top one). I also think some specific points could be better argued, and it'd be quite valuable to have a deeper understanding of the downside risks and of where the bottom-line advice is not applicable.
Nonetheless, I think I should mention publicly that I broadly agree with this post. I think the post advances a largely correct bottom-line conclusion, and for the right reasons. I think many EAs in positions to do so, for example undergrads/grad... (read more)
I think the reasoning is sound. One caveat on the specific numbers/phrasing:
So whilst I think it's true for some EAs that EA jobs offer slightly less pay [emphasis mine] relative to their other options
To be clear, many of us originally took >>70% pay cuts to do impactful work, including at EA orgs. EA jobs pay more now, but I imagine being paid <50% of what you'd otherwise earn elsewhere is still pretty normal for a fair number of people in meta and longtermist roles.
I agree with the rest of your comparisons but I think this one is suspect:
Compare the salaries of ETG EAs with non-ETG EAs that are otherwise as similar as possible, e.g. a quant researcher at Jane Street vs one at Redwood Research. Usually, I think the ETG EAs earn more.
"Pure" ETG positions are optimized for earning potential, so we should expect them to be systematically more highly paid than other options.
There's one example comparison here and to clarify I think this is most true for more meta/longtermist organisations, as salaries within animal welfare (for example) are still quite low IMO[...] Rethink Priorities
Please note that Rethink Priorities, where I work, has the same salary band across cause areas.
What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?
I think the standard thing is for many orgs and cultures to start off open and transparent and move towards closedness and insularity. There are good object-level reasons for the former, and good object-level reasons for the latter, but taken as a whole, it might be better viewed as a lifecycle thing rather than as a matter of principled arguments.
Open Phil is an unusually transparent and well-documented example in my mind (though perhaps this is changing again in 2022)
At least for me, I thought we should avoid talking about the pivotal act stuff through a combination of a) this is obviously an important candidate hypothesis but seems bad to talk about because then the Bad Guys will Get Ideas, and b) other people who're better at math/philosophy/alignment presumably know this and are privately considering it in detail, so I have only so much to contribute here.
b) is plausibly a dereliction of duty, as is my relative weighting of the terms, but at least in my head it wasn't (isn't?) obvious to me that it was wrong for me not to spend a ton of time thinking about pivotal acts.
I upvoted this even though I strongly disagree with it.
(However, for other readers, just in case it needs saying: please make your own independent assessment of whether this post is overall worthwhile*).
*One thing I dislike is overcorrection for "oh no I might be biased for liking/not liking this post, so I can't downvote it"
(strongly upvoted because I think this is a clean explanation of what I think is an underrated point at the current stage, particularly among younger EAs).
I agree with others here that it's not clear whether undifferentiated scientific progress is good or bad at the current margin.
However, assuming scientific progress is good, I'm also not convinced that breaking up elite colleges will increase scientific progress. Some counterpoints:
Scientific progress has been the root of so much progress, I think we should have a strong prior that more of it is good!
See discussion here.
Also cost-effectiveness analyses in general, of which only a subset is in EA.
I don't use this framing very often because I think it confuses more than enlightens, but I roughly mean something similar to #3:
13. I value this action roughly equivalently to "EA coffers" increasing by ~$10k.
Personally, I primarily downvote posts/comments where I generally think "reading this post/comment will on average make forum readers worse at thinking about this problem than if they didn't read this post/comment, assuming that the time spent reading this post/comment is free."
I basically never strong-downvote posts unless they're obvious spam or otherwise extremely bad offenders in the "worsens thinking" direction.
Thanks for the tip!