EA forum content might be declining in quality. Here are some possible mechanisms:
Another possible mechanism is forum leadership encouraging people to be less intimidated and write more off-the-cuff posts -- see e.g. this or this.
Side note: It seems like a small amount of prize money goes a long way.
E.g. Rethink Priorities makes their salaries public: they pay senior researchers $105,000 – $115,000 per year.
Their headcount near the end of 2021 was 24.75 full-time equivalents.
And their publications page lists 30 publications in 2021.
So napkin math suggests that the per-post cost of a contest post is something like 1% of the per-post cost of a RP publication. A typical RP publication is probably much higher quality. But maybe sometimes getting a lot of shallow explorations quickly is what's desired. (Disclaimer: I haven't been reading the forum much, didn't read many contest posts, and don't have an opinion about their quality. But I did notice the organizers of the ELK contest were "surprised by the number and quality of submissions".)
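Spelling the napkin math out (the fully loaded cost per FTE and the contest figures below are guesses for illustration, not numbers from any particular contest):

```python
# Rough napkin math behind the ~1% figure. The RP numbers come from the
# public sources above; the overhead multiplier and contest figures are
# assumptions made up for illustration.
rp_fte = 24.75                 # full-time equivalents, late 2021
cost_per_fte = 150_000         # assumed fully loaded cost (salary + overhead)
rp_publications = 30           # public publications in 2021
cost_per_rp_publication = rp_fte * cost_per_fte / rp_publications  # ~$124k

prize_pool = 20_000            # assumed total prize money for a contest
contest_posts = 20             # assumed number of substantive entries
cost_per_contest_post = prize_pool / contest_posts                 # ~$1k

print(cost_per_contest_post / cost_per_rp_publication)  # ~0.008, i.e. roughly 1%
```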
A related point re: quality is that smaller prize pools presumably select for people with lower opportunity costs. If I'm a talented professional who commands a high hourly rate, I might do the expected value math o... (read more)
Hey just want to weigh in here that you can't divide our FTE by our total publication count, since that doesn't include a large amount of work we've produced that is not able to be made public or is not yet public but will be. Right now I think a majority of our output is not public for one reason or another, though we're working on finding routes to make more of it public.
I do think your general point that the per-post cost of a contest post is less / much less than an RP post is accurate, though.
-Peter (Co-CEO of Rethink Priorities)
This is a plausible mechanism for explaining why content is of lower quality than one would otherwise expect, but it doesn't explain differences in quality over time (and specifically quality decline), unless you add extra assumptions such that the proportion of people with low bars to posting has increased recently. (Cf. Ryan's comment)
You're quite right, it was left too implicit.
often people who aren't aware of or care about forum norms
EA has grown a lot recently, so I think there are now more people who aren't aware of, or don't care about, the "high bar" norm. This is in part due to others explicitly saying the bar should be lower, which (as others here have noted) has a stronger effect on some than on others.
Edit: I don't have time to do this right now, but I would be interested to see a plot, over time, of the proportion of EA Forum posts written by people who have been on the forum for less than a year. I suspect that it would be trending upwards (but could be wrong). This would be a way to empirically verify part of my claim.
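If someone with access to the data wants to try this, a minimal sketch of the calculation (the column names here are hypothetical; the real ones depend on how the Forum stores post and account metadata):

```python
import pandas as pd

# Hypothetical post data: one row per post, with the posting date and the
# date the author's account was created.
posts = pd.DataFrame({
    "posted_at": pd.to_datetime(["2021-03-01", "2021-07-15", "2022-04-01"]),
    "author_join_date": pd.to_datetime(["2019-01-01", "2021-06-01", "2022-02-01"]),
})

tenure_days = (posts["posted_at"] - posts["author_join_date"]).dt.days
posts["new_author"] = tenure_days < 365

# Fraction of posts by authors with <1 year on the forum, per month
monthly_share = posts.set_index("posted_at").resample("M")["new_author"].mean()
print(monthly_share)
```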
I'm interested in learning how plausible people find each of these mechanisms, so I created a short (anonymous) survey. I'll release the results in a few days [ETA: see below]. Estimated completion time is ~90 seconds.
I broadly agree with 5 and 6.
Re 3, 'There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.' - this makes me wince. A language point is that I think talking about how 'good quality' people are overall is unkind and leads to people feeling bad about themselves for not having such-and-such an attribute. An object-level point is that I don't think there is an anticorrelation - being a busy EA org person does make it more likely that someone will have valuable takes, but not being a busy-EA-org-person doesn't make it less likely - there aren't that many busy-EA-org-person jobs, and some people aren't a good fit for busy jobs (e.g. because of their health or family commitments) but they still have interesting ideas.
Re 7: I'm literally working on a post with someone about how lots of people feel too intimidated to post on the Forum because of its perceived high standards! So I think that although the Forum team is trying to make people feel welcome, the Forum isn't (yet) actually optimized for this, imo.
There's a kind of general problem whereby any messaging or mechanism that's designed to dissuade people from posting low-qual... (read more)
I think it's fairly clear which of these are the main factors, and which are not. Explanations (3-5) and (7) do not account for the recent decline, because they have always been true. Also, (6) is a weak explanation, because the quality wasn't substantially worse than that of an average post.
On the other hand, (1-2) +/- (8) fit perfectly with the fact that volume has increased over the last 18 months, the same period in which large-scale community-building has happened. And I can't think of any major contributors outside of (1-8), so I think the main causes are simply community dilution + a flood of newbies.
Though the other factors could still partially explain why the level (as opposed to the trend) isn't better, and arguably the level is what we're ultimately interested in.
There are other platforms for deep, impact-focused research.
Could you name them? I'm not sure which ones are out there, other than LW and Alignment Forum for AI alignment research.
E.g. I'm not sure where else would be a better place to post research on forecasting, research on EA community building, research on animal welfare, or new project proposals. There are private groups and slacks, but sometimes what you want is public or community engagement.
How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that's
I know I could potentially have higher impact just betting on saving 10 million shrimp or whatever, but I have enough moral uncertainty that I would highly value this kind of offset package. My guess is there are lots of people for whom going vegan is not possible or desirable, who would be in the same boat.
Have you seen farmkind? Seems like they are trying to provide a donation product for precisely this problem.
Suppose that the EA community were transported to the UK and US in 1776. How fast would slavery have been abolished? Recall that the slave trade ended in 1807 in the UK and 1808 in the US, and abolition happened between 1838-1843 in the British Empire and 1865 in the US.
Assumptions:
Note... (read more)
I suspect the net impact would be pretty low. Most of the really compelling consequentialist arguments like "if we don't agree to this there will be a massive civil war in future" and "an Industrial Revolution will leave everyone far richer anyway" rely on future knowledge that your thought experiment strips people of. It didn't take complex utility calculations to persuade people that slaves experienced welfare loss; it took deontological arguments rooted [mainly] in religious belief to convince people that slaves were actually people whose needs deserved attention. And Jeremy Bentham was already there to offer utilitarian arguments, to the extent people were willing to listen to them.
And I suspect that whilst a poll of Oxford-educated utilitarian pragmatists with a futurist mindset transported back to 1776 would near-unanimously agree that slavery wasn't a good thing, they'd probably devote far more of their time and money to stuff they saw as more tractable like infectious diseases and crop yields, writing some neat Benthamite literature and maybe a bit of wondering whether Newcomen engines and canals made the apocalypse more likely.
I can't imagine the messy political compromise tha... (read more)
Worth noting that if there are something like 10,000 EAs today in a world with a population of 8,000,000,000, EAs make up 0.000125 percent of the global population.
If we keep the same proportion and apply that to the world population in 1776, there would be about 1,000 EAs globally and about 3 EAs in the United States. If they were overrepresented in the United States by a factor of ten, there would be about 30.
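For reference, the arithmetic (the 1776 population figures are rough estimates):

```python
eas_today = 10_000
world_pop_today = 8_000_000_000
ea_fraction = eas_today / world_pop_today        # 1.25e-6, i.e. 0.000125%

world_pop_1776 = 800_000_000   # rough estimate of world population in 1776
us_pop_1776 = 2_500_000        # rough estimate of the 13 colonies' population

print(round(ea_fraction * world_pop_1776))       # ~1,000 EAs worldwide
print(round(ea_fraction * us_pop_1776))          # ~3 EAs in the US
print(round(ea_fraction * us_pop_1776 * 10))     # ~31 if overrepresented 10x
```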
Not sure how to post these two thoughts so I might as well combine them.
In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons; a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.
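To make the deterrence point concrete with made-up numbers: suppose the scheme had, ex ante, a 5% chance of making him a trillionaire and a 95% chance of a 25-year sentence, and that a risk-neutral actor values 25 years in prison as roughly equivalent to losing $1B. None of these figures are estimates of SBF's actual odds; they just illustrate the shape of the calculation.

```python
# Illustrative numbers only, not estimates of the actual case.
p_success = 0.05          # assumed ex ante chance the scheme pays off
upside = 1e12             # becoming a trillionaire
p_caught = 0.95
prison_disvalue = 1e9     # assumed dollar-equivalent disvalue of 25 years in prison

expected_value = p_success * upside - p_caught * prison_disvalue
print(f"{expected_value:,.0f}")   # ~49,050,000,000: still massively positive,
                                  # so a 25-year sentence barely deters
```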
However, I also think many lessons from SBF's personal statements, e.g. his interview on 80k, are still as valid as ever. Just off the top of my head:
Just because SBF stole billions of dollars does not mean he has fewer virtuous personalit... (read more)
Watch team backup: I think we should be incredibly careful about saying things like, "it is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating". I'm sure you mean something reasonable when you say this, similar to what's expressed here, but I still wanted to flag it.
I noticed this a while ago. I don't see large numbers of low-quality low-karma posts as a big problem though (except that it has some reputation cost for people finding the Forum for the first time). What really worries me is the fraction of high-karma posts that are neither original, rigorous, nor useful. I suggested some server-side fixes for this.
PS: #3 has always been true, unless you're claiming that more of their output is private these days.
Should the EA Forum team stop optimizing for engagement?
I heard that the EA forum team tries to optimize the forum for engagement (tests features to see if they improve engagement). There are positives to this, but on net it worries me. Taken to the extreme, this is a destructive practice, as it would
I'm not confident that EA Forum is getting worse, or that tracking engagement is currently net negative, but we should at least avoid failing this exercise in Goodhart's Law.
Thanks for this shortform! I'd like to quickly clarify a bit about our strategy. TL;DR: I don't think the Forum team optimizes for engagement.
We do track engagement, and engagement is important to us, since we think a lot of the ways in which the Forum has an impact are diffuse or hard to measure, and they'd roughly grow or diminish with engagement.
But we definitely don't optimize for it, and we're very aware of worries about Goodharting.
Besides engagement, we try to track estimates for a number of other things we care about (like how good the discussions have been, how many people have gotten jobs as a result of the Forum, etc), and we're actively working on doing that more carefully.
And for what it's worth, I think that none of our major projects in the near future (like developing subforums) are aimed at increasing engagement, and neither have been our recent projects (like promoting impactful jobs).
I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons:
However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with ... (read more)
Thanks for writing this.
I also agree that research into how laypeople actually think about morality is probably a very important input into our moral thinking. I mentioned some reasons for this in this post for example. This project on descriptive population ethics also outlines the case for this kind of descriptive research. If we take moral uncertainty and epistemic modesty/outside-view thinking seriously, and if on the normative level we think respecting people's moral beliefs is valuable either intrinsically or instrumentally, then this sort of research seems entirely vital.
I also agree that incorporating this data into our considered moral judgements requires a stage of theoretical normative reflection, not merely "naively deferring" to whatever people in aggregate actually believe, and that we should probably go back and forth between these stages to bring our judgements into reflective equilibrium (or some such).
That said, it seems like what you are proposing is less a project and more an enormous research agenda spanning several fields of research, a lot of which is ongoing across multiple disciplines, though much of it is in its early stages. For example, there is much w... (read more)
I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.
My current moral views seem to be something close to "reflected" preference utilitarianism, but now that I think this is my view, I find it quite hard to figure out what this actually means in practice.
My impression is that most EAs don't have a very preference utilitarian view and prefer to advocate for their own moral views. You may want to look at my most recent post on my shortform on this topic.
If you would like to set up a call sometime to discuss further, please PM!
First, neat idea, and thanks for suggesting it!
Is there a reason this isn't being done? Is it just too expensive?
From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:
1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5 to 20 year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say you won't have to fire your staff in 3 years?
On doing anthropology, I personally think there might be lower hanging fruit first engaging with other written moral systems we haven't engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.
Terminology proposal: a class-n (or tier-n) megaproject reduces x-risk by between 10^-n and 10^-(n+1). This is intended as a short way to talk about the scale of longtermist megaprojects, inspired by 80k's scale-scale but a bit cleaner because people can actually remember how to use it.
Class-0 project: reduces x-risk by >10%, e.g. creating 1,000 new AI safety researchers as good as Paul Christiano
Class-1 project: reduces x-risk by 1-10%, e.g. reducing pandemic risk to zero
Class-2 project: reduces x-risk by 0.1-1%, e.g. the Anthropic interpretability team
Class-3 project: reduces x-risk by 0.01-0.1%, e.g. most of these, though some make it into class 2
The classes could also be non-integer for extra precision, so if I thought creating 1,000 Paul Christianos reduced x-risk by 20%, I could call it a -log10(20%) = class-0.70 megaproject.
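In code, the (possibly non-integer) class is just the negative base-10 log of the x-risk reduction; a quick sketch:

```python
import math

def megaproject_class(xrisk_reduction: float) -> float:
    """Class number for a project that reduces x-risk by the given fraction."""
    return -math.log10(xrisk_reduction)

print(megaproject_class(0.20))    # 0.70 -> the class-0.70 example above
print(megaproject_class(0.005))   # 2.30 -> falls in class 2 (0.1%-1%)
```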
I'm still not sure about some details, so leave a comment if you have opinions:
I might want to become a billionaire for roughly the reasons in this post [1] (tl;dr EV is tens of millions per year and might be the highest EV thing I can do), and crypto seems like one particularly promising way. I have other possible career paths, but my current plan is to
- accumulate a list of ~25 problems in crypto that could be worth $1B if solved
- hire a research assistant to look over the list for ~100 hours and compile basic stats like "how crowded is this" and estimating market size
- talk with experts about the most promising ones
- if one is particularly promising, do the standard startup things (hire smart contract and front-end devs, get funding somehow) and potentially drop out of school
Does this sound reasonable? Can you think of improvements to this plan? Are there people I should talk to?
[1]: https://forum.effectivealtruism.org/.../an-update-in...
I'm looking for AI safety projects to join, with people who have some amount of experience. I have 3/4 of a CS degree from Caltech, one year at MIRI, and have finished the WMLB and ARENA bootcamps. I'm most excited about activation engineering, but willing to do anything that builds research and engineering skill.
If you've published 2 papers in top ML conferences or have a PhD in something CS related, and are interested in working with me, send me a DM.
Who tends to be clean?
With all the scandals in the last year or two, has anyone looked at which recruitment sources are least likely to produce someone extremely net negative in direct impact or to the community (i.e. a justified scandal)? Maybe this should inform outreach efforts.
How much equity does SBF actually have in FTX? Posts like this imply he has 90%, but the first article I found said that he actually had 90% equity in Alameda (which is owned by FTX or something?) and nothing I can find gives a percentage equity in FTX. Also, FTX keeps raising money, so even if he had 90% at one point, surely much of that has been sold.
It's common for people to make tradeoffs between their selfish and altruistic goals with a rule of thumb or pledge like "I want to donate X% of my income to EA causes" or "I spend X% of my time doing EA direct work" where X is whatever they're comfortable with. But among more dedicated EAs where X>>50, maybe a more useful mantra is "I want to produce at least Y% of the expected altruistic impact that I would if I totally optimized my life for impact". Some reasons why this might be good:
Epistemic status: showerthought
If I'm capable of running an AI safety reading group at my school, and I learn that someone else is doing it, I might be jealous that my impact is "being taken".
If I want to maximize total impact, I don't endorse this feeling. But what feeling does make sense from an impact maximization perspective? Based on Shapley values, you should
Is it possible to donate appreciated assets (e.g. stocks) to one of the EA Funds? The tax benefits would be substantially larger than donating cash.
I know that MIRI and GiveWell as well as some other EA-aligned nonprofits do support donating stocks. GiveWell even has a DAF with Vanguard Charitable. But I don't see such an option for the EA Funds.
edit: DAF = donor-advised fund
What's the right way to interact with people whose time is extremely valuable, equivalent to $10,000-$1M per hour of OpenPhil's last dollar? How afraid should we be of taking up their time? Some thoughts:
I think there are currently too few infosec people and people trying to become billionaires.
What percent of Solana is held by EAs? I've heard FTX holds some, but unknown how much. This is important because if I do a large crypto project on Solana, much of the value might come from increasing the value of the Solana ecosystem, and thus other EAs' investments.
Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.
I agree with these points. I've worked in the space for a few years (most notably for IOHK working on Cardano) and am happy to offer some advice. Saying that, I would much rather work on something directly valuable (climate change or food security) than earning to give at the moment...
A lot of EAs I know have had strong intuitions towards scope sensitivity, but I also remember having strong intuitions towards moral obligation, e.g. I remember being slightly angry at Michael Phelps' first retirement, thinking I would never do this and that top athletes should have a duty to maximize their excellence over their career. Curious how common this is.
I want to skill up in pandas/numpy/data science over the next few months. Where can I find a data science project that is relevant to EA? Some rough requirements: