Strong disagree. A bioweapons lab working in secret on gain-of-function research for a somewhat belligerent despotic government, which denies everything after an accidental release, is nowhere near any model I have of 'scrupulous altruism'.
Ironically, the person I mentioned in my previous comment is one of the main players at Anthropic, so your second paragraph doesn't give me much comfort.
I'm talking about the unilateralist's curse with respect to actions intended to be altruistic, not the uncontroversial claim that people sometimes do bad things. I find it hard to believe that any version of the lab leak theory involved all the main actors scrupulously doing what they thought was best for the world.
I think we should be careful with arguments that such and such existential risk factor is entirely hypothetical.
I think we should be careful with arguments that existential risk discussions require lower epistemic standards. That could backf... (read more)
Is there any real-world evidence of the unilateralist's curse being realised? My sense historically is that this sort of reasoning to date has been almost entirely hypothetical, and has done a lot to stifle innovation and exploration in the EA space.
Another vote against this being a wise metric, here. Anecdotally, while writing my last post when (I thought) the prize was still running, I felt both a) incentivised to ensure the quality was as high as I could make it and b) less likely to actually post as a consequence (writing higher quality takes longer).
And that matches what I'd like to see on the forum - a better signal-to-noise ratio, which can be achieved both by increasing the average quality of posts and by decreasing the number of marginal posts.
Unsurprisingly I disagree with many of the estimates, but I very much like this approach. For any analysis of any action, one can divide the premises arbitrarily many times. You stop when you're comfortable that the priors you're forming are fine-grained enough that further refinement wouldn't be worth the opportunity cost, which is how any of us manages to take any action at all.
In the case of 'cluelessness', it honestly seems better framed as 'laziness' to me. There's no principled reason why we can't throw a bunch of resources at refining and parameterising... (read more)
I'm really not sure this is true. A market is one way of aggregating knowledge and preferences, but there are others (e.g. democracy). And as in a democracy, we expect many or most decisions to be better handled by a small group of people whose job it is.
This doesn't sound like most people's view on democracy to me. Normally it's more like 'we have to relinquish control over our lives to someone, so it gives slightly better incentives if we have a fractional say in who that someone is'.
I'm reminded of Scott Siskind on prediction markets - while there mi... (read more)
Fwiw I didn't downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I'm also finding it hard to parse some of what you say.
A system or pattern or general belief that leads to a defect or plausible potential defect (even if there is some benefit to it), and even if this defect is abstract or somewhat disagreed upon.
This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I'll give what I believe is a fairly clear example from my own recent investigation... (read more)
For those who enjoy irony: the upvotes on this post pushed me over the threshold not only for 6-karma strong upvotes, but also for my 'single' upvote now being double-weighted.
Often authors mention the issue, but don't offer any specific instances of groupthink, or explain how their solution solves it, even though that seems easy to do given they wrote up a whole idea motivated by it.
You've seriously loaded the terms of engagement here. Any given belief shared widely among EAs and not among intelligent people in general is a candidate for potential groupthink, but qua them being shared EA beliefs, if I just listed a few of them I would expect you and most other forum users to consider them not groupthink - because things we believe ... (read more)
As a datum I rarely look beyond the front page posts, and tbh the majority of my engagement probably comes from the EA forum digest recommendations, which I imagine are basically a curated version of the same.
'Personally I'd rather want the difference to be bigger, since I find it much more informative what the best-informed users think.'
This seems very strange to me. I accept that there's some correlation between upvoted posters and epistemic rigour, but there's a huge amount of noise, both in reasons for upvotes and in subject areas. EA includes a huge diversity of subject areas, each requiring specialist knowledge. If I want to learn improv, I don't go to a Fields Medallist or a Pulitzer Prize-winning environmental journalist, so why should the equivalent be true on here?
I think that a fairly large fraction of posts is of a generalist nature. Also, my guess is that people with a large voting power usually don't vote on topics they don't know (though no doubt there are exceptions).
I'd welcome topic-specific karma in principle, but I'm unsure how hard it is to implement/how much of a priority it is. And whether karma is topic-specific or not, I think that large differences in voting power increase accuracy and quality.
That makes sense, though I don't think it's as clear a dividing line as you make out. If you're submitting a research project, for example, you could spend a lot of time thinking about parameters vs talking about the general thing you want to research, and the former could make the project sound significantly better - but it also runs the risk that you get rejected because those aren't the parameters the grant manager is interested in.
'It’s rarely worth your time to give detailed feedback'
This seems at odds with the EA Funds' philosophy that you should make a quick and dirty application that should be 'the start of a conversation'.
I think you're mixing up updates and operations. If I understand you right, you're saying each user on the forum can get promoted at most 16 times, so each strong upvote gets incremented at most 16 times.
But you have to count the operations of the algorithm that does that. My naive effort is something like this: Each time a user's rank updates (1 operation), you have to find and update all the posts and users that received their strong upvotes (~N operations where N is either their number of strong upvotes, or their number of votes depending on... (read more)
To be clear, I'm looking at the computational costs, not algorithmic complexity which I agree isn't huge.
Where are you getting 2x from for computations? If User A has cast strong upvotes to up to N different people, each of whom has cast strong upvotes to up to N different people, and so on up to depth D, then naively a promotion for A seems to involve O(N^D) operations, as opposed to O(1) for the current algorithm. (Though maybe D is a function of N?)
In practice as Charles says big O is probably giving a very pessimistic view here since there's a large gap be... (read more)
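To make the shape of that concern concrete, here's a minimal sketch of the naive cascade I have in mind. Everything here - the data model, the thresholds, the vote weights - is made up for illustration; it's not the actual forum code.

```python
# Hypothetical sketch of retroactive strong-upvote reweighting.
# Thresholds and weights are invented for illustration only.
THRESHOLDS = [10, 100, 1_000, 10_000]            # karma needed for each promotion
STRONG_WEIGHT = {0: 2, 1: 4, 2: 6, 3: 8, 4: 10}  # strong-upvote weight at each rank

def rank(karma):
    """Number of promotion thresholds this karma total has crossed."""
    return sum(karma >= t for t in THRESHOLDS)

def promote(user, karma, strong_votes):
    """Called when `user` has just crossed a threshold.
    karma: {user: karma total}; strong_votes: {user: [recipients of their strong upvotes]}.
    Re-weights every strong upvote the user has ever cast, which can in turn
    promote recipients, recursing up to depth D with ~N recipients per level.
    """
    new_rank = rank(karma[user])
    delta = STRONG_WEIGHT[new_rank] - STRONG_WEIGHT[new_rank - 1]
    for recipient in strong_votes.get(user, []):     # ~N updates per promotion
        before = rank(karma[recipient])
        karma[recipient] += delta                    # retroactively boost the old vote
        if rank(karma[recipient]) > before:          # recipient crossed a threshold too
            promote(recipient, karma, strong_votes)  # cascade: worst case O(N^D)
```

In practice the cascade only continues when a recipient actually crosses a threshold on that particular boost, and each user can only be promoted a handful of times, so - as Charles says - the worst-case bound is far more pessimistic than what you'd see on real data.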
I just posted a comment giving a couple of real-life anecdotes showing this effect.
For the last several years, most EA organizations did little or no pursuit of media coverage. CEA’s advice on talking to journalists was (and is) mostly cautionary. I think there have been good reasons for that — engaging with media is only worth doing if you’re going to do it well, and a lot of EA projects don’t have this as their top priority.
I think this policy has been noticeably harmful, tbh. If the supporters of something won't talk to the media, the net result seems to be that the media talk to that thing's detractors instead, and ... (read more)
But in the process you might also promote other users - so you'd have to check for each recipient of strong upvotes if that was so, and then repeat the process for each promoted user, and so on.
Pretty sure that would be computationally intractable. Every time someone was upvoted beyond a threshold you'd need to check the data of every comment and post on the forum.
Another concern is karma inflation from strong upvotes. As time goes by, the strength of new strong upvotes increases (details here), which means more recent posts will naturally tend to be higher rated even given a consistent number of users.
I just posted a reply to a similar comment about orthogonality + IC here.
(Epistemic status of this comment: much weaker than of the OP)
I am suspicious a) of a priori non-mathematical reasoning being used to generate empirical predictions on the outside view and b) of this particular a priori non-mathematical reasoning on the inside view. It doesn't look like AI algorithms have tended to get more resource grabby as they advance. AlphaZero will use all the processing power you throw at it, but it doesn't seek more. If you installed the necessary infrastructure (and, ok, upgraded the storage space), it could presumably... (read more)
I don't think background rate is relevant here. I was contesting your claim that 'the people who are most impactful within EA have both high alignment and high competence'. It depends on what you mean by 'within EA', I guess. If you mean 'people who openly espouse EA ideas', then the 'high alignment' seems uninterestingly true almost by definition. If you mean 'people who are doing altruistic work effectively', then Gates and Musk are, IMO, strong enough counterpoints to falsify the claim.
Maybe I'm just wrong. I only have a lay understanding of GDPR, but my impression was that keeping any data that people had shared with you without their knowledge was getting into sketchy territory.
Pimp: this is very much the sort of stuff we're now trying to facilitate on the Gather Town.
When I came to university I had already read a lot of the Sequences ...
You'd read the Sequences but you thought we were a cult? Inconceivable!
(/sarcasm)
Oddly, while I agree with much of this post (and strong upvoted), it reads to me as evidencing many of the problems it describes! Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship, or rather epistemic deferral to heroes and to holy texts, and eschatology (tithes being t... (read more)
Almost all of the elements that make EA seem culty seem to me to hail from the rationality side of the movement: Pascalian reasoning, in-group jargon, hero worship, or rather epistemic deferral to heroes and to holy texts, and eschatology
The hero worship is, I think, especially concerning, and is a striking way that implicit/"revealed" norms contradict explicit epistemic norms for some EAs.
In case anyone isn't aware of it, that's very much the demographic that CEEALAR (aka the EA hotel) is trying to support!
They are surprised that somebody interested in EA might be unhappy to discover that the committee members have been recording the details of their conversation in a CRM without asking.
Side note: morality aside, in Europe this is borderline illegal, so it seems like a very bad idea.
I'm not sure the most impactful people need have high alignment. We've disagreed about Elon Musk in the past, but I still think he's a better candidate for the world's most counterfactually positive human than anyone else I can think of. Bill Gates is similarly important and similarly kinda-but-conspicuously-not-explicitly aligned.
Yes, if you rank all humans by counterfactual positive impact, most of them are not EA, because most humans are not EAs.
This is even more true if you are mostly selecting on people who were around long before EA started, or if you go by ex post rather than ex ante counterfactual impact (how much credit should we give to Bill Gates' grandmother?)
(I'm probably just rehashing an old debate, but also Elon Musk is in the top 5-10 contenders for "most likely to destroy the world," so that's at least some consideration against him specifically).
Sub-hypothesis: the people who find extravagant spending distasteful are disproportionately likely to be the people who object to the billionaires that enable it - so the spending isn't what pisses them off so much as what draws their attention to the scenario they dislike.
But morally-motivated people, especially on college campuses, often find seemingly-extravagant spending distasteful.
As far as I can see, no-one else has raised this, but to me the optics of having large sums of money available and not spending it are as bad as, or worse than, the optics of spending too freely. Cf Christopher Hitchens' criticism of Mother Teresa - and closer to home, Evan's criticisms a few years ago that EA fund payouts were being granted too infrequently. For what it's worth, I find the latter a much bigger concern.
I regret that I have but one strong upvote to give this. Lack of feedback on why some of the projects I've been involved in didn't get funding has been incredibly frustrating.
One further benefit of getting it would have been that it can help across the ecosystem when you get turned down by Funder A and apply to Funder B - if you can pass on the feedback you got from Funder A (and how you've responded to it), that can save a lot of Funder B's time.
As a meta-point, the lack of feedback on why there's a lack of feedback also seems very counterproductive.
'By default' seems like another murky term. The orthogonality thesis asserts (something like) that it's not something you should bet on at arbitrarily long odds, but maybe it's nonetheless very likely to work out because, per Drexler, we just don't code AI as an unbounded optimiser - which you might still call 'by default'.
At the moment I have no idea what to think, tbh. But I lean towards focusing on GCRs that definitely need direct action in the short term, such as climate change, over ones that might be more destructive but where the relevant direct action is likely to be taken much further off.
I had a look at it, but my instinct was the reverse - it feels much more natural to me to walk an avatar through a virtual space than to drag a video feed of my face around.
But if there are a lot of EAs who prefer Spatial Chat, maybe there'd be enough demand to support both at some point. My instinct would be to avoid splitting the space any more just yet, but since these places can all link to each other, over time we could build a linked network of virtual spaces (we already have a 2-way link to an EA VR space, for example).
I think the real struggle would be how to get anywhere near enough users to make the app usable - there are hundreds of copycat dating apps which don't place onerous restrictions on who can use them and still struggle to get traction, and you're talking about opening it to maybe 5,000-10,000 people in the world.
So my first thought would be 'make the category more general'. It's not like I'm only interested in dating other EAs - and I also doubt my profile of partners is particularly typical among EAs, or that there will even be that much commonality in who we prefer to d... (read more)
Hi Steven,
To clarify, I make no claims about what experts think. I would be moderately surprised if more than a small minority of them pay any attention to the orthogonality thesis, presumably having their own nuanced views on how AI development might pan out. My concern is with the non-experts who make up the supermajority of the EA community - who frequently decide whether to donate their money to AI research vs other causes, who are prioritising deeper dives, who in some cases decide whether to make grants, who are deciding whether to become experts,... (read more)
My concern is with the non-experts…
My perspective is “orthogonality thesis is one little ingredient of an argument that AGI safety is an important cause area”. One possible different perspective is “orthogonality thesis is the reason why AGI safety is an important cause area”. Your belief is that a lot of non-experts hold the latter perspective, right? If so, I’m skeptical.
I think I’m reasonably familiar with popular expositions of the case for AGI safety, and with what people inside and outside the field say about why or why not to work on AGI safety. And... (read more)
This isn't a post about careers, it's about moral philosophy! I have been toying with a thought like this for years, but never had the wherewithal to coherently graph it. I'm glad and jealous that someone's finally done it!
No-one 'is a utilitarian' or similar; we're all just optimising for some function of at least two variables, at least one of which we can make a meaningful decision about. I genuinely think this sort of reasoning resolves a lot of problems posed by moral philosophers (eg the demandingness objection), not to mention helps map abstractions about moral philosophy to something a lot more like the real world.
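One rough way to formalise that claim (my own toy notation, not something taken from the post):

```latex
% Toy model: each agent picks actions x to maximise some combination of
% impartial welfare W(x) and personal/partial concerns P(x).
\[
  x^{*} = \arg\max_{x} \; f\bigl(W(x),\, P(x)\bigr),
  \qquad \text{with } f \text{ increasing in both arguments.}
\]
```

On this framing, 'being a utilitarian' isn't a binary property of a person, just a statement about how heavily their f weights W relative to P - which is one way of seeing why the demandingness objection loses much of its force.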
That would be awesome :)
You could make a case that it is a normative statement - certainly not everyone would consider it not to be. It would have been clearer if I'd phrased my response as a question: 'would you consider that statement to be normative?'
My sense is that you have a pretty good idea of how philosophers use the word 'normative', and you're pursuing a level of clarity about it that's impossible to obtain. Since it (by definition) doesn't map to anything in the physical or mathematical worlds, and arguably even if it did, it just isn't possible to identify a class of ... (read more)
I'm not sure how to interpret 'real' there. If you mean 'real' as opposed to something like a hologram, I'd say the sentence is underdefined. If you mean it as synonymous for a proposition about physical state, such that 'there are two oranges in front of me' would be approximately equivalent to 'the two oranges in front of me are real', then I think you're asking about any proposition about physical state.
In which case I don't think there's much reason to call them 'normative': no statement can be proven by physical observation, so that would make basically all parseable statements normative, which would make the term useless. Although I'm sympathetic to the idea that it is.
Can confirm Gather Town allows screensharing - I'm doing it as I type - and we've actually just been setting up some of the desk pods to allow communication with other people in the same pod (you can also cluster round the same desk, though that does feel a bit cramped for more than two).
Btw, I'm hoping that the Discord and Gather servers will have a positive sum effect where they link to each other and collaboratively increase the number of EAs who get into online coworking. We've placed a prominent link to the Discord server near the entrance to the Gather space :)
Defining a normative statement as 'a statement with a normative "should"' has certain problems...
'If you add 1 to 1 you should get 2' is not a statement people would necessarily consider normative.
I don't think there's a perfect answer, but as a heuristic I defer to the logical positivists - if you can't even in principle find direct evidence for or against the statement by observing the physical world and you can't mathematically prove it, and on top of that it sounds like a statement about behaviour or action, then you're probably in normland.
It's a lovely idea! Do you have an idea of how to keep it up to date, so that old, no-longer-available rows don't detract from the active ones?
Good question! We're planning on pinging listings on the sheets roughly every four months to see if it's still up to date. We also have a column that says when the listing was last updated.
Great post!
Couple of nitpicks: in the coloured charts some of the colours (eg global poverty/moral philosophy) are reeeally hard to tell apart.
I would also like to see absolute numbers of posts alongside eg the popularity charts, since high votes for eg 'career choice' could be explained by those posts being disproportionately likely to be important announcements from 80k or similar, where what's really getting upvoted is often 80k's work rather than the words on the screen. And high stats for criticism could be (though I suspect isn't) explained by far fewer critical posts leading to greater extremes.
Please do!
And I haven't yet - the most users online so far was 6, and the free plan allows up to 25 (unless by concurrent users they mean 'members'?), but I'm very happy to do so if it gets anywhere near becoming a limiting factor!
ETA It's substantially more expensive than I thought to do this, so I wouldn't be able to self-fund it, but if we hit the point where we repeatedly need space for 25+ users I'd expect we could get funding from a community group. Or in the worst case scenario we can set up an adjacent space with a 2-way portal between the two.
I've currently got a request in with LTFF so I could end up doing something totally unrelated to software development.
If that doesn't come through I would look at this again, though for the reasons I wrote about in the agencies sequence would want to learn more before rushing into it.
Also for the reasons I wrote about in that sequence I think it's probably better in the abstract for EA developers to work for a dedicated agency like Markus' if that becomes an option, though he's only in the early stages of proving the concept at the moment, so won't be hiring for a few months at least.
Why? The less scrupulous one finds Anthropic in their reasoning, the less weight a claim that Wuhan virologists are 'not much less scrupulous' carries.