Arepo


Cause Area: UK Housing Policy

Interesting analysis - I think even if stuff like this doesn't pan out, there are large intangible benefits to EA of giving people who might be interested in specific issues a way to optimise within their parameters.

Out of curiosity, what made you decide to research the area in the first place?

Flimsy Pet Theories, Enormous Initiatives

Also, anecdotally, I have found Facebook quite positive since I installed a feed blocker. Now I just get event invites, notifications from groups I'm interested in (which are much easier to curate than a feed), a low-overhead messaging service, and the ability to maintain shallow but genuinely friendly relationships and occasionally crowdsource from a peer group in more helpful ways than Google.

Overall I'd say it's comfortably though not dramatically net positive like this - though given that it involves deliberately hacking out one of the core components of the service, I wouldn't take it as much of a counterpoint to 'Facebook is generally bad'.

Flimsy Pet Theories, Enormous Initiatives

A lot of the people I knew in the field (including the person I mentioned) pretty clearly hadn't thought about the impact a whole lot. It's not just that they weren't using QALYs; it's that they weren't really comparing it to similar things.

Re this particular example, after you had the conversation, did the person agree with you that they clearly hadn't thought about it? If not, can you account for their disagreement other than by claiming that they were basically irrational?

Flimsy Pet Theories, Enormous Initiatives

I seem to have quite strongly differing intuitions from most people active in central EA roles, and quite similar ones (at least about the limitations of EA-style research) to many people I've spoken to who believe the motte of EA but are sceptical of the bailey (ie of actual EA orgs and methodology). I worry that EA has very strong echo chamber effects, reflected in eg the OP, in Linch's comment below, in Hauke's about Bill Gates, in various other comments in this thread suggesting 'almost no-one' thinks about these questions with clarity, and in countless other such casual dismissals I've heard from EAs of smart people taking positions not couched in sufficiently EA terms.

FWIW I also don't think claiming someone has lots of other great qualities is inconsistent with being insulting to them.

I don't disagree that it's plausible we can bring something. I just think that assuming we can do so is extremely arrogant (not by you in particular, but as a generalised attitude among EAs). We need to respect the views of intelligent people who think this stuff is important, even if they can't or don't explain why in the terms we would typically use. For PR reasons alone, this stuff is important - I can only point to anecdotes, but so many intelligent people I've spoken to find EAs collectively insufferable because of this sort of attitude, and so end up not engaging with ideas that might otherwise have appealed to them. Maybe someone could run a Mechanical Turk study on how such messaging affects reception of theoretically unrelated EA ideas.

Flimsy Pet Theories, Enormous Initiatives

No-one's saying he's a master strategist. Quite the opposite - his approach is to try stuff out and see what happens. It's the EA movement that strongly favours reasoning everything out in advance.

What I'm contesting is the claim that he has 'at least some things to learn from the effective altruism community', which is far from obvious, and IMO needs a heavy dose of humility. To be clear, I'm not saying no-one in the community should do a shallow (or even deep) dive into his impact - I'm saying that we shouldn't treat him or his employees like they're irrational for not having done so to our satisfaction with our methods, as the OP implies.

Firstly, on the specific issue of whether bunkers are a better safeguard against catastrophe, that seems extremely short-termist. Within maybe 30-70 years, if SpaceX's predictions are even faintly right, a colony on Mars could be self-sustaining, which seems much more resilient than bunkers, and likely to have huge economic benefits for humanity as a whole. Also, if bunkers are so much easier to set up, all anyone has to do is found an inspiring for-profit bunker-development company and set them up! If no-one has seriously done so at scale, that indicates to me that socially/economically they're a much harder proposition, and that this might outweigh the engineering differences.

Secondly, there's the question of what the upside of such research is - as I said, it's far from clear to me that any amount of a priori research will be more valuable than trying stuff and seeing what happens.

Thirdly, I think it's insulting to suppose these guys haven't thought about their impact a lot simply because they don't use QALY-adjacent language. Musk talks thoughtfully about his reasons all the time! If he doesn't try to quantify the expectation, rather than assuming that's because he's never thought to do so, I would assume it's because he thinks such a priori quantification is very low value (see the previous para) - and I would acknowledge that such a view is reasonable. I would assume something similar is true for very many of his employees too, partly because they're legion compared to EAs, and partly because the filtering for their intelligence has much tighter feedback mechanisms than that for EA researchers.

If any EAs doing such research don't recognise the validity of these sorts of concerns, I can imagine it being useless or even harmful.

Flimsy Pet Theories, Enormous Initiatives

Also we still, as a community, seem confused over what 'neglectedness' does in the ITN framework - whether it's a heuristic or a multiplier, and if the latter, how to separate it from tractability and how to account for the size of the problem in question (bigger, less absolutely neglected problems might still benefit more from marginal resources than smaller problems on which we've made more progress with fewer resources, yet I haven't seen a definition of the framework that accounts for this). Yet anecdotally I still hear 'it's not very neglected' used to casually dismiss concerns about everything from climate change through nuclear war to... well, interplanetary colonisation. Until we get a more consistent and coherent framework, if I as a longtime EA supporter am sceptical of one of the supposed core components of EA philosophy, I don't see how I'm supposed to convince mission-driven not-very-utilitarians to listen to its analyses.
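To make the multiplier reading concrete, here's a toy calculation using the standard multiplicative ITN-style decomposition (good done per marginal dollar as scale × tractability × neglectedness). All the numbers are invented for illustration, not real cost-effectiveness estimates:

```python
def itn_score(scale, tractability, resources):
    """Good done per extra dollar, as a product of three factors:
    scale:        total good done if the whole problem were solved
    tractability: fraction of the problem solved per proportional
                  increase in resources
    1/resources:  'neglectedness' - how much one extra dollar
                  proportionally increases total effort
    All inputs here are hypothetical, for illustration only.
    """
    return scale * tractability * (1.0 / resources)

# A huge, crowded problem vs a small, neglected one (made-up figures).
big_crowded = itn_score(scale=1e9, tractability=0.01, resources=1e8)
small_neglected = itn_score(scale=1e6, tractability=0.01, resources=1e6)

# Despite being 100x less neglected in absolute terms, the bigger
# problem still wins on the margin here (~0.1 vs ~0.01) - the point
# a bare "it's not very neglected" dismissal can miss.
print(big_crowded, small_neglected)
```

On this reading, neglectedness is just one factor in a product, so citing it alone can never settle a comparison - which is the inconsistency the comment above is pointing at.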

Flimsy Pet Theories, Enormous Initiatives

I actually find it quite easy to believe that Musk's initiatives are worth more than the whole EA movement - though I'm agnostic on the point. Those ideas exist in a very different space from effective altruism, and if you fail to acknowledge deep philosophical differences and outside-view reasons for scepticism about overcommitting to one worldview, you risk polarising the communities and destroying future value trades between them. For example:

  • Where EA starts (roughly) from an assumption that you have a small number of people philosophically committed to maximising expected welfare, Musk's companies start with a vision a much larger group of people find emotionally inspiring, and a small subset of them find extremely inspiring. Compare the 35-50 hour work weeks of typical EA orgs' staff vs the 80-90 common among Tesla/SpaceX employees - the latter seem to be far more driven, and I doubt that telling them to go and work on AI policy would a) work or b) inspire them to anywhere near comparable productivity if it did.
  • Musk's orgs are driven by a belief that they can one day make a profit from what they do, and that if they can't, they shouldn't succeed.
  • Most EA orgs have no such market mechanism, even in the long term. And EA research has perverse incentives that we rarely seem to recognise - researchers gain prestige for raising 'interesting questions' that might affect anyone's behaviour minimally, if at all (eg moral uncertainty, cluelessness, infinite ethics, doomsday arguments etc), and they're given money and job security for failing to answer them in favour of ending every essay with 'more research needed'.
  • In particular they're incentivised to produce writings that encourage major donors to fund them. One plausible way of doing this, for example, is to foster an ingroup mentality, encouraging the people who take them seriously to think of themselves as custodians of a privileged way of thinking (cf the early rationality movement's dismissal of outsiders as 'NPCs'). I don't know of any meta-level argument that this should lead to more reliable understanding of the world than, say, the wisdom of crowds.
  • As Halffull discussed in detail in another comment, Musk's initiatives are immensely complicated, and a priori reasoning about them might essentially be worthless. We could spend lifetimes considering them and still not have meaningfully greater confidence in their outcomes - and we'd have marked ourselves as irrelevant in the eyes of people driven to work on them. Or we could work with the people who're motivated by the development of such technologies and encourage what Halffull calls a 'culture of continuous oversight/thinking about your impact' - which those companies seem to have, at least compared to other for-profits.
  • Empirically, the EA movement has a history of ignoring or rejecting certain causes as being not worthy of consideration, then coming to view them as significant after all. See GWWC's original climate change research, which basically dismissed the cause, vs Founders Pledge's more recent research, which takes it seriously as one of their top causes; Open Philanthropy Project's explicit acknowledgement of their increasing concern with 'minor' global catastrophic risks; or just see all the causes OPP have supported with their recent grants (how many EAs would have taken you seriously 10 years ago if you'd thought about donating to US criminal justice reform?). I would say we have a much better track record of unearthing important causes that were being neglected than of providing good reasons to neglect causes.
An Emergency Fund for Effective Altruists

Sounds right. There's a lot to learn about the incentives behind this kind of initiative, but I'm excited about it (I argued for some similar initiatives a couple of years ago).

Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go?

'we just have better metrics for the former'

Can you clarify this? Which statement are you referring to by 'the former'? What metrics?

Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go?

'I think EA nonprofits should try to be funded by non EA donors (/expand the EA community) to the extent possible'

The extent possible for 'weird' EA projects is often 'no extent'. We have applied to various non-EA grants that sorta kinda cover the areas you could argue we're in, and to my knowledge not received any of them. I believe that to date close to (perhaps literally) 100% of our funding has come from EAs or EA-adjacent sources, and I suspect that this will be true of the majority of EA nonprofits.

'Assuming that shared EA values fully solve the problem' is exactly what we're trying to avoid here. Typical nonprofit salaries just work on the assumption that the person doing the job is willing to take a lower salary with no upside, which leads to burnout, lack of motivation, and sometimes lack of competence at EA orgs. We're trying to think of a way to recreate the incentive-driven structure of successful startups, both to give the staff a stronger self-interest-driven motivation and to make future such roles more appealing to stronger candidates.