All of MHarris's Comments + Replies

I think, in general, personal consumption decisions should be considered in the context of moral seriousness (see Will MacAskill's comments in a recent podcast).

Should we take seriously efforts to avoid unnecessary emissions? Yes! Is EA doing this? I'm not sure. My impression is that EAs are fairly likely to avoid unnecessary flights, take public transport, etc. - that's the attitude I take myself, anyway. This is less unusual than veganism - the thoughtful Londoners I'm surrounded by do the same. So I think it would be easy to underestimate the extent to which E...

Thanks for sharing your talk.

I'm at the UK's Competition and Markets Authority. Very happy to talk to anyone about the intersection of competition policy and AI.

How much did the $13 million shift the odds? That's the key question. The conventional political science on this is skeptical that donations have much of an effect on outcomes (though it's a bit more positive about lower-profile candidates like Carrick): https://fivethirtyeight.com/features/money-and-elections-a-complicated-love-story/

(In this case, given the crypto backlash, it's surely possible SBF's donations hurt Carrick's election chances. I don't want to suggest this was actually the case, just noting that the confidence interval should include the po...

Nathan Young · 2y
Yeah, good question, but maybe 25%. So overall it's about $60M for a seat. I really think Carrick had no chance without this money (there were several other crypto people + conventional candidates). I believe there might have been too many mail shots, say, but I don't believe Carrick was hurt overall, because without SBF no one would know who he was.

Fundraising is particularly effective in open primaries, such as this one. From the linked article:

But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.

Basically, said Darrell West, vice president and director of governance studi...

I'm certain EA would welcome you, whether you think AI is an important x-risk or not.

If you do continue wrestling with these issues, I think you're actually extremely well placed to add a huge amount of value as someone who is (i) an ML expert, (ii) friendly/sympathetic to EA, and (iii) doubtful/unconvinced of AI risk. It gives you an unusual perspective that could be useful for questioning assumptions.

From reading this post, I think you're temperamentally uncomfortable with uncertainty, and prefer very well-defined problems. I suspect that explains why you feel...

colin · 2y
I want to strongly second this! I think that a proof of the limitations of ML under certain constraints would be incredibly useful to narrow the area in which we need to worry about AI safety, or at least to limit the types of safety questions that need to be addressed in that subset of ML.
Ada-Maaria Hyvärinen · 2y
Thanks for the nice comment! Yes, I am quite uncomfortable with uncertainty and trying to work on that. Also, I feel like by now I am pretty involved in EA and ultimately feel welcome enough to be able to post a story like this here (or I feel like EA appreciates different views enough, despite my also feeling this pressure to conform at the same time).

Excession, Surface Detail and The Hydrogen Sonata are the three I'd recommend from a longtermist perspective.

Consider Phlebas is (by some margin) the worst novel in the series. It's a shame it seems like the obvious place to start.

On this theme, I was struck by the 80,000 hours podcast with Tom Moynihan, which discussed the widespread past belief in the 'principle of plenitude': "Whatever can happen will happen", with the implication that the current period can't be special. In a broad sense (given humanity's/earth's position), all such beliefs were wrong. But it struck me that several of the earliest believers in plenitude were especially wrong - just think about how influential Plato and Aristotle have been!

I wonder if there would be a strong difference between "What do you think of a group/concept called 'effective altruism'", "Would you join a group called 'effective altruism'", "What would you think of someone who calls themselves an 'effective altruist'", "Would you call yourself an 'effective altruist'".

I wonder which of these questions is most important in selecting a name.

I don't mind rhetorical descriptions of China as having 'less economic and political freedom than the United States', in a very general discussion. But if you're going to make any sort of proposal like 'there should be more political freedom!' I would feel the need to ask many follow-up clarifying questions (freedom to do what? freedom from what consequences? freedom for whom?) to know whether I agreed with you.

Well-being is vague too, I agree, but it's a more necessary term than freedom (from my philosophical perspective, and I think most others').

This sounds a lot like a version of preference utilitarianism, certainly an interesting perspective.

I know a lot of effort in political philosophy has gone into trying to define freedom - personally, I don't think it's been especially productive, and so I think 'freedom' as a term isn't that useful except as rhetoric. Emphasising 'fulfilment of preferences' is an interesting approach, though. It does run into tricky questions around the source of those preferences (eg addiction).

BrownHairedEevee · 3y
Yeah, it is very similar to preference utilitarianism. I'm still undecided between hedonic and preference utilitarianism, but thinking about this made me lean more toward preference utilitarianism. What do you think is wrong with the current definitions of liberty? I think the concept of well-being is similarly vague. I tend to use different proxies for well-being interchangeably (fulfillment of preferences, happiness minus suffering, good health as measured by QALYs or DALYs, etc.) and I think this is common practice in EA. But I still think that freedom and well-being are useful concepts: for example, most people would agree that China has less economic and political freedom than the United States.

3 months late, but better than never: it's incredibly inspiring to see how the community has grown over the past decade.

I'm all for focusing on the power of policy, but I'm not sure giving up any of our positions on personal donations will help get us there.

This is a discussion that has happened a few times. I do think that 'global priorities' has already grown enough as a brand to be seriously considered for wider use, and perhaps even as the main term for the movement.

I'd still be reluctant to ditch 'effective altruism' entirely. There is an important part of the original message of the movement (cf pond analogy) that's about asking people to step up and give more (whether money or time) - questioning personal priorities/altruism. I think we've probably developed a healthier sense of how to balance that ('altruism/life balance') but it feels like 'global priorities' wouldn't cover it.

This is an excellent point. I "joined" EA because of the pond idea. I found the idea of helping a lot of people with the limited funds I could spare really appealing, and it made me feel like I could make a real difference. I didn't get into EA because of its focus on global prioritization research.

Of course, what I happened to join EA because of is not super important, but I wonder how others feel. Like EA as a "donate more to AMF and other effective charities" is a really different message than EA as "research and philosophize about what issues are reall...

I've always thought the Repugnant Conclusion was mostly status quo bias, anyway, combined with the difficulty of imagining what such a future would actually be like.

I think the Utility Monster is a similar issue. Maybe it would be possible to create something with a much richer experience set than humans, which should be valued more highly. But any such being would actually be pretty awesome, so we shouldn't resent giving it a greater share of resources.

Daniel_Eth · 3y
Humans seem like (plausible) utility monsters compared to ants, and many religious people have a conception of God that would make Him a utility monster ("maybe you don't like prayer and following all these rules, but you can't even conceive of the - 'joy' doesn't even do it justice - how much grander it is to God if we follow these rules than even the best experiences in our whole lives!"). Anti-utility-monster sentiments seem to largely be coming from a place where someone imagines a human that's pretty happy by human standards, thinks the words "orders of magnitude happier than what any human feels", and then notices their intuition doesn't track the words "orders of magnitude".
Answer by MHarris · Mar 06, 2021

Economist in the civil service here. I wouldn't sweat this decision, unless there's a transparently better alternative. It sounds like good progression for you, from which you can look for an even higher impact role.

Economist · 3y
Thanks for your reply, I think you're right. I'm actually virtually certain I'm going to take it. I don't have any better opportunities on the immediate horizon, and this is career progression and a probable CV builder. I think I'm hoping someone will tell me that this role/route is promising in terms of direct impact, when, if I'm being honest with myself, it probably isn't. Perhaps Tyler Cowen would say it's good, but to be honest even that's not certain, as I still don't have a well-formed view of just how influential the CBI is in terms of influencing policy. Direct impact maybe shouldn't be a dominating concern right now given that I'm only 27 and this role is progression. If this role is unlikely to improve future options, that would be something good to know though.

My main reaction (rather banal): I think we shouldn't use an acronym like IBC! If this is something we think people should think about early in their time as an effective altruist, let's stick to more obvious phrases like "how to prioritise causes".

MichaelA · 3y
Personally, I think the term "important between-cause considerations" seems fairly clear in what it means, and seems to fill a useful role. I think an expanded version like "important considerations when trying to prioritise between causes" also seems fine, but if the concept is mentioned often the shorter version would be handy. And I'd say we should avoid abbreviating it to IBC except in posts (like this one) that mention the term often and use the expanded form first - but in those posts, abbreviating it seems fine.

I think "how to prioritise causes" is a bit different. Specifically, I think that'd include not just considerations about the causes themselves (IBCs), but also "methodological points" about how to approach the question of how to prioritise, such as:

* "focus on the interventions within each cause that seem especially good"
* "consider importance, tractability, and neglectedness"
* "try sometimes taking a portfolio or multiplayer-thinking perspective"
* "consider important between-cause considerations"

(That said, I think that those "methodological points" are also important, and now realise that Jack's "calls to action" might be similarly important in relation to those as in relation to IBCs.)
Jack Malde · 3y
Probably fair! I'm certainly not wedded to that acronym continuing to exist.

One issue to consider is whether catastrophic risk is a sufficiently popular issue for an agency to use it to sustain itself. Independent organisations can be vulnerable to cuts. This probably varies a lot by country.

Larks · 7y
Do you know of any quantitative evidence on the subject? My impression was there is a fair bit of truth to the maxim "There's nothing as permanent as a temporary government program."
Kirsten · 7y
Both creating and sustaining a government agency will likely take more popular support than we currently have, but I still think it's an important long term goal. I'm under the impression that agencies are less dependent on the ebb and flow of public opinion than individual policy ideas. However, they would certainly still need some public support. On the other hand, having an agency for catastrophic risk prevention might give the issue legitimacy and actually make it more popular.

James Q. Wilson's Bureaucracy is a core text on this subject; it explicitly considers when specific agencies are effective and motivated to pursue particular goals: https://www.amazon.co.uk/Bureaucracy-Government-Agencies-Basic-Classics/dp/0465007856

I'm also reminded of Nate Silver's interviews with the US hurricane forecasting agency in The Signal and the Noise.