All of MichaelDello's Comments + Replies

Thanks for writing this! I had one thought about how relevant 'saying no' to some of the technologies you listed is to AGI.

In the case of nuclear weapons programs, the use of fossil fuels, CFCs, and GMOs, we actively used these technologies before we said no (fossil fuels and GMOs we still use despite the 'no', and nuclear weapons we still have and could use at a moment's notice). With AGI, once we start using it, it might be too late. Geo-engineering experimentation is the most applicable of these, as we actually did say no before any (much?) testing was undertaken.

1
charlieh943
5mo
I agree that restraining AGI requires "saying no" prior to deployment. In this sense, it is more similar to geo-engineering than to fossil fuels: there might be no 'fire alarm'/'warning shot' for either. That said, the net present value of AGI (as perceived by AI labs) still seems very high, as evidenced by high investment in AGI firms. So, in this sense, it has similar commercial incentives for continued development as continued deployment of GMOs/fossil fuels/nuclear power. I think the GMO example might be the best fit, as it had both strong profit incentives and no 'warning shots'.

I supplement iron and vitamin C, as my iron is currently on the lower end of normal (after a few years of being vegan it was too high, go figure).

I tried creatine for a few months but didn't notice much difference in the gym or while rock climbing.

I drink a lot of B12 fortified soy milk which seems to cover that.

I have about 30g of protein powder a day with a good range of different amino acids to help hit 140g a day.

I have a multivitamin every few days.

I have iodine fortified salt that I cook with sometimes.

I've thought about supplementing omega 3 or eating more omega 3 rich foods but never got around to it.

8 years vegan for reference.

I strongly agree that current LLMs don't seem to pose a risk of a global catastrophe, but I'm worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs other than generating text. Even if it can only make bookings, send emails, etc., I feel like things could get concerning very fast.

Is there an argument for having AI fail spectacularly in a small way which raises enough global concern to slow progress/increase safety work? I'm envisioning something like an LLM virtual assistant which leads to a lot... (read more)

8
titotal
1y
Given that AI is being developed by companies running on a "move fast and break things" philosophy, a spectacular failure of some sort is all but guaranteed. It'd have to be bigger than mere lost productivity to slow things down, though. Social media algorithms arguably already have a body count (via radicalisation), and those have not been slowed down.

This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself.

Thanks for the great question. I'd like to see more attempts to get legislation passed to lock in small victories. The Sioux Falls slaughterhouse ban almost passing gives me optimism for this. Although it seemed to be more for NIMBY reasons than for animal rights reasons, in some ways that doesn't matter. 

I'm also interested in efforts to maintain the lower levels of speciesism we see in children into their adult lives, and to understand what exactly drives that so we can incorporate it into outreach attempts targeted at adults. Our recent interview w... (read more)

Thank you for the feedback! I just wanted to let you know that while I haven't had time to write a proper response, I've read your feedback and will try to take it on board in my future work.

People more involved with X-risk modelling (and better at math) than I am could say whether this improves on existing tools for X-risk modelling, but I like it! I hadn't heard of the absorbing state terminology; that was interesting. When reading that, my mind goes to option value, or the lack thereof, but that might not be a perfect analogy.

Regarding x-risks requiring a memory component, can you design Markov chains to have the memory incorporated?

Some possible cases where memory might be useful (without thinking about it too much) might be:

  • How well pa
... (read more)
2
JoshuaBlake
1y
A fully generic Markov chain can have a state space with arbitrary variables, so it can incorporate some memory that way. But that ends up with a continuous state space, which complicates things in ways I'm not certain of without going back to the literature. An easier solution is if you can consider discrete states. For example, an ongoing great power war (which likely increases many risks) or a pandemic (which perhaps increases bio risk but decreases the risk of some conflicts) might be states.
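
As a very rough sketch of that discrete-state idea (Python, with entirely made-up transition probabilities and a single absorbing 'extinction' state; not from the original exchange):

```python
import numpy as np

# Toy Markov chain with an augmented state space to capture "memory" via
# discrete context states. All numbers are placeholders, purely illustrative:
# 0 = peace, 1 = ongoing great power war (risk elevated), 2 = extinction (absorbing).
P = np.array([
    [0.98, 0.015, 0.005],  # from peace
    [0.10, 0.88,  0.02],   # from war
    [0.00, 0.00,  1.00],   # extinction is absorbing
])

def absorption_prob(P, start=0, absorbing=2, n=100):
    """Probability of having reached the absorbing state within n steps."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    dist = dist @ np.linalg.matrix_power(P, n)
    return dist[absorbing]

print(absorption_prob(P, n=100))  # cumulative extinction probability over 100 periods
```

Adding more context states (e.g. 'pandemic') just enlarges the transition matrix, which is how history-dependence can be folded into an otherwise memoryless chain.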

Thanks for sharing, I'm looking forward to this! I'm particularly excited about the sections on measuring suffering and artificial suffering.

Thanks for sharing! I love seeing concrete roadmaps/plans for things like this, and think we should do it more.

Fair enough! I probably wasn't clear - what I had in mind was one country detecting an asteroid first, then deflecting it into Earth before any other country/'the global community' detects it. Just recently we detected a 1.5 km near-Earth object that has an orbit which intersects Earth's. The scenario I had in mind was that one country detects this (but probably a smaller one, ~50 m) first, then deflects it.

We detect ~50 m asteroids as they make their final approach to Earth all the time, so detecting one first by chance could be a strategic advantage.

I take your other points, though.

"(b) Secondly, while the great powers may see military use for smaller scale orbital bombardment weapons (i.e. ones capable of causing sub-global or Tunguska-like asteroid events), these are only as destructive as nuclear weapons and similarly cannot be used without risking nuclear retaliation."

I don't think this is necessarily right. First, an asteroid impact can more easily be made to seem like a natural event, and is therefore less likely to result in mutually assured destruction. Also, just because we can't think of a reason for a nation to use an asteroid strike, do... (read more)

1
Joel Tan
1y
I am extremely sceptical that you can make an asteroid impact seem like a natural event. The trajectories of asteroids are being tracked, and if one of them drastically changed course after an enemy state's deep space probe (whose launch cannot be hidden) was in the vicinity, the inference is clear.

In any case, the difficulty of weaponization far outstrips redirection. The energy (and hence payload), as well as the complexity of the supporting calculations, needed to redirect an asteroid so it does not hit Earth is orders of magnitude less than the payload and calculations needed to redirect one onto a specific target. Even if we were capable of the former (i.e. had deflection capabilities), we would not have the latter - and that's not even getting into the risk of even marginal errors in calculations of these long orbits causing staggeringly different predictions of ground zero - you could easily end up striking yourself (or causing a tsunami that drowns your own coastal cities).

That's not getting into the issue of the military value of such weapons - which by definition cannot deter, if meant to look accidental.

If anyone is still reading this today and is curious where I ended up, I just took a job with Sentience Institute as a Strategy Lead & Researcher.

1
Tristan Williams
1y
What led you there? Did you apply broadly and end up with multiple viable choices? Any advice to add after successfully undertaking this process?

Cost is one factor, but nuclear also has other advantages, such as land use, the amount of raw material required (to make the renewables and the lithium etc. for battery storage), and benefits for the power grid.

It's nice that renewables are getting cheaper, and I'd definitely like to see more renewables in the mix, but my ideal long-term scenario is a mix of nuclear, renewables and battery storage. I'm weakly open to a small amount of gas being used for power generation in the long term in some cases.

1
rileyharris
2y
I think my (updated based on the comments so far) conclusion is the same as yours!

Hm, good to know and fair point! I wonder if we can test the effect of extra funding, over what's needed to run a passable campaign, by investing say $5,000 in online ads etc. in a particular electorate, though even that is hard to compare to other electorates given the number of factors. If anyone else has ideas for measuring the impact of extra funding, I'd love to hear them!

Seeking grants from EA grant makers is something I haven't at all considered. I wonder if there are any legal restrictions on this for a political party as recipient (I haven't looked into this, but could foresee some potential issues with foreign sources of funding). On the one hand, AJP can generate its own funds, but I feel like we are still funding constrained in the sense that an extra $10,000 per state branch per election (at least) could almost always be put to good use. Do you think we should look into this, particularly with the federal election coming up?

2
Ren Ryba
2y
I had a quick look at EA grant makers at the beginning of the SA state campaign. I found that every EA grant maker I checked (can't remember which ones) had a clause saying that they won't fund political parties or campaigns. So I imagine there'd have to be a conversation with grant makers first about their policies - which may, understandably, be a tricky conversation.

The Australian government website says: "In late 2018 the Parliament passed legislation to ban political donations of $1,000 or more from foreign sources. ... The new rules ban donations from foreign donors: a person who does not have a connection to Australia, such as a person who is not an Australian citizen or an entity that does not have a significant business presence in Australia." So yes, that could be a hurdle. But perhaps this idea could still work for parties in countries that do not have a law like this, or the funding could come from EA orgs or grant makers based in Australia.

I'm in two minds about the party being funding constrained. To be funding constrained would mean that extra funding would translate to either a higher vote or a better outcome in some other measure of influence. I haven't seen any evidence to either support or refute that claim. The SA state campaign's spend in 2022 was $100,000 and resulted in a vote of 1.5%, while in 2018 the spend was $18,000 and resulted in a vote of 2.17%. Obviously that's just a single comparison, and the contexts varied wildly between those two years, but it's not obvious to me that extra spending would increase the vote (or other measures of influence). I previously looked at obtaining data from state branches on this question, but I don't believe I went ahead with that project.

"This being said, the format of legislative elections in France makes it very unlikely that a deputy from the animalist party will ever be elected, and perhaps limits our ability to negotiate with the other parties."

This makes some sense, as unfortunate as it is. Part of the motivation for other parties being willing to negotiate with you or adopt their own incrementally pro-animal policies is based on how worried they are that they might lose a seat to your party. If they're not at all worried, this limits your influence.

But I wouldn't say it entirely voi... (read more)

1
centur888
2y
"I think it's still possible to have some influence in systems where minor parties are unlikely to get elected."   Thats good news Michael Dello If i may ask what your strategy would be if you were running a campaign in a 'blue-ribbon' National Party electorate? eg  New England region in NSW :)  Also, how best could a small number of AJP volunteers be used effectively?

I just want to add that I personally became actively involved with the AJP because I felt that political advocacy from within political parties had been overly neglected by the movement. My intuition is that this is because some of the earlier writing about political advocacy/running-for-election work by 80,000 Hours and others focused mostly on the US/UK political systems, in which I understand it is harder for small parties to have any influence (especially the US).

One advantage of being in a small party is that it's relatively easy to become quite senior q... (read more)

Thank you so much for the feedback!

I did think about working for a government department (non-partisan), but I decided against it. From my understanding, you can't be working for 'the crown' while running for office; you'd have to take time off or quit.

The space agency was my thinking along those lines, as I don't think that counts as working for the crown.

I hadn't thought about the UK Civil Service. I've never looked into it. I don't think that would affect me too much, as long as I'm not a dual citizen.

I haven... (read more)

Am I reading the 0.1% probability for nuclear war right as the probability that nuclear war breaks out at all, or the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear warfare was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ).

1
RobertHarling
4y
I believe it is the probability that a nuclear war occurs AND leads to human extinction, as described in The Precipice. I think I would agree that if it were just the probability of nuclear war, this would be too low; a large reason the number is small is the difficulty of a nuclear war causing human extinction.

When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields - either as workers, researchers or enthusiasts. This is anecdotal based on my experience as a PhD candidate in space science. In the broader public, I think you'd be right that people would think about it much less, however the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.

1
MichaelA
4y
Ah, that makes sense, then. And I'd also guess that researchers and policy makers are the main people that would need to be convinced. But that might also be partly because the general public probably doesn't think about this much or have a very strong/solidified opinion; that might make it easier for researchers and policy makers to act in either direction without worrying about popular opinion, and mean this can be a case of pulling the rope sideways. So influencing the development of asteroid deflection technology might still be more tractable in that particular regard than influencing AI development, since there's a smaller set of minds needing changing. (Though I'd still prioritise AI anyway due to the seemingly much greater probability of extreme outcomes there.) I should also caveat that I don't know much at all about the asteroid deflection space.

We were pretty close to carrying out an asteroid redirect mission too (ARM), it was only cancelled in the last few years. It was for a small asteroid (~ a few metres across), but it could certainly happen sooner than I think most people suspect.

I guess that would indeed make them long term problems, but my reading on them seems to have been that they are catastrophic risks rather than existential risks, as in they don't seem to have much likelihood (relative to other X-risks) of eliminating all of humanity.

My impression is that people do over-estimate the cost of 'not-eating-meat' or veganism by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. I might need to flesh it out a bit more but here it is.

So suppose you are trying to quantify what you think the sacrifice of being vegan is, either relative to being vegetarian or to an average diet. If I were asked what was the minimum amount of money I would have to have received to be vegan vs non-vegan for the last 5 years if there were ZERO ethical im... (read more)

Self-plugging as I've written about animal suffering and longtermism in this essay:

http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/

To summarise some key points, a lot of why I think promoting veganism in the short term will be worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long term implications.

People are already talking about introducing plants, insects and animals to Mars as a means of terr... (read more)

3
MichaelStJules
4y
Aren't those extinction risks, although perhaps less severe or likely to cause extinction than others, according to EAs?

Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?

0
Amy Labenz
7y
Thanks for saving the dates! Part of the reason that we are hosting EA Global 2017 in three locations is that we hope to include people from different communities. I am looking at a variety of venues and working to confirm the EA Global UK dates. I hope to have an update on that before too long. Unfortunately, due to some staffing changes, we have had a bit of a delay this year in launching the EAGx application process. I hope to hire someone to help with the EAGx events soon so that we can ramp up the process and confirm some additional events. We post upcoming events here: https://www.eaglobal.org/events/

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

1
Rohin Shah
7y
^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming they are all (or mostly) positive.

0
[anonymous]
7y
Thanks for your comment. I agree with Michael Plant's response below. I am not saying that there will be a preponderance of suffering over pleasure in the future. I am saying that if you ignore all future pleasure and only take account of future suffering, then the future is astronomically bad.
-4
RyanCarey
7y
People like Bostrom have thoroughly considered how valuable the future might be. The view in existential risk reduction circles is simply that the future has positive expected value on likely moral systems. There are a bunch of arguments for this. One can argue from improvements to welfare, decreases in war, the emergence of more egalitarian movements over time, the anticipated disappearance of scarcity and of reliance on factory farming, increasing societal wisdom over time, and dozens of other reasons.

One way of thinking about this, if you are a symmetric utilitarian, is that we don't have much reason to think either of pain and pleasure is more energy efficient than the other ([Are pain and pleasure equally energy efficient?](https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html)). Since a singleton would be correlated with some relevant values, it should produce much more pleasure than pain, so the future should have very net positive value. I think that to the extent that we can research this question, we can sit very confidently saying that for usual value systems, the future has positive expectation.

The reason that I think people tend to shy away from public debates on this topic, such as when arguing for the value of existential risk research, is that doing so might risk creating a false equivalence between themselves and very destructive positions, which would be very harmful.
2
MichaelPlant
7y
Hello Michael, I think the key point of John's argument is that he's departing from classical utilitarianism in a particular way. That way is to say that future happy lives have no value, but future bad lives have negative value. The rest of the argument then follows. Hence John's argument isn't a dissent from any of the empirical predictions about the future. The idea is that you, the ANU, can agree with Bostrom et al. about what actually happens, but disagree on how good it is.

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

2
[anonymous]
7y
Thanks for this. It'd be interesting if there were survey evidence on this. Some anecdotal stuff the other way... On the EA funds page, Beckstead mentions person-affecting views as one of the reasons that one might not go into far future causes (https://app.effectivealtruism.org/funds/far-future). Some Givewell staffers apparently endorse person-affecting views and avoid the far future stuff on that basis - http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058.

I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.

First, that this is a good thing to do assumes that you have a good certainty about which candidate/party is going to make the world a better place, which is pretty hard to do.

But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads us to the zero sum game whe... (read more)

1
Ben_West
7y
Thanks for the feedback!

1. I generally like arguments from humility, but I think you're overstating the difficulty of choosing the better candidate. E.g. in 2016 only one candidate had any sort of policy at all about farmed animals, so it didn't require a very extensive policy analysis to figure out who is preferable. The same is true for other EA focus areas.
2. I agree. I do not think that promoting vote pairing irrespective of the candidates is a very useful thing to do.

Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.

I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.

0
turchin
6y
Thanks - just saw this comment now. I didn't really miss the idea, but decided not to include it here.

I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.

Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like, 'you can solve all the world's prob... (read more)

-1
carneades
7y
I read through your article, but let me see if I can strengthen the claim that charities promoted by effective altruism do not actually make systematic change. Remember, effective altruists should care about the outcomes of their work, not the intentions. It does not matter if effective altruists love systematic change, if that change fails to occur, the actions they did are not in the spirit of effective altruism. Simply put, charities such as the Against Malaria Foundation harm economic growth, limit freedom, and instill dependency, all while attempting to stop a disease which kills about as many people every year as the flu. Here's the full video

Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.

A note regarding other social movements targeting high schools (more a point for Tee, who I will tell I've mentioned): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of member, facilitators (post high school) and delegates (high school students). The facilitators run workshops about social justice and UN related issues and model UN debates.

The model is largely se... (read more)

This is a good point, Dony; perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think the Foundational Research Institute has written something to this effect from the perspective of suffering/wellbeing in the far future, but the same might hold for promoting/discouraging ethical theories.

Any thoughts on the worst possible ethical theory?

Thanks for this, Kerry. I'm surprised that cold email didn't work, as I've had a lot of success cold-contacting various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?

Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally.... (read more)

People have made some good points and they have shifted my views slightly. The focus shouldn't be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm strawmanning myself here slightly).

However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.

How can we get everyone to agree on the best ethical theory?

2
DC
8y
Perhaps it would be easier to figure out what is the worst ethical theory possible? I don't recall ever seeing this question being asked, and it seems like it'd be easier to converge on. Regardless of how negatively utilitarian someone is, almost everyone has an easier time intuiting the avoidance of suffering than the maximization of some positive principle, which ends up sounding ambiguous and somewhat non-urgent. I think suffering enters near mode more easily than happiness does. It may be easier for humans to agree on what is the most anti-moral, badness-maximizing schema to adopt.

Thanks for sharing the moral parliament set-up, Rick. It looks good, but it's incredibly similar to MacAskill's Expected Moral Value methodology!

I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital etc). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which actions max... (read more)

0
Rick
8y
Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks in order to make them 'work' make them poor frameworks to rely on on a day-to-day basis. Expecting human beings to apply these frameworks 'correctly' is probably giving the forecasting and estimation ability of humans a little too much credit.

For a reductive example, 'do the most good possible' technically is a 'correct' moral framework, but it really doesn't 'work' well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on 'sub-frameworks'). Imagine a 10 year old child who suddenly and religiously adopts a classical hedonistic utilitarian framework – I would have to imagine that this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that will objectively be against their framework, simply because the information they are acting on is incomplete. 10 year olds with much simpler moral frameworks will most likely be 'right' from a utilitarian standpoint much more often than 10 year olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and forecasted effects in order to work.

My worry is that all humans (not just 10 year olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let's pretend that, at most, the average person can semi-accurately estimate the first order effects of their actions (which is honestly a stretch already). A first order effect would be something like "each marginal hour I work creates more utilit

Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!

Your third point is well taken - I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.

I wrote an essay partially looking at this for the Sentient Politics essay competition. If it doesn't win (and probably even if it does) I'll share it here.

I think it's a very real and troubling concern. Bostrom seems to assume that, if we populated the galaxy with minds (digital or biological) that would be a good thing, but even if we only consider humans I'm not sure that's totally obvious. When you throw wild animals and digital systems into the mix, things get scary.

1
RobBensinger
8y
I wouldn't be surprised if Bostrom's basic thinking is that suffering animals just aren't a very good fuel source. To a first approximation, animals suffer because they evolved to escape being eaten (or killed by rivals, by accidents, etc.). If humans can extract more resources from animals by editing out their suffering, then given enough technological progress, experimentation, and competition for limited resources, they'll do so. This is without factoring in moral compunctions of any kind; if moral thought is more likely to reduce meat consumption than increase it, this further tilts the scales in that direction.

We can also keep going past this point, since this is still pretty inefficient. Meat is stored energy from the Sun, at several levels of remove. If you can extract solar energy more efficiently, you can outcompete anyone who doesn't. On astronomical timescales, running a body made of meat subsisting on other bodies made of meat subsisting on resources assembled from clumsily evolved biological solar panels probably is a pretty unlikely equilibrium.

(Minor side-comment: 'humans survive and eat lots of suffering animals forever' is itself an existential risk. An existential risk is anything that permanently makes things drastically worse. Human extinction is commonly believed to be an existential risk, but this is a substantive assertion one might dispute, not part of the definition.)

Thanks, there are some good points here.

I still have this feeling, though, that some people support some causes over others simply for the reason that 'my personal impact probably won't make a difference', which seems hard to justify to me.

Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is.

Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.

Thanks for writing this. One small critique:

"For example, Brian Tomasik has suggested paying farmers to use humane insecticides. Calculations suggest that this could prevent 250,000 painful deaths per dollar."

I'm cautious about the sign of this. Given that insects are expected to have net negative lives anyway, perhaps speeding up their death is actually the preferable choice. Unless we think that an insect dying of pesticide is more painful than it dying naturally plus the pain throughout the rest of its life.

But overall, I would support the recommendation that OPP supports WAS research.

9
Brian_Tomasik
8y
Hi Michael :) The "Humane Insecticides" article talks about using different insecticides that are equally lethal, rather than reducing insecticide use. (It expresses similar concerns as those you raise about the sign of reducing insecticide use.) The 250,000 number is an amount of pain equivalent to that many pesticide deaths. That said, I'm somewhat skeptical about the number quoted in the article because it ignores a lot of costs (e.g., setup costs, identifying the right alternative chemicals, etc.). I first wrote it in 2007 when I was less attuned to the arguments for conservatism in cost-effectiveness estimates. Still, some other estimates suggest similar orders of magnitude for how much expected insect suffering can be prevented per dollar, although these interventions are mostly more controversial (and more speculative).

It looks like you're subscribing to a person-affecting philosophy, whereby you say potential future humans aren't worthy of moral consideration because they're not being deprived, but bringing them into existence would be bad because they would (could) suffer.

I think this is arbitrarily asymmetrical, and not really compatible with a total utilitarian framework. I would suggest reading the relevant chapter in Nick Beckstead's thesis 'On the overwhelming importance of shaping the far future', where I think he does a pretty good job at showing just this.

I did earning to give for 18 months in a job that I thought I would really enjoy but after 12 months realised I didn't. I'm now doing a PhD.

I think personal fit is pretty important, but at the end of the day it's still just another thing to consider, and not the be-all and end-all. I think it's a pretty valid point that you will perform better in a role that you enjoy and thus advance further and have more impact, but if you're really trying to maximise impact there are limits to that (e.g. Hurford's example about surfing, unless surfing to give can be a thing)... (read more)

I noticed there doesn't seem to be an option to nominate fewer than 5 people. Not sure if this is a feature, but I wanted to nominate just a few people and was unable to.

I think the value of higher-quality and more plentiful information on wild animal suffering will still be net positive, meaning that funding research in WAS could be highly valuable. I say 'could' only because something else might still be more valuable. But if, on expected value, it seems like the best thing to do, the uncertainties shouldn't put us off too much, if at all.

0
MichaelDickens
8y
Yes, I agree that WAS research has a high expected value. My point was that it has a non-trivial probability (say, >10%) of being harmful.

Happy to hear what they are, Alex.

The final article had a title change and it was made clear numerous times that it was a personal analysis, not necessarily representing the views of Effective Altruism. In fact, we worked off the premise of voting to maximise wellbeing, not to further EA.

I posted it here and shared it with EAs because they are used to thinking about ways to maximise wellbeing, and I've never seen an analysis that looks at multiple parties and policies to try and select the 'best' party (many have agreed that this doesn't seem to have been d... (read more)

0
AlexRichard
7y
Oh hey, didn't see this at the time. If EA becomes an explicitly political movement, people who disagree with it will not join; non-political donations are distinct from politics in the sense that they do not need to be identified with one side or another; EA values might be associated with one side or another, but this is an official-seeming EA venue, not just a private-ish place for discussion.

Regardless of whether or not moral realism is true, I feel like we should act as though it is (and I would argue many Effective Altruists already do to some extent). Consider the doctor who proclaims that they just don't value people being healthy, and doesn't see why they should. All the other doctors would rightly call them crazy and ignore them, because the medical system assumes that we value health. In the same way, the field of ethics came about to (I would argue) try and find the most right thing to do. If an ethicist comes out and says that the mos... (read more)

Thanks for everyone's feedback. The article has now been published and is a living document (we will edit daily based on feedback) until the election.

http://www.michaeldello.com/?p=839
