All of MichaelDello's Comments + Replies

Thanks for the detailed response! I've included a few reflections on the work in the conclusion section. Fair point on the internal costs - I was thinking about this as a cost but not as an impact multiplier from funding. With some more work it could be used as justification for the existence of ECA and why consumers pay their salary. ~$200k seems right for staff time plus overhead.

Yeah, "over half" was quite surprising to me too. I wonder how much of this is because organisations may only lodge a rule change request if they have a decent sense that it is ... (read more)

3
Mo Putera
Yes definitely helpful, both for my own thinking and to be able to have something to point others to. With the caveat that learning from success stories requires some sort of survivorship bias adjustment, I think nuts-and-bolts writeups of technical policy reform success stories (as opposed to more high-level guides) are valuable and undersupplied, so if you ever get round to the more detailed writeup that would be great. 

Thanks for sharing and great work, I'm inspired! I'm starting a new role at a large company in a few weeks after working at smaller organisations/academia for a while, and I'm excited to explore what's possible once I settle in.

I did a limited version of this 10 years ago at my first full-time job at a large Australian company. A few colleagues came to a giving game I co-organised with a local EA chapter. I spoke to the company's philanthropic giving lead - I didn't make any headway, and found out that the company's corporate giving was based predominantly on supporting communities they operated in (I was a bit naive). 

I'm really excited about this! I'll be watching it closely, because starting something similar here in Australia could be interesting. 

My experience working in policy has been that it can either be surprisingly tractable or surprisingly intractable. Achieving change in energy policy in Australia has been surprisingly easy, and achieving change in farmed animal policy in Australia has been surprisingly hard. 

I'm not sure yet which of the two would be most analogous to wild animal welfare. Farmed animal policy has strong entrenched interests, but perhaps wild animal welfare doesn't because many don't care about the issue as much one way or the other. It could be easy to get some quick wins.

$10,000 to Good Ancestors Project - all my post-tax income above $59,000 for the last financial year.

Between a new job and having finally paid off all my student debt, I'm excited about the next year.

1
Simon Newstead 🔸
Awesome mate, congrats!

A lot of people have been talking about data centres in space over the last few weeks. Andrew McCalip built a model to see what it would take for space compute to get cheaper than terrestrial compute.

This quote stood out:

we should be actively goading more billionaires into spending on irrational, high-variance projects that might actually advance civilization. I feel genuine secondhand embarrassment watching people torch their fortunes on yachts and status cosplay. No one cares about your Loro Piana. If you've built an empire, the best possible use of it is to bur

... (read more)
MichaelDello
1
0
0
100% agree

Far-future effects are the most important determinant of what we ought to do

Assuming this captures x-risk considerations, the scale of the future is significantly bigger than that of the present day.

Thanks for writing about this! I've thought about this as well, but there are a couple of reasons I haven't done this yet. Primarily, I've been thinking more lately about making sure my time is appropriately valued. I'm still fairly early-mid career, and as much as it shouldn't matter, taking a salary reduction now probably means reduced earnings potential in the future. This obviously matters less if you don't plan on working for a non-highly impactful non-profit in the near future or if you're later in your career, but I think this is worth thinking abou... (read more)

Luke, you've been so strong at the helm of GWWC for so long that I'm often guilty of thinking about you and GWWC as synonymous (that's a compliment, I swear!). Well done on the amazing work you've done, and enjoy a well deserved break. I can't wait to see what you do next.

As someone who is not an AI safety researcher, I've always had trouble knowing where to donate if I wanted to reduce x-risk specifically from AI. I think I would have donated a much larger share of my donations to AI safety over the past 10 years if something like an AI Safety Metacharity had existed. Nuclear Threat Initiative tends to be my go-to for x-risk donations, but I'm more worried about AI specifically lately. I'm open to being pitched on where to give for AI safety.

Regarding the model, I think it's good to flesh things out like this, so thank you fo... (read more)

No problem! I think my main concern is just that you make sure the water properties at 0.5-1m depth match the water properties at the surface, or at least that you can work out how they vary so you can apply corrections to the satellite data. But overall I'm positive about this venture.
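As a minimal sketch of the kind of correction being suggested (assuming paired satellite and in-situ readings for the same ponds; the numbers, names and linear form below are purely illustrative, not taken from the report):

```python
# Hypothetical paired measurements for the same ponds on the same dates.
# All values are invented for illustration only.
import numpy as np

satellite_surface_temp = np.array([29.1, 30.4, 28.7, 31.2, 30.0])  # deg C, skin-layer estimate
in_situ_temp_0p5m = np.array([28.2, 29.3, 27.9, 30.1, 29.0])       # deg C, handheld sensor at ~0.5 m

# Fit a simple linear correction: in_situ ~= a * satellite + b
a, b = np.polyfit(satellite_surface_temp, in_situ_temp_0p5m, 1)

def correct(satellite_reading):
    """Map a satellite skin-temperature estimate to an estimated 0.5-1 m temperature."""
    return a * satellite_reading + b

print(f"in_situ ~= {a:.2f} * satellite + {b:.2f}")
print(f"corrected estimate for a 30.5 deg C satellite reading: {correct(30.5):.1f} deg C")
```

Whether a simple linear relationship holds would of course depend on how the ponds stratify, which is the empirical question raised above.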

Applying remote sensing to fish welfare is a neat idea! I've got a few thoughts.

I’m surprised that temperature had no/low correlation with the remote sensing data. My understanding is that using infrared radiation to measure water surface temperature is quite robust. The skin depth of these techniques is quite small, e.g., measuring the temperature in the top 10 μm. Do you have a sense of the temperature profile with respect to depth for these ponds? Perhaps you were measuring the temperature below the surface, and the surface temperature as predicted by... (read more)

2
PM
Thanks for taking the time to look at the report and respond with your thoughts. We very much appreciate it! Specific to temperature, we do not know how our partner extracted data from images to determine temperature (or any parameter). We have already followed up with them to get more specific information about what exactly they did.  Regarding the depth of measurements, our “ground truthed” data were collected at a depth of approximately 0.5-1m. The sensor of the handheld device---which collected data for all parameters except for ammonia---was submerged just below the water surface. For ammonia, a sample of water was collected from the same site at approximately the same depth. This aspect of the study protocol was designed to match the procedures conducted by the ARA.

Point 4, Be cautious and intentional about mission creep, makes me think of environmental- and animal-focused political parties such as the Greens and Animal Justice Party in Australia, and the Party for the Animals in the Netherlands. The first formed as an environmental party, and the latter two formed as animal protection parties.

All three of these have experienced a lot of mission creep since then (Animal Justice Party to a lesser extent than the other two). The prevailing wisdom from many is that this is a good thing. A serious political part... (read more)

Thanks for writing this! I had one thought regarding how relevant saying no to some of the technologies you listed is to AGI. 

In the case of nuclear weapons programs, the use of fossil fuels, CFCs, and GMOs, we actively used these technologies before we said no (FFs and GMOs we still use despite 'no', and nuclear weapons we have and could use at a moment's notice). With AGI, once we start using it, it might be too late. Geo-engineering experiments are the most applicable of these, as we actually did say no before any (much?) testing was undertaken.

1
Charlie Harrison
I agree restraining AGI requires "saying no" prior to deployment. In this sense, it is more similar to geo-engineering than fossil fuels: there might be no 'fire alarm'/'warning shot' for either. Though, the net-present value of AGI (as perceived by AI labs) still seems very high, evidenced by high investment in AGI firms. So, in this sense, it faces similar commercial incentives for continued development as GMOs/fossil fuels/nuclear power did for continued deployment. I think the GMO example might be the best as it both had strong profit incentives and no 'warning shots'.

I supplement iron and vitamin C, as my iron is currently on the lower end of normal (after a few years of being vegan it was too high, go figure).

I tried creatine for a few months but didn't notice much difference in the gym and while rock climbing. 

I drink a lot of B12 fortified soy milk which seems to cover that.

I have about 30g of protein powder a day with a good range of different amino acids to help hit 140g a day.

I have a multivitamin every few days.

I have iodine fortified salt that I cook with sometimes.

I've thought about supplementing omega 3 or eating more omega 3 rich foods but never got around to it.

8 years vegan for reference.

I strongly agree that current LLMs don't seem to pose a risk of a global catastrophe, but I'm worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs other than generating text. Even if it can only make bookings, send emails, etc., I feel like things could get concerning very fast.

Is there an argument for having AI fail spectacularly in a small way which raises enough global concern to slow progress/increase safety work? I'm envisioning something like an LLM virtual assistant which leads to a lot... (read more)

8
titotal
Given that AI is being developed by companies running on a "move fast and break things" philosophy, a spectacular failure of some sort is all but guaranteed. It'd have to be bigger than mere lost productivity to slow things down though. Social media algorithms arguably already have a body count (via radicalisation), and those have not been slowed down. 

This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself.

Thanks for the great question. I'd like to see more attempts to get legislation passed to lock in small victories. The Sioux Falls slaughterhouse ban almost passing gives me optimism for this. Although it seemed to be more for NIMBY reasons than for animal rights reasons, in some ways that doesn't matter. 

I'm also interested in efforts to maintain the lower levels of speciesism we see in children into their adult lives, and to understand what exactly drives that so we can incorporate it into outreach attempts targeted at adults. Our recent interview w... (read more)

Thank you for the feedback! I just wanted to let you know that while I haven't had time to write a proper response, I've read your feedback and will try to take it on board in my future work.

People more involved with X-risk modelling (and better at maths) than I am could say whether this improves on existing tools for X-risk modelling, but I like it! I hadn't heard of the absorbing state terminology, that was interesting. When reading that, my mind goes to option value, or lack thereof, but that might not be a perfect analogy.

Regarding x-risks requiring a memory component, can you design Markov chains to have the memory incorporated?

Some possible cases where memory might be useful (without thinking about it too much):

  • How well pa
... (read more)
2
JoshuaBlake
A fully generic Markov chain can have a state space with arbitrary variables, so it can incorporate some memory that way. But that ends up with a continuous state space, which complicates things in ways I'm not certain of without going back to the literature. An easier solution is if you can consider discrete states. For example, ongoing great power war (likely increases many risks) or pandemic (perhaps increases bio risk but decreases risk of some conflicts) might be states.
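As a minimal sketch of that discrete-state idea (all transition probabilities below are invented purely for illustration, not estimates of any real risk):

```python
# Markov chain with an absorbing "extinct" state and a "great_power_war" state,
# so that some history ("is a war ongoing?") is carried in the state itself.
import numpy as np

states = ["peace", "great_power_war", "extinct"]
P = np.array([
    # to: peace   war     extinct
    [0.989, 0.010, 0.001],   # from peace: low baseline extinction risk
    [0.300, 0.690, 0.010],   # from war: elevated extinction risk while the war persists
    [0.000, 0.000, 1.000],   # extinct is absorbing
])
assert np.allclose(P.sum(axis=1), 1.0)

# Probability of having hit the absorbing state within 100 steps, starting from peace.
dist = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    dist = dist @ P
print(f"P(extinct within 100 steps) = {dist[2]:.3f}")
```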

Thanks for sharing, I'm looking forward to this! I'm particularly excited about the sections on measuring suffering and artificial suffering.

Thanks for sharing! I love seeing concrete roadmaps/plans for things like this, and think we should do it more.

Fair enough! I probably wasn't clear - what I had in mind was one country detecting an asteroid first, then deflecting it into Earth before any other country/'the global community' detects it. Just recently we detected a 1.5 km near-Earth object with an orbit that intersects Earth's. The scenario I had in mind was that one country detects this (but probably a smaller one, ~50 m) first, then deflects it.

We detect ~50 m asteroids as they make their final approach to Earth all the time, so detecting one first by chance could be a strategic advantage.

I take your other points, though.

"(b) Secondly, while the great powers may see military use for smaller scale orbital bombardment weapons (i.e. ones capable of causing sub-global or Tunguska-like asteroid events), these are only as destructive as nuclear weapons and similarly cannot be used without risking nuclear retaliation."

I don't think this is necessarily right. First, an asteroid impact is easier to pass off as a natural event, and is therefore less likely to result in mutually assured destruction. Also, just because we can't think of a reason for a nation to use an asteroid strike, do... (read more)

1
Joel Tan🔸
I am extremely sceptical that you can make an asteroid impact seem like a natural event. The trajectories of asteroids are being tracked, and if one of them drastically changed course after an enemy state's deep space probe (whose launch cannot be hidden) was in the vicinity, the inference would be clear.

In any case, the difficulty of weaponization far outstrips redirection. The energy (and hence payload), as well as the complexity of the supporting calculations, needed to redirect an asteroid so it does not hit Earth is orders of magnitude less than the payload and calculations needed to redirect one so that it hits a specific target. Even if we were capable of the former (i.e. have deflection capabilities), we would not have the latter - and that's not even getting into the risk of even marginal errors in calculations of these long orbits causing staggeringly different predictions of ground zero - you could easily end up striking yourself (or causing a tsunami that drowns your own coastal cities).

That's not getting into the issue of the military value of such weapons - which by definition cannot deter, if meant to look accidental.

If anyone is still reading this today and is curious where I ended up, I just took a job with Sentience Institute as a Strategy Lead & Researcher.

1
T_W
What led you there? Did you apply broadly and end up with multiple viable choices? Any advice to add after successfully undertaking this process?

Cost is one factor, but nuclear also has other advantages such as land use, the amount of raw material required (to make the renewables and the lithium etc. for battery storage), and benefits for the power grid.

It's nice that renewables are getting cheaper, and I'd definitely like to see more renewables in the mix, but my ideal long-term scenario is a mix of nuclear, renewables and battery storage. I'm weakly open to a small amount of gas being used for power generation in the long term in some cases.

1
rileyharris
I think my (updated based on the comments so far) conclusion is the same as yours!

Hm, good to know and fair point! I wonder if we can test the effect of extra funding over what's needed to run a passable campaign by investing, say, $5,000 in online ads etc. in a particular electorate, but even that is hard to compare to other electorates given the number of factors. If anyone else has ideas for measuring the impact of extra funding, I'd love to hear them!

Seeking grants from EA grant makers is something I haven't at all considered. I wonder if there are any legal restrictions on this as a political party recipient (I haven't looked into this but could foresee some potential issues with foreign sources of funding). On the one hand, AJP can generate its own funds, but I feel like we are still funding constrained in the sense that an extra $10,000 per state branch per election (at least) could almost always be put to good use. Do you think we should look into this, particularly with the federal election coming up?

2
Ren Ryba
I had a quick look at EA grant makers at the beginning of the SA state campaign. I found that every EA grant maker I checked (can't remember which ones) had a clause saying that they won't fund political parties or campaigns. So I imagine there'd have to be a conversation with grant makers first about their policies - which may, understandably, be a tricky conversation.

The Australian government website says: "In late 2018 the Parliament passed legislation to ban political donations of $1,000 or more from foreign sources. ... The new rules ban donations from foreign donors: a person who does not have a connection to Australia, such as a person who is not an Australian citizen or an entity that does not have a significant business presence in Australia." So yes, that could be a hurdle. But perhaps this idea could still work for parties in countries that do not have a law like this, or the funding could come from EA orgs or grant makers based in Australia.

I'm in two minds about the party being funding constrained. To be funding constrained would mean that extra funding would translate to either a higher vote or a better outcome in some other measure of influence. I haven't seen any evidence to either support or refute that claim. The SA state campaign's spend in 2022 was $100,000 and resulted in a vote of 1.5%, while in 2018 the spend was $18,000 and resulted in a vote of 2.17%. Obviously that's just a single comparison, and the contexts varied wildly between those two years, but it's not obvious to me that extra spending would increase the vote (or other measures of influence). I previously looked at obtaining data from state branches on this question, but I don't believe I went ahead with that project.
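For what it's worth, the naive ratio implied by those two figures (simple division of the numbers quoted above; it says nothing about marginal impact or causation) is:

$$\frac{\$100{,}000}{1.5\%} \approx \$67{,}000 \text{ per percentage point (2022)} \qquad \frac{\$18{,}000}{2.17\%} \approx \$8{,}300 \text{ per percentage point (2018)}$$

so on this crude measure the 2018 campaign got far more vote share per dollar, which is consistent with the scepticism above about the party being funding constrained.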

"This being said, the format of legislative elections in France makes it very unlikely that a deputy from the animalist party will ever be elected, and perhaps limits our ability to negotiate with the other parties."

This makes some sense, as unfortunate as it is. Part of the motivation for other parties being willing to negotiate with you or adopt their own incrementally pro-animal policies is based on how worried they are that they might lose a seat to your party. If they're not at all worried, this limits your influence.

But I wouldn't say it entirely voi... (read more)

1
centur888
"I think it's still possible to have some influence in systems where minor parties are unlikely to get elected."   Thats good news Michael Dello If i may ask what your strategy would be if you were running a campaign in a 'blue-ribbon' National Party electorate? eg  New England region in NSW :)  Also, how best could a small number of AJP volunteers be used effectively?

I just want to add that I personally became actively involved with the AJP because I felt that political advocacy from within political parties had been overly neglected by the movement. My intuition was that this is because some of the earlier writings about political advocacy/running-for-election work by 80,000 Hours and others focused mostly on the US/UK political systems, in which I understand it is harder for small parties to have any influence (especially in the US).

One advantage of being in a small party is that it's relatively easy to become quite senior q... (read more)

Thank you so much for the feedback!

I did think about working for a government department (non-partisan), but I decided against it. From my understanding, you can't be working for 'the crown' while running for office; you'd have to take time off or quit.

The space agency was my thinking along those lines, as I don't think that counts as working for the crown.

I hadn't thought about the UK Civil Service. I've never looked into it. I don't think that would affect me too much, as long as I'm not a dual citizen.

I haven... (read more)

Am I reading the 0.1% probability for nuclear war right: is it the probability that nuclear war breaks out at all, or the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear warfare was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ).

1
RobertHarling
I believe it is the probability that a nuclear war occurs AND leads to human extinction, as described in The Precipice. I think I would agree that if it was just the probability of nuclear war, this would be too low, and a large reason the number is small is the difficulty of a nuclear war causing human extinction.
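To make the distinction concrete, the figure can be factored as (the 20% below is a purely hypothetical illustration, not a number from The Precipice):

$$P(\text{nuclear war} \wedge \text{extinction}) = P(\text{nuclear war}) \times P(\text{extinction} \mid \text{nuclear war}) \approx 0.20 \times 0.005 = 0.001$$

i.e. even granting a substantial chance of nuclear war this century, the combined figure stays around 0.1% if extinction conditional on war is judged very unlikely.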

When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields - either as workers, researchers or enthusiasts. This is anecdotal, based on my experience as a PhD candidate in space science. In the broader public, I think you'd be right that people would think about it much less; however, the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.

1
MichaelA🔸
Ah, that makes sense, then. And I'd also guess that researchers and policy makers are the main people that would need to be convinced. But that might also be partly because the general public probably doesn't think about this much or have a very strong/solidified opinion; that might make it easier for researchers and policy makers to act in either direction without worrying about popular opinion, and mean this can be a case of pulling the rope sideways. So influencing the development of asteroid deflection technology might still be more tractable in that particular regard than influencing AI development, since there's a smaller set of minds needing changing. (Though I'd still prioritise AI anyway due to the seemingly much greater probability of extreme outcomes there.) I should also caveat that I don't know much at all about the asteroid deflection space.

We were pretty close to carrying out an asteroid redirect mission too (ARM); it was only cancelled in the last few years. It was for a small asteroid (~a few metres across), but it could certainly happen sooner than I think most people suspect.

I guess that would indeed make them long-term problems, but my reading on them has been that they are catastrophic risks rather than existential risks, in that they don't seem to have much likelihood (relative to other X-risks) of eliminating all of humanity.

My impression is that people do over-estimate the cost of 'not-eating-meat' or veganism by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. I might need to flesh it out a bit more but here it is.

So suppose you are trying to quantify what you think the sacrifice of being vegan is, either relative to a vegetarian or an average diet. If I were asked what was the minimum amount of money I would have to have received to be vegan vs non-vegan for the last 5 years if there were ZERO ethical im... (read more)
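As a minimal sketch of how that willingness-to-accept framing might be turned into numbers (all figures below are entirely hypothetical, and this is not the method from the truncated text above):

```python
# Willingness-to-accept framing: the smallest payment I'd have needed to be vegan
# for 5 years absent any ethical considerations, annualised and compared with what
# an outside observer might guess veganism "costs" per year. Hypothetical numbers.
YEARS = 5
my_minimum_total_payment = 2_500      # hypothetical: total $ over 5 years
observer_guess_per_year = 3_000       # hypothetical: a non-vegan's guess of the yearly "cost"

my_implied_cost_per_year = my_minimum_total_payment / YEARS
overestimate_factor = observer_guess_per_year / my_implied_cost_per_year

print(f"implied personal cost: ${my_implied_cost_per_year:.0f}/year")
print(f"outside guess is ~{overestimate_factor:.0f}x higher")
```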

Self-plugging as I've written about animal suffering and longtermism in this essay:

http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/

To summarise some key points, a lot of why I think promoting veganism in the short term will be worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long term implications.

People are already talking about introducing plants, insects and animals to Mars as a means of terr... (read more)

3
Michael St Jules 🔸
Aren't those extinction risks, although perhaps less severe or likely to cause extinction than others, according to EAs?

Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?

0
Amy Labenz
Thanks for saving the dates! Part of the reason that we are hosting EA Global 2017 in three locations is that we hope to include people from different communities. I am looking at a variety of venues and working to confirm the EA Global UK dates. I hope to have an update on that before too long. Unfortunately, due to some staffing changes, we have had a bit of a delay this year in launching the EAGx application process. I hope to hire someone to help with the EAGx events soon so that we can ramp up the process and confirm some additional events. We post upcoming events here: https://www.eaglobal.org/events/

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

1
Rohin Shah
^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming they are all (or mostly) positive.

0
[anonymous]
Thanks for your comment. I agree with Michael Plant's response below. I am not saying that there will be a preponderance of suffering over pleasure in the future. I am saying that if you ignore all future pleasure and only take account of future suffering, then the future is astronomically bad.
-4
RyanCarey
People like Bostrom have thoroughly considered how valuable the future might be. The view in existential risk reduction circles is simply that the future has positive expected value on likely moral systems. There are a bunch of arguments for this. One can argue from improvements to welfare, decreases in war, the emergence of more egalitarian movements over time, the anticipated disappearance of scarcity and of reliance on factory farming, increasing societal wisdom over time, and dozens of other reasons.

One way of thinking about this if you are a symmetric utilitarian is that we don't have much reason to think either of pain and pleasure is more energy efficient than the other ([Are pain and pleasure equally energy efficient?](https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html)). Since a singleton would be correlated with some relevant values, it should produce much more pleasure than pain, so the future should have very positive net value. I think that to the extent that we can research this question, we can sit very confidently saying that for usual value systems, the future has positive expectation.

The reason that I think people tend to shy away from public debates on this topic, such as when arguing for the value of existential risk research, is that doing so might risk creating a false equivalence between themselves and very destructive positions, which would be very harmful.
2
MichaelPlant
Hello Michael, I think the key point of John's argument is that he's departing from classical utilitarianism in a particular way. That way is to say future happy lives have no value, but future bad lives have negative value. The rest of the argument then follows. Hence John's argument isn't a dissent about any of the empirical predictions about the future. The idea is that you, the ANU, can agree with Bostrom et al. about what actually happens, but disagree on how good it is.

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

2
[anonymous]
Thanks for this. It'd be interesting if there were survey evidence on this. Some anecdotal stuff the other way... On the EA funds page, Beckstead mentions person-affecting views as one of the reasons that one might not go into far future causes (https://app.effectivealtruism.org/funds/far-future). Some Givewell staffers apparently endorse person-affecting views and avoid the far future stuff on that basis - http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058.

I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.

First, that this is a good thing to do assumes that you can be reasonably certain about which candidate/party is going to make the world a better place, which is pretty hard to do.

But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads us to the zero sum game whe... (read more)

1
Ben_West🔸
Thanks for the feedback!

1. I generally like arguments from humility, but I think you're overstating the difficulty of choosing the better candidate. E.g. in 2016 only one candidate had any sort of policy at all about farmed animals, so it didn't require a very extensive policy analysis to figure out who is preferable. The same is true for other EA focus areas.
2. I agree. I do not think that promoting vote pairing irrespective of the candidates is a very useful thing to do.

Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.

I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.

0
turchin
Thanks - just saw this comment now. I didn't really miss the idea, but decided not to include it here.

I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.

Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like, 'you can solve all the world's prob... (read more)

-1
carneades
I read through your article, but let me see if I can strengthen the claim that charities promoted by effective altruism do not actually make systematic change. Remember, effective altruists should care about the outcomes of their work, not the intentions. It does not matter if effective altruists love systematic change; if that change fails to occur, their actions are not in the spirit of effective altruism. Simply put, charities such as the Against Malaria Foundation harm economic growth, limit freedom, and instill dependency, all while attempting to stop a disease which kills about as many people every year as the flu. Here's the full video

Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.

A note regarding other social movements targeting high schools (more a point for Tee, whom I'll tell that I've mentioned them): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of members: facilitators (post-high school) and delegates (high school students). The facilitators run workshops about social justice and UN-related issues and model UN debates.

The model is largely se... (read more)

This is a good point Dony - perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think the Foundational Research Institute has written something to this effect from a suffering/wellbeing in the far future perspective, but the same might hold for promoting/discouraging ethical theories.

Any thoughts on the worst possible ethical theory?

Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success cold-contacting various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?

Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally.... (read more)

People have made some good points and they have shifted my views slightly. The focus shouldn't be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm strawmanning myself here slightly).

However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.

How can we get everyone to agree on the best ethical theory?

2
DC
Perhaps it would be easier to figure out what is the worst ethical theory possible? I don't recall ever seeing this question being asked, and it seems like it'd be easier to converge on. Regardless of how negatively utilitarian someone is, almost everyone has an easier time intuiting the avoidance of suffering rather than the maximization of some positive principle, which ends up sounding ambiguous and somewhat non-urgent. I think suffering enters near mode easier than happiness does. It may be easier for humans to agree on what is the most anti-moral, badness-maximizing schema to adopt.

Thanks for sharing the moral parliament set-up Rick. It looks good, but looks incredibly similar to MacAskill's Expected Moral Value methodology!

I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital etc). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which actions max... (read more)

0
Rick
Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks in order to make them ‘work’ makes them poor frameworks to rely on for a day-to-day basis. Expecting human beings to apply these frameworks ‘correctly’ is probably giving the forecasting and estimation ability of humans a little too much credit. For a reductive example, ‘do the most good possible’ technically is a ‘correct’ moral framework, but it really doesn’t ‘work’ well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on ‘sub-frameworks’).

Imagine a 10 year old child who suddenly and religiously adopts a classical hedonistic utilitarian framework – I would have to imagine that this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that will objectively be against their framework, simply because the information they are acting on is incomplete. 10 year olds with much simpler moral frameworks will most likely be ‘right’ from a utilitarian standpoint much more often than 10 year olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and forecasted effects in order to work.

My worry is that all humans (not just 10 year olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let’s pretend that, at most, the average person can semi-accurately estimate the first order effects of your actions (which is honestly a stretch already). A first order effect would be something like “each marginal hour I work creates more utilit