All of MichaelDello's Comments + Replies

Feedback requested for an animal advocacy and longtermist career (direct and research)

Thank you so much for the feedback!

I did think about working for a government department (non-partisan), but I decided against it. From my understanding, you can't work for 'the crown' while running for office; you'd have to take time off or quit.

The space agency was my thinking along those lines, as I don't think that counts as working for the crown.

I hadn't thought about the UK Civil Service. I've never looked into it. I don't think that would affect me too much, as long as I'm not a dual citizen.

I haven... (read more)

X-risks to all life v. to humans

Am I reading the 0.1% probability for nuclear war right as the probability that nuclear war breaks out at all, or the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear warfare was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ).
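One rough way to sanity-check the close-calls argument is to treat the historical close-call rate as a base rate and attach a per-call escalation probability. Both inputs below are illustrative assumptions, not figures from the comment or the linked list:

```python
# Back-of-envelope annual probability of nuclear war from close-call frequency.
# Both inputs are illustrative assumptions for a rough sanity check only.
close_calls = 16        # rough count in the spirit of the linked Wikipedia list
years_observed = 75     # roughly 1945 to the time of the comment
p_escalation = 0.05     # assumed chance that a given close call escalates

calls_per_year = close_calls / years_observed
p_war_per_year = 1 - (1 - p_escalation) ** calls_per_year
print(f"Implied annual probability of nuclear war: {p_war_per_year:.3f}")
```

Even with a fairly modest escalation assumption, this lands around 1% per year, well above 0.1%, which is why 0.1% reads as too low for the probability of war breaking out at all (as opposed to war plus extinction).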

1 · RobertHarling · 1y: I believe it is the probability that a nuclear war occurs AND leads to human extinction, as described in The Precipice. I think I would agree that if it were just the probability of nuclear war, this would be too low; a large reason the number is small is the difficulty of a nuclear war causing human extinction.
Why making asteroid deflection tech might be bad

When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields - either as workers, researchers or enthusiasts. This is anecdotal based on my experience as a PhD candidate in space science. In the broader public, I think you'd be right that people would think about it much less, however the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.

1 · MichaelA · 2y: Ah, that makes sense, then. And I'd also guess that researchers and policy makers are the main people that would need to be convinced. But that might also be partly because the general public probably doesn't think about this much or have a very strong/solidified opinion; that might make it easier for researchers and policy makers to act in either direction without worrying about popular opinion, and mean this can be a case of pulling the rope sideways [http://www.overcomingbias.com/2007/05/policy_tugowar.html]. So influencing the development of asteroid deflection technology might still be more tractable in that particular regard than influencing AI development, since there's a smaller set of minds needing changing. (Though I'd still prioritise AI anyway due to the seemingly much greater probability of extreme outcomes there.) I should also caveat that I don't know much at all about the asteroid deflection space.
Why making asteroid deflection tech might be bad

We were pretty close to carrying out an asteroid redirect mission too (ARM); it was only cancelled in the last few years. It was for a small asteroid (~a few metres across), but it could certainly happen sooner than most people suspect.

How should longtermists think about eating meat?

I guess that would indeed make them long-term problems, but my reading is that they are catastrophic risks rather than existential risks, in that they don't seem to have much likelihood (relative to other X-risks) of eliminating all of humanity.

How should longtermists think about eating meat?

My impression is that people do overestimate the cost of 'not eating meat' or veganism by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. I might need to flesh it out a bit more, but here it is.

So suppose you are trying to quantify what you think the sacrifice of being vegan is, either relative to vegetarianism or to an average diet. If I were asked what was the minimum amount of money I would have to have received to be vegan vs non-vegan for the last 5 years if there were ZERO ethical im... (read more)

How should longtermists think about eating meat?

Self-plugging as I've written about animal suffering and longtermism in this essay:

http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/

To summarise some key points, a lot of why I think promoting veganism in the short term will be worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long term implications.

People are already talking about introducing plants, insects and animals to Mars as a means of terr... (read more)

3 · MichaelStJules · 2y: Aren't those extinction risks, although perhaps less severe or likely to cause extinction than others, according to EAs?
Save the Date for EA Global Boston and San Francisco

Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?

0 · Amy Labenz · 5y: Thanks for saving the dates! Part of the reason that we are hosting EA Global 2017 in three locations is that we hope to include people from different communities. I am looking at a variety of venues and working to confirm the EA Global UK dates. I hope to have an update on that before too long. Unfortunately, due to some staffing changes, we have had a bit of a delay this year in launching the EAGx application process. I hope to hire someone to help with the EAGx events soon so that we can ramp up the process and confirm some additional events. We post upcoming events here: https://www.eaglobal.org/events/
Ethical Reaction Time: What it is and why it matters

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

1 · rohinmshah · 5y: ^ Yeah, I can certainly come up with examples where you need to react quickly; it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?
The asymmetry and the far future

Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming they are all (or mostly) positive.

0 · John G. Halstead · 5y: Thanks for your comment. I agree with Michael Plant's response below. I am not saying that there will be a preponderance of suffering over pleasure in the future. I am saying that if you ignore all future pleasure and only take account of future suffering, then the future is astronomically bad.
-3 · RyanCarey · 5y: People like Bostrom have thoroughly considered how valuable the future might be. The view in existential risk reduction circles is simply that the future has positive expected value on likely moral systems. There are a bunch of arguments for this. One can argue from improvements to welfare, decreases in war, the emergence of more egalitarian movements over time, the anticipated disappearance of scarcity and of reliance on factory farming, increasing societal wisdom over time, and dozens of other reasons. One way of thinking about this if you are a symmetric utilitarian is that we don't have much reason to think either of pain and pleasure is more energy efficient than the other (Are pain and pleasure equally energy efficient? [https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html]). Since a singleton would be correlated with some relevant values, it should produce much more pleasure than pain, so the future should have strongly net positive value. I think that to the extent that we can research this question, we can sit very confidently saying that for usual value systems, the future has positive expectation. The reason that I think people tend to shy away from public debates on this topic, such as when arguing for the value of existential risk research, is that doing so might risk creating a false equivalence between themselves and very destructive positions, which would be very harmful.
2 · MichaelPlant · 5y: Hello Michael, I think the key point of John's argument is that he's departing from classical utilitarianism in a particular way. That way is to say future happy lives have no value, but future bad lives have negative value. The rest of the argument then follows. Hence John's argument isn't a dissent about any of the empirical predictions about the future. The idea is that you, the ANU, can agree with Bostrom et al. about what actually happens, but disagree on how good it is.
The asymmetry and the far future

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

2 · John G. Halstead · 5y: Thanks for this. It'd be interesting if there were survey evidence on this. Some anecdotal stuff the other way... On the EA funds page, Beckstead mentions person-affecting views as one of the reasons that one might not go into far future causes (https://app.effectivealtruism.org/funds/far-future). Some Givewell staffers apparently endorse person-affecting views and avoid the far future stuff on that basis: http://blog.givewell.org/2016/03/10/march-2016-open-thread/#comment-939058
Vote Pairing is a Cost-Effective Political Intervention

I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.

First, that this is a good thing to do assumes you can be reasonably certain about which candidate/party is going to make the world a better place, which is pretty hard to do.

But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads us to the zero sum game whe... (read more)

1 · Ben_West · 5y: Thanks for the feedback!
1. I generally like arguments from humility, but I think you're overstating the difficulty of choosing the better candidate. E.g. in 2016 only one candidate had any sort of policy at all about farmed animals, so it didn't require a very extensive policy analysis to figure out who is preferable. The same is true for other EA focus areas.
2. I agree. I do not think that promoting vote pairing irrespective of the candidates is a very useful thing to do.
The Map of Impact Risks and Asteroid Defense

Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.

I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.

0 · turchin · 3y: Thanks, just saw this comment now. I didn't really miss the idea, but decided not to include it here.
If you want to disagree with effective altruism, you need to disagree one of these three claims

I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.

Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like, 'you can solve all the world's prob... (read more)

-1 · carneades · 5y: I read through your article, but let me see if I can strengthen the claim that charities promoted by effective altruism do not actually make systematic change. Remember, effective altruists should care about the outcomes of their work, not the intentions. It does not matter if effective altruists love systematic change; if that change fails to occur, the actions they did are not in the spirit of effective altruism. Simply put, charities such as the Against Malaria Foundation harm economic growth, limit freedom, and instill dependency, all while attempting to stop a disease which kills about as many people every year as the flu. Here's the full video [https://www.youtube.com/watch?v=cmzTAJUspc8]
Students for High Impact Charity: Review and $10K Grant

Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.

A note regarding other social movements targeting high schools (more a point for Tee, whom I will tell I've mentioned): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of members: facilitators (post high school) and delegates (high school students). The facilitators run workshops about social justice and UN-related issues and model UN debates.

The model is largely se... (read more)

The need for convergence on an ethical theory

This is a good point Dony, perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think Foundational Research Institute has written something to this effect from a suffering/wellbeing in the far future perspective, but the same might hold for promoting/discouraging ethical theories.

Any thoughts on the worst possible ethical theory?

Review of EA Global 2016 Marketing

Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success using cold contact of various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?

Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally.... (read more)

The need for convergence on an ethical theory

People have made some good points and they have shifted my views slightly. The focus shouldn't be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm strawmanning myself here slightly).

However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.

How can we get everyone to agree on the best ethical theory?

2 · DonyChristie · 5y: Perhaps it would be easier to figure out what is the worst ethical theory possible? I don't recall ever seeing this question being asked, and it seems like it'd be easier to converge on. Regardless of how negatively utilitarian someone is, almost everyone has an easier time intuiting the avoidance of suffering rather than the maximization of some positive principle, which ends up sounding ambiguous and somewhat non-urgent. I think suffering enters near mode easier than happiness does. It may be easier for humans to agree on what is the most anti-moral, badness-maximizing schema to adopt.
The need for convergence on an ethical theory

Thanks for sharing the moral parliament set-up Rick. It looks good, but is incredibly similar to MacAskill's Expected Moral Value methodology!

I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital etc). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which actions max... (read more)

0 · Rick · 5y: Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks in order to make them ‘work’ makes them poor frameworks to rely on for a day-to-day basis. Expecting human beings to apply these frameworks ‘correctly’ is probably giving the forecasting and estimation ability of humans a little too much credit.

For a reductive example, ‘do the most good possible’ technically is a ‘correct’ moral framework, but it really doesn’t ‘work’ well for day-to-day decisions unless you apply a lot of diligent thought to it (often forcing you to rely on ‘sub-frameworks’). Imagine a 10 year old child who suddenly and religiously adopts a classical hedonistic utilitarian framework: I would have to imagine that this would not turn out for the best. Even though their overall framework is probably correct, their understanding of the world hampers their ability to live up to their ideals effectively. They will make decisions that will objectively be against their framework, simply because the information they are acting on is incomplete. 10 year olds with much simpler moral frameworks will most likely be ‘right’ from a utilitarian standpoint much more often than 10 year olds with a hedonistic utilitarian framework, simply because the latter requires a much more nuanced understanding of the world and forecasted effects in order to work.

My worry is that all humans (not just 10 year olds) are bad at forecasting the impacts of their actions, especially when dynamic effects are involved (as they invariably are). With this in mind, let’s pretend that, at most, the average person can semi-accurately estimate the first order effects of their actions (which is honestly a stretch already). A first order effect would be something like “each marginal hour I work creates more utili
The need for convergence on an ethical theory

Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!

Your third point is well taken - I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.

Is not giving to X-risk or far future orgs for reasons of risk aversion selfish?

I wrote an essay partially looking at this for the Sentient Politics essay competition. If it doesn't win (and probably even if it does) I'll share it here.

I think it's a very real and troubling concern. Bostrom seems to assume that, if we populated the galaxy with minds (digital or biological) that would be a good thing, but even if we only consider humans I'm not sure that's totally obvious. When you throw wild animals and digital systems into the mix, things get scary.

1 · RobBensinger · 5y: I wouldn't be surprised if Bostrom's basic thinking is that suffering animals just aren't a very good fuel source. To a first approximation, animals suffer because they evolved to escape being eaten (or killed by rivals, by accidents, etc.). If humans can extract more resources from animals by editing out their suffering, then given enough technological progress, experimentation, and competition for limited resources, they'll do so. This is without factoring in moral compunctions of any kind; if moral thought is more likely to reduce meat consumption than increase it, this further tilts the scales in that direction.

We can also keep going past this point, since this is still pretty inefficient. Meat is stored energy from the Sun, at several levels of remove. If you can extract solar energy more efficiently, you can outcompete anyone who doesn't. On astronomical timescales, running a body made of meat subsisting on other bodies made of meat subsisting on resources assembled from clumsily evolved biological solar panels probably is a pretty unlikely equilibrium.

(Minor side-comment: 'humans survive and eat lots of suffering animals forever' is itself an existential risk. An existential risk is anything that permanently makes things drastically worse. Human extinction is commonly believed to be an existential risk, but this is a substantive assertion one might dispute, not part of the definition.)
Is not giving to X-risk or far future orgs for reasons of risk aversion selfish?

Thanks, there are some good points here.

I still have this feeling, though, that some people support some causes over others simply for the reason that 'my personal impact probably won't make a difference', which seems hard to justify to me.

Is not giving to X-risk or far future orgs for reasons of risk aversion selfish?

Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is.

Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.

Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering

Thanks for writing this. One small critique:

"For example, Brian Tomasik has suggested paying farmers to use humane insecticides. Calculations suggest that this could prevent 250,000 painful deaths per dollar."

I'm cautious about the sign of this. Given that insects are expected to have net negative lives anyway, perhaps speeding up their death is actually the preferable choice. Unless we think that an insect dying of pesticide is more painful than them dying naturally plus the pain throughout the rest of their life.

But overall, I would support the recommendation that OPP supports WAS research.
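The sign question above can be made explicit by comparing expected suffering under each path. Every number below is a purely hypothetical suffering unit, chosen only to show the structure of the comparison:

```python
# Sign check for the insecticide question under hypothetical suffering units.
# If insect lives are net-negative, the relevant comparison is: pain of a
# pesticide death vs. pain of a natural death PLUS suffering over the
# remainder of the insect's life. All values are illustrative assumptions.
pesticide_death_pain = 10.0   # hypothetical units
natural_death_pain = 6.0
remaining_life_pain = 8.0     # rest-of-life suffering in a net-negative life

counterfactual = natural_death_pain + remaining_life_pain
if pesticide_death_pain < counterfactual:
    sign = "earlier death preferable"
else:
    sign = "less painful insecticide preferable"
print(sign)
```

Under these placeholder numbers the earlier death comes out ahead, but the conclusion flips as soon as the pesticide death is assumed painful enough, which is exactly the uncertainty the comment is flagging.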

9 · Brian_Tomasik · 5y: Hi Michael :) The "Humane Insecticides" [http://reducing-suffering.org/humane-insecticides/] article talks about using different insecticides that are equally lethal, rather than reducing insecticide use. (It expresses similar concerns as those you raise about the sign of reducing insecticide use.) The 250,000 number is an amount of pain equivalent to that many pesticide deaths. That said, I'm somewhat skeptical about the number quoted in the article because it ignores a lot of costs (e.g., setup costs, identifying the right alternative chemicals, etc.). I first wrote it in 2007 when I was less attuned to the arguments for conservatism in cost-effectiveness estimates. Still, some other estimates [http://reducing-suffering.org/cost-effectiveness-comparison-for-different-ways-to-reduce-insect-suffering/] suggest similar orders of magnitude for how much expected insect suffering can be prevented per dollar, although these interventions are mostly more controversial (and more speculative).

It looks like you're subscribing to a person-affecting philosophy, whereby you say potential future humans aren't worthy of moral consideration because they're not being deprived, but bringing them into existence would be bad because they would (could) suffer.

I think this is arbitrarily asymmetrical, and not really compatible with a total utilitarian framework. I would suggest reading the relevant chapter in Nick Beckstead's thesis 'On the overwhelming importance of shaping the far future', where I think he does a pretty good job at showing just this.

Earning to Give v. Pursuing your Passion/Direct Work

I did earning to give for 18 months in a job that I thought I would really enjoy but after 12 months realised I didn't. I'm now doing a PhD.

I think personal fit is pretty important, but at the end of the day it's still just another thing to consider, and not the be-all and end-all. I think it's a pretty valid point that you will perform better in a role that you enjoy, and thus advance further and have more impact, but if you're really trying to maximise impact there are limits to that (e.g. Hurford's example about surfing, unless surfing to give can be a thing)... (read more)

Month-long EA movement building experiment: Effective Altruism: Grow

I noticed there doesn't seem to be an option to nominate less than 5 people. Not sure if this is a feature but I wanted to just nominate a few people and was unable to.

Are GiveWell Top Charities Too Speculative?

I think the value of higher quality and more information in terms of wild animal suffering will still be a net positive, meaning that funding research in WAS could be highly valuable. I say 'could' only because something else might still be more valuable. But if, on expected value, it seems like the best thing to do, the uncertainties shouldn't put us off too much, if at all.

0 · MichaelDickens · 5y: Yes, I agree that WAS research has a high expected value. My point was that it has a non-trivial probability (say, >10%) of being harmful.
(Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election

Happy to hear what they are Alex.

The final article had a title change and it was made clear numerous times that it was a personal analysis, not necessarily representing the views of Effective Altruism. In fact, we worked off the premise of voting to maximise wellbeing, not to further EA.

I posted it here and shared it with EAs because they are used to thinking about ways to maximise wellbeing, and I've never seen an analysis that looks at multiple parties and policies to try and select the 'best' party (many have agreed that this doesn't seem to have been d... (read more)

0 · AlexRichard · 5y: Oh hey, didn't see this at the time. If EA becomes an explicitly political movement, people who disagree with it will not join; non-political donations are distinct from politics in the sense that they do not need to be identified with one side or another; EA values might be associated with one side or another, but this is an official-seeming EA venue, not just a private-ish place for discussion.
End-Relational Theory of Meta-ethics: A Dialogue

Regardless of whether or not moral realism is true, I feel like we should act as though it is (and I would argue many Effective Altruists already do to some extent). Consider the doctor who proclaims that they just don't value people being healthy, and doesn't see why they should. All the other doctors would rightly call them crazy and ignore them, because the medical system assumes that we value health. In the same way, the field of ethics came about to (I would argue) try and find the most right thing to do. If an ethicist comes out and says that the mos... (read more)

(Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election

Thanks for everyone's feedback. The article has now been published and is a living document (we will edit daily based on feedback) until the election.

http://www.michaeldello.com/?p=839

(Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election

Hey Kieran, a few more sections have been added since I did this post, including animal welfare. Check out the Google Document for the latest version.

(Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election

Please note that this is not a final recommendation, and is not intended to be read as such. Please don't share this beyond EA circles yet unless there is someone who might be particularly suited to helping to make this more rigorous and/or useful.

1 · casebash · 5y: I would add an explicit disclaimer to clarify that this doesn't represent the views of EA as a whole. I would suggest changing the title to make this clearer as well. The current analysis ignores that some parties have a more realistic chance of winning seats than others.
The morality of having a meat-eating pet

Very true David, but then the same could be said of being vegan to a lesser extent.

This article was targeted more towards the vegan community in general, not just EAs (though I cross posted it here because I thought it might be useful). Most non-EAs wouldn't think about donations that way, and probably wouldn't donate the $20,000 if they didn't get a pet.

The morality of having a meat-eating pet

If you don't get your pets from a 'no-kill shelter', that might not be the case. In that situation, if you don't get the pet, they might just be put down.

The morality of having a meat-eating pet

Very true - I wasn't sure what the difference would be between non-by-product and by-product consumption. I suspect it's somewhere between what I stated and no effect, so this estimate could be an upper bound.

1 · Denkenberger · 5y: At least the more expensive cat food can contain actual muscle, and I know someone who says it tastes pretty good. But dog food is often grain-based with flavor added.
The morality of having a meat-eating pet

It would be interesting to see a study on this, it certainly seems plausible - a survey asking for the number of family pets throughout childhood and their current dietary choices might be illuminating.

In any case, I would still argue that this should be done with a non-meat-eating pet over a meat-eating one.

Global poverty could be more cost-effective than animal advocacy (even for non-speciesists)

"The biggest takeaway here is that animal charity research is a really good cause."

I agree - if we're highly certain we've found the best poverty interventions, or close to, and the best animal interventions might be ~250x as effective as the best poverty interventions, that should argue for increased animal charity research. But Peter is definitely right that the higher robustness of existing human interventions (ignoring flow-on effects like the poor meat eater problem) is a potentially valid reason to pick poverty interventions now over animal interventions now.
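The trade-off between a robust intervention and a speculative one with a large claimed multiplier can be framed as a toy expected-value calculation. The credence figure is an illustrative assumption, not an estimate from the post:

```python
# Toy comparison: a robust poverty intervention vs. a speculative animal one
# that might be ~250x better. The credence value is an illustrative assumption.
poverty_value = 1.0          # normalised value of the robust intervention
animal_multiplier = 250.0    # claimed upside if the animal estimate holds
p_estimate_correct = 0.01    # assumed credence that the estimate holds at all

animal_ev = p_estimate_correct * animal_multiplier * poverty_value
print(f"Animal EV: {animal_ev:.1f}x the poverty baseline")
```

Even at 1% credence the speculative option's expected value exceeds the robust baseline, which is why the argument for poverty has to rest on robustness (or risk aversion) rather than on raw expected value.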

The morality of having a meat-eating pet

Sure, I think any way of reducing the population/proportion of meat eating pets would be, on the whole, a good thing.

I'd also predict a positive correlation between affluence and having a pet, which might mean that societies coming out of poverty result in more animal consumption than suggested by the 'poor meat eater problem'.

Advice Wanted on Expanding an EA Project

I wanted to take part in the essay competition and categorise the space-related risks and solutions to food (related to my PhD in space science), though unfortunately didn't have time. Will this competition be recurring? If not, it's something I'd like to write about anyway.

0 · Denkenberger · 5y: We are not planning another essay contest at this point, but I would be happy to hear your thoughts. Related to space and food, you might want to check this [http://sethbaum.com/ac/2015_Refuges.html] out.
Looking for Wikipedia article writers (topics include many of interest to effective altruists)

I'm interested in working on the animal welfare section. I'm intending to do my own research on this in the near future anyway. In particular I'm interested in trying to find evidence and arguments for the effectiveness of different approaches to animal activism.

0vipulnaik6yGreat. You can message me on Facebook or email me at vipulnaik1@gmail.com [vipulnaik1@gmail.com] so we can discuss possible places to get started.
Looking for Wikipedia article writers (topics include many of interest to effective altruists)

The ACE article got removed? Do you have any idea why? I only skimmed the article but it looked like a reasonable article.

New climate change report from Giving What We Can

I haven't read the articles yet, though I did study climate change as part of my undergraduate degree and externally, so I'll have a crack at answering your technical question (Q3).

The point of mitigation is to reduce greenhouse gas emissions (including carbon dioxide and methane) or to capture and store them (there are a number of ways to do this: underground storage, gas-to-liquid conversion, growing trees, etc.). CO2 actually has a much shorter residence time in the atmosphere than commonly assumed, but it then gets stored in the ocean for up to centuries. Methane is also a big problem, because it has ... (read more)

The Poor Meat Investor Problem

Nice discussion, this is something I've thought about before but haven't put to paper.

As for the effectiveness of using animals to lift people out of poverty versus other methods, I have no grounds to comment. I can see why the well-being of animals wouldn't be considered in the economic equation (though I disagree with that), for the very line of reasoning you've proposed about certain subsets of humanity not being considered in years past.

Even as a non-speciesist, from a utilitarian standpoint, I could still see the 'possibility' of animals as investment being a ... (read more)

Guidelines on depicting poverty

I'm not sure that simplification and gamification should be intrinsically bad things. Situationally, both can be used to get a lot of good done. The Gaming for Good events, run by Bachir Boumaaza (Athene), can be described as 'gamification', but raised nearly $15 million US for Save the Children. Ignoring the fact that they are not GiveWell etc. (let's imagine for the sake of argument that they are, or that the charity was AMF), would that outweigh any negative impacts of gamification?

The great calculator

Great comment, you've convinced me. Thanks for the link as well, it looks interesting.

The great calculator

Thanks for the feedback everyone. Lots of recurring themes, so I'll address them partly here.

The main point is this: the end market is not Effective Altruists. I don't think it's at all likely that adding complexity for the sake of accuracy, at least on the front end, will result in any meaningful reduction in animal suffering. The point is not to be deceitful or to bias people, but simply to maximise the reduction in animal suffering.

As someone said at the EA Global 2015 conference in Melbourne, "Sometimes the best way to be a utilitari... (read more)

2Owen_Cotton-Barratt6yI think you're conflating a couple of different dimensions: degree of complexity, and degree of rigour. These two are linked: there are some aspects that it's hard to be rigorous about without a certain level of complexity. But it can also be more work to make a more complex model rigorous, because you need to be careful about more different moving parts. I think for a calculator like this you should be aiming for low complexity and high rigour. Adding more questions or complicated arguments could put people off. But making elementary mistakes or sleights-of-hand in conversion makes it easier to attack (and people will try to attack it) and dismiss. So keep the number of questions small -- addressing existential risk definitely looks like a mistake to me -- but try to make them the most appropriate ones, and keep the language precise. This recent post on depicting poverty [http://effective-altruism.com/ea/v4/guidelines_on_depicting_poverty/] and Josh's comment there have some good discussion of what kind of language will avoid pushback.
The great calculator

Thanks for your comments, see my other responses, particularly around the question of rigour vs. impact.

The great calculator

Thanks for your comments, see my other responses, particularly around the question of rigour vs. impact.
