Welcome to March's open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.


Tom Ash wrote a piece a month ago called Effective Altruism and Consequentialism. In it, he covers a particular issue:

A third sort of non-consequentialist position is that we should not act wrongly in certain ways even if the results of doing so appear positive in a purely consequentialist calculus. On this position we should not treat our ends as justifying absolutely any means. Examples of prohibited means could be any of the adjectives or nouns commonly associated with wrongdoing: dishonesty, unfairness, cruelty, theft, et cetera. This view has strong intuitive force. And even if we don’t straightforwardly accept it, it’s hard not to think that a sensitivity to the badness of some of these actions is a good thing, as is a rule of thumb prohibiting them - something that many consequentialists accept.

It would be naive to suppose that effective altruists are immune to acting in these wrong ways - after all, they’re not always motivated by being unusually nice or moral people. Indeed, effective altruism makes some people more likely to act like this by providing ready-made rationalisations which treat them as working towards overwhelmingly important ends, and indeed as vastly important figures whose productivity must be bolstered at all costs. I’ve seen prominent EAs use these justifications for actions that would shock people in more normal circles. [...] And there are also attitudes that are sufficiently common to not be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people.

I believe there should be an essay targeted at the issue above. It should be posed as a problem, or a question, which we will make an effort to solve. The question is:

What, if any, circumstances allow us to make special exceptions for ourselves to act in ways we and others would usually prohibit?

The question isn't "is consequentialism true?", nor "is effective altruism the same thing as consequentialism?" Those are questions which have already been examined, not least by the rest of Tom's essay. I want this essay for the sake of posterity: something we can point to as a common-sense guide to whether such an action is acceptable, and why or why not. So it might be more about personal behavior, psychology, and practicality than philosophy. I really want help writing this. I think it needs to be written at some point, but I can't do it alone. Please offer your help in comment replies, or by PM.

I assume you've read this series of posts?

Obviously, we must be sensitive in covering the topic, especially in which examples are chosen. Some examples will present edge cases: problems that even the greatest moral philosophers in the world would be hard-pressed to solve conclusively. Some may not seem like edge cases to us in particular, but they may to others. Some examples, even single words used for reference, could be philosophical landmines. I would like to avoid such examples as best we can.

I believe the best examples are the ones that are least controversial among everyone, and that are failure modes of consequentialist rationalizing in obviously hazardous ways. I read an article about how one anonymous woman in the United States was making money by writing essays for students at American universities with poor English skills. She was making enough money that I speculated that, as an adept student of English, it might be worth it for me to take up a career of plagiarism to earn to give. I was being dumb. Jack LaSota commented that I shouldn't consider it, because if I were found out, such actions would tarnish effective altruism by association. Months later, in another conversation, someone asked whether funneling money into secret Swiss bank accounts to avoid paying taxes, so the money could instead be donated to effective charities, would be a good idea. Everyone responded 'uh, no', but I gave a lengthy and well-received response as to why it was a bad idea.

Crime and fraud are great, uncontroversial examples of things WE TOTALLY SHOULDN'T DO, however we might rationalize them. I think building from examples like these is a good starting point, and we can extrapolate the general rationale behind not doing them to get the thesis across.

The bystander effect! Maybe you should ask someone specific who you know!

Yeah, I think you're right that we need to have some common-sense guidelines, or 'injunctions' or 'deontology' or 'virtue ethics'. And it's not great to act on a view that EAs are extra valuable. But what should we do about it? Why do people come to this position? What mistakes are they making?

Or maybe the issue can be extended other ways, and then it would be a nice juicy big post topic!

When considering working for a startup/company with significant positive externalities, would it be far off to estimate your share of impact as (estimate of total impact of the company vs. the world where it did not exist) * (equity share of company)?

This seems easier to estimate than your impact on the company as a whole, and it matches up with something like the impact certificate model (equity share seems like the best estimate we would have of what an impact certificate division might look like). It's also possible that there are distortions in the allocation of money that would lead to an underestimate of true impact.

On the downside, it doesn't fully account for replaceability, and I'm not sure if it meshes with the assessment that "negative externalities don't matter too much in most cases because someone else would take your job" that seems to be the typical EA position.
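For concreteness, here's a minimal sketch of that estimate in Python; the impact and equity figures are purely hypothetical, chosen for illustration rather than taken from any real company.

```python
def estimated_impact_share(counterfactual_impact, equity_share):
    """Your share of a company's externalities, estimated as
    (total impact of the company vs. the world where it did not exist)
    multiplied by your equity share."""
    return counterfactual_impact * equity_share

# Hypothetical example: a company whose existence is judged to be worth
# $10m of positive impact, in which you hold 0.5% equity.
print(estimated_impact_share(10_000_000, 0.005))  # 50000.0
```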

I think this is a good starting point for estimating share of externalities in a start-up (particularly the expected externalities that will be caused if the start-up is very successful).

I don't think it will be all that accurate, for the kind of reasons you mention, but it has the major advantages that it is easy to measure and somewhat robust. I expect that replaceability means that it tends to be an overestimate, but typically by less than an order of magnitude.

A warning, though: replaceability can operate on the level of startups as well as the level of jobs. You should consider that if your start-up weren't very successful in the niche it's going for, then someone else might be (even if they're a bit less good). This will tend to make the externalities of the whole company smaller than they first appear.

Has the date for the 2015 EA Summit been set yet?

I go to Cornell, and I'm mildly interested in starting an EA student group here. For various reasons, this only seems likely to work if I can find other already-EA people. Do any of you attend (or I suppose work at) Cornell or live in Ithaca or nearby, and if so, would you like to start a club?

Hi Yavanna, I helped start the new Effective Altruism Cornell group, so you can join that! I'd encourage you to send them an email. In general, anyone can find local EA groups at http://effectivealtruismhub.com/groups

Thanks! I just emailed them.

Not sure I know anyone. I suggest also asking Jonathan Courtney, Tom Ash, and a www.lesswrong.com open thread!

Thanks for the open thread suggestion. I just asked; we'll see what happens!

I just came across an interesting set of short, user-friendly videos describing how QALYs are derived using the standard gamble or time tradeoff techniques. They mainly focus on applications in the US healthcare system, but I think they could be useful for anyone trying to communicate the ideas of cost-effectiveness research.

Determining utilities using the standard gamble and time tradeoff techniques

Calculating QALYs, and applying them to the healthcare system

They're written by Aaron Carroll, who regularly writes for the NYTimes, and blogs at The Incidental Economist. Their blog outlines the general concept behind EA here, so maybe they'd be open to consulting with an EA group? In general, I think healthcare economists have really interesting viewpoints to add to this movement.
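As a rough illustration of the time tradeoff idea (with made-up numbers, not figures from the videos): a respondent who is indifferent between t years in a health state and x years in full health implies a utility weight of x / t, and QALYs are that weight multiplied by years lived. A minimal Python sketch:

```python
def time_tradeoff_utility(years_in_state, equivalent_years_full_health):
    """Utility weight implied by a time tradeoff response: the respondent is
    indifferent between `years_in_state` in the health state being valued
    and `equivalent_years_full_health` in full health."""
    return equivalent_years_full_health / years_in_state

def qalys(utility_weight, years):
    """Quality-adjusted life years for `years` lived at `utility_weight`."""
    return utility_weight * years

# Hypothetical respondent: indifferent between 10 years with the condition
# and 5 years in full health, giving a utility weight of 0.5, so 10 such
# years count as 5 QALYs.
u = time_tradeoff_utility(10, 5)
print(u, qalys(u, 10))  # 0.5 5.0
```

(The standard gamble works similarly, except the weight is the probability of full health at which the respondent is indifferent between a risky treatment and the certain health state.)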

Feel free to ask for advice regarding donations, projects and career choice here. If you just want to put important questions out there without attaching your username, it's fine to use a different one!

I've been tentatively considering a career in the actuarial sciences recently. It seems like the field compensates people pretty well, is primarily merit-based, doesn't require much, if any, programming ability (which I don't really have), and doesn't have very many prerequisites to get into, other than strong mathematical ability and a commitment to taking the actuarial exams.

Also, actuarial work seems much slower paced than the work done in many careers that are frequently discussed on 80K Hours, which would make me super happy. I'm a bit burnt out on life right now, and I really don't want to go into a high-stress job, or a job with unusually long hours after I graduate at the end of this semester. I guess that if I wasn't a failure, I would have figured out what I was doing after graduation by now.

Are there any actuaries in the EA movement, or does anyone have any insights about this field that I might not have? My main concern regarding potentially becoming a trainee actuary is that the field is somewhat prone to automation. Page 71 of this paper, which was linked to in 80K Hours' report on career automation, suggests that there's a 21% chance that actuarial work can be automated. The automation of certain tasks done by actuaries is frequently discussed on the actuary subreddit, as well.

Thanks for reading, and for any advice or thoughts that you might have for me!

I'm no expert on actuarial studies, but I agree with your description: it seems good, challenging, decently rewarded, reasonably low-stress, and good for people with strong mathematical ability. Regarding programming ability, if you're a relatively analytical thinker, it's pretty feasible to learn to program, even without formal study, so I doubt that it would be a decisive factor for most people.

"I guess that if I wasn't a failure, I would have figured out what I was doing after graduation by now."

You're also clearly giving it some thought, though, considering that you're posting about it here and reading 80,000 Hours, so maybe you needn't feel so downcast about things.

Thanks for the encouragement, Ryan!

How can we get more involved in policy? There are some historically contingent reasons that we haven't done so before: 1 - policy impact is hard to measure, and 2 - policy can make people irrational.

But it's also fun to read - at SSC, in the news, and in general. Open Philanthropy Project is doing some policy research, as is GPP, and as is CSER. But how can we do it better? Probably we need better engagement with existing policy researchers. So should we host policymaking workshops? Should EAs practice getting involved by writing letters to our local members? Should we commission Scott Alexander to write extra policy essays about things we care about? Should we fund Niel B and others to do direct lobbying? How should this all play out?

I don't think the other comments have really nailed the most challenging parts of working in policy.

  1. It is all about the voters. 99% of what politicians care about is reelection, and therefore the will of the majority often wins. Voters are stupid, especially the majority of them. They don't want to see their policymakers spend time (much less money!) on things that don't matter to them. This is why letter-writing (although at much greater scale than the EA community could currently support) is actually somewhat effective.

  2. Lobbying is all about relationships. Lobbying does not get done by setting up a single appointment with a representative. It happens over very long periods of getting to know a number of the politicians involved, including frank discussions and the support of a powerful group.

  3. Politicians rely on those they've selected as experts. There's no chance they're going to listen to EA at this point... they're going to listen to the head of the World Bank and USAID. They'll pay a little attention to Gates, Amnesty and ONE, but those organizations struggle mightily to influence policy themselves.

OK, now that I was a jerk who discussed huge barriers, here's what I see as opportunities:

  1. One thing organizations such as J-PAL have clearly demonstrated is that the details of interventions matter greatly. Luckily, the details are often the things policymakers care the least about. There are opportunities to make what would be considered minute changes in policy, as long as we pay close attention and then work hard to make them happen.

  2. Because policymakers represent a single geography, if you can gain scale within a single geography, you can get the attention of that policymaker. A concentrated EA population in Oxford and/or SF, for example, has an outside shot at getting the attention of its representative. If they happen to have a good one, that representative could help get the ball rolling a bit.

  3. Organizations like ONE and Gates have spent many years doing a lot of this work for us. What that means is if we can influence them, or even partner with them, there's the potential to piggy-back those relationships and be heard.

It's definitely a very difficult road, and scaling EA while gaining influence and political contacts is key. Of course, it does open up the potential for enormous long-term opportunity.

I worked on policy for a US Congressman and a US Congressional committee, so let me know if there are questions regarding those experiences that I could help answer.

West Oxford's MP, Nicola Blackwood, is actually pretty promising from this point of view!

In late 2010, Blackwood was elected to serve on the Home Affairs Select Committee and is secretary of the All-Party Parliamentary Group on Overseas Development.[5] Before her election to parliament, Blackwood worked as a volunteer on human rights and aid projects in the Middle East, Mozambique, Rwanda and Bangladesh, and has also worked as a volunteer among the disadvantaged in Birmingham and Blackpool. Prior to running for office, Blackwood worked with the Conservative Party Human Rights Group which was set up to find ways for the UK to combat human rights abuses in places like Burma and the Democratic Republic of the Congo and as an adviser to the then Shadow International Development Secretary, Andrew Mitchell. She is a member of the Conservative Party Human Rights Commission,[6] as well as holding a position on the Council of Advisors for ZANE, a charity which seeks to support pensioners in Zimbabwe.[7]

I recall her being a lovely person, back when I was trudging around handing out leaflets in the rain for her.

Good question. Policy is an important area that the EA movement will need to address.

I will add another reason which may have limited policy engagement: policy is hard! Finding policies that would definitely be good and wouldn't face significant opposition requires substantial work. It is also possible to be overconfident about policy ideas: most of our ideas have looked better at first glance than on closer inspection. This means we shouldn't be too hasty to fix on the policies we choose to push.

It's possible to do this work, and GPP is experimenting with fleshing out policies at the moment, but it's not necessarily easy.

Another option is to lobby for "obvious" policies, like increasing foreign aid. But this is hardly a neglected area.

I know for a fact that there are intelligent and thoughtful people who argue that foreign aid spending has not been effective, and in some cases has actually been harmful. And there are other people who are convinced that we need to increase it. So, so much for 'obvious'. : )

If it's difficult but tractable, then liaising with existing policymakers is good, as is getting practice in lobbying and policymaking.

I also feel like science and tech lobbying is a bit neglected, and could be popular with the public and with great scientists, while giving us an opportunity to talk to relevant policymakers about risk-mitigating interventions.

Yes, although engaging with existing policymakers too soon is a good way to lose credibility. There is definitely more room to talk to friendly policy experts though!

I'm not sure that doing lobbying 'just for practice' is a good idea. It would be fairly easy to accidentally lobby for something bad, and equally the reputational consequences of lobbying can be complicated if you don't know an area.

What do you mean by science/tech lobbying? Lobbying for what?

General science stuff like research funding, improved research infrastructure, better research regulation, better patent law, and better education, all the while promoting public understanding of science. All of this could be a good platform to build on, even if some of these areas are somewhat crowded, reducing immediate impact.

I think it's not so much that it's crowded as that it's often unclear what the actual thing you'd lobby for is: would more research funding be better, or better-directed research funding? Maybe. What exactly would better patent law be? Better education? These are all things where it is easy to come to views, and even to be quite confident about them, but where the realities are often much more complicated than they seem.

I don't mean that in a nihilistic way - I'm currently working on building a much more informed view of safe biological research funding in order to lobby for a specific policy - it's just that there's quite a lot of work to be done to be sure something is good before you advocate for it.

A thought about the question whether to donate now or later: why would I invest money in myself or my own career, if the expected return on an investment in someone else is greater?

Total (altruistic) human potential might increase more if I donate to SCI which indirectly improves the education of many people, or to CFAR to pay for someone else's workshop rather than go to the workshop myself.

How and why could this thought be wrong (or right)?

I think this idea raises some good questions. Something that Paul says along these lines is that when you donate funds, you might get some good flow-through benefits to the recipient and their contacts, but we should often assume that the economic benefits to people will eventually compound at roughly the interest rate.

The challenge is finding opportunities where good can compound faster than a few percent. And investing in the study and career progression of someone who is trying to answer these kinds of questions might be such an example. If you think so, then the question becomes whether you think that CFAR is a good opportunity for compounding your impact like that.
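To make the compounding comparison concrete, here's a toy sketch; all of the rates, amounts, and horizons are hypothetical, and this is only the crude model sketched above, not a representation of Paul's actual analysis.

```python
def give_now_value(amount, charity_rate, years):
    """Donate today; assume the benefits compound at the charity's rate of return."""
    return amount * (1 + charity_rate) ** years

def give_later_value(amount, interest_rate, charity_rate, invest_years, total_years):
    """Invest at the market interest rate for `invest_years`, then donate;
    the benefits compound at the charity's rate for the remaining horizon."""
    donated = amount * (1 + interest_rate) ** invest_years
    return donated * (1 + charity_rate) ** (total_years - invest_years)

# If the charity's benefits compound faster than the interest rate,
# giving now comes out ahead; otherwise investing and giving later does.
print(give_now_value(1000, 0.10, 20))              # ~6727
print(give_later_value(1000, 0.05, 0.10, 10, 20))  # ~4225
```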

For those who are reading along, Ryan is referring to this. I mention CFAR as an example. There might be (identifiable) better giving opportunities somewhere in the world.

Hi all, not sure if this has been posted somewhere, but basically it's FREE money for charity donations. Do it for the animals; it will only take two minutes. Follow these seven easy steps and you're done:

  1. Visit www.pledgeling.com and use the “Sign Up” button (in the purple bar at the top) to create an account.

  2. Fill in your name and email address, and create a password. No credit card required!

  3. Click the verification link you will receive in an email and have $5 automatically added to your account.

  4. Find your charity by using the search bar.

  5. Click the “Donate” button and confirm that you would like to pass the $5 along to that charity.

  6. Use the social media share buttons to ask your friends and family members to help as well.

  7. Congratulate yourself on making a $5 donation in two minutes!

Things that seem pretty useful for career development are online education, tutoring and networking. How can we provide these better? Can 80,000 Hours focus more on some or all of these?

A post on how to balance doing good where it has the highest impact and doing "frivolous" good for your friends: http://www.patheos.com/blogs/unequallyyoked/2015/03/effective-altruism-ethically-questionable-cookies.html

I was curious who'd heard of a few organisations and places in the EA orbit, with a range of fame and nicheness. So I started a poll on the EA Facebook group: https://www.facebook.com/groups/effective.altruists/permalink/832289520160740/?qa_ref=qd . I'd be interested in your responses!