weeatquince

Comments

What questions relevant to EA could be answered by surveying the public?

Is this question asked with the intention of maybe doing such surveys?

I do plan to do surveys of the public's view of what a good future is and would really appreciate support on that. I hope to be able to fund such work, but that is yet to be confirmed. Would you be interested in collaborating?

What questions relevant to EA could be answered by surveying the public?

I am doing work on this in the UK. Will PM you.

Edit: I do plan to do some of this. So if anyone else is interested in helping with such work on the UK do let me know.

What questions relevant to EA could be answered by surveying the public?

Making progress on ethics.

Sometimes I think philosophers could do better ethics work if they included surveying and working with the public as part of their toolkit. What do people actually think, and how do they make trade-offs?

One specific example: I had a recent chat with a bunch of philosophers who said the standard view in philosophy is that it is impossible to have (or to technically formalise) a consequentialist view of justice-based ethics. This confused me because in practice people do this all the time – you can find a bunch of justice-based EAs and get them to make ethical trade-offs, and it becomes pretty consequentialist pretty quickly (see here).

What questions relevant to EA could be answered by surveying the public?

Any human-focused moral weights work!!

How much do members of the public care about:

  • Subjective wellbeing
  • Increases in income
  • Increases in happiness
  • Reductions in pain
  • Mental health 
  • Education
  • Being alive

Public surveys would be crucial for developing better QALYs / DALYs / WELBYs / etc (see these posts).

Public surveys are also needed to make trade-offs between health and things not captured by QALYs / DALYs (such as increased income or justice), to trade off between years of life and quality of life (especially for some population ethics views), and so on.

Surveys in developing countries would be particularly useful.

What questions relevant to EA could be answered by surveying the public?

What are the public's views and concerns on AI, AI ethics and AI risks?

AI regulation is going to happen. A better understanding of the public's attitudes would be useful for helping EA-aligned policy advocates to ensure that the regulation designed is effective both at addressing public need and at ensuring that AI development is done in a safe way.

What questions relevant to EA could be answered by surveying the public?

What are the public's views, visions and ideas of what a good future will look like?

The idea here is that a clear vision of what a good future looks like has been a key part of successful long-term policy making to date (based on experiences in Wales and Portugal). The hope is that a clear vision of what the public want makes long-term decision making feel easier to democratic policy makers, helps them explain and justify a focus on the long term, and should ultimately help policy makers prioritise the long term more.

Convergence thesis between longtermism and neartermism

Super thanks for the lengthy answer.

 

I think we are mostly on the same page.

Decision quality is orthogonal to value alignment. ... I'm more optimistic about IIDM that's either more targeted or value-aligned.

Agree. And yes to date I have focused on targeted interventions (e.g. improving government risk management functions) and value-aligning orgs (e.g. institutions for Future Generations).

[Could] claiming that in practice work that apparently looks like "un-targeted, value-neutral IIDM" (e.g. funding academic work in forecasting or campaigning for approval voting) is in practice pretty targeted or value-gnostic.

Agree. FWIW I think I would make this case about approval voting, as I believe aligning powerful actors' (elected officials') incentives with the population's incentives is a form of value-aligning. Not sure I would make this case for forecasting, but could be open to hearing others make the case.

 

So where if anywhere do we disagree?

I'm leery is that influence goes both ways, and I worry that LT people who get stuck on IIDM may (eventually) get corrupted by the epistemics or values of institutions they're trying to influence, or that of other allies.

Disagree. I don’t see that as a worry. I have not seen any evidence of cases of this, and there are 100s of EA-aligned folk in the UK policy space. Where are you from? I have heard this worry so far only from people in the USA; maybe there are cultural differences, or this has been happening there. Insofar as it is a risk, I would assume it might be less bad for actors working outside of institutions (campaigners, lobbyists), so I do think more EA-aligned institutions in this domain could be useful.

If we think of Lizka's B in the first diagram ("a well-run government") is only weakly positive or neutral on the value alignment axis from an LT perspective

I think a well-run government is pretty positive. Maybe it depends on the government (as you say, maybe there is a case for picking sides) and my experience is UK based. But, for example, my understanding is there is some evidence that improved diplomacy practice is good for avoiding conflicts, and that mismanagement of central government functions can lead to periods of great instability (e.g. financial crises). Also, a government is a collection of many smaller institutions, and when you get into the weeds of it, it becomes easier to pick and choose the sub-institutions that matter more.

Convergence thesis between longtermism and neartermism

Hi. Thank you so much for the link, somehow I had missed that post by Lizka. Was great reading :-)

To flag, however, I am still a bit confused. Lizka's post says "Personally, I think IIDM-style work is a very promising area for effective altruism", so I don’t understand how you go from that to IIDM being net-negative. I also don’t understand what the phrase "especially if (like me) your empirical views about external institutions are a bit more negative than Lizka's" means (if you think institutions are generally not doing good, then IIDM might be more useful, not less).

I am not trying to be critical here. I am genuinely very keen to understand the case against. I work in this space so it would be really great to find people who think this is not useful and to understand their point of view. 

Disentangling "Improving Institutional Decision-Making"

Hi Lizka, WOW – Thank you for writing this. Great to see Rethink Priorities working on this. Absolutely loving the diagrams here.

I have worked in this space for a number of years mostly here, have been advocating for this cause within EA since 2016 and advised both Jess/80K and the effective institutions project on their writeups. Thought I would give some quick feedback. Let me know if it is useful.

I thought your disentanglement did a decent job. Here are a few thoughts I had on it.

  1. I really like how you split IIDM into "A technical approach to IIDM" and "A value-aligning approach to IIDM."
  2. However, I found the details of how you split it to be very confusing. It left me quite unsure what goes into what bucket. For example, intuitively I would see increasing the "accuracy of governments" (i.e. aligning governments with the interests of the voters) as "value-aligning", yet you classify it as "technical".
  3. That said, despite this confusion, I very much agreed with the conclusion that "value-oriented IIDM makes more sense than value-neutral IIDM" and the points you made to that effect.
     
  4. I didn’t quite understand what "(1) IIDM can improve our intellectual and political environment" was really getting at. My best guess is that by (1) you mean work that only indirectly leads to "(3) improved outcomes". So value-oriented (1) would look like general value spreading. Is that correct?
     
  5. I agree with "for the sake of clarity ... we should generally distinguish between 'meta EA' work and IIDM work". That said, I think it is worth bearing in mind that on occasion the approaches might not be that different. For example, I have been advising the UK government on how to assess high-impact risks, which is relevant for EAs too.*
     
  6. One institution can have many parts. Might be a thing to highlight if you do more disentanglement. E.g. is a new office for future generations within a government a new institution, or an improvement to an existing institution?
     

One other thought I had whilst reading.

  • I think it is important not to assign value to IIDM based on what is "predictable".

    For example, you say "it would be extremely hard to produce candidate IIDM interventions that would have sufficiently predictable outcomes via this pathway, as the outcomes would depend on many very uncertain factors." Predictions do matter, but one of the key cases for IIDM is that it offers a solution to the unpredictable, the unknown unknowns, the uncertainty of the EA (and especially longtermist) endeavour. All the advice on dealing with high uncertainty and things that are hard to predict suggests that interventions like IIDM are the kinds of interventions that should work – as set out by Ian David Moss here (from this piece).

 

Finally, at points you seemed uncertain about the tractability of this work. I wanted to add that so far I have found it much, much easier than I expected. E.g. you say "it is possible that shifting the aims of institutions is generally very difficult or that the potential benefits from institutions is overwhelmingly bottlenecked by decision-making ability, rather than by the value-alignment of institutions’ existing aims". (I am perhaps still confused about what you would count as shifting aims vs decision-making ability, see my point above, but) my rough take on this is that I have found shifting the aims of government to be fairly easy and that there are not too many decision-making bottlenecks.

So super excited to see more EA work in this space.

 

 

* Oddly enough, despite being in EA for years, I think I have found it easier to get the UK government to improve at risk identification work than the EA community. Not sure what to do about that. Just wanted to say that I would love to input if RP is working in this space.

Convergence thesis between longtermism and neartermism

Without going into specific details of each of your counter-arguments, your reply made me ask myself: why would it be that, across a broad range of arguments, I consistently find them more compelling than you do? Do we disagree on each of these points, or is there some underlying crux?

I expect if there is a simple answer here, it is that my intuitions are more lenient towards many of these arguments because I have found some amount of convergence to be a thing time and time again in my life to date. Maybe this would be an argument #11, and it might go like this:

 

#11. Having spent many years doing EA stuff, convergence keeps happening to me. 

When doing UK policy work, the coalition we built essentially combined long- and near-termist types. The main belief across both groups seemed to be that the world is chronically short-term, and if we want to prevent problems (x-risks, people falling into homelessness) we need to fix government and make it less short-term. (This looks like evidence of #1 happening in practice.) This is a form of improving institutional decision making (which looks like #6).

Helping government make good risk plans, e.g. for pandemics, came very high up the list of Charity Entrepreneurship's neartermist policy interventions to focus on. It was tractable and reasonably well evidenced. Had CE believed that Toby's estimates of risks were correct, it would have looked extremely cost-effective too. (This looks like #2.)

People I know seem to work in longtermist orgs, where talent is needed, but donate to neartermist orgs, where money is needed. (This looks like #5).

In the EA meta and community building work I have done, covering both long- and near-term causes seems advantageous. For example, Charity Entrepreneurship's model (talent + ideas > new charities) is based on regularly switching cause areas. (This looks like #6.)

Etc.

 

It doesn’t feel like I really disagree with anything concrete that you wrote (except maybe I think you overstate the conflict between long- and near-term animal welfare folk); it is more that you and I have different intuitions on how much all these points push towards convergence being possible, or at least not suspicious. And maybe those intuitions, as intuitions often do, arise from different lived experiences to date. So hopefully the above captures some of my lived experiences.
