weeatquince


A personal take on longtermist AI governance

Thank you Luke – great to hear this work is happening, but I am still surprised by the lack of progress and would be keen to see more such work out in public!

(FWIW, a minor point, but I am not sure I would phrase a goal as "make government generically smarter about AI policy" – just being "smart" is not enough. Ideally you want a combination of being smart, having good incentives, and having space to take action. To be more precise, when planning I often use COM-B models, as used in international development governance reform work, to ensure all three factors are captured and balanced.)

 

EA for Jews - Proposal and Request for Comment

Also Ben, is there a Jews and EA Facebook group – any plans to set one up? Or if I set one up, do you think you could email it around / share it?

A personal take on longtermist AI governance

Thank you Luke for sharing your views. I just want to pick up one thing you said where your experience of the longtermist space seems sharply contrary to mine.

You said: "We lack the strategic clarity ... [about] intermediate goals", which is a great point and one I fully agree with. I am also super pleased to hear you have been working on this. You then said:

I caution that several people have tried this ... such work is very hard

This surprised me when I read it. In fact my intuition is that such work is highly neglected, that almost no one has done any of it, and that it is reasonably tractable. Upon reflection I came up with three reasons for my intuition on this.


1. Reading longtermist research and not seeing much work of this type.

I have seen some really impressive forecasting and trend-analysis-focused work, but if anyone had worked on setting intermediate goals I would expect to see evidence of basic steps, such as listing out a range of plausible intermediate goals, or consensus-building exercises to set viable short- and mid-term visions of what AI governance progress looks like (maybe it's there and I've just not seen it). If anyone had made a serious stab at this I would expect to have seen thorough exploration exercises to map out and describe possible near-term futures, assumption-based planning, scenario-based planning, strategic analysis of a variety of options, tabletop exercises, etc. I have seen very little of this.


2. Talking to key people in the longtermist space and being told this research is not happening.

For a policy research project I was considering recently, I went and talked to a bunch of longtermists about research gaps (e.g. at GovAI, CSET, FLI, CSER, etc.). I was told time and time again that policy research (which I would see as a combination of setting intermediate goals and working out what policies are needed to get there) was not happening, was a task for another organisation, was a key bottleneck that no one was working on, etc.
 

3. I have found it fairly easy to make progress on identifying intermediate goals and short-term policy goals that seem net-positive for long-run AI governance.

I have an intermediate goal of: key actors in positions of influence over AI governance are well equipped to make good decisions if needed (at an AI crunch time). This leads to specific policies such as: ensuring clear lines of responsibility exist in military procurement of software/AI; or, if regulation happens, making it expert-driven, outcome-based regulation; or some of the ideas here. I would be surprised if longtermists looking into this intermediate goal (or the others I routinely use) would disagree with it, or deny that the policy suggestions move us towards it. I would say this work has not been difficult.

– – 

So why is our experience of the longtermist space so different? One hunch I have is that we are thinking of different things when we consider "strategic clarity on intermediate goals".

My work supporting governments to make long-term decisions has given me a sense of what long-term decision making and "intermediate goal setting" involve. This colours the things I would expect to see if the longtermist community were really trying to do this kind of work, and I compare longtermists' work to what I understand to be best practice in other long-term fields (from forestry to tech policy to risk management). This approach leaves me thinking that there is almost no longtermist "intermediate goal setting" happening. Yet maybe you have a very different idea of what "intermediate goal setting" involves, based on other fields you have worked in.

It might also be that we read different materials and talk to different people, or that this work has happened and I've just missed it or not read the right stuff.

– –

Does this matter? I guess I would be much more encouraging than you about someone doing this work, and much more positive about how tractable it is. I would advise that anyone doing this work should have a really good grasp of how wicked problems are addressed, of how long-term decision making works in a range of non-EA fields, and of the various tools that can be used.

EA for Jews - Proposal and Request for Comment

I have an idea and thought a comment here would be a good place to put it:
I wonder if there should be a Jewish-run EA charity or charitable fund that directs funds to good places (such as assorted EA organisations).


I think lots of Jews want to give to a Jewish-run organisation or give within the Jewish community. If a Jewish-run EA charity existed, it could be helpful for making the case for more global effective giving.

It could be run with Jewish grant managers who ensure that funds are used well and in line with Jewish principles (there could be a Pikuach nefesh fund for saving the most lives, or a Maimonides ladder sustainable growth fund, etc).

To argue against this idea: one of the nice things about EA is that it is not us asking for your money, it is us advising on where you should give your money, which feels nicer and is maybe an easier pitch. So if there were an EA-run Jewish charity or fund, it might detract from that, or it should perhaps be kept separate from the outreach efforts.

Happy to help a bit with this if it happens.

 

How well did EA-funded biorisk organisations do on Covid?

Another slightly tangential but very similar question, which came up in a conversation I had recently, is:

"How well have EA-funded orgs built on the momentum created by the COVID-motivated global interest in GCRs (global catastrophic risks) to drive policy change or other changes to help prevent GCRs and x-risks"

I could have imagined a world where the entire longtermist community pivoted towards this goal, at least for a year or two, and focused all available time, skill, and money on driving GCR-related policy change – but this doesn't seem to have happened much. I could imagine the community looking back at this year and regretting the collective lack of action.

The organisation where I work, the APPG for Future Generations, pivoted significantly: we kickstarted a new Parliamentary committee on risks, and I wrote a paper on lessons learned from COVID which attracted significant government interest and seems to have driven policy change (writeup forthcoming).

But beyond that there has definitely been some exciting stuff happening. I know:

  • CSER is starting a lessons-learned-from-COVID project, although it is only in its very early stages.
  • FHI staff have submitted some evidence to parliamentary inquiries (example).
  • The CLTR (funded by the EAIF) has launched a report on risk (I'm unsure if this was a change in direction or always the plan).
  • No More Pandemics (not funded) was started.

This stuff is all great, and I am sure there is more happening – but my general sense is that it is much less, and much slower, than I would have expected.

I also loosely get the impression (from my own experience and that of 2-3 other orgs I have talked to) that various EA funders have been uninterested in pivoting to support policy work focused on lessons learned from COVID, some of which could scale up quite significantly, and that maybe funding is the main bottleneck for some of this (I think funding for more policy work is a bottleneck for all of the orgs listed above except FHI).

[Disclaimer – I will be biased, given that I pivoted my work to focus on COVID lessons learned and policy influencing and looked for funding for this.]

How well did EA-funded biorisk organisations do on Covid?

Hello, thank you for the interesting thoughts. The comments on the GHS index are useful and insightful.

Your analysis of COVID preparation on Twitter is really really interesting. Well done for doing that. I have not yet looked at your analysis spreadsheet but will try to do that soon.

To touch on a point you made about preparation: I think we can take a slightly more nuanced approach to thinking about when preparation works, rather than just saying "effective pandemic response is not about preparation". Some thoughts from me on this (not just focused on pandemics):

  • Prevention definitely helps. (It is a semantic question whether you want to count prevention as a type of preparation or not.) The world is awash with very clear examples of disaster prevention, whether it is engineering safe bridges, or flood prevention, or nuclear safety, or preventing pathogens escaping labs, etc.
  • The idea that preparation (henceforth excluding prevention) helps is conventional wisdom, and I would want to see good evidence against it before I stopped believing it.
  • Obviously preparation helps in small cases – talk to a paramedic rushing to treat someone, or a fireman. I have not looked into it, but I get the impression that it helps in medium cases too; e.g. rapid response teams responding to terror attacks in the UK/France seem useful, although I am not an expert. On pandemics specifically, the quick containment of SARS seems to be a success story (although I have not looked at how much preparation played a role, it does seem to be part of the story). There are not that many extreme COVID-level cases to look at, but it would be odd if preparation didn't help in extreme cases too.
  • The specific wording of the claim in the linked article's headline feels clickbait-y. When you actually read the article, it says that competence matters more (I agree) and that we should focus more on designing resilient, anti-fragile systems rather than on event-specific preparation. I agree, but I think that designing systems that can make good decisions in a risk scenario is itself a form of preparation.
  • I do agree that your analysis provides some evidence that preparation did not help with COVID. I am cautious about the usefulness of this evidence because of the problems with the GHS index – e.g. the UK came near the top but, as far as I have identified, basically had no plan to deal with any non-influenza pandemic.
  • A confounding factor that might make it hard to tell whether preparation helped is that, based on the UK experience (e.g. as discussed here), it appears that having bad plans in place may actually be worse than having no plans.
  • Evidence from COVID does suggest to me that specific preparation helps. Notably, countries (in East Asia and Australasia) that had experienced SARS and prepared for future SARS-type outbreaks managed COVID better.

So maybe we can say something like:
Prevention definitely helps. Both event-specific preparation and generally building robust, anti-fragile decision systems are useful approaches, but the latter is the more underinvested in. However, good leadership is necessary as well as preparation, and without good leadership (which may be rare) preparation can turn out to be useless. Furthermore, bad preparation, such as poor planning, can potentially hinder a response more than no preparation at all.

Does that seem like a good summary, and does it sufficiently explain your findings?

I am thinking about doing more work to promote preparation, so it would be useful to hear if you disagree.

How well did EA-funded biorisk organisations do on Covid?

[Edit – moved comment to answer above at suggestion of kbog] 

EA Infrastructure Fund: May 2021 grant recommendations

think a significant issue is that both of these cost time

I am always amazed at how much you fund managers all do given this isn't your paid job!
 

I don't think it's obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas

Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.
 

... it could be that I'm just bad at getting value out of discussions, or updating my views, or something like that.

That is possible. But it is also possible that you are particularly smart and have well-thought-out views, and that people learn more from talking to you than you do from talking to them!
(And/or just that everyone is different and different ways of learning work for different people)

EA Infrastructure Fund: May 2021 grant recommendations

Thank you so much for your thoughtful and considered reply.

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. 

This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).

 

Sorry to change topic, but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design, I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that).

Why would there be so much disagreement (so much that you would routinely want to veto each other's decisions if you had the option)? It seems plausible that if there is such a level of disagreement, then maybe:

  1. One fund is making quite poor decisions AND/OR
  2. There is significant potential to use consensus decision-making tools as a large group to improve decision quality AND/OR
  3. There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.

Just curious and typing up my thoughts. Not expecting good answers to this.
