Oscar Delaney

67 karma · Brisbane QLD, Australia · Joined Apr 2021


I study maths, philosophy and genetics at The University of Queensland in Australia. I was drawn to EA through GiveWell and Singerian global health ethics, but am now also interested in animal welfare and the long term.


Sounds good!  Do you plan to publish the results each month on the forum, or, if not, what is a good way to get a quick summary of the results each month?

I like the structure and style of this piece, and think it makes sense for this central resource to be more formal and less emotional, and leave the more anecdote-y articles to media pieces which will have a wider audience anyway.

I think "greater significance to the industrial revolution" should be "greater significance than the industrial revolution"

Yes, good point, I now think I was wrong about how important the amount of funding is for steering.

Regarding the 'plausible research agendas' that should be pursued, I generally agree, while noting that even deciding on plausibility isn't necessarily uncontroversial.  Currently, I suppose it is grantmakers that decide this plausibility, which seems alright.

Also, given the large amounts of money available for conducting plausible alignment research, it seems less valuable to steer or think about the relative value of different research agendas, as it is less decision-relevant when almost everything will be funded anyway.  Though in the future, if community-building is very successful and we 10x the number of alignment researchers, I imagine prioritisation within alignment would become a lot more important.

Thanks for this, I agree that it seems valuable to think carefully about the foundations of different research agendas and how justified these are.  Indeed, this seems analogous to the traditional EA pursuit of cause prioritisation: thinking carefully about the underlying assumptions and methodologies of different approaches to doing good, and comparing how well justified these are. To stretch the analogy, there may be some alignment equivalents of deworming that seem to have a strong chance of having little value but are still worthwhile in EV terms because of the possibility of having an outsized impact.

While I feel relatively unequipped to do useful direct alignment research (rowing), I feel even more unequipped to do steering.  I think this is a general feature of the world rather than just of me, that in order to usefully interrogate the axioms of a research agenda and compare the promisingness of different agendas it is very valuable to be quite familiar with these approaches, especially having already tried rowing in each.  For instance in biology, people often start out doing relatively menial lab work to help a senior person's project, then start directing particular experiments, after several years will run whole research projects, and usually only later in their career will they be well-placed to judge the overall merits of various research agendas.  Even though senior researchers are better at pipetting than undergrads, the comparative advantage of the undergrads is to pipette, and of the senior people is to steer and direct.

Likewise in alignment research, it seems most valuable for less experienced people to try rowing within one or more research agendas, and only later try to start their own or compare the value proposition of the different agendas.

I don't think this disagrees with what you wrote, it just explains why I think I should not be steering (yet).

Hi, I think I share these intuitions (surveillance is bad) but have a few qualms about your arguments:

  1. Regarding multi-layered defence, I agree it seems best not to rely solely on one protective mechanism.  I am unconvinced that having super surveillance will significantly weaken other defence mechanisms. (I don't think people wearing seat belts drive more recklessly?)  Also, if we grant that people will be lulled into a false sense of security, then I could well imagine malicious actors would likewise assume surveillance is very effective, and think 'oh well, I won't try to end the world as I'd just get caught.'  Alternatively, if surveillance is more a bluff than something that actually works great, it may still impose significant costs on malicious actors, e.g. not being able to recruit or communicate over long distances, coordination problems, and generally just slowing them down because they are spending resources trying not to be surveilled.
  2. Regarding Hanna's comment, as you note with CCTV, I think humans are just remarkably adaptable, and while there may be some transition pains, I think growing up in a fully-surveilled society wouldn't seem that bad or strange.  I think because people get used to things, we would also keep being weird and thinking well, as long as the surveillance was indeed very focused on preventing mega-bad things.
  3. I also share Jack's worry that these somewhat fuzzier concerns about people thinking less independently and being anxious and boring and mainstream do rather pale in comparison to reducing catastrophic risks, at least if one places some credence on more totalising versions of longtermism.  Thus, for me the key reasons I'm not super bullish on surveillance are that it would be really hard to implement well and globally, as you note, and that the totalitarianism risk seems major and plausibly outweighs the gains.

The intransitive dice work because we do not care about the margin of victory.  In expected-value calculations the same trick does not work, so these three lives are all equal, with expected value 7/2.
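To make the margin-of-victory point concrete: since the face values under discussion aren't reproduced here, the sketch below uses a stock intransitive dice set (not the one from the original post). Each die has the same mean, yet each beats the next with probability 5/9, because a pairwise win only counts as a win, however large the margin.

```python
from itertools import product

# A classic intransitive dice set (illustrative; not the values from the post).
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def win_prob(x, y):
    """Probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return wins / (len(x) * len(y))

def mean(x):
    return sum(x) / len(x)

# Equal expected values...
assert mean(A) == mean(B) == mean(C) == 5.0

# ...but an intransitive beating cycle, because only who wins matters,
# not the margin of victory.
print(win_prob(A, B), win_prob(B, C), win_prob(C, A))  # each is 5/9
```

Once you switch to comparing expected values, the margins are exactly what gets counted, so the cycle disappears and the three options come out equal.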

"We are not able to sponsor US employment visas for participants"  from https://www.openphilanthropy.org/open-philanthropy-technology-policy-fellowship/

Given this, I assume that for people with no connection to the US (not citizens, no green card, etc.) there is no point in applying?

This seems like an important point to make in the main post, as it probably rules out the majority of people opening this post.

Thanks, I had no idea!  Early signs are that it is not active, but I will update this if I hear otherwise.
