Chris Kerr

Joined Aug 2020


Day job: Cybersecurity

From England, now living in Estonia.


If SBF committed fraud, there's a distinct possibility that he will use altruism as a defence and/or justification for his actions in the coming months.

Sadly, I think his having been the second largest donor to the Biden 2020 campaign fund will be a more effective defence. It certainly worked for the people who lost hundreds of billions of Other People's Money in 2008.

If you have a software engineering background but no particular expertise in biology or information security, then I would suggest finding an existing open-source software project that is helpful to biosecurity work and helping to make it more robust and user-friendly. I haven't worked in biosecurity myself, but I can tell you from experience in other areas of biology that there are many software packages written by scientists with no training in how to write robust and usable software. There is therefore a lot of low-hanging fruit for someone who can configure automatic testing, use a debugger or profiler, or add a GUI or web front end.
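As a sketch of the "automatic testing" point: the snippet below adds a couple of regression tests to a hypothetical scientific helper function. The function name and behaviour (`normalize_counts`) are invented for illustration and don't come from any real biosecurity package; a real project would run tests like these under pytest or a CI service rather than a `__main__` block.

```python
def normalize_counts(counts):
    """Scale a list of raw read counts so they sum to 1 (illustrative example)."""
    total = sum(counts)
    if total == 0:
        # Scientists' one-off scripts often skip edge-case handling like this.
        raise ValueError("cannot normalize an all-zero count vector")
    return [c / total for c in counts]


def test_normalize_counts_sums_to_one():
    result = normalize_counts([2, 3, 5])
    assert abs(sum(result) - 1.0) < 1e-9


def test_normalize_counts_rejects_zeros():
    try:
        normalize_counts([0, 0])
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for all-zero input")


if __name__ == "__main__":
    test_normalize_counts_sums_to_one()
    test_normalize_counts_rejects_zeros()
    print("all tests passed")
```

Even a small test file like this catches regressions when the original author is no longer maintaining the code, which is exactly the robustness gap described above.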

Much of the work in biosecurity is related to handling and processing large amounts of data, so knowledge of how to work with distributed systems is in demand.

Nowadays the amounts of data have to be extremely large before it is worth the effort of setting up a distributed system. You can fit 1 TB of RAM and several hundred TB of disk space in a commodity 4U server, at a price equivalent to a couple of weeks of salary plus overhead for someone with the skills to set up a high-performance cluster, port your software to work on it, and debug the mysterious errors that follow.
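To make the comparison concrete, here is a back-of-envelope version of that claim. Both figures below are illustrative assumptions, not real quotes or salary data:

```python
# Back-of-envelope comparison: buying one big server vs. paying for the
# engineering time a distributed system needs.
# Both numbers are illustrative assumptions, not quotes.

server_cost = 20_000         # assumed: commodity 4U server, ~1 TB RAM, large disk array (USD)
engineer_week_cost = 10_000  # assumed: one week of salary + overhead for a cluster specialist (USD)

# Weeks of cluster-engineering time that would instead pay for the server:
breakeven_weeks = server_cost / engineer_week_cost
print(f"The server costs the equivalent of {breakeven_weeks:.0f} weeks of engineering time")
```

Under these assumed numbers the server pays for itself in a couple of weeks of avoided cluster work; with different local salary figures the break-even point shifts, but the shape of the argument stays the same.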

I don't have any in-depth knowledge of this field, but my guess is that, out of the set of interventions whose effectiveness is easy to measure, the most effective ones will be those that target internally or regionally displaced refugees in the third world, as opposed to those who make it to first world countries.

One reason for avoiding talking about "1-to-N" moral progress on a public EA forum is that it is inherently political. I agree with you on essentially all the issues you mentioned in the post, but I also realise that most people in the world and even in developed nations will find at least one of your positions grossly offensive - if not necessarily when stated as above, then certainly after they are taken to their logical conclusions.

Discussing how to achieve concrete goals in "1-to-N" moral progress would almost certainly lead "moral reactionaries" to start attacking the EA community, calling us "fascists" / "communists" / "deniers" / "blasphemers" depending on which kind of immorality they support. This would make life very difficult for other EAs.

Maybe the potential benefits are large enough to exceed the costs, but I don't even know how we could go about estimating either of these.

I would love to see hiring done better at EA organizations, and if there were some kind of "help EA orgs do hiring better" role I would jump at the chance.

This would be great. Changing the human parts of the hiring process would be a lot of work, but if you can just get organizations to use some kind of software that automatically sends out "We received your application" and "Your application was rejected" e-mails, that would be a good start.

Good point. So if we can't hope for state alignment then that is an even stronger reason to oppose building state capabilities.

If there is a gene for "needing less sleep, high behavioural drive, etc.", which seems like it ought to confer an evolutionary advantage, and yet only a very small fraction of the population has the gene, there must be a reason for this.

I can think of the following possibilities:

  • It is a recent mutation.
  • The selective advantage of needing less sleep is not as great as it seems. (e.g. before artificial lighting was widespread, you couldn't get much done with your extra hours of wakefulness)
  • The gene also has some kind of selective disadvantage. (If we are lucky, the disadvantage will be something like "increased nutritional requirements" which is not a big problem in the present day.)

Do you have any idea which of these is the case?

Improving state capacity without ensuring the state is aligned to human values is just as bad as working on AI capabilities without ensuring that the AI is aligned to human values. The last few years have drastically reduced my confidence in "state alignment" even in so-called "liberal" democracies.

Some additional relevant historical background: in 1938, and especially before Kristallnacht, it was not at all obvious how bad the Nazi persecution of Jews would subsequently become. The Wannsee Conference, where the Nazi leadership decided to implement the "final solution", was still four years in the future. The pogroms of 19th-century Russia were still within living memory, and Western democracies still had colonial empires, Jim Crow laws, lynchings, and similar abominations. Without accurate statistics, it would have been hard to tell whether the newspaper stories coming out of Germany were any worse than what was already happening elsewhere.

It would be interesting to hear what gave the organisers of the Kindertransport the foresight to know that this problem was urgent.
