Consider paying me to do AI safety research work

by Rupert · 2 min read · 5th Nov 2020 · 3 comments


Funding Request · Community · AI Alignment · Personal Blog

My friend Remmelt of Effective Altruism Netherlands wrote a post asking donors to consider funding his work providing valuable services to people in the effective altruism community to help them do more good. (And I think he's doing a fine job, by the way.)

EDIT: I think Remmelt has made a stronger appeal for funding than I have, and I apologise if mine needs more work.

Inspired by his example, here I explore the topic of whether funding me to do AI safety research work is a good donation opportunity. Please note that I am secure in my basic material needs and have decent prospects of finding paid work in the near future, but the marginal impact of this donation would be to allow me to put more hours per week into creating AI safety research papers rather than pure maths papers. You must judge whether it is a good donation at the margin.

I received a PhD in pure mathematics from the University of New South Wales in 2009, for a thesis entitled "Generalizations of the Fundamental Theorem of Projective Geometry". My LinkedIn profile and CV are available on request. Linus Kramer of the University of Muenster perceived sufficient research potential in my PhD thesis to invite me to do my first post-doc. I have now completed two post-docs in Germany and have been without full-time paid work for slightly more than three years. My peer-reviewed publications are in the theory of buildings and the theory of large cardinals in set theory; two new ones have appeared recently, with full texts available on request.

I have made two public claims on arXiv of a highly, perhaps almost absurdly, ambitious nature, which, if accepted as correct, would have an important and lasting impact on the field. The first is a claim to have formulated a new family of large-cardinal axioms, found connections with the theory of virtual large cardinals, proved the inconsistency of ZF + DC + a Reinhardt cardinal, and applied the large cardinals mentioned above to proving the Ultimate-L Conjecture. The paper concludes with a suggestion for a "correct effectively complete theory" of V. Its title is "New Large-Cardinal Axioms and the Ultimate-L Program"; it is available on arXiv (full text available on request) and is currently under review. The second is a public claim of a solution to the P versus NP problem, which generated critical commentary on Twitter from an expert. I have not become aware of a reason why my approach cannot work, but naturally it is an extremely ambitious claim, and the default assumption with such claims is always that they will turn out to be wrong. A top-ranking reputable journal has acknowledged receipt of my email but has not yet communicated a decision on whether to initiate peer review (stating that it takes time to make such decisions given COVID).

I have also made a public claim, on both arXiv and the LessWrong forum, to have solved one of the open research problems that was on the exercise sheets at AISFP, namely whether the Brouwer fixed point theorem can be viewed as a special case of the Lawvere fixed point theorem. This is submitted and awaiting peer review. Full texts of all of these are available on request.
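For readers unfamiliar with the result in question, here is the standard statement of Lawvere's fixed point theorem (added for context; this sketch is not taken from the paper itself and says nothing about how the claimed connection to Brouwer is established):

```latex
\textbf{Lawvere fixed point theorem.} In a cartesian closed category,
if there is a point-surjective morphism $\phi \colon A \to Y^{A}$,
then every morphism $f \colon Y \to Y$ has a fixed point; that is,
there exists a point $y \colon 1 \to Y$ with $f \circ y = y$.
% The Brouwer fixed point theorem, by contrast, asserts that every
% continuous map $f \colon D^{n} \to D^{n}$ on the closed $n$-disc has
% a fixed point. The open problem referenced above asks whether the
% latter can be recovered as a special case of the former.
```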

I completed a research proposal related to AI safety for the Centre for Long-term Risk, available on request (the job interview was declined). I have conceived of a research project for which MIRI declined to give funding; a write-up of this is available on request. I helped to create lecture content for Toon Alfrink's charity RAISE, am interested in Vanessa Kosoy's work, and aspire to read her arXiv papers. I regularly attend Linda Linsefors's AI Safety workshops (or at least I aspire to make it to the online meetings as often as my current time constraints allow).

I am currently working on a number of pure maths and philosophy-of-set-theory research projects while seeking my next academic position. I was inspired by Connor Leahy's talk about the "GPT-3 fire alarm". I am interested in quantum computing and am teaching the basics of the theory to my wife. I want to put more time into AI safety research, but I need more financial stability.

I can put 10 hours per week for one month into AI safety research work in exchange for 350 euros (or scale this up by a factor of two, three, or four).

I can be contacted at rupertmccallum174 at gmail dot com.


3 comments

Note from the lead moderator:

While posts like this are fine, I'll be moving them to "Personal Blog" from now on (including this post and Remmelt's) unless they also discuss topics outside of the author's personal request for funding.

Thanks Aaron, that's fine, but I would hope that my decision to put my post up was not a factor in getting Remmelt's moved off the Frontpage. But anyway. I did worry a bit about whether my post was appropriate for this forum, and Remmelt's appeal for funding is stronger than mine.

The post is perfectly appropriate to publish on the Forum! It's just not something that quite fits what we're looking to have at the top of the homepage. I still hope the Forum can be a helpful place to host content like this, where people can find it if they look for it in "All Posts" and you can share the link in other places.