Jay Bailey

Working (0-5 years experience)
481 · Brisbane QLD, Australia · Joined Aug 2021

Bio


I'm a software engineer from Brisbane, Australia who's looking to pivot into AI alignment. I have a grant from the Long-Term Future Fund to upskill in this area full time until early 2023, at which point I'll be seeking work as a research engineer. I also run AI Safety Brisbane.

How others can help me

I will be looking for a research engineering position near the end of 2022. I'm currently working on improving my reinforcement learning knowledge. (https://github.com/JayBaileyCS/RLAlgorithms)

How I can help others

Reach out to me if you have questions about basic reinforcement learning or LTFF grant applications.

Comments (90)

That's excellent advice! I just looked up Australia specifically (https://www.ato.gov.au/Individuals/Income-and-deductions/In-detail/Income/Scholarship-payments-and-tax) and it appears that:

For a scholarship payment to be exempt income it can't:

  • be an excluded government payment (Austudy, Youth Allowance or ABSTUDY)
  • come with a requirement for you to do work (either as an employee or contract for labour, now or in the future).

You must also meet some further conditions - the key one being that you're studying at a school, college or university. So, if you're a uni student being funded to do a Masters or PhD, your grant is tax-exempt. If, like me, you're upskilling independently, tax does need to be paid on it.

That said, this took me almost no time and could potentially have saved the LTFF tens of thousands of dollars, so it was a very high-EV thing to check.
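For a rough sense of the stakes, here's a quick back-of-the-envelope sketch (the grant sizes are hypothetical figures picked purely for illustration, and the rates are the 2022-23 Australian resident brackets, ignoring the Medicare levy and offsets):

```python
# Purely illustrative: rough tax owed on a full-time upskilling grant if it's
# treated as ordinary taxable income (2022-23 Australian resident rates,
# ignoring the Medicare levy and any offsets).
def au_income_tax_2022_23(income: float) -> float:
    brackets = [
        (18_200, 0.0),
        (45_000, 0.19),
        (120_000, 0.325),
        (180_000, 0.37),
        (float("inf"), 0.45),
    ]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        tax += max(0.0, min(income, upper) - lower) * rate
        lower = upper
    return tax

for grant in (40_000, 60_000, 80_000):  # hypothetical grant sizes
    print(f"${grant:,} grant -> roughly ${au_income_tax_2022_23(grant):,.0f} in tax")
```

At those sizes, even a handful of grants gets into the tens-of-thousands range quickly.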

It's worth noting that money like this is absolutely capable of shifting people's beliefs through motivated reasoning. Specifically, I might be tempted to argue for a probability outside the Future Fund's threshold, and for research I do to be motivated in favor of updating in that direction. So my suggested strategy would be to figure out your own beliefs before looking at the contest, then look at the contest to see whether you disagree with the Future Fund.

The questions are:

“P(misalignment x-risk|AGI)”: Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI

AGI will be developed by January 1, 2043

AGI will be developed by January 1, 2100

To be answered as a percentage.

$4,500 is the cost to save a life, whereas $200 is the quoted cost of saving one year of life. Saving a life produces, IIRC, somewhere around 25-30 QALYs. So $200/year would be correct, accounting for rounding, if GiveWell's estimates are trustworthy.
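As a quick sanity check of that division (the 25-30 QALYs-per-life figure is my rough recollection, not an official GiveWell number):

```python
# Back-of-the-envelope check of the cost-per-life-year figure (illustrative).
cost_per_life = 4_500  # quoted cost to save a life (USD)

for qalys_per_life in (25, 30):  # rough recollection, not an official figure
    print(f"{qalys_per_life} QALYs per life -> ${cost_per_life / qalys_per_life:.0f} per life-year")

# 25 -> $180/year, 30 -> $150/year: both in the same ballpark as the quoted $200.
```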

"So if a $50 Uber ride saves me half an hour, my half an hour must be more valuable than a three months of someone else’s life. That’s a pretty big claim."

That line hit hard. Something about reducing it to such a small scale made it really hit home - I can actually viscerally understand why there are people who agonise over every purchase and struggle so much with guilt. I've always been able to remain emotionally distant - to donate my 10%, save lives each year, and yet somehow be okay with not donating more, even though I could. Thinking of it in terms of a single purchase and weeks or months of someone's life makes it feel so much more real all of a sudden, and my justifications of Schelling points and sustainable giving feel much more hollow.

Does the Impact Fund not take a small percentage to support GiveWell's overhead? I just always assumed they did.

This looks a lot more promising than the original post, so I'm very impressed at the continued evolution of this idea!

So, if I understand correctly, the current setup (or the setup in a month or two) is roughly equivalent to this: I give you money, you invest that money in a very low-risk asset, the yield goes to GiveDirectly, and if I need the money back, you give it to me. The reason it's a cryptocurrency is that there are plans to eventually allow GLO to be used as cash for various things. This matters because GLO is designed to be held in checking accounts, savings accounts, and emergency funds, not as a long-term investment - it doesn't compete with the stock market on yield, but that's not the intention.

Have I got that right?
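To check that I'm picturing the mechanics correctly, here's a toy sketch of that flow as I understand it (all names and numbers are mine and purely illustrative - not GLO's actual implementation or yield):

```python
# Toy model of my understanding of the GLO flow (illustrative only).
from dataclasses import dataclass

@dataclass
class GloToyModel:
    reserve: float = 0.0       # USD swapped into GLO by holders
    tbill_yield: float = 0.04  # assumed annual T-bill yield, made up

    def deposit(self, usd: float) -> None:
        self.reserve += usd    # holder exchanges USD for GLO

    def redeem(self, usd: float) -> None:
        self.reserve -= usd    # holder exchanges GLO back for USD

    def yearly_donation(self) -> float:
        # Yield on the reserve goes to GiveDirectly rather than to holders.
        return self.reserve * self.tbill_yield

glo = GloToyModel()
glo.deposit(10_000)            # e.g. an emergency fund held in GLO
print(glo.yearly_donation())   # -> 400.0 per year to GiveDirectly
```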

Some additional questions:

How quickly, and at what cost, will I be able to exchange a currency (whether USD or non-USD) for GLO, and back again?

Is there a long-term plan to extract some amount of the T-bond interest for operational expenses? Do you see yourself being donor-funded indefinitely? 

 

I quite liked it! I left some comments, but I found it an engaging novel overall. I liked the different perspectives given by the characters who opposed Isaac's views in pretty reasonable ways, and how EA views were mentioned without getting too preachy.

Plus, I liked how the novel evoked the overall essence or vibe of cultivation novels without getting too lost in the weeds, as well as the well-developed military theory of cultivation warfare the characters had. Overall I quite enjoyed the novel both as an intro to EA concepts and on its own merits.

It could certainly use another editing pass or two for grammar, but I think it has fantastic potential!

Interesting. Do you have any good examples?

This is a fantastic resource, and I'm really glad to have it! 

My own path has been a little more haphazard - I completed Level 2 (Software Engineering) years ago, and am currently working on AI safety (1), mathematics (3), and research engineering ability (4) simultaneously. Having just completed the last goal of 4 (completing 1-3 RL projects), I was planning to jump straight into 6, since transformers haven't yet appeared in my RL studies, but I'm now rethinking those plans based on this document - perhaps I should learn about transformers first.

All in all, the first four levels (the ones I feel qualified to write about, having gone through some or all of them) seem extremely good.

The thing that most surprised me about the rest of the document was Level 6 - specifically, the part about being able to reimplement a paper's work in 10-20 hours. This seems pretty fast compared to other resources I've seen, though most of those resources are RL-focused. For instance, this post estimates 220 hours. This post from DeepMind about job vacancies a few months ago also says:

"As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you."

Thus, I don't think it's necessary to be able to replicate a paper in 10-20 hours. Replicating papers is a great idea according to my own research, but I think that one can be considerably slower than that and still be at a useful standard.

If you have other sources that suggest otherwise, I'd be very interested to read them - it's always good to refine my idea of where I'm heading!

 

Good piece! Upon seeing the title, I immediately wished I'd thought to write something like it.

I was personally involved in FIRE before I got involved in EA. Even now, I donate 10% of my income and save most of what's left. Because of my decision to try and perform direct work to improve the world, I'm no longer planning the RE part of FIRE. I've also become a little less frugal as a result and willingly taken a pay cut to skill up for direct work - what does it matter if it takes an extra year or two to reach FI if I'm planning to perform direct work post-FI anyway?

So, I guess for me, these ideas are somewhat in conflict, in the sense that I can't simultaneously maximise both. But I agree there is a core to both of these movements that aligns very well. Mr Money Moustache, whether he identifies as EA or not, has donated significant amounts to GiveWell in the past. It makes complete sense that a person who wants to optimise their finances would also want to optimise their charitable giving in a similar fashion, so I think EA ideas will find fruitful soil in the FIRE movement.
