Jay Bailey

Jay is a software engineer from Brisbane, Australia, looking to move into more direct EA work. He currently facilitates Intro to EA courses and is seeking opportunities to work for an EA-aligned organisation in EA movement building, global health, or AI safety. He is a signatory of the Giving What We Can pledge.

Comments

You Understand AI Alignment and How to Make Soup

Do you think this is a useful tool for AGI alignment? I can certainly see it helping with current models and serving as a research tool, but I'm not sure whether it's expected to scale. It'd still be valuable either way, but I'm curious about the scope and limitations of the dataset.

You Understand AI Alignment and How to Make Soup

Had to go digging into the paper to find a link, so I figured I'd add it to the comments: https://github.com/hendrycks/ethics

What is the journey to caring more about 1) others and 2) what is really true even if it is inconvenient?

For me, the big difference was taking action, more than the other two. I heard about EA years ago, but only took action once I had developed the habit of doing a good deed, however small or unimpactful, each day. Acting on a moral impulse became habitual for me. So when I revisited EA, I decided to actually start donating, because the move from "Someone should do something" -> "I should do something" -> doing something had become a force of habit.

I guess the lesson is that for people like me, something small like Try Giving, committing just 1% of income, would have been a solid entry point, getting me into the habit of doing good.

Complex Systems for AI Safety [Pragmatic AI Safety #3]

Possibly a newbie question: I noticed I was confused by the paragraph about deep learning vs. reinforcement learning.

"One example of obviously suboptimal resource allocation is that the AI safety community spent a very large fraction of its resources on reinforcement learning until relatively recently. While reinforcement learning might have seemed like the most promising area for progress towards AGI to a few of the initial safety researchers, this strategy meant that not many were working on deep learning."

I thought reinforcement learning was a type of deep learning. My understanding is that deep learning is any form of ML using multilayered neural networks, and that reinforcement learning today uses multilayered neural networks, so it could be called "deep reinforcement learning" but is generally just called RL for short. If that's true, RL research would also be DL research.
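
To make my mental model concrete, here's a toy sketch (my own hypothetical Python/PyTorch code, not anything from the post) of an RL agent whose Q-function is a multilayer neural network, which is what I understand "deep RL" to mean:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Multilayer network mapping states to action values -- the 'deep' part."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.layers(state)

# The RL part: the agent acts greedily on the network's value estimates.
q_net = QNetwork(state_dim=4, n_actions=2)
state = torch.randn(1, 4)            # a toy observation
action = q_net(state).argmax(dim=1)  # pick the highest-valued action
```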

Am I misunderstanding some of the terminology?

Against “longtermist” as an identity

One thing I'm curious about: how do you effectively communicate the concept of EA without identifying as an effective altruist?

Fermi estimation of the impact you might have working on AI safety

I've discovered something that is either a bug in the code or a parameter that isn't explained very well.

Under "How likely is it to work" I assume "it" refers to AGI safety. If so, this parameter is reversed - the more likely I say AGI safety is to work, the higher the x-risk becomes. If I set it to 0%, the program reliably tells me there's no chance the world ends.

Fermi estimation of the impact you might have working on AI safety

I like the tool! One thing I'd like to see added is total impact. I ended up using a calculator on a different webpage, but it would be nice to include something like "Expected lives saved", even if that's just 7 billion * P(world saved by you), updating whenever P(world saved) does.
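
For concreteness, here's roughly the calculation I have in mind (hypothetical Python; the probability would come from the tool's own output):

```python
WORLD_POPULATION = 7_000_000_000  # the 7 billion figure above

def expected_lives_saved(p_world_saved_by_you: float) -> float:
    """Naive expected value: population times your marginal P(world saved)."""
    return WORLD_POPULATION * p_world_saved_by_you

# e.g. a 1-in-10-million marginal chance of saving the world:
print(expected_lives_saved(1e-7))  # -> 700.0 expected lives saved
```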

What We Owe the Past

I fully believe you when you say that 17!Austin was just as smart and selfless as 27!Austin. The same isn't true of 20!Jay and 30!Jay, including on all the points you made about 17!Austin (except the one on slack, though 20!Jay didn't meaningfully use the slack he had).

That said, I don't think we're actually in disagreement on this. I believe what you say about 17!Austin, and I assume you believe what I say about 20!Jay; neither of us has known the other's past self, so we have no reason to believe our current selves are wrong about them.

Given that, I'm curious if there are any specific points in my original comment that you disagree with and why. I think that'd be a constructive point of discussion. Alternatively, if you agree with what I wrote, but you don't think that is a sufficient argument against what you said, that'd be interesting to hear about too. 

What We Owe the Past

Has this low-hanging fruit remained unpicked, however? I feel like "respecting graves, temples, and monuments" is already something most people do most of the time. Are there particularly neglected things you think we ought to do that we as a society currently don't?
