Jack R

I’m interested in EA infrastructure/community building.

Comments

13 Very Different Stances on AGI

This is minor, but the first two Metaculus questions are about aging rather than AGI.

Wikipedia editing is important, tractable, and neglected

Thanks for this!

Curious whether anyone else would be a fan of the community adopting the following title format for this type of post, to emphasize to ourselves that we have an impoverished understanding of how the world works:

“X is tractable and neglected, and it might be important”

Should I go straight into EA community-building after graduation or do software engineering first?
  • Build career capital relevant for things like software engineering or machine learning engineering for AI safety.

Random, but a piece of advice I've heard is that career capital for these things is only useful for getting your foot in the door (e.g. getting a coding test); after that, your actual performance (rather than your resume) is what gets you the job or not. If you think you can succeed at this, it would almost certainly be better to directly optimize for reaching a performance level good enough for the AI safety job you want, rather than spending a few years in software engineering (which might still be useful, but is probably a less optimal use of your time). I recommend reaching out to any AI safety orgs you hope to work at to confirm whether this is the case, in case you can gain a few additional years of impact.

ETA: That said, ignoring personal fit, the ranking is probably: AI safety paths where you skip the SWE step ~= community building path >> software engineering at a start-up.

Also, if you want to chat about Lightcone and/or Redwood, I'm doing work trials for both of their generalist roles (currently at Redwood, switching to Lightcone soon)--feel free to DM me for my Calendly :)

Buck's Shortform

Good to know--thanks Bill!

Your Time Might Be More Valuable Than You Think

This analysis suggests that altruistic actors with large amounts of money could produce a great deal of altruistic good per dollar by giving or lending to young, resource-poor altruists.

A suspicious conclusion coming from a young altruist! (sarcasm)

Suffering-Focused Ethics (SFE) FAQ

I think I meant "analogous" in the sense that I can then see how statements involving the defined word clearly translate into statements about how to make decisions.

Suffering-Focused Ethics (SFE) FAQ

I feel like my question wasn't answered. For instance, Carl suggests choosing units such that, when a painful experience and a pleasurable experience are said to be of "equal intensity," you are morally indifferent between the two experiences. This seems like a super useful way to define the units (they can then be used directly in decision calculus). Using this kind of definition, you can then try to answer for yourself questions like "do I think a day-long headache is more units of pain than a wedding day is units of pleasure?" or "do I think that, in the technological limit, creating 1 unit of pain will be easier than creating 1 unit of pleasure?"

What I meant by my original question was: do you have an alternative definition of what it means for pain/pleasure experiences to be of "equal intensity" that is analogous to this one?
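
To make the decision-calculus point above concrete, here is a minimal formalization of the indifference-based definition (my own notation, not Carl's or the FAQ's): let u be a signed value function over experiences, with u(Q) > 0 for a pleasure Q and u(P) < 0 for a pain P. Then P and Q are of "equal intensity" iff

    u(P) + u(Q) = 0,

i.e., you are morally indifferent between experiencing both and experiencing neither. The headache question above then becomes: is |u(day-long headache)| > u(wedding day)?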

Suffering-Focused Ethics (SFE) FAQ

Could you try to expand upon what you mean by “equal intensities”?

Suffering-Focused Ethics (SFE) FAQ

What is the SFE response to the following point, which is mostly made by Carl Shulman here? A pain/pleasure asymmetry would be really weird in the technological limit (Occam's razor). It makes sense that evolution would develop downside-skewed nervous systems when you consider the kinds of events that can occur in the evolutionary environment (e.g. death, sex) and the delta in "reproductive fitness points" each incurs (i.e. the worst single things that can happen to you are, as a coincidental fact about evolution, the evolutionary environment, and which "algorithms" are simple for a nervous system to develop, way worse from evolution's perspective than the best single things that can happen to you). But our nervous systems aren't much evidence about the technological possibilities of the far future.

Introducing Training for Good (TFG)

Thanks! This is the exact kind of thing I was interested in hearing about. If you don't mind sharing, was there any significant way in which the 25 people were selected? E.g. "people who expressed interest in a program about doing good" vs. "people who had engaged with EA for at least N hours and were, from our perspective, the 25 most promising of 100 applicants." I'm hoping, for the sake of meta-EA tractability, that it was closer to the former :)
