Jack R

Stanford CS Master's Student

Comments

Suffering-Focused Ethics (SFE) FAQ

What is the SFE response to the following point, which is mostly made by Carl Shulman here? A pain/pleasure asymmetry would be really weird in the technological limit (Occam’s razor). It makes sense that evolution would produce downside-skewed nervous systems given the kinds of events that could occur in the evolutionary environment (e.g. death, sex) and the deltas in reproductive fitness they incur: as a coincidental fact about evolution, the evolutionary environment, and what kinds of "algorithms" are simple for a nervous system to develop, the worst single things that can happen to you are far worse from evolution's perspective than the best single things that can happen to you. But our nervous systems aren't much evidence about what is technologically possible in the far future.

Introducing Training for Good (TFG)

Thanks! This is exactly the kind of thing I was interested in hearing about. If you don’t mind sharing, in what significant ways were the 25 people selected? E.g. “people who expressed interest in a program about doing good” vs. “people who had engaged with EA for at least N hours and were the 25 most promising, from our perspective, of the 100 who applied.” For the sake of meta-EA tractability, I’m hoping it was closer to the former :)

We're Redwood Research, we do applied alignment research, AMA

[Edited] How important do you think it is for ML research projects to be led by researchers who have had a lot of previous success in ML? Maybe the most useful ML research is done by the top ML researchers, or the ML community won't take Redwood very seriously (e.g. won't consider using your algorithms) if the research projects aren't led by people with strong track records in ML.

Additionally, what are/how strong are the track records of Redwood's researchers/advisors?

Introducing Training for Good (TFG)

Around May 2022, TFG will host a week-long retreat, training corporate executives with 10+ years of experience

Interesting. Do you have any data/anecdata about the tractability of getting 30+ year-olds to switch into EA careers? Although this seems valuable, my current guess is that on the margin, week-long retreats teaching people about EA should instead target high-achieving high-schoolers (mostly because they would be more willing to change their career paths). Targeting high-schoolers makes less sense if you want to solve the management gap, though you could, for instance, target high-achieving, entrepreneurial high-schoolers to help close the entrepreneur gap I perceive there to be.

We're Redwood Research, we do applied alignment research, AMA

Thanks for the response! I found the second set of bullet points especially interesting/novel.

We're Redwood Research, we do applied alignment research, AMA

Also, how important does it seem like governance is here versus other kinds of coordination? Any historical examples that inform your beliefs?

We're Redwood Research, we do applied alignment research, AMA

It’s 2035, and Redwood has built an array of alignment tools that make SOTA models far less existentially risky while sacrificing hardly any performance. But not enough of the richest labs end up using these tools, and we still face doom. What happened?

How would you gauge random undergrads' "EA potential"?

What do you mean by the phrase "hardcore EA" here?

Buck's Shortform

This seems like really good advice, thanks for writing this!

Also, I'm compiling a list of CS/ML bootcamps here (anyone should feel free to add items).

Towards a Weaker Longtermism

I don’t think your point about Toby’s GDP recommendation is inconsistent with David’s claim that Toby/Will seem to imply “Effective Altruism should focus entirely on longtermism,” since EA is not in control of all of the world’s GDP. It’s consistent to recommend both that EA focus entirely on longtermism and that the world spend 0.1% of GDP on x-risk (or longtermism).
