
erickb

32 karma · Joined Nov 2018

Bio

I am currently a nuclear engineer with a focus on nuclear plant safety and probabilistic risk assessment. I am also an aspiring EA, interested in X-risk mitigation and the intersection of science and policy.

https://www.lesswrong.com/users/erickball

Comments (9)

It seems pretty clear that Amazon's intent is to have state-of-the-art AI backing Alexa. That alone would not be particularly concerning. The problem would arise if Amazon had some leverage to force Anthropic to accelerate capabilities research and neglect safety - which is certainly possible, but it seems like Anthropic wants to avoid it by keeping Amazon as a minority investor and maintaining the existing governance structure.

I don't think it's at all obvious whether this development is good or bad (though I would lean towards bad), but both here and on LessWrong you have not made a coherent attempt to support your argument. Your concept of "redundancy" in AI labs is confusing and the implied connection to safety is tenuous.

Remember, these predictions were made in summer 2022: before ChatGPT, before the big Microsoft investment, and before any serious information about GPT-4. They're still low, but not ridiculous.

The example used here is a stochastic process, which is a case where the resilience of a subjective probability can be easily described with a probability distribution and Bayesian updates on observations. But the most important applications of the idea are one-off events with mainly epistemic uncertainty. Is there a good example we could include for that? Maybe a description of how you might express or quantify the resilience of a forecast for a past event whose outcome is not yet known?
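To make the stochastic case concrete, here is a minimal sketch (my own illustration, not taken from the post being discussed) of how resilience can be modeled with Beta priors and Bayesian updating; the prior strengths and the observed counts are assumptions chosen for illustration.

```python
# Two agents both report a 50% credence that a coin lands heads,
# but with different resilience, modeled as Beta priors of different strength.
from scipy.stats import beta

# Low-resilience credence: Beta(1, 1) has mean 0.5 but is easily moved by data.
# High-resilience credence: Beta(100, 100) has mean 0.5 and is barely moved.
priors = {"low resilience": (1, 1), "high resilience": (100, 100)}

heads, tails = 8, 2  # hypothetical observation: 8 heads in 10 flips

for label, (a, b) in priors.items():
    posterior_mean = beta(a + heads, b + tails).mean()
    print(f"{label}: prior mean 0.50 -> posterior mean {posterior_mean:.2f}")

# Approximate output:
# low resilience: prior mean 0.50 -> posterior mean 0.75
# high resilience: prior mean 0.50 -> posterior mean 0.51
```

The same point estimate (0.50) responds very differently to the same evidence, which is what "resilience" is tracking; the open question above is how to express this when there is no repeatable process to update on.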

I don't have time for a long reply, but I think the perspective in this post would be good to keep in mind: https://forum.effectivealtruism.org/posts/FpjQMYQmS3rWewZ83/effective-altruism-is-a-question-not-an-ideology

By putting an answer (reduce AI risk) ahead of the question (how can we do the most good?) we would be selling ourselves short.

Some people, maybe a lot of people, should probably choose to focus fully on AI safety and stop worrying about cause prioritization. But nobody should feel like they're being pushed into that or like other causes are worthless. EA should be a big tent. I don't agree that it's easier to rally people around a narrow cause; on the contrary, a single-minded focus on AI would drive away all but a small fraction of potential supporters, and have an evaporative cooling effect on the current community too.

18 years is a marathon, not a sprint.

I tend to think diversification in EA is important even if we think there's a high chance of AGI by 2040. Working on other issues gives us better engagement with policymakers and the public, improves the credibility of the movement, and provides more opportunities to get feedback on what does or doesn't work for maximizing impact. Becoming insular or obsessive about AI would alienate many potential allies and make it harder to support good epistemic norms. And there are other causes where we can have a positive effect without directly competing for resources, because not all participants and funders are willing or able to work on AI.

It was eventually matched, 17 hours later.

I tried making a donation of bitcoin and it looks like that was not matched. Have other people tried this?

Thanks for pointing this out. The research seems to show, in particular, that people who commute by bike are the most satisfied, followed by walking, then trains, then buses and carpools (which are almost as bad as driving):

http://web.pdx.edu/~jbroach/654/homework/Smith_2013.pdf

Personally, I normally commute by a combination of bike and train, and I find that on days when I have to drive instead, it does add stress, especially if traffic is bad.