Stefan_Schubert

I'm a researcher at the London School of Economics and Political Science, working at the intersection of moral psychology and philosophy.

https://stefanfschubert.com/


Comments

2-factor voting (karma, agreement) for EA forum?

Interesting point. 

I guess it could be useful to be able to see how many people have voted as well, since 75% agreement with four votes is quite different from 75% agreement with forty votes.
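As a rough illustration of why the vote count matters - a Wilson score interval is one standard way to quantify this, not something the comment itself proposes - the uncertainty around 75% agreement is far wider at four votes than at forty:

```python
import math

def wilson_interval(agree: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for the true agreement rate."""
    if total == 0:
        return (0.0, 1.0)
    p = agree / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (centre - half, centre + half)

print(wilson_interval(3, 4))    # roughly (0.30, 0.95) - very uncertain
print(wilson_interval(30, 40))  # roughly (0.60, 0.86) - much tighter
```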

EA Forum feature suggestion thread

I would prefer a more robust anti-spam system; e.g. preventing new accounts from writing Wiki entries, or enabling people to remove such spam. Right now there is a lot of spam on the page, which reduces readability.
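A minimal sketch of what such a gate could look like - the function name and thresholds are hypothetical, not how the Forum actually implements permissions:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds, purely illustrative.
MIN_ACCOUNT_AGE = timedelta(days=7)
MIN_KARMA = 10

def may_edit_wiki(account_created: datetime, karma: int) -> bool:
    """Allow wiki edits only for accounts that are old enough and have some karma."""
    age = datetime.now(timezone.utc) - account_created
    return age >= MIN_ACCOUNT_AGE and karma >= MIN_KARMA
```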

Product Managers: the EA Forum Needs You

Extraordinary growth. How does it look on other metrics, e.g. the number of posts and comments? Also, can you tell us what the growth rate has been per year? It's a bit hard to eyeball from the graph. Thanks.

Impact markets may incentivize predictably net-negative projects

This kind of thing could be made more sophisticated by making fines proportional to the harm done

I was thinking of this. Small funders could then potentially buy insurance from large funders, allowing them to fund projects they deem net positive even though there is a small risk of a fine that would be too costly for them to bear on their own.
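A toy expected-value sketch of that insurance idea - all numbers are hypothetical and only meant to show the mechanism:

```python
# Hypothetical numbers, purely to illustrate the insurance mechanism.
expected_benefit = 100_000   # small funder's estimate of the project's value
p_fine = 0.02                # small probability the project turns out net-negative
fine = 2_000_000             # fine proportional to the harm - ruinous for a small funder

# Without insurance the expected value is positive, but the downside is unaffordable.
ev_uninsured = expected_benefit - p_fine * fine   # 100_000 - 40_000 = 60_000

# A large funder can absorb the tail risk and charge roughly the actuarial price plus a margin.
premium = p_fine * fine * 1.25                    # 50_000
ev_insured = expected_benefit - premium           # 50_000, with no ruinous downside

print(ev_uninsured, ev_insured)
```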

Impact markets may incentivize predictably net-negative projects

They refer to Drescher's post. He writes:

But we think that is unlikely to happen by default. There is a mismatch between the probability distribution of investor profits and that of impact. Impact can go vastly negative while investor profits are capped at only losing the investment. We therefore risk that our market exacerbates negative externalities.

Standard distribution mismatch. Standard investment vehicles work the way that if you invest into a project and it fails, you lose 1 x your investment; but if you invest into a project and it’s a great success, you may make back 1,000 x your investment. So investors want to invest into many (say, 100) moonshot projects hoping that one will succeed.

When it comes to for-profits, governments are to some extent trying to limit or tax externalities, and one could also argue that if one company didn’t cause them, then another would’ve done so only briefly later. That’s cold comfort to most people, but it’s the status quo, so we would like to at least not make it worse.

Charities are more (even more) of a minefield because there is less competition, so it’s harder to argue that anything anyone does would’ve been done anyway. But at least they don’t have as much capital at their disposal. They have other motives than profit, so the externalities are not quite the same ones, but they too increase incarceration rates (Scared Straight), increase poverty (preventing contraception), reduce access to safe water (some Playpumps), maybe even exacerbate s-risks from multipolar AGI takeoffs (some AI labs), etc. These externalities will only get worse if we make them more profitable for venture capitalists to invest in.

We’re most worried about charities that have extreme upsides and extreme downsides (say, intergalactic utopia vs. suffering catastrophe). Those are the ones that will be very interesting for profit-oriented investors because of their upsides and because they don’t pay for the at least equally extreme downsides.
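A minimal simulation of the mismatch Drescher describes - the payoff numbers below are mine and purely illustrative: because investors' losses are capped at their stake while harms are not, a portfolio of moonshots can look attractive to investors even when its expected impact is negative.

```python
import random

random.seed(0)

STAKE = 1.0           # an investor can lose at most 1x the investment
N_PROJECTS = 100_000  # hypothetical moonshot portfolio

investor_total, impact_total = 0.0, 0.0
for _ in range(N_PROJECTS):
    r = random.random()
    if r < 0.01:        # rare big success: 1,000x return, large positive impact
        investor_total += 1000 * STAKE
        impact_total += 1000
    elif r < 0.03:      # rare large negative externality
        investor_total -= STAKE   # investor's loss is capped at the stake
        impact_total -= 1000      # but the harm is not capped
    else:               # most projects simply fizzle
        investor_total -= STAKE

print("mean investor return:", investor_total / N_PROJECTS)  # about +9 per unit staked
print("mean impact:", impact_total / N_PROJECTS)             # about -10: negative in expectation
```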


On Deference and Yudkowsky's AI Risk Estimates

If anything, I think that prohibiting posts like this from being published would have a more detrimental effect on community culture.

Of course, people are welcome to criticise Ben's post - which some in fact do. That's a very different category from prohibition.

On Deference and Yudkowsky's AI Risk Estimates

I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form. 

That seems like a considerable overstatement to me. I think it would be bad if the forum rules said an article like this couldn't be posted.

What is the right ratio between mentorship and direct work for senior EAs?

This is related to the broader question of how much effort effective altruism as a whole should put into movement growth relative to direct work. That question has been discussed more extensively; e.g. see the Wiki entry and posts by Peter Hurford, Ben Todd, Owen Cotton-Barratt, and Nuño Sempere/Phil Trammell.

RyanCarey's Shortform

Yeah, I think it would be good to introduce premisses about when AI and bio capabilities that could cause an x-catastrophe ("crazy AI" and "crazy bio") will be developed. To elaborate on a (protected) tweet of Daniel's:

Suppose that you have equally long timelines for crazy AI and for crazy bio, but that you are uncertain about both, and that, in your view, they're uncorrelated.

Suppose also that we modify 2 into "a non-accidental AI x-catastrophe is at least as likely as a non-accidental bio x-catastrophe, conditional on there existing both crazy AI and crazy bio, and conditional on there being no other x-catastrophe". (I think that captures the spirit of Ryan's version of 2.)

Suppose also that you think that, in the world where crazy AI gets developed first, there is a 90% chance of an accidental AI x-catastrophe, and that in 50% of the worlds where there isn't an accidental x-catastrophe, there is a non-accidental AI x-catastrophe, meaning the overall risk is 95% (in line with 3). In the world where crazy bio is instead developed first, there is a 50% chance of an accidental x-catastrophe (by the modified version of 2), plus some chance of a non-accidental x-catastrophe, meaning the overall risk is a bit more than 50%.

Regarding the timelines of the technologies, one way of thinking would be to say that there is a 50/50 chance that we get crazy AI or crazy bio first, meaning there is a 47.5% chance of an AI x-catastrophe and a >25% chance of a bio x-catastrophe (plus additional small probabilities of the slower crazy technology killing us in the worlds where we survive the first one; but let's ignore that for now). That would mean that the ratio of AI x-risk to bio x-risk is more like 2:1. However, one might also think that there is a significant number of worlds where both technologies are developed at the same time, in the relevant sense - and your original argument could potentially be applied as it is to those worlds. If so, that would increase the ratio between AI and bio x-risk.

In any event, this is just to spell out that the time factor is important. These numbers are made up solely for the purpose of showing that, not because I find them plausible. (Potentially my example could be better/isn't ideal.)
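As a sanity check, here is that arithmetic spelled out with the made-up numbers above:

```python
# Made-up illustrative probabilities from the example above.
p_ai_first = 0.5                      # 50/50 which crazy technology arrives first

# World where crazy AI arrives first:
p_accidental_ai = 0.9
p_nonaccidental_ai_given_no_accident = 0.5
p_ai_cat_given_ai_first = p_accidental_ai + (1 - p_accidental_ai) * p_nonaccidental_ai_given_no_accident
# 0.9 + 0.1 * 0.5 = 0.95

# World where crazy bio arrives first:
p_bio_cat_given_bio_first = 0.5       # lower bound; "a bit more than 50%" once non-accidental risk is added

p_ai_cat = p_ai_first * p_ai_cat_given_ai_first           # 0.475
p_bio_cat = (1 - p_ai_first) * p_bio_cat_given_bio_first  # 0.25 as a lower bound

print(p_ai_cat, p_bio_cat, round(p_ai_cat / p_bio_cat, 2))  # ratio roughly 2:1
```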
