sam_brenner

79 karma · Joined Mar 2021

Comments (8)
I think I'm much more interested in the limit order mode than any of the other features you mentioned, so if there's room for a single additional setting inside the current calculator, I'd want it to be that one. However, I agree with your general thoughts on the cost of additional features, and all the other ones you mention do seem useful!

The way I imagine this working is that the tool could make its normal slippage assumptions until the limit price is hit, and then assume no further slippage after that.
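Roughly, as a toy sketch (the `price_after` slippage curve, the increment scheme, and all the names here are placeholders I'm making up, not the calculator's actual model):

```python
def fill_with_limit(price_after, stake, limit_price, steps=1000):
    """Toy limit-order fill: spend the stake in small increments,
    applying a normal slippage curve, and stop filling once the
    marginal price would exceed the limit."""
    spent = 0.0
    shares = 0.0
    increment = stake / steps
    for _ in range(steps):
        price = price_after(spent)      # the usual slippage assumption
        if price > limit_price:         # limit hit: no fills beyond this price
            break
        shares += increment / price
        spent += increment
    avg_price = spent / shares if shares else None
    return spent, avg_price
```

So the worst price ever paid is the limit, and any stake that can't fill below it just stays unspent.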

One idea: would it be possible to have a limit order mode? I think this would be useful!

I ask because of this message I got:

"You have total loans greater than your current balance. Under strict Kelly betting, you should not bet at all in this scenario because there is non-zero risk of ruin. This calculator allows some leeway in this, and will still recommend a bet as long as losing all your money does not actually occur in any of the (up to 50,000) scenarios it simulates."

Does this take into account the fact that I could liquidate a position to generate more balance and avoid ruin?
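For what it's worth, here is roughly how I picture the check described in that message working: resolve all the open positions (plus the proposed bet) at random in each scenario, and only approve the bet if repaying the loans never wipes out the bankroll in any scenario. Everything below — the parameter names, the loan handling, the outcome model — is my own guess at the mechanics, not the tool's actual code:

```python
import random

def bet_survives_all_scenarios(balance, loans, positions, new_bet,
                               n_scenarios=50_000, seed=0):
    """Toy ruin check: approve the new bet only if, across every simulated
    scenario, the final bankroll after repaying loans is never wiped out.

    positions: list of (win_prob, payout_if_win, stake) triples
    new_bet:   one (win_prob, payout_if_win, stake) triple
    """
    rng = random.Random(seed)
    everything = positions + [new_bet]
    for _ in range(n_scenarios):
        bankroll = balance - new_bet[2]          # the new stake leaves the balance now
        for win_prob, payout_if_win, stake in everything:
            if rng.random() < win_prob:
                bankroll += payout_if_win
        if bankroll - loans <= 0:                # "losing all your money"
            return False
    return True
```

In a sketch like this there is no step where a position gets liquidated early to top the balance back up, which is the part I'm unsure the real calculator models.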

Maybe this is stupid of me, but should this be a fraction of your balance or a fraction of your net asset value?
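To make the question concrete with the textbook Kelly formula (which may or may not be exactly what the calculator uses), the same fraction applied to the two different bankroll definitions gives very different bet sizes. All the numbers below are made up:

```python
def kelly_bet(bankroll, win_prob, net_odds):
    """Textbook Kelly: f* = (b*p - q) / b, with b = net odds on a win,
    p = win probability, q = 1 - p."""
    p, q, b = win_prob, 1 - win_prob, net_odds
    fraction = max((b * p - q) / b, 0.0)
    return fraction * bankroll

balance = 300          # liquid balance (placeholder)
open_positions = 700   # value of positions I could liquidate (placeholder)
p, b = 0.6, 1.0        # say I think it's 60% to win at even odds

print(kelly_bet(balance, p, b))                   # ≈ 60  (fraction of balance)
print(kelly_bet(balance + open_positions, p, b))  # ≈ 200 (fraction of net asset value)
```

If the answer is net asset value, the recommended bet can easily be bigger than the balance I can actually spend without liquidating something, which is why I ask.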

I've wanted to write something similar myself, and you've done a great job with it! Thanks!

I feel that a lot of people labor under an illusion of control when they imagine ways that EA could be different, and the parts of this post that I quote below do a good job of explaining why that control is illusory.

EA is a genie out of its bottle, so to speak. There's no way to guarantee that people won't call themselves EAs or fund EA-like causes in the ways that they want rather than the way you might want them to. This is in part a huge strength, because it allows a lot of funding and interest, but it's also a big weakness.

I feel this especially acutely when people lament how entangled EA was with SBF. It's possible to imagine a version of EA that was less officially exuberant about him, but very hard, imo, to imagine a version that would refuse to grant any influence whatsoever to a person who credibly promised billions of dollars of funding. Avoiding a situation like the FTX/SBF one is hard not just because the central EA orgs can be swayed by funding but also, as the quotes below point out, because there will almost always be EAs who (rightly) care more about their project getting funded than about complying with whatever regime of central control is created.

Say you can't convince Moskovitz and OpenPhil leadership to turn over their funds to community deliberation.

You could try to create a cartel of EA organizations to refuse OpenPhil donations. This seems likely to fail - it would involve asking tens, perhaps hundreds, of people to risk their livelihoods. It would also be an incredibly poor way of managing the relationship between the community and its most generous funder--and very likely it would decrease the total number of donations to EA organizations and causes. 

...

Let's say a new donor did begin funding EA organizations, and didn't want to abide by these rules. Perhaps they were just interested in donating to effective bio orgs. Would the community ask that all EA-associated organizations turn down their money? Obviously they shouldn't - insofar as they are not otherwise unethical - organizations should accept money and use it to do as much good as possible. Rejecting a potential donor simply because they do not want to participate on a highly unusual funding system seems clearly wrong headed.

One analogy on my mind is that EA is fundamentally Protestant, as opposed to Catholic, when it comes to governance. (I think a lot of people make this analogy, but not for governance.) One of the defining features of Catholicism is submission to the Pope and the church hierarchy. In Catholicism's view, it is not enough to do and believe the right things on one's own; one has to do them as part of the bigger institution. This is a matter the Catholic Church insists on, and nobody who wants to become or stay Catholic is surprised by it. As a result, Catholics have been strongly selected for being coordinated in their belief and action. By contrast, Protestants almost by definition do not share this view, and so nearly-universal coordination across them is rare, even when they share important fundamentals of belief.

The interesting feature that I have in mind is that, in this analogy, Catholicism's success at coordination is limited only to those who submit to the coordination in the first place. Moreover, the insistence on coordination plays a big role in limiting broader coordination: many Protestants do not want to become Catholics precisely because it would mean giving up control and being coordinated.

Mapping this back onto EA, I would predict from the analogy that coordination attempts won't work very well, even among those who really believe in the importance of EA. Coordination attempts might dominate for centuries, as Catholicism's did. But even in Catholicism's case, the coordination eventually failed, despite the huge amount of effort and dogma that went into sustaining belief in its authority. Even with an extremely motivated attempt at central coordination (and one that succeeded for centuries), many religious upstarts with wildly divergent beliefs are all able to call themselves Christian, act in the world, and receive funding in the name of Christianity. My prediction is that EA will not fare any better at preventing upstart Martin Luthers or Joseph Smiths or Sam Bankman-Frieds from doing all sorts of things in its name.

I do not mean to make any claim about whether EA should be more or less centrally controlled on the margin. The analogy does make it seem to me that a version of EA that would not accept SBF money would have to be centrally controlled to a nearly impossible degree.

On democratic control:

Any kind of democratic control that tries to have "EAs at large" make decisions will need to decide who gets to vote. None of the ways I can think of for deciding seem very good to me (donating a certain amount? having engaged a certain amount in a visible way?). I think they're bad both as methods for choosing a group of decisionmakers and, more broadly, as something harmful in themselves: whatever method is used for voter selection, the message sent to some will be "You have done X, so now you are A Real EA," and to others, "Sorry, you haven't done X, so you're not A Real EA." I expect that to become a distraction or a discouragement from the actual work of altruism.

I also worry that this discussion imports too many of our intuitions about political control of countries. Like most people who live in democracies, I have a lot of intuitions about why democracy is good for me. I'd put them into two categories:

  1. Democracy is good for me because I am a better decisionmaker about myself than other people are about me
    1. Most of this is a feeling that I know best about myself: I have the local knowledge that I need to make decisions about how I am ruled
    2. But other parts of it are procedural: I think that when other people decide on my behalf, they'll arrange things in their own favor
  2. Democracy is good for me because it's deontologically wrong for other people to rule me

I don't think either of those categories really apply here. EA is not about me, is not for me, does not rule me, and should not take my own desires as a Sam into account any more than it does, probably.

I wasn't planning on commenting, but since you addressed me by name, I felt compelled to respond.