Thanks for posting this against the social incentives right now.
My initial reaction to the situation was similar to yours - wanting to trust SBF and believe that it was an honest mistake.
But there are two reasons I disagree with that position.
First, we may never know for sure whether it was an honest mistake or intentional fraud. EA should generally not extend support to people who cannot demonstrate that they have not committed fraud, because anyone who commits fraud can claim it was an honest mistake.
Second, when you are a custodian of that much wealth and bear that much responsibility, it's not ok to have insufficient safeguards against mistakes. It's immoral to fail in your duty of care when the stakes are this high.
The following is based on my experience advising institutional investors - hope it's helpful! But don't make decisions based solely on this. Better to get properly informed and tailored advice.
You're asking how much risk to take in your runway portfolio. Currently you're taking no risk.
It makes sense to take risk if your investment horizon is long enough. Retirement savings are very long-term, so they can afford to be invested in risky, growth-seeking assets like shares.
To give my intuition on the numbers, if your runway is intended to be mostly spen...
This is an interesting idea. A few thoughts from a student of international financial macroeconomics.
Seigniorage is essentially the profit that comes from devaluing existing money holdings. That means your basic mechanism is to transfer value from holders of GLO to people who claim your UBI. This could work with early enthusiasts, or if GLO acquires transactional value (e.g. if sellers accept GLO, then buyers will keep some of it on hand). Since enthusiasts will be attracted if there is a strong prospect of transactional value, I'll give a few co...
This is a really excellent piece of work on bringing these concepts to a broader audience. I'm quite interested in long-term investment modelling so I'd like to offer my thoughts. Of course, the below isn't advice, so please don't make investment decisions purely on my comments below.
It's great that you are thinking about how to adjust standard investing concepts based on the notion that it is the total altruistic portfolio that matters, which is formed in a decentralised way. I agree this adds to the rationale for being "overweight" the company that the i...
Good post. I would add a notion of idea pervasiveness in the public consciousness. What I mean is how often people think along EA-consistent lines, or make arguments around dinner tables that explicitly or implicitly draw on EA principles. This will influence how EA-consistent government policy is. Ideas like democracy, impartial justice, and freedom of religion have strong pervasiveness. You could measure it by surveying people about whether they have heard of EA, and if so, whether they would refer to it in casual conversation, or whether they think it would influence their actions. You could benchmark the responses by asking the same questions about democracy or some other ubiquitous idea.
This is a nice idea. There'll be a tradeoff because, the less EA-aligned a source of funds is, the harder it is likely to be to convince them to change. For example, the probability of getting ISIS to donate to GiveWell is practically zero, so it's likely better to target philanthropists who mean well but haven't heard of EA. So the measure to pay attention to is [(marginal impact of EA charity) - (marginal impact of alternative use of funds)] * [probability of success for given fundraising effort]. This measure, or some more sophisticated version, should be equalised across potential funding sources to maximise impact.
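To make the measure concrete, here is a toy calculation. Every number below is a hypothetical placeholder I made up for illustration, not an estimate:

```python
# Toy prioritisation of fundraising targets.
# score = (marginal impact of EA charity
#          - marginal impact of the alternative use of funds)
#         * P(fundraising effort succeeds)
# impact_gain below is that difference in marginal impact.
sources = {
    "well-meaning philanthropist": {"impact_gain": 0.9, "p_success": 0.30},
    "neutral corporate donor":     {"impact_gain": 1.0, "p_success": 0.10},
    "hostile actor":               {"impact_gain": 1.5, "p_success": 0.001},
}

def score(source):
    return source["impact_gain"] * source["p_success"]

# Direct effort toward the source with the highest expected impact.
best = max(sources, key=lambda name: score(sources[name]))
print(best)
```

With these made-up numbers the well-meaning philanthropist dominates even though the hostile actor has a larger impact gap, because the probability of success carries so much weight.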
My interpretation of the argument is not that it is equating atoms to $. Rather, it invokes whatever computations are necessary to produce (e.g. through simulations) an amount of value equal to today's global economy. Can these computations be facilitated by a single atom? If not, then we can't grow at the current rate for 8200 years.
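To put rough numbers on that intuition: assuming, say, 2% annual growth and on the order of 10^70 atoms in the reachable galaxy (both are my own ballpark assumptions, not figures from the post), the arithmetic looks like this:

```python
# Back-of-the-envelope check on growth over 8200 years.
growth_rate = 0.02       # assumed 2% annual growth (my assumption)
years = 8200
atoms_available = 1e70   # rough order of magnitude (my assumption)

# Factor by which the economy would multiply over that horizon
multiplier = (1 + growth_rate) ** years
print(f"{multiplier:.2e}")  # on the order of 10^70

# If sustained, value produced would have to exceed roughly one
# present-day global economy's worth per available atom.
exceeds_atoms = multiplier > atoms_available
print(exceeds_atoms)
```

So under these assumptions the multiplier is comparable to the number of atoms available, which is what forces the "value per atom" question.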
Thanks for your detailed reply. Absolutely, there is some academic reward available from solving problems. Naively, the goal is to impress other academics (and thus get published, cited), and academics are more impressed when the work solves a problem.
You seem to encourage problem-solving work, and point out that governments are starting to push academia in that direction. This is great, and to me, it raises the interesting question of optimal policy in rewarding research. That is supremely difficult, at least outside of the commercialisable. My unde...
I should clarify - I don't mean a small amount of work, but a small conceptual adjustment. The example I give in the post is to adjust from fully addressing a specific application to partially addressing a more general question. And to do so in a way that is hopefully intellectually stimulating to other researchers.
In my own work, using a consumer intertemporal optimisation model, I've tried to calculate the optimal amount for humanity to spend now on mitigating existential risk. That is the sort of problem-solving question I'm talking about. A couple of p...
Ok, so you're talking about a scenario where humans cease to exist, and other intelligent entities don't exist or don't find Earth, but where there is still value in certain things being done in our absence. I think the answer depends on what you think is valuable in that scenario, which you don't define. Are the "best things" safeguarding other species, or keeping the Earth at a certain temperature?
But this is all quite pessimistic. Achieving this sort of aim seems like a second best outcome, compared to humanity's survival.
For example, if ear...
I have another possible reason why focusing on one project might be better than dividing one's time between many projects. There may be returns to density of time spent. That is, an hour you spend on a project is more productive if you've just spent many hours on that project. For example, when I come back to a task after a few days, the details aren't as fresh in my mind. I have to spend time getting back up to speed, and I miss insights that I wouldn't otherwise have missed.
I haven't seen much evidence about this, just my own experience. There might also be ...
Thanks, it does a bit.
What I was saying is that if I were Andrew, I'd make it crystal clear that I'm happy to make the cup of tea, but don't want to be shouted at; there are better ways to handle disagreements, and demands should be framed as requests. Chances are that Bob doesn't enjoy shouting, so working out a way of making requests and settling disagreements without the shouting would benefit both.
More generally, I'd try to develop the relationship to be less "transactional", where you act as partners willing to advance each other's interests and where there is more trust, rather than only doing things in expectation of reward.
Sounds like a really interesting and worthwhile topic to discuss. But it's quite hard to be sure I'm on the same page as you without a few examples. Even hypothetical ones would do. "For reasons that should not need to be said" - unfortunately I don't understand the reasons; am I missing something?
Anyway, speaking in generalities, I believe it's extremely tempting to assume an adversarial dynamic exists. Nine times out of ten, it's probably a misunderstanding. For example, if a condition is given that isn't palatable, it's worth finding out the under...
Ah, you're right about the hedonistic framework. On re-reading your intro I think I meant the idea of using pleasure as a synonym for happiness and taking pain and suffering as synonyms for unhappiness. This, combined with the idea of counting minutes of pleasure vs. pain, seems to focus on just the experiencing self.
Thanks for the post. I doubt the length is a problem. As long as you're willing to produce quality analysis, my guess is that most of the people on this forum would be happy to read it.
My thoughts are that destruction of ecosystems is not justifiable, especially because many of its effects are probably irreversible (e.g. extinction of some species), and because there is huge uncertainty about its impact. The uncertainty arises because of the points you make, and because of the shakiness of even some of the assumptions you use, such as the hedonistic framewor...
Sure. When I say "arbitrary", I mean not based on evidence, or on any kind of robust reasoning. I think that's the same as your conception of it.
The "conclusion" of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don't go as far as to actually make a recommendation.
To explain the problem with arbitrary inputs, when working with a model, I often try to think about how I would defend any conclusions from the model against someone who wants to argue against me. If my model contains a numbe...
EAs like to focus on the long term and are comfortable with bets that pay off only probabilistically. So what about pursuing policy reforms that are currently inconsequential but might have profound effects in some future state of the world? That sort of reform will probably face little resistance from established political players.
I can give an example of something I briefly tried when I was working in Lesotho, a small, poor African country. One of the problems in poor countries is called the "resource curse". This is the counter-intuitive observation that the discovery of ...
On political reform, I'm interested in EAs' opinions on this one.
In Australia, we have compulsory voting. If you are an eligible voter and you don't register and show up on election day, you get a fine. Some people do submit a blank ballot paper, but very few. I know this policy is relatively uncommon among western democracies, but I strongly support it. Basically, it leaves the government with fewer places to hide.
Compulsory voting of course reduces individual freedom. But that reduction is small, and the advantages from (probably) more inclusive governmen...
Sorry, this is going to be a "you're doing it wrong" comment. I will try to criticize constructively!
There are too many arbitrary assumptions: your chosen numbers; your categorization scheme; your assumption about whether giving now or giving later is better in each scenario; your assumption that there can't be some split between giving now and later; the omission of any interest rate from the calculations; and your assumption that the now/later decision can't influence the scenarios' probabilities. Any of these could have decisive influe...
I agree that EAs should pay more attention to systemic change. Aside from exerting indirect influence on many concrete problems, it is also one of the few methods available for combating unknown risks (or, equivalently, for increasing our ability to capitalise on unknown opportunities). Achieving positive systemic change may also be more sustainable than relying on philanthropy.
In particular, I like the global governance example as a cause. This can be seen as improving the collective intelligence of humanity, and increasing the level of societal welfare w...
Just a few ideas, but note I don't have enough knowledge to identify all options, or the best option.
Obviously the goal is to maximise the amount GiveWell is able to deploy out of the amount your parents don't need for themselves. Fees reduce this, but so do taxes.
It sounds like a DAF is not subject to tax at all. If your parents hold shares themselves, they will presumably still be subject to tax on dividends, and on any realised capital gains from trading. I feel like this could be above $5k/year, but it's worth checking.
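As a rough illustration of the fees-vs-taxes comparison (every figure below is a placeholder I invented; substitute your parents' actual numbers and local tax rates before concluding anything):

```python
# Hypothetical comparison of annual tax drag vs. DAF fees.
# All numbers are made-up placeholders, not estimates or advice.
portfolio = 500_000        # amount earmarked for eventual giving
dividend_yield = 0.02      # assumed 2% annual dividend yield
dividend_tax_rate = 0.15   # assumed tax rate on dividends
realised_gains = 20_000    # assumed capital gains realised this year
cap_gains_tax_rate = 0.15  # assumed capital gains tax rate
daf_fee_rate = 0.006       # assumed 0.6% annual DAF admin fee

# Annual tax drag if the parents hold the shares directly
tax_drag = (portfolio * dividend_yield * dividend_tax_rate
            + realised_gains * cap_gains_tax_rate)

# Annual fee if the same assets sit in a DAF (assumed untaxed)
daf_fee = portfolio * daf_fee_rate

print(tax_drag, daf_fee)
```

With these placeholder numbers the tax drag exceeds the DAF fee, but the comparison can easily flip with different portfolio sizes, trading activity, or fee schedules, which is why it's worth running with real figures.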
It sounds like your idea is for ...