All of Michael_Wulfsohn's Comments + Replies

Just a few ideas, but note I don't have enough knowledge to identify all options, or the best option.

Obviously the goal is to maximise the amount GiveWell is able to deploy, out of the amount your parents don't need for themselves. Fees reduce this, but so do taxes.

It sounds like a DAF is not subject to tax at all. If your parents hold shares themselves, they will presumably still be subject to tax on dividends, and on any realised capital gains from trading. I feel like this could be above $5k/year, but it's worth checking.
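To give a feel for how the taxable-account drag could exceed $5k/year, here's a rough back-of-the-envelope sketch. Every number below (portfolio size, yield, tax rates, realised gains) is an invented assumption for illustration, not a claim about your parents' situation:

```python
# Hypothetical annual tax drag on a directly held portfolio,
# versus a DAF (which pays no tax). All inputs are assumptions.

portfolio = 1_000_000          # hypothetical portfolio value ($)
dividend_yield = 0.02          # assumed 2% annual dividend yield
dividend_tax_rate = 0.15       # assumed qualified-dividend tax rate
realised_gains = 20_000        # assumed gains realised from trading ($/yr)
capital_gains_rate = 0.15      # assumed long-term capital gains rate

tax_drag = (portfolio * dividend_yield * dividend_tax_rate
            + realised_gains * capital_gains_rate)
print(f"Approximate annual tax drag: ${tax_drag:,.0f}")
```

Under these made-up inputs the drag comes to about $6k/year, i.e. comparable to or larger than the quoted fee, but the comparison is sensitive to every assumption above.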

It sounds like your idea is for ... (read more)

1
Joseph B.
1y
Thank you for the thoughtful answer! My parents are based in the US and their investments are all in index funds and ETFs. I had forgotten that index funds in DAFs aren't subject to capital gains taxes from portfolio turnover, which seems like it would be the reason it's worth switching the appreciated stocks to DAFs. I think my parents' idea is that by keeping the money they expect to be able to donate in a DAF or other investment account, instead of just giving it now, they protect themselves from a horrible health shock or market crash by having the money available to withdraw. Their ideal situation would be to "pay as little tax and as few management fees on their investments as possible, donate in the most tax-advantaged way possible, probably giving more and more as they get older and become more sure they won't need a big safety net". Given that this is what they're optimizing for, do you think biting the bullet on the ~$5k/year Vanguard management fee is their best option?

Thanks for posting this despite the social incentives right now.

My initial reaction to the situation was similar to yours - wanting to trust SBF and believe that it was an honest mistake.

But there are two reasons I disagree with that position. 

First, we may never know for sure whether it was an honest mistake or intentional fraud. EA should mostly not support people who cannot prove that they have not committed fraud; after all, many who commit fraud can claim they were making honest mistakes.

Second, when you are a custodian of that much wealth and bear that much responsibility, it's not ok to have insufficient safeguards against mistakes. It's immoral to fail in your duty of care when the stakes are this high.

The following is based on my experience advising institutional investors - hope it's helpful! But don't make decisions based solely on this. Better to get properly informed and tailored advice.

You're asking how much risk to take in your runway portfolio. Currently you're taking no risk.

It makes sense to take risk if your investment horizon is long enough. Retirement savings are very long-term, so they can afford to be invested in risky, growth-seeking assets like shares. 

To give my intuition on the numbers, if your runway is intended to be mostly spen... (read more)

This is an interesting idea. A few thoughts from a student of international financial macroeconomics.

Seignorage is essentially the profits that come from devaluing money holdings. That means your basic mechanism is to transfer value from holders of GLO to people who claim your UBI. This could work with the early enthusiasts, or with there being transactional value in holding GLO (e.g. if sellers accept GLO then buyers will keep some of it on hand). Since enthusiasts will be attracted if there is a strong prospect for transactional value, I'll give a few co... (read more)
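The transfer mechanism described above can be made concrete with a stylized calculation. All parameters here are invented purely for illustration of the arithmetic, not taken from the GLO proposal:

```python
# Stylized seignorage arithmetic: value is transferred from GLO holders
# to UBI recipients via expansion of the supply. All numbers invented.

supply = 100_000_000       # hypothetical value of GLO in circulation ($)
inflation = 0.03           # assumed 3% annual expansion of the supply
ubi_per_person = 365       # hypothetical $1/day UBI per recipient

seignorage = supply * inflation            # value available to fund the UBI
recipients = seignorage / ubi_per_person   # people supported per year

print(f"Annual seignorage: ${seignorage:,.0f}")
print(f"Recipients supported at $1/day: {recipients:,.0f}")
```

The point of the sketch is just that the UBI scales with both the stock of holdings and the rate at which holders accept depreciation, which is why enthusiast and transactional demand matter so much.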

4
Seth Ariel Green
2y
Thanks for the close read and thoughtful comments, Michael. FWIW, if you'd like to discuss this more over email or a Zoom call, we'd be glad to connect. To each point in turn (along with a paraphrase -- LMK if I got any of this wrong):

Yep, that's right. For this model to work, in the short term, we need people to buy, hold, and use GLO for altruistic reasons and accept a (relatively modest) depreciation of the asset over time. (Cash also devalues.) If our group of enthusiasts is large and vocal enough, they create an incentive for vendors to accept GLO, and, best case, to prefer GLO for branding reasons. In the medium term, we aim for parity in terms of ease of use with other means of exchange. In the long run, we aim for GLO to be the easiest, or among the easiest, ways of buying and selling stuff, which is something you'd pay a small price to deal with, just as sellers currently eat the cost of a fee to accept credit cards. Right now, we're focused on the short-term problem of convincing folks to give it a try, with the basic calculus that if the project works, it could have transformational effects on global poverty. Even if you assign only a small probability to the project's eventual success, a small probability * a transformational impact is still a very large gain in expected utility.

Probably reserves of something are necessary for the long run, but they don't necessarily have to be in USD. Your point is well taken though, especially today. Our vision is that in the early phase, reserves will effectively subsidize the value of GLO, since there is no natural demand; later on, if/when GLO garners transactional, altruistic and branding demand, we'll use the reserves to manage the float and dampen volatility. The action is the same (trading GLO for dollars) but during the 'subsidy phase' the reserves will on average only go down, and we hope to get to a point where natural demand growth allows us to grow the reserves as well. There's a trading strategy sec

This is a really excellent piece of work on bringing these concepts to a broader audience. I'm quite interested in long-term investment modelling so I'd like to offer my thoughts. Of course, the below isn't advice, so please don't make investment decisions purely on my comments below.

It's great that you are thinking about how to adjust standard investing concepts based on the notion that it is the total altruistic portfolio that matters, which is formed in a decentralised way. I agree this adds to the rationale for being "overweight" the company that the i... (read more)

Good post. I would add a notion of idea pervasiveness in the public consciousness. What I mean is how often people think along EA-consistent lines, or make arguments around dinner tables that explicitly or implicitly draw upon EA principles. This will influence how EA-consistent government policy is. Ideas like democracy, impartial justice, and freedom of religion have strong pervasiveness. You could measure it by surveying people about whether they have heard of EA, and if so, whether they would refer to it in casual conversations, or whether they think it would influence their actions. You could benchmark the responses by asking the same questions about democracy or some other ubiquitous idea.

1
Shaileen
2y
I like this line of thinking! I'll be entering civil service for my next career move, and being new to the EA community has got me thinking along these lines - I've been asking myself 'how can synergies be created at these intersections?'.

This is a nice idea. There'll be a tradeoff because the less EA-aligned a source of funds is, the harder it is likely to be to convince them to change. For example, the probability of getting ISIS to donate to GiveWell is practically zero, so it's likely better to target philanthropists who mean well but haven't heard of EA. So the measure to pay attention to is [(marginal impact of EA charity) - (marginal impact of alternative use of funds)] * [probability of success for given fundraising effort]. This measure, or some more sophisticated version, should be equalised across potential funding sources, to maximise impact.
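The measure above can be sketched numerically. Every figure below is invented purely for illustration, to show how the formula ranks hypothetical funding sources:

```python
# Illustrative ranking of fundraising targets by
# (EA impact - counterfactual impact) * P(success). All numbers invented.

targets = {
    # name: (marginal impact of EA charity per $,
    #        marginal impact of the alternative use per $,
    #        probability the fundraising effort succeeds)
    "well-meaning philanthropist": (10.0, 2.0, 0.30),
    "uninterested corporation":    (10.0, 4.0, 0.05),
    "hostile funder":              (10.0, 0.0, 0.001),
}

def expected_gain(ea_impact, alt_impact, p_success):
    """Expected improvement per dollar redirected, per unit of effort."""
    return (ea_impact - alt_impact) * p_success

ranked = sorted(targets.items(),
                key=lambda kv: expected_gain(*kv[1]),
                reverse=True)
for name, params in ranked:
    print(f"{name}: {expected_gain(*params):.3f}")
```

Note how the hostile funder scores lowest despite the largest impact gap: the near-zero success probability dominates, which is the point of the ISIS example.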

Thanks for the post. I like Economical Writing by Deirdre McCloskey - entertaining as hell!

My interpretation of the argument is not that it is equating atoms to $. Rather, it invokes whatever computations are necessary to produce (e.g. through simulations) an amount of value equal to today's global economy. Can these computations be facilitated by a single atom? If not, then we can't grow at the current rate for 8200 years.
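To make the arithmetic concrete: assuming the commonly cited figures of roughly 2% annual growth and on the order of 10^67 atoms in our galaxy (both assumptions here, not facts established in the post), the growth factor over 8200 years already outruns the atom count:

```python
import math

# Orders of magnitude gained by an economy growing at an assumed 2%/year
# for 8200 years, versus an assumed ~1e67 atoms in our galaxy.
years = 8200
growth_rate = 0.02
log10_growth = years * math.log10(1 + growth_rate)

print(f"Economy grows by ~10^{log10_growth:.1f}")
print(f"Exceeds 10^67 atoms: {log10_growth > 67}")
```

Since the growth factor is around 10^70-10^71, each atom would on average have to support more value than today's entire world economy, which is why the argument turns on whether one atom's worth of computation can produce that much value.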

Thanks for your detailed reply. Absolutely, there is some academic reward available from solving problems. Naively, the goal is to impress other academics (and thus get published, cited), and academics are more impressed when the work solves a problem. 

You seem to encourage problem-solving work, and point out that governments are starting to push academia in that direction. This is great, and to me, it raises the interesting question of optimal policy in rewarding research. That is supremely difficult, at least outside of the commercialisable. My unde... (read more)

I should clarify - I don't mean a small amount of work, but a small conceptual adjustment. The example I give in the post is to adjust from fully addressing a specific application to partially addressing a more general question. And to do so in a way that is hopefully intellectually stimulating to other researchers.

In my own work, using a consumer intertemporal optimisation model, I've tried to calculate the optimal amount for humanity to spend now on mitigating existential risk. That is the sort of problem-solving question I'm talking about. A couple of p... (read more)

Ok, so you're talking about a scenario where humans cease to exist, and other intelligent entities don't exist or don't find Earth, but where there is still value in certain things being done in our absence. I think the answer depends on what you think is valuable in that scenario, which you don't define. Are the "best things" safeguarding other species, or keeping the earth at a certain temperature?

But this is all quite pessimistic. Achieving this sort of aim seems like a second best outcome, compared to humanity's survival.

For example, if ear... (read more)

I have another possible reason why focusing on one project might be better than dividing one's time between many projects. There may be returns to density of time spent. That is, an hour you spend on a project is more productive if you've just spent many hours on that project. For example, when I come back to a task after a few days, the details of it aren't as fresh in my mind. I have to spend time getting back up to speed, and I miss insights that I wouldn't have missed.

I haven't seen much evidence about this, just my own experience. There might also be ... (read more)

2
Joey
6y
Returns to density of time seem pretty plausible to me, particularly for cognitively intensive projects. Regarding sink-in effects, I suspect many of these benefits can be accomplished by working on different aspects within the same overall project, e.g. working on hiring to take a break from cost-effectiveness analysis work when founding a charity.

Thanks, it does a bit.

What I was saying is that if I were Andrew, I'd make it crystal clear that I'm happy to make the cup of tea, but don't want to be shouted at; there are better ways to handle disagreements, and demands should be framed as requests. Chances are that Bob doesn't enjoy shouting, so working out a way of making requests and settling disagreements without the shouting would benefit both.

More generally, I'd try to develop the relationship to be less "transactional", where you act as partners willing to advance each other's interests and where there is more trust, rather than only doing things in expectation of reward.

Sounds like a really interesting and worthwhile topic to discuss. But it's quite hard to be sure I'm on the same page as you without a few examples. Even hypothetical ones would do. "For reasons that should not need to be said" - unfortunately I don't understand the reasons; am I missing something?

Anyway, speaking in generalities, I believe it's extremely tempting to assume an adversarial dynamic exists. 9 times out of 10, it's probably a misunderstanding. For example, if a condition is given that isn't palatable, it's worth finding out the under... (read more)

0
Chris Leong
7y
I've expanded the first paragraph and added a hypothetical example. Let me know if this clarifies the situation. EDIT: Oh, I also added in a direct response to your comment.

Ah, you're right about the hedonistic framework. On re-reading your intro I think I meant the idea of using pleasure as a synonym for happiness and taking pain and suffering as synonyms for unhappiness. This, combined with the idea of counting minutes of pleasure vs. pain, seems to focus on just the experiencing self.

Thanks for the post. I doubt the length is a problem. As long as you're willing to produce quality analysis, my guess is that most of the people on this forum would be happy to read it.

My thoughts are that destruction of ecosystems is not justifiable, especially because many of its effects are probably irreversible (e.g. extinction of some species), and because there is huge uncertainty about its impact. The uncertainty arises because of the points you make, and because of the shakiness of even some of the assumptions you use such as the hedonistic framewor... (read more)

0
MichaelPlant
7y
Hello Michael, Yeah, I totally agree. The scope of what I was talking about was more limited. If there were clearly net WAS, we'd have to weigh up the apparent benefits of ecosystem destruction (i.e. less animal misery) against the sort of costs you're talking about. My aim was to challenge the argument that there is net WAS. Unless there is net WAS (or you're a negative utilitarian) the case for habitat destruction looks pretty thin anyway. FWIW, I don't think the distinction between experiencing and remembering selves is a problem for a hedonic framework. In fact that distinction requires the assumption that people do feel things and can rate how bad they are, and those ratings can then be compared to their memories. That stuff is a problem for our ability to make good affective forecasts (which I admit we suck at).

Sure. When I say "arbitrary", I mean not based on evidence, or on any kind of robust reasoning. I think that's the same as your conception of it.

The "conclusion" of your model is a recommendation between giving now vs. giving later, though I acknowledge that you don't go as far as to actually make a recommendation.

To explain the problem with arbitrary inputs, when working with a model, I often try to think about how I would defend any conclusions from the model against someone who wants to argue against me. If my model contains a numbe... (read more)

EAs like to focus on the long term and embrace probabilistic achievements. What about pursuing policy reforms that are currently inconsequential, but might have profound effects in some future state of the world? That sort of reform will probably face little resistance from established political players.

I can give an example of something I briefly tried when I was working in Lesotho, a small, poor African country. One of the problems in poor countries is called the "resource curse". This is the counter-intuitive observation that the discovery of ... (read more)

0
Evan_Gaensbauer
7y
Check this out: http://effective-altruism.com/ea/147/cause_better_political_systems_and_policy_making/

On political reform, I'm interested in EAs' opinions on this one.

In Australia, we have compulsory voting. If you are an eligible voter and you don't register and show up on election day, you get a fine. Some people do submit a blank ballot paper, but very few. I know this policy is relatively uncommon among western democracies, but I strongly support it. Basically it leaves the government with fewer places to hide.

Compulsory voting of course reduces individual freedom. But that reduction is small, and the advantages from (probably) more inclusive governmen... (read more)

1
DavidNash
7y
I'm not sure there's any evidence of it having changed election outcomes; the people who are forced to vote but wouldn't normally are divided along similar lines as those who do vote. Also, there may be more people voting who are easier to persuade, because the only reason they're voting is the risk of a fine. I used to be quite pro this idea but now think it is neutral in outcome. One example might be the Brexit vote, which saw the highest turnout since 1992.
0
kbog
7y
Another way of affecting the voting balance would be extending the right to vote to felons. I think this already has something of a campaign in the US and maybe isn't as controversial as compulsory voting. I was going to add that in both 2000 and 2016 the winning candidate lost the popular vote, so we should think about replacing the electoral college with a popular vote. But looking at all of American history, only four out of sixty presidential elections have had this kind of outcome. So abolishing the electoral college probably isn't worth the cost of pushing it, even though there's a fairly good way to get there (the NPVIC).

Sorry, this is going to be a "you're doing it wrong" comment. I will try to criticize constructively!

There are too many arbitrary assumptions. Your chosen numbers, your categorization scheme, your assumption about whether giving now or giving later is better in each scenario, your assumption that there can't be some split between giving now and later, your failure to incorporate any interest rate into the calculations, your assumption that the now/later decision can't influence the scenarios' probabilities. Any of these could have decisive influe... (read more)

1
Milan_Griffes
7y
I basically agree with your critique, though I'd say my assumptions are more naïve than arbitrary (mostly semantic; the issues persist either way). On reflection, I don't think I've arrived at any solid conclusions here, and this exercise's main fruit is a renewed appreciation of how tangled these questions are. ---------------------------------------- I'm getting hung up on your last paragraph: "However, if it's 10 or 20, then you're probably going to be led astray by spurious results." This is pretty unsatisfying – thinking about the future is necessarily speculative, so people are going to have to use "arbitrary" inputs in their models for want of empirical data. If they only use a few arbitrary inputs, their models will likely be too simplistic to be meaningful. But if they use many arbitrary inputs, their models will give spurious results? It sort of feels like an impossible bind for the project of modeling the future. Or maybe I'm misunderstanding your definition of "arbitrary" inputs, and there is another class of speculative input that we should be using for model building.

I agree that EAs should pay more attention to systemic risk. Aside from exerting indirect influence on many concrete problems, it is also one of the few methods available to combat the threat of unknown risks (or equivalently increase our ability to capitalize on unknown opportunities). Achieving positive systemic change may also be more sustainable than relying on philanthropy.

In particular, I like the global governance example as a cause. This can be seen as improving the collective intelligence of humanity, and increasing the level of societal welfare w... (read more)

3
Rick
7y
I agree that systematic change should be given more thought in EA, but there's a very specific problem that I think we need to tackle before we can do this seriously: a lot of the tools and mindsets in EA are inadequate for dealing with systematic change. To explain what I mean, I want to quickly make reference to a chart that Caroline Fiennes uses in her book.

Essentially, you can think of work on social issues as a sort of 'pyramid'. At the top of the pyramid you have very direct work (deworming, bed nets, cash transfers, etc.). This work is comparably very certain to work, and you can fairly easily attribute changes in outcomes to these programs. However, the returns are small - you only help those who you directly work with. As you go down the pyramid, you start to consider programs that focus on communities... then those that focus on changing larger policy and practice... then changing attitudes and norms (or some types of systematic change)... and eventually you get to things like existential risks. As you go down the pyramid, you get greater returns to scope (can impact a lot more people), but it becomes a lot more uncertain that you will have an impact, and it also becomes very hard to attribute change in any outcome to a program.

My worry is that the tools that the EA movement relies on were created with the top of the pyramid in mind - the main forms of causal research, cost effectiveness analysis, and so on that we rely on were not built with the bottom or even middle of the pyramid. Yes, members of EA have gotten very good at trying to apply these tools to the bottom and middle, but it can get a bit screwy very quickly (as someone with an econ background, I shudder whenever someone uses econ tools to try and forecast the cost effectiveness of X-risk reduction activities - it's like trying to peel a potato while blindfolded using a pencil: it's not what the pencil was made for, and even though it is technically possible I'll be damned if the blindfo