Michael_Wulfsohn


a UBI-generating currency: Global Income Coin

This is an interesting idea. A few thoughts from a student of international financial macroeconomics.

Seignorage is essentially the profit that comes from devaluing money holdings. That means your basic mechanism is to transfer value from holders of GLO to people who claim your UBI. This could work with early enthusiasts, or if there is transactional value in holding GLO (e.g. if sellers accept GLO, then buyers will keep some of it on hand). Since enthusiasts will be attracted if there is a strong prospect of transactional value, I'll give a few comments on the prospect of GLO becoming a global currency. My comments are mostly issues, problems, and questions that you may have to answer to convince people that the GLO ambition has potential. But that shouldn't detract from the value of the project.

Any currency needs to have its value continually supported in some way. Your summary contains a misconception: that, to maintain the $1 value, USD reserves won't be required after some point. In fact, entire countries can fail to defend their national currencies' pegs despite having billions of USD reserves. It's similar to a bank run, and it can happen to stablecoins not backed 1 for 1.

Generating demand for GLO may be difficult. Since seignorage is a devaluation of money holdings, it would create a disincentive to hold GLO. For example, cryptocurrencies often constrain supply or burn tokens in a bid to get people to buy and hold. You're proposing to do the opposite. That is why getting people to use GLO for transactions, or some other utility such as altruistic appeal, is vital. So generating demand is not impossible, but challenging.

Your ambition for GLO may not be consistent with a $1 peg, since your ambition is effectively for the dollar to become irrelevant. Of course, a $ peg would take you a long way at first. Nevertheless, a natural solution in the case of runaway GLO success may be to peg to a CPI-like weighted basket of prices. Perhaps CPI - x%, to generate some value to transfer to UBI claimants.

The amount of seignorage revenue in a given period will depend on the growth of demand for GLO in that period. Demand may fluctuate, and with it the UBI income amount. The income will be zero in periods in which reserves are used to prop up the value. That is not a deal breaker, but you will have to dip into reserves to produce a steady UBI income stream, or accept income fluctuation.
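To illustrate the fluctuation, here's a toy sketch (all numbers hypothetical) in which per-period seignorage revenue equals the net growth in GLO demand:

```python
# Toy model: per-period seignorage revenue equals the growth in demand for GLO.
# When demand contracts, reserves absorb redemptions and UBI income is zero.
demand_path = [100, 150, 210, 190, 230]  # hypothetical GLO in circulation ($m)

ubi_income = []
for prev, curr in zip(demand_path, demand_path[1:]):
    growth = curr - prev
    # Seignorage is only available when net demand grows;
    # a contraction is met from reserves instead.
    ubi_income.append(max(growth, 0))

print(ubi_income)  # [50, 60, 0, 40] -- income fluctuates and can hit zero
```

The point is just that the income stream inherits the volatility of demand growth unless reserves are used to smooth it.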

If GLO becomes a ubiquitous global currency, it will limit countries' ability to use domestic monetary policy to stabilise the business cycle and unemployment. That would open the question of global monetary policy in a GLO world, and whether the policy variables e.g. GLO supply should be used for macro stability as well as UBI, and who should make those decisions.

I hope my comments are constructive enough to be helpful. Best of luck!

The Risk of Concentrating Wealth in a Single Asset

This is a really excellent piece of work on bringing these concepts to a broader audience. I'm quite interested in long-term investment modelling, so I'd like to offer my thoughts. Of course, none of this is advice, so please don't make investment decisions based purely on my comments.

It's great that you are thinking about how to adjust standard investing concepts based on the notion that it is the total altruistic portfolio that matters, which is formed in a decentralised way. I agree this adds to the rationale for being "overweight" the company that the investor founded, or investing in individual properties. This is not how a typical investor thinks, so there is likely scope to think further along these lines: either to improve coordination between EA investors, or to better implement a decentralised solution by departing from standard investment concepts.

I think your idea extends to alternative investments. Common wisdom in institutional investment is that it requires greater governance capabilities to invest in the more diversifying assets, such as infrastructure, some hedge funds, unlisted (commercial or residential) property, and private equity. That is, they require greater expertise and more time spent on investment processes, necessitate more careful cashflow management due to illiquidity, and pose other potential challenges. And greater governance capabilities are rewarded - see https://link.springer.com/article/10.1057/jam.2008.1. If an EA investor cares only about the overall altruistic portfolio and is capable of making/managing such investments, then it might make sense to overweight them. Some of them might be accessible through pooled funds.

In the article you rely on the standard deviation of annual returns as a measure of risk. But long term risk isn't well captured by that. Taking a step back, risk should ultimately be defined based on altruists' utility function over spending at different points in time. For example, there might be "hinge" moments when altruistic spending is especially effective. Imagine there is going to be a massive opportunity in 100 years to influence the creation of AGI by altruistic spending. In that case, we don't really care if the annual standard deviation of returns is high. We care only about the probability distribution of the 100 year return.
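As a sketch of what I mean, here's a small Monte Carlo that looks directly at the distribution of 100-year outcomes. The parameters (5% mean annual return, 15% standard deviation) are purely illustrative, not estimates:

```python
import random

# Monte Carlo sketch: the distribution of 100-year compounded wealth,
# which the annual standard deviation alone does not summarise.
# Illustrative parameters only: 5% mean annual return, 15% std dev.
random.seed(0)

def terminal_wealth(years=100, mean=0.05, sd=0.15):
    w = 1.0
    for _ in range(years):
        w *= 1 + random.gauss(mean, sd)
    return w

outcomes = sorted(terminal_wealth() for _ in range(10_000))
# Percentiles of 100-year wealth per $1 invested -- the object that a
# long-horizon altruistic utility function actually cares about.
p5, p50, p95 = (outcomes[int(q * len(outcomes))] for q in (0.05, 0.50, 0.95))
print(f"5th: {p5:.1f}, median: {p50:.1f}, 95th: {p95:.1f}")
```

The spread between the percentiles is enormous at a 100-year horizon, which is exactly the information a single annual standard deviation hides.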

There is a limit to the ability of leverage to magnify returns. This is partly because of the asymmetry of returns. For example, if you start with $100, then experience a -50% return followed by a +50% return, you end up with $75. Assuming you readjust your borrowing amount regularly alongside changes in the asset value, this effect is magnified by leverage and detracts from the overall return. See https://holygrailtradingstrategies.com/images/Leveraged-ETFs.pdf for more.
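Using the same -50%/+50% numbers, a quick sketch of how constant-leverage rebalancing magnifies this drag (the 2x leverage figure is just for illustration):

```python
# Volatility drag, and how rebalanced leverage magnifies it.
# Same illustrative returns as in the text: -50% then +50% on a $100 start.
def compound(start, returns, leverage=1.0):
    # Rebalancing each period keeps the leverage ratio constant,
    # so each period's levered return is leverage * the underlying return.
    value = start
    for r in returns:
        value *= 1 + leverage * r
    return value

returns = [-0.50, 0.50]
print(compound(100, returns))                # 75.0 -- unlevered drag
print(compound(100, returns, leverage=2.0))  # 0.0 -- at 2x, the portfolio is wiped out
```

The unlevered investor loses $25 to the drag; the 2x investor loses everything, because the first period's -100% levered return leaves nothing to recover with.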

Leverage has a strong role in the Capital Asset Pricing Model theory you're using. The theory does, however, assume away various challenges to do with leverage, such as the one above. In general, it is uncommon for institutional investors (pension funds, university endowments, charitable foundations, etc.) to directly borrow to invest. However, they may outsource it to a money manager, e.g. a hedge fund, who can access a decent borrowing rate on their behalf and who has the expertise to manage it. I'm not saying that leverage should never be used by EA investors. Rather, I would be quite careful before deciding to use it.

When actuaries model (commercial) real estate, it's normally assumed that both its risk and expected return are somewhere in between those of shares and bonds. Arguably, real estate has characteristics of each, as it is an asset used for productive enterprise, and since leases typically provide regular fixed rental payments. Nevertheless, I would look to property indices' historical data for guidance. 

Certainty equivalence may not be the right concept for measuring the value of moving all EA investments to a global market portfolio. I would instead compare Sharpe ratios. If you want to put an expected dollar figure on it, one way would be to calculate the increase in expected return you could achieve while holding risk constant. This avoids needing to make an assumption about investor risk preferences, which the certainty-equivalent concept relies on.
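A toy version of that calculation, with entirely hypothetical inputs. The key assumption is access to a risk-free asset, so a portfolio's Sharpe ratio can be scaled to any volatility level:

```python
# Dollar value of moving to a higher-Sharpe portfolio, holding risk constant.
# All inputs are hypothetical. With a risk-free asset available, any Sharpe
# ratio can be levered/delevered to any volatility, so at the current
# portfolio's volatility the achievable expected return is rf + sharpe * vol.
risk_free = 0.02

def sharpe(exp_return, vol):
    return (exp_return - risk_free) / vol

current = {"exp_return": 0.05, "vol": 0.12}   # current EA portfolio (assumed)
market  = {"exp_return": 0.06, "vol": 0.10}   # global market portfolio (assumed)

# Expected return the market portfolio delivers when scaled to match
# the current portfolio's volatility:
achievable = risk_free + sharpe(**market) * current["vol"]
gain = achievable - current["exp_return"]
print(f"extra expected return at equal risk: {gain:.2%}")  # 1.80%
```

Multiplying that percentage by the size of the EA portfolio gives the expected dollar figure, with no utility function needed.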

I haven't read all your footnotes, so perhaps some of the above is mentioned there. Nevertheless, I hope my comments are helpful, and I am glad people in EA are actively thinking about this. Happy to chat more if you're interested.

Different forms of capital

Good post. I would add a notion of idea pervasiveness in the public consciousness. What I mean is how often people think along EA-consistent lines, or make arguments around dinner tables that explicitly or implicitly draw upon EA principles. This will influence how EA-consistent government policy is. Ideas like democracy, impartial justice, and freedom of religion have strong pervasiveness. You could measure it by surveying people about whether they have heard of EA, and if so, whether they would refer to it in casual conversations, or whether they think it would influence their actions. You could benchmark the responses by asking the same questions about democracy or some other ubiquitous idea.

This is a nice idea. There'll be a tradeoff: the less EA-aligned a source of funds is, the harder it is likely to be to convince them to change. For example, the probability of getting ISIS to donate to GiveWell is practically zero, so it's likely better to target philanthropists who mean well but haven't heard of EA. So the measure to pay attention to is [(marginal impact of EA charity) - (marginal impact of alternative use of funds)] * [probability of success for given fundraising effort]. This measure, or some more sophisticated version of it, should be equalised across potential funding sources to maximise impact.
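A toy version of that measure, with entirely made-up figures:

```python
# Toy version of the measure: (marginal impact of EA charity
# - marginal impact of the source's alternative use of funds)
# * (probability the fundraising effort succeeds). All figures hypothetical.
sources = {
    # name: (impact_if_redirected, impact_of_alternative_use, p_success)
    "well-meaning philanthropist": (10.0, 3.0, 0.20),
    "already EA-adjacent funder":  (10.0, 8.0, 0.60),
    "hostile source":              (10.0, 0.0, 0.001),
}

def expected_gain(redirected, alternative, p_success):
    return (redirected - alternative) * p_success

scores = {name: expected_gain(*v) for name, v in sources.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # well-meaning philanthropist 1.4
```

With these (invented) numbers the well-meaning philanthropist wins despite the EA-adjacent funder's higher success probability, because the counterfactual shift in impact is larger.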

Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

Thanks for the post. I like Economical Writing by Deirdre McCloskey - entertaining as hell!

This Can't Go On

My interpretation of the argument is not that it is equating atoms to $. Rather, it invokes whatever computations are necessary to produce (e.g. through simulations) an amount of value equal to today's global economy. Can these computations be facilitated by a single atom? If not, then we can't grow at the current rate for 8200 years.
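A quick back-of-envelope behind the 8200-year figure, assuming the roughly 2% annual growth rate used in the original post and a common rough estimate of about 10^67 atoms in our galaxy:

```python
import math

# Back-of-envelope: at ~2% annual growth (the post's assumption), how large
# does the economy become after 8200 years, in orders of magnitude?
years = 8200
growth_factor_log10 = years * math.log10(1.02)
print(f"{growth_factor_log10:.1f}")  # 70.5 -- a growth factor of ~10^70

# Against a rough ~10^67 atoms in our galaxy, that implies producing more
# than one present-day economy's worth of value per atom.
atoms_in_galaxy_log10 = 67
assert growth_factor_log10 > atoms_in_galaxy_log10
```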

Making impact researchful

Thanks for your detailed reply. Absolutely, there is some academic reward available from solving problems. Naively, the goal is to impress other academics (and thus get published and cited), and academics are more impressed when the work solves a problem.

You seem to encourage problem-solving work, and point out that governments are starting to push academia in that direction. This is great, and to me, it raises the interesting question of optimal policy in rewarding research. That is supremely difficult, at least outside of the commercialisable. My understanding is that optimal policy would pay each researcher something like the marginal societal benefit of their work, summed globally and intertemporally forever. How on earth do you estimate that for the seminal New Keynesian model paper? Governments won't come close, and (I imagine) will tend to focus on projects whose benefits can be more easily measured or otherwise justified. So we are back to the problem of misaligned researcher incentives. But surely a government push towards impact is a step in the right direction.

Until our civilisation solves that optimal policy problem, I think academia will continue to incentivise the pursuit of knowledge at least partly for knowledge's sake. I wrote the post because understanding the implications of that has been useful to me.

Making impact researchful

I should clarify - I don't mean a small amount of work, but a small conceptual adjustment. The example I give in the post is to adjust from fully addressing a specific application to partially addressing a more general question. And to do so in a way that is hopefully intellectually stimulating to other researchers.

In my own work, using a consumer intertemporal optimisation model, I've tried to calculate the optimal amount for humanity to spend now on mitigating existential risk. That is the sort of problem-solving question I'm talking about. A couple of possible ways forward for me: include multiple countries and explore the interactions between x-risk mitigation and global public good provision; or use the setting of existential risk to learn more about a particular type of utility function which someone pointed me to for that purpose.

Open Thread #39

Ok, so you're talking about a scenario where humans cease to exist, and other intelligent entities don't exist or don't find Earth, but where there is still value in certain things being done in our absence. I think the answer depends on what you think is valuable in that scenario, which you don't define. Are the "best things" safeguarding other species, or keeping the earth at a certain temperature?

But this is all quite pessimistic. Achieving this sort of aim seems like a second best outcome, compared to humanity's survival.

For example, if earth becomes uninhabitable, colonisation of other planets is extremely good. Perhaps you could do more good by helping humans to move beyond earth, or to become highly resilient to environmental conditions? Surely the best way to ensure that human goals are met is to ensure that at least a few humans survive.

Anyway, going with your actual question, how you should pursue it really depends on your situation, skill set, finances, etc., as well as your values. The philosophical task of determining what should be done if we don't survive is one possibility. (By the way, who should get to decide on that?) Robotics and AI seem like another, based on the examples you gave. Whatever you decide, I'd suggest keeping the flexibility to change course later, e.g. by learning transferable skills, in case you change your mind about what you think is important.

When to focus and when to re-evaluate

I have another possible reason why focusing on one project might be better than dividing one's time between many projects. There may be returns to density of time spent. That is, an hour you spend on a project is more productive if you've just spent many hours on that project. For example, when I come back to a task after a few days, the details of it aren't as fresh in my mind. I have to spend time getting back up to speed, and I miss insights that I wouldn't have missed had I stayed immersed.

I haven't seen much evidence about this, just my own experience. There might also be countervailing effects, like time required for concepts to "sink in", and synergies, or insights for one project gleaned from involvement in another. It probably varies by task. My impression is that research projects feature very high returns to density of time spent.
