Larks

Larks's Comments

New member--essential reading and unwritten rules?

Welcome! And congratulations on your achievements, which I'm sure you are more responsible for than modesty would allow you to acknowledge.

Maximizing the Long-Run Returns of Forced Savings

My understanding is that the committees generally make rules for the indices and then apply them relatively mechanistically, though they do occasionally change the rules. I think it is hard to get rid of this entirely. You need some way to judge that a company's market cap is actually representative of market trading, as opposed to being manipulated by insiders (as LFIN's was). Presumably if the index committee changed the rules to something absurd, the regulator could switch index providers in the next year's bidding, though you are still at risk from small changes that do not meet the threshold for firing.

As a minor technical note, gross returns are often (very slightly) higher than the index's, because the managers can profit from stock lending. This is what allows zero-fee ETFs (though they are also somewhat of a marketing ploy).
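The mechanics can be sketched numerically; the return and lending figures below are purely illustrative assumptions, not data about any particular fund:

```python
# Illustrative sketch: securities-lending income can push a fund's
# gross return slightly above its index, even with a zero headline fee.
# All numbers are hypothetical.
index_return = 0.07      # assumed 7% index return
lending_yield = 0.0005   # assumed 5bp of stock-lending income
expense_ratio = 0.0      # "zero-fee" ETF

net_return = index_return + lending_yield - expense_ratio
print(net_return > index_return)  # the fund slightly beats its index
```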

Maximizing the Long-Run Returns of Forced Savings

Ahhh, so basically the idea is that no underwriter would be willing to vouch for anything but a credible index shop. Seems plausible.

Maximizing the Long-Run Returns of Forced Savings

I think this is a neat idea.

However, I think it might have a problem with the financial limits of the asset managers; there is a reason that even hedge funds with sophisticated clients do not structure themselves this way. At the moment most asset managers do not have very large balance sheets - even Blackrock, the largest in the world, has only $5bn of cash and a market cap of $85bn. If the winning fund under-performed the second bid by 0.5%, it would face a $15bn loss - or, more likely, it would go bankrupt and the pensioners would bear the loss. Even if you divided the funds up between multiple managers, the total capitalisation of the industry is not that large, and the winning bids would disproportionately be submitted by low-capitalisation funds that wanted a free call option. This gives managers an asymmetric payoff curve that encourages them to take a lot of risk.
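A back-of-the-envelope sketch of the arithmetic, assuming (hypothetically) a forced-savings pool of about $3tn - the scale at which a 0.5% shortfall reaches $15bn:

```python
# Rough sketch of the guarantee shortfall. The $3tn pool size is an
# assumption for illustration, not a figure from the original proposal.
ASSETS = 3_000_000_000_000   # ~$3tn under management (assumed)
UNDERPERFORMANCE = 0.005     # winning fund trails the second bid by 0.5%

shortfall = ASSETS * UNDERPERFORMANCE
print(f"Shortfall: ${shortfall / 1e9:.0f}bn")  # Shortfall: $15bn

# Compare against a BlackRock-scale balance sheet:
blackrock_cash = 5e9         # ~$5bn cash
blackrock_mcap = 85e9        # ~$85bn market cap
print(shortfall > blackrock_cash)  # True: far exceeds cash on hand
```

The point of the sketch is just that a plausible shortfall dwarfs even the largest manager's liquid resources, which is what creates the free-call-option dynamic.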

To solve this you could try regulating the asset managers, but at that point you have basically re-invented insurance companies, and they would not be able to take much risk.

Another possible solution would be to implement very harsh penalties for the individual managers. But I think it would be difficult to calibrate these penalties well, and they might make it hard to attract talent.

Concern, and hope
On one side, we've had multiple posts talking about the risks of an incipient new Cultural Revolution; on the other, we've had someone accuse a widely-admired writer associated with the movement of abetting some pretty abhorrent worldviews.

I'm not sure what contrast you are trying to make here:

  • The first post argues that, while SJ cancellations are a problem, we should not fight back against them because it would be too expensive. The second post agrees that SJ cancellations are a problem that could become much worse, but argues we should try to do something about it.
  • The third post is an example of an attempted SJ cancellation, criticizing the community for being insufficiently zealous in condemning the outgroup. (It was downvoted into oblivion for being dishonest and nasty).

The first two are motivated by concern over the rise of bullying and its ability to intimidate people from communicating honestly about important issues, and discuss what we should do in response. The third article is... an example of this bad behaviour?

For the symmetry argument you want to make, it seems like you would need a right-wing version of the third post - like an article condemning the community for not doing enough to distance itself from communists and failing to constantly re-iterate its support for the police. Then it would make sense to point out that, despite the conflict, both sides were earnestly motivated by a desire to make the world a better place and avoid bad outcomes, and we should all remember this and respect each other.

But to my knowledge, no such article exists, partly because there are very few right-wing EAs. Rather, the conflict is between the core EA movement of largely centre-left people who endorse traditional enlightenment values of debate, empiricism and universalism, vs the rise of extreme-left 'woke' culture, which frequently rejects such ideals. Accusing the moderate left of being crypto-fascists is one of the standard rhetorical moves the far-left uses against the centre-left, and one they are very vulnerable to.


Note that I removed the link to the attack article because I think it is probably a violation of implicit forum norms to promote content with more than 100 net downvotes. If it hadn't been linked in this article I would not have come across it, which is probably desirable from the perspective of the moderators and the community.


Edit: the OP was edited between when I opened the page and started writing this comment, and when I hit publish; at the request of the author I have updated the quote to reflect his edits, though I think this makes the comment a little harder to understand.

Estimating the Philanthropic Discount Rate

Great post, thanks very much for writing.

Such events do not existentially threaten one's financial position, so they should not be considered as part of the expropriation rate for our purposes.

Could you give some sense of why you think this is the case? Naively I would have thought that double the chance of having half your assets expropriated would be approximately as bad as losing all of them. There will be diminishing marginal utility, but surely not enough to justify neglecting the issue entirely.
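The naive comparison can be made concrete with a minimal sketch, using an illustrative probability and a square-root utility function standing in for diminishing marginal utility:

```python
# Compare (a) probability p of total expropriation with
# (b) probability 2p of losing half your assets.
# The probability and utility function are illustrative assumptions.
p = 0.01       # hypothetical annual expropriation probability
wealth = 1.0

# The expected fraction of wealth lost is identical in both scenarios:
loss_total   = p * 1.0 * wealth        # 0.01
loss_partial = (2 * p) * 0.5 * wealth  # 0.01

# With diminishing marginal utility (sqrt here), the partial-loss
# scenario comes out somewhat less bad, but not by enough to ignore:
u = lambda w: w ** 0.5
eu_total   = (1 - p) * u(wealth) + p * u(0.0)
eu_partial = (1 - 2 * p) * u(wealth) + 2 * p * u(0.5 * wealth)
print(eu_partial > eu_total)  # True, but the gap is small
```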

According to Sandberg (n.d.)[13], nations have a 0.5% annual probability of ceasing to exist. Most institutions don't last as long as nations, but an institution that's designed to be long-lasting might outlast its sovereign country. So perhaps we could infer an institutional failure rate of somewhere around 0.5%.

This seems like an upper bound for what we care about. Many countries and institutions that have existed for centuries have done so at the cost of wholesale change in their values. The 21st-century Catholic Church promotes quite different things than it did in the 11th century, and the US federal government of 2020 doesn't have that much in common with the Articles of Confederation.
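For intuition on how a 0.5% annual failure rate compounds over the horizons relevant to patient philanthropy, a quick sketch:

```python
# Compounding a 0.5% annual institutional failure rate over long horizons.
annual_failure = 0.005

survival = {years: (1 - annual_failure) ** years for years in (100, 200, 500)}
for years, prob in survival.items():
    print(f"{years} years: {prob:.1%} chance of still existing")
```

Even taking the 0.5% rate at face value, survival at 200 years is only about a third - and, per the point above, surviving at all is a weaker condition than surviving with values intact.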

Similarly, organizations that avoid value drift will tend to gain power over time relative to those that don't.

I'm not sure this is true in the sense you need it to be. Consider evolution - we haven't seen species with low rates of change (like sharks) come to dominate the world. They have gained power relative to proto-mammals (as the latter no longer exist) but have lost power relative to the descendants of those proto-mammals. Similarly, a human organisation that resisted memetic pressure and remained true to its values would find itself competing with other organisations that do not have to pay the value-integrity costs, despite outlasting its rivals of yesteryear.

How to Fix Private Prisons and Immigration

That's really interesting - do you have any recommended reading on the UK system?

EA Forum feature suggestion thread

Changing the raw totals sounds confusing, but you could implement some form of regularisation in ranking contexts - for example, karma relative to the total karma across all posts for that month.
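As a sketch of what month-relative regularisation might look like (the function name and numbers here are hypothetical, not an existing forum feature):

```python
# Hypothetical sketch: rank posts by the share of that month's total
# karma they received, so karma inflation over time doesn't advantage
# newer posts in cross-month rankings.
def normalised_karma(post_karma: int, monthly_total: int) -> float:
    """Share of the month's karma a post received."""
    return post_karma / monthly_total if monthly_total else 0.0

# Two posts with equal raw karma, from months with different activity:
old = normalised_karma(50, 1_000)   # quiet month  -> 0.05
new = normalised_karma(50, 5_000)   # busy month   -> 0.01
print(old > new)  # the older post ranks higher despite equal raw karma
```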

I think it is a little strange that if I go to an old post I upvoted, un-upvote it, and then re-upvote it, its karma increases.

EA is vetting-constrained
Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to or easily can articulate why I think that now. 

Do you think it is bad that other pools of EA capital exist, with perhaps lower thresholds, which presumably sometimes fund things that OP has deliberately passed on?

Study results: The most convincing argument for effective donations
Chris McVey, Josh May, and I had several times tried and failed to write arguments that would be effective in increasing participants' donation rates. When we presented participants emotionally moving narratives about children who had been rescued by charitable donations, charitable donations were higher than in a control condition -- but never when we presented ordinary philosophical arguments that donation is good or is your duty. ... We wondered whether the failure might just be the result of our inability to write convincing arguments.

[E]ach of the five selected arguments was viewed by about 335 participants, while 471 participants viewed the middle school science text. The results were clear: All five of the arguments substantially outperformed the control condition.

Presumably the theory being tested is whether philosophical argument can increase donations, and it sounds like they had a randomised control in the form of an unrelated text.
