PeterMcCluskey

I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.

PeterMcCluskey's Comments

How Much Leverage Should Altruists Use?

Hmm. Maybe you're right. I guess I was thinking there was an important difference between "constant leverage" and infrequent rebalancing. But I guess that's a more complicated subject.
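To make the distinction concrete, here is a toy numeric sketch (my own illustration, not from the original discussion) of how constant leverage differs from leveraging once and letting the position drift. It ignores borrowing costs and uses an artificial price path that ends each cycle where it started, so any loss comes purely from volatility drag:

```python
# An asset that rises 10% then falls back to its starting price, repeatedly.
returns = [0.10, 1 / 1.10 - 1] * 20

# Strategy A: "constant leverage" -- rebalance to 2x exposure every period,
# so each period's equity return is twice the asset's return.
equity_a = 1.0
for r in returns:
    equity_a *= 1 + 2 * r

# Strategy B: borrow once at the start (2x initial leverage), never rebalance.
# Equity is just 2 units of the asset minus the fixed debt of 1.
price = 1.0
for r in returns:
    price *= 1 + r
equity_b = 2 * price - 1.0

print(round(equity_a, 3))  # well below 1.0: volatility drag from rebalancing
print(round(equity_b, 3))  # roughly 1.0: the drifting position ends flat
```

In a flat but volatile market, the constantly rebalanced position loses money while the buy-once position does not, which is one reason the two approaches are not interchangeable.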

How Much Leverage Should Altruists Use?

I like this post a good deal.

However, I think you overstate the benefits.

I like the idea of shorting the S&P and buying global ex-US stocks, but beware that past correlations between markets only provide a rough guess about future correlations.

I'm skeptical that managed futures will continue to do as well as backtesting suggests. Futures are new enough that there's likely been a moderate amount of learning among institutional investors that has been going on over the past couple of decades, so those markets are likely more efficient now than history suggests. Returns also depend on recognizing good managers, which tends to be harder than most people expect.

Startups might be good for some people, but it's generally hard to tell. Are you able to find startups before they apply to Y Combinator? Or do startups only come to you if they've been rejected by Y Combinator? Those are likely to have large effects on your expected returns. I've invested in about 10 early-stage startups over a period of 20 years, and I still have little idea of what returns to expect from my future startup investments.

I'm skeptical that momentum funds work well. Momentum strategies work if implemented really well, but a fund that tries to automate the strategy via simple rules is likely to lose the benefits to transaction costs and to other traders who anticipate the fund's trades. If a fund instead avoids simple rules, most investors won't be able to tell whether it's a good fund. And if the strategy becomes too popular, that can easily cause returns to become significantly negative (whereas with value strategies, popularity will more likely drive returns to approximately the same as the overall market).

2019 AI Alignment Literature Review and Charity Comparison

Nearly all of CFAR's activity is motivated by its expected effects on people who are likely to impact AI. As a donor, I don't distinguish much between the various types of workshops.

There are many ways that people can impact AI, and I presume the different types of workshop are slightly optimized for different strategies and different skills, and differ a bit in how strongly they're selecting for people who have a high probability of doing AI-relevant things. CFAR likely doesn't have a good prediction in advance about whether any individual person will prioritize AI, and we shouldn't expect them to try to admit only those with high probabilities of working on AI-related tasks.

2019 AI Alignment Literature Review and Charity Comparison

OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.

I attended a 2-day OAK retreat. It was run in a professional manner that suggests they'll provide a good deal of benefit to the people they train. But my intuition is that the impact will mainly be to make those people happier, and I expect that OAK will have less effect on people's behavior than CFAR has.

I considered donating to OAK as an EA charity, but have decided it isn't quite effective enough for me to treat it that way.

I believe that the person who promoted that grant at SFF has more experience with OAK than I do.

I'm surprised that SFF gave more to OAK than to ALLFED.

The Future of Earning to Give

With almost all of those proposed intermediate goals, it's substantially harder to evaluate whether the goal will produce much value. In most cases, it will be tempting to define the intermediate goal in a way that is easy to measure, even when doing so weakens the connection between the goal and health.

E.g. good biomarkers of aging would be very valuable if they measure what we hope they measure. But your XPrize link suggests that people will be tempted to use expert acceptance in place of hard data. The benefits of biomarkers have been frequently overstated.

It's clear that most donors want prizes to have a high likelihood of being awarded fairly soon. But I see that desire as generally unrelated to a desire for maximizing health benefits. I'm guessing it indicates that donors prefer quick results over high-value results, and/or that they overestimate their knowledge of which intermediate steps are valuable.

A $10 million aging prize from an unknown charity might have serious credibility problems, but I expect that a $5 billion prize from the Gates Foundation or OpenPhil would be fairly credible - they wouldn't actually offer the prize without first getting some competent researchers to support it, and they'd likely first try out some smaller prizes in easier domains.

The Future of Earning to Give

I agree with most of your comment.

>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.

The Future of Earning to Give

Yes, large donors more often reach diminishing returns on each recipient than small donors do. The one-charity heuristic is mainly appropriate for people who are donating $50k per year or less.

The Future of Earning to Give

Yes. The post Drowning children are rare seemed to be saying that OPP was capable of making most EA donations unimportant. I'm arguing that we should reject that conclusion, even if many of that post's points are correct.

X-risk dollars -> Andrew Yang?

They may not have budged climate scientists, but there are other ways they may have influenced policy. Did they (or other partisans) alter the outcomes of Washington Initiative 1631 or 732? That seems hard to evaluate.
