PeterMcCluskey

I'm a stock market speculator who has been involved in transhumanist and related communities for a long time. See my website at http://bayesianinvestor.com.

PeterMcCluskey's Posts


PeterMcCluskey's Comments

How to Fix Private Prisons and Immigration

When I tell people that prisons and immigration should use a similar mechanism, they sometimes give me a look of concern. This concern is based on a misconception.

I'll suggest that some people's concerns stem from an accurate intuition that your proposal will make it harder to hide the resemblance between prisons and immigration restrictions. Preventing people from immigrating looks to me fairly similar to imprisoning them in their current country.

Idea: statements on behalf of the general EA community

It would be much easier to make a single, more generic policy statement. Something like:

When in doubt, assume that most EAs agree with whatever opinions are popular in London, Berkeley, and San Francisco.

Or maybe:

When in doubt, assume that most EAs agree with the views expressed by the most prestigious academics.

Reaffirming this individually for every controversy would redirect the attention of whichever EAs are involved in the decision away from core EA priorities.

Will protests lead to thousands of coronavirus deaths?

Another risk is that increased distrust impairs authorities' ability to conduct test-and-trace in low-income neighborhoods, which now seem to be key areas where the pandemic is hardest to control.

Climate Change Is Neglected By EA

EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk

EA has been a niche cause, and changing that seems harder than solving climate change. Increased popularity would be useful, but shouldn't become a goal in and of itself.

If EAs should focus on climate change, my guess is that it should be a niche area within climate change. Maybe altering the albedo of buildings?

Policy idea: Incentivizing COVID-19 tracking app use with lottery tickets

How about having many locations that are open only to people who are running a tracking app?

I'm imagining that places such as restaurants, gyms, and airplanes could require that people use tracking apps in order to enter. Maybe the law should require that as a default for many locations, with the owners able to opt out if they post a conspicuous warning?

How hard would this be to enforce?

How Much Leverage Should Altruists Use?

Hmm. Maybe you're right. I guess I was thinking there was an important difference between "constant leverage" and infrequent rebalancing. But I guess that's a more complicated subject.

How Much Leverage Should Altruists Use?

I like this post a good deal.

However, I think you overstate the benefits.

I like the idea of shorting the S&P and buying global ex-US stocks, but beware that past correlations between markets only provide a rough guess about future correlations.

I'm skeptical that managed futures will continue to do as well as backtesting suggests. Futures are new enough that institutional investors have likely done a moderate amount of learning over the past couple of decades, so those markets are likely more efficient now than their history suggests. Returns also depend on recognizing good managers, which tends to be harder than most people expect.

Startups might be good for some people, but it's generally hard to tell. Are you able to find startups before they apply to Y Combinator? Or do startups only come to you if they've been rejected by Y Combinator? Those are likely to have large effects on your expected returns. I've invested in about 10 early-stage startups over a period of 20 years, and I still have little idea of what returns to expect from my future startup investments.

I'm skeptical that momentum funds work well. Momentum strategies work if implemented really well, but a fund that tries to automate the strategy via simple rules is likely to lose the benefits to transaction costs and to other traders who anticipate the fund's trades. If it instead avoids simple rules, most investors won't be able to tell whether it's a good fund. And if the strategy becomes too popular, returns can easily become significantly negative (whereas with value strategies, popularity will more likely drive returns toward roughly the same level as the overall market).

2019 AI Alignment Literature Review and Charity Comparison

Nearly all of CFAR's activities are motivated by their effects on people who are likely to impact AI. As a donor, I don't distinguish much between the various types of workshops.

There are many ways that people can impact AI, and I presume the different types of workshop are each slightly optimized for different strategies and different skills, and differ a bit in how strongly they select for people who have a high probability of doing AI-relevant things. CFAR likely can't predict well in advance whether any individual person will prioritize AI, and we shouldn't expect them to admit only those with a high probability of working on AI-related tasks.

2019 AI Alignment Literature Review and Charity Comparison

OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.

I attended a 2-day OAK retreat. It was run in a professional manner that suggests they'll provide a good deal of benefit to the people they train. But my intuition is that the impact will mainly be to make those people happier, and I expect OAK to have less effect on people's behavior than CFAR has.

I considered donating to OAK as an EA charity, but have decided it isn't quite effective enough for me to treat it that way.

I believe that the person who promoted that grant at SFF has more experience with OAK than I do.

I'm surprised that SFF gave more to OAK than to ALLFED.
