All of PeterMcCluskey's Comments + Replies

We shouldn't be focused too heavily on what is politically feasible this year. A fair amount of our attention should be on what to prepare in order to handle a scenario in which there's more of an expert consensus a couple of years from now.

3
Cecil Abungu
6d
This is a fair point, but we're thinking about a scenario where such consensus takes a much, much longer time to emerge. There's no real reason to be sure that a super advanced model a couple of years from now will do the kinds of things that would produce a consensus.

Nanotech progress has been a good deal slower than was expected by people who were scared of it.

4
Robi Rahman
4mo
I agree; however, isn't there still the danger that, as scientific research is augmented by AI, nanotechnology will become more practical? The steelmanned case for nanotech x-risk would probably argue that various things that are intractable for us to do now have no theoretical reason why they couldn't be done if we were slightly better at adjacent techniques.

I have alexithymia.

Greater awareness seems desirable. But I doubt it "severely affects" 1 in 10 people. My impression is that when it's correlated with severe problems, the problems are mostly caused by something like trauma, and alexithymia is more a symptom than a cause of the severe problems.

3
Misha_Yagudin
9mo
Yes, the mechanism is likely not alexithymia directly causing undesirable states like trauma, but rather diminishing one's ability to get unstuck given that traumatic events happened.

Author of the manifesto and Animi here. I was also doubtful initially when I was researching alexithymia to improve my condition. But that was gradually changing the more papers I read and the more people I talked with. There are 50+ years of research on the topic, and some papers show more than 10% of the general population scoring in the "clinical" range of alexithymia, where it is correlated with all the associated problems. 1 in 10 actually makes a lot of sense given how prevalent and comorbid it is with mental disorders and neurodiversity, e.g. ~50% of... (read more)

It's not obvious that unions or workers will care as much about safety as management. See this post for some historical evidence.

6 months sounds like a guess as to how long the leading companies might be willing to comply.

The timing of the letter could be a function of when they were able to get a few big names to sign.

I don't think they got enough big names to have much effect. I hope to see a better version of this letter before too long.

Something important seems missing from this approach.

I see many hints that much of this loneliness results from trade-offs made by modern Western culture, neglecting (or repressing) tightly-knit local community ties to achieve other valuable goals.

My sources for these hints are these books:

One point from WEIRDest People is summarized here:

Neolocal residence occurs when a newly married couple establishes their home independent of both sets of relatives. While only about 5%

... (read more)

I doubt most claims about sodium causing health problems. High sodium consumption seems quite correlated with dietary choices that have other problems, which makes studying this hard.

See Robin Hanson's comments.

2
Joel Tan
1y
It seems the scientific consensus, and Cochrane reviews/meta-analyses of RCTs (e.g. https://www.bmj.com/content/346/bmj.f1325), are supportive. I wouldn't rule out the possibility that sodium isn't as harmful as health authorities think it is (c.f. the whole fracas over saturated fat vs sugar), but I guess I don't see this as a serious worry or something that demands more research given the current evidence/expert opinion and limited research time.

I expect most experts are scared of the political difficulties. Also, many people have been slow to update on the declining costs of solar. I think there's still significant aversion to big energy-intensive projects. Still, it does seem quite possible that experts are rejecting it for good reasons, and it's just hard to find descriptions of their analysis.

I agree very much with your guess that SBF's main mistake was pride.

I still have some unpleasant memories from the 1984 tech stock bubble, of being reluctant to admit that my successes during the bull market didn't mean that I knew how to handle all market conditions.

I still feel some urges to tell the market that it's wrong, and to correct the market by pushing up prices of fallen stocks to where I think they ought to be. Those urges lead to destructive delusions. If my successes had gotten the kind of publicity that SBF got, I expect that I would have made mistakes that left me broke.

I haven't expected EAs to have any unusual skill at spotting risks.

EAs have been unusual at distinguishing risks based on their magnitude. The risks from FTX didn't look much like the risk of human extinction.

8
Nathan Young
1y
But half our resources to combat human extinction were at risk due to risks to FTX. Why didn't we take that more seriously?

I agree that there's a lot of hindsight bias here, but I don't think that tweet tells us much.

My question for Dony is: what questions could we have asked FTX that would have helped? I'm pretty sure I wouldn't have detected any problems by grilling FTX. Maybe I'd have gotten some suspicions by grilling people who'd previously worked with SBF, but I can't think of what would have prompted me to do that.

Nitpick: I suspect EAs lean more toward Objective Bayesianism than Subjective Bayesianism. I'm unclear whether it's valuable to distinguish between them.

1
Noah Scales
1y
I read Violet's post, am reviewing some of the background material, and just browsed some online material about Bayesianism. I would learn something from your elaboration on the difference you think applies to EAs.

It's risky to connect AI safety to one side of an ideological conflict.

2
NickGabs
2y
I think you can stress the "ideological" implications of externalities to lefty audiences while having a more neutral tone with more centrist or conservative audiences.  The idea that externalities exist and require intervention is not IMO super ideologically charged.
3
JakubK
2y
There are ways to frame AI safety as (partly) an externality problem without getting mired in a broader ideological conflict.

Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.

I agree about the difficulty of developing major new technologies in secret. But you seem to be mostly overstating the problems with accelerating science. E.g.:

These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. Here it sounds like you're imagining that the AI would only speed up the job functions that get classified as "science", whereas people are suggesting the AI would speed up a wide variety of tasks including gathering evidence, building tools, etc.

My understanding of Henrich's model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.

European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with only cooperation within smaller kin networks. We shouldn't be confident that we understand what the most important features are, much less that we can cause LMICs to have them.

Successful societies ought to be risk-averse abou... (read more)

Resilience seems to matter for human safety mainly via food supply risks. I'm not too concerned about that, because the world is producing a good deal more food than is needed to support our current population. See my more detailed analysis here.

It's harder to evaluate the effects on other species. I expect a significant chance that technological changes will make current biodiversity efforts irrelevant. So to the limited extent I'm worried about wild animals, I'm focused more on ensuring that technological change develops so as to keep as many options open as possible.

2
Karla Still
2y
How would technological change make current biodiversity efforts irrelevant? And by irrelevant, do you mean that the technologies reduce environmental burden and degradation, e.g. by being more resource efficient, or that they would be actual new solutions aimed at reducing biodiversity loss?
1
RayTaylor
2y
Is there another link? I couldn't open that one. Does your analysis consider GCRs and tail risks through this century?

Why has this depended on NIH? Why aren't some for-profit companies eager to pursue this?

2
gwern
2y
What disease would you seek FDA approval for? "I sleep more than 4 hours a day" is not a recognized disease under the status quo. (There is the catch-all of 'hypersomnia', but things like sleep apnea or neurodegenerative disorders or damage to clock-keeping neurons would not plausibly be treated by some sort of knockout-mimicking drug.)
7
JohnBoyle
2y
I think both Ying-Hui and I had the impression that the research had to be somewhat further along before any profit-minded people would fund it.  But someone recently explained to me that Silicon Valley companies are often funded with much less scientific backing than this, so this week I've written to one venture capitalist I know, and will probably contact others.  Regarding getting funding from an existing company, I don't know much about that option. Advice is appreciated.

This seems to nudge people in a generally good direction.

But the emphasis on slack seems somewhat overdone.

My impression is that people who accomplish the most typically have had small to moderate amounts of slack. They made good use of their time by prioritizing their exploration of neglected questions well. That might create the impression of much slack, but I don't see slack as a good description of the cause.

One of my earliest memories of Eliezer is him writing something to the effect that he didn't have time to be a teenager (probably on the Extropian... (read more)

This seems mostly right, but it still doesn't seem like the main reason that we ought to talk about global health.

There are lots of investors visibly trying to do things that we ought to expect will make the stock market more efficient. There are still big differences between companies in returns on R&D or returns on capital expenditures. Those returns go mainly to people who can found a Moderna or Tesla, not to ordinary investors.

There are not (yet?) many philanthropists who try to make the altruistic market more efficient. But even if there were, the... (read more)

Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?

4
Lorenzo Buonanno
2y
Also RAISE https://www.lesswrong.com/posts/oW6mbA3XHzcfJTwNq/raise-post-mortem

CSER is the obvious example in my mind, and there are other non-public examples.

Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.

I have some relevant knowledge. I was involved in a relevant startup 20 years ago, but haven't paid much attention to this area recently.

My guess is that Drexlerian nanotech could probably be achieved in less than 10 years, but would need on the order of a billion dollars spent by an organization that's at least as competent as the Apollo program. As long as research is being done by a few labs that have just a couple of researchers each, progress will likely continue to be too slow to need much attention.

It's unclear what would trigger that kind of spending and th... (read more)

Acting without information on the relative effectiveness of the vaccine candidates was not a feasible strategy for mitigating the pandemic.

I'm pretty sure that with a sufficiently bad virus, it's safer to vaccinate before effectiveness is known. We ought to plan ahead for how to make such a decision.

This was the fastest vaccine rollout ever

Huh? 40 million doses of the 1957 flu vaccine were delivered within about 6 months of getting a virus sample to the US. Does that not count due to its similarity to existing vaccines?

Here are some of my reasons for disliking high inflation, which I think are similar to the reasons of most economists:

Inflation makes long-term agreements harder, since they become less useful unless indexed for inflation.

Inflation imposes costs on holding wealth in safe, liquid forms such as bank accounts, or dollar bills. That leads people to hold more wealth in inflation-proof forms such as real estate, and less in bank accounts, reducing their ability to handle emergencies.

Inflation creates a wide variety of transaction costs: stores need to change the... (read more)
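The second cost listed, the erosion of cash held in safe, liquid form, compounds over time. A minimal sketch (the rates and horizon here are illustrative assumptions, not figures from the comment):

```python
# Sketch: how steady inflation erodes the purchasing power of cash.
# The 2% and 8% rates and the 10-year horizon are illustrative only.

def real_value(nominal, inflation_rate, years):
    """Purchasing power of `nominal` dollars after `years` of steady inflation."""
    return nominal / (1.0 + inflation_rate) ** years

print(round(real_value(100.0, 0.02, 10), 2))  # ~82.03 at the ~2% target rate
print(round(real_value(100.0, 0.08, 10), 2))  # ~46.32 at 1970s-style 8%
```

At the roughly 2% rate economists consider optimal, the cost of holding cash is modest; at high inflation rates, it becomes severe enough to drive the shift into real estate and other inflation-proof assets described above.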

I don't see high value ways to donate money for this. The history of cryonics suggests that it's pretty hard to get more people to sign up. Cryonics seems to grow mainly from peer pressure, not research or marketing.

1
AndyMcKenzie
2y
Hi Peter, I agree with you that right now there are not any obvious high-value ways to donate money to this area. Although as I just wrote in a comment elsewhere in this thread, I am hoping to do more research on this question in the future, and hopefully others can contribute to that effort as well.  I also agree with you that the history of cryonics suggests it's hard to get people to sign up. But, I do think that the cost of signing up is an obvious area where interventions can be made. My understanding is that the general public's price sensitivity has not really been tested very thoroughly. 

I expect speed limits to hinder the adoption of robocars, without improving any robocar-related safety.

There's a simple way to make robocars err in the direction of excessive caution: hold the software company responsible for any crash it's involved in, unless it can prove someone else was unusually reckless. I expect some rule resembling that will be used.

Having speed limits on top of that will cause problems, due to robocars having to drive slower than humans drive in practice (annoying both the passengers and other drivers), when it's safe for them to ... (read more)

How much of this will become irrelevant when robocars replace human drivers? I suspect the most important impact of safety rules will be how they affect the timing of that transition. Additional rules might slow that down a bit.

7
vicky_cox
2y
Hi Peter, thanks for your comment! I must admit I have not really thought about this before, but intuitively it still seems important to have appropriate road safety legislation like speed limits in place even if it is robocars following them rather than human drivers. In fact, I could see it as important to have appropriate speed limits in place before the introduction of robocars, in case robocars are programmed to drive faster than is safe as a reflection of a speed limit that is set too high. I think the use of seat belts is still a good norm to have, even if robocars will drive more safely than human drivers. I'm not sure whether this would affect the timing of the transition, but if the robocar was going to be programmed with a speed limit anyway, then lowering the speed limit doesn't seem like it would slow down the transition (not sure on this though).

CFTC regulations have been at least as much of an obstacle as gambling laws. It's not obvious whether the CFTC would allow this strategy.

You're mostly right. But I have some important caveats.

The Fed acted for several decades as if it was subject to political pressure to reduce inflation. Economists mostly agree that the optimal inflation rate is around 2%. Yet from 2008 to about 2019 the Fed acted as if that were an upper bound, not a target.

But that doesn't mean that we always need more political pressure for inflation. In the 1960s and 1970s, there was a fair amount of political pressure to increase monetary stimulus by whatever it took to reduce unemployment. That worked well when infla... (read more)

1
Remmelt
2y
These caveats are helpful, thank you. I appreciate the elaboration on changing plans for interest rates and inflation by the Fed board and changing influences by non-high income employees and people with pension plans. I was wondering whether I had misinterpreted OpenPhil staff's opinion as being that rich people have been indirectly influencing the Fed towards a more hawkish stance (I recalled hearing something like this in another interview with Holden, but haven't been able to find that interview again). Either way, OpenPhil's analysis around this is probably much more 'clustery' and nuanced. I would agree with you, though, that high net-worth individuals who have most of their capital put into ownership stakes of companies that hold relatively little cash or bonds on their balance sheets and can flexibly hike up pricing of their products/services won't be impacted much by rising inflation. Edit: Good nuance re: not assuming a constant velocity of money (how fast money passes hands from transaction to transaction). What you wrote doesn't seem to refute the argument I made concerning model error in current macroeconomic theories. As again a complete amateur, I don't have any comment on what range of inflation to target or what the trade-offs are, except that all else equal a 2% inflation rate seems pretty benign. Overall, your points make me more uncertain about my understanding of which current stakeholder groups can and tend to influence Fed monetary policy decisions, and how they are motivated to act. Will read your review.

Hanson reports estimates that under our current system, elites have about 16 times as much influence as the median person.

My guess is that under futarchy, the wealthy would have somewhere between 2 and 10 times as much influence on outcomes that are determined via trading.

You seem to disagree with at least one of those estimates. Can you clarify where you disagree?

The original approach was rather erratic about finding high value choices, and was weak at identifying the root causes of the biggest mistakes.

So participants would become more rational about flossing regularly, but rarely noticed that they weren't accomplishing much when they argued at length with people who were wrong on the internet. The latter often required asking embarrassing questions about their motives, and sometimes realizing that they were less virtuous than they had assumed. People will, by default, tend to keep their attention away from questions like that.

T... (read more)

To the best of my knowledge, internal CEAs rarely if ever turn up negative.

Here's one example of an EA org analyzing the effectiveness of their work, and concluding the impact sucked:

CFAR in 2012 focused on teaching EAs to be fluent in Bayesian reasoning, and more generally to follow the advice from the Sequences. CFAR observed that this had little impact, and after much trial and error abandoned large parts of that curriculum.

This wasn't a quantitative cost-effectiveness analysis. It was more a subjective impression of "we're not getting good enough re... (read more)

It might be orthogonal to the point you're making, but do we have much reason to think that the problem with old-CFAR was the content? Or that new-CFAR is effective?

Another two examples off the top of my head:

Thanks a lot! Is there a writeup of this somewhere? I tend to be a pretty large fan of explicit rationality (at least compared to EAs or rationalists I know), so evidence that reasoning in this general direction is empirically kind of useless would be really useful to me!

It seems strange to call populism anti-democratic.

My understanding is that populists usually want more direct voter control over policy. The populist positions on immigration and international trade seem like stereotypical examples of conflicts where populists side with the average voter more than do the technocrats who they oppose.

Please don't equate anti-democratic with bad. It seems mostly good to have democratic control over the goals of public policy, but let's aim for less democratic control over factual claims.

2
Hauke Hillebrandt
3y
Sorry for being unclear, I didn't mean that populism must necessarily be anti-democratic. I've made a small edit to say that populism has any of the three features 'anti-democratic, illiberal, or anti-technocratic' to make this clearer - thanks for the feedback! I've used my own rough and fuzzy definition of populism as a bit of a catch-all term for some things that are not liberal democracy, where illiberalism violates minority rights. So, for example, the Swiss minaret controversy, where a majority banned the building of minarets through a popular referendum, I call populist here, despite being democratic. But you could replace 'populism' with another term; I think it's not worth getting hung up on definitions. Yes, agreed - I don't think direct democracy (a la Switzerland) is always better. But yes, in the long term, policy goals should ideally not be 'anti-democratic', even if they're technocratic and not very illiberal (like the King of Jordan). If you have too much technocracy and too little democratic accountability, that might lead to populist backlash (see David Autor's studies on trade I cite here, or Peter Singer's case against migration). So let's aim for whatever creates the most utility on the margin, which can sometimes be more democratic control (Jordan, but not Switzerland), sometimes more technocracy (e.g. US left), and sometimes more liberalism (e.g. US right).

I doubt that that study was able to tell whether the dietary changes improved nutrition. They don't appear to have looked at many nutrients, or figured out which nutrients the subjects were most deficient in. Even if they had quantified all important nutrients in the diet, nutrients in seeds are less bioavailable than nutrients in animal products (and that varies depending on how the seeds are prepared).

There's lots of somewhat relevant research, but it's hard to tell which of it is important, and maybe hard for the poor to figure out whether they ought ... (read more)

Could much of the problem be due to the difficulty of starting treatment soon enough after infection?

6
Davidmanheim
3y
Given the failure of antivirals to work even prophylactically, and the fundamental issues I mentioned, I don't think that is the key issue.

I see some important promise in this idea, but it looks really hard to convert the broad principles into something that's both useful, and clear enough that a lawyer could decide whether his employer was obeying it.

10 years worth of cash sounds pretty unusual, at least for an EA charity.

But part of my point is that when stocks are low, the charity won't have enough of a cushion to do any investing, so it won't achieve the kind of returns that you'd expect from buying stocks at a no-worse-than-random time. E.g. I'd expect that a charity that tries to buy stocks would have bought around 2000 when the S&P was around 1400, sold some of that in 2003 when the S&P was around 1100 to make up for a shortfall in donations, bought again in 2007 at 1450, then sold again ... (read more)
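The pro-cyclical pattern described can be sketched numerically. The prices below are the illustrative S&P levels from the comment; the helper function is hypothetical:

```python
# Sketch of the pro-cyclical trap: a charity that can only buy stocks when
# donations are flush (near market tops) and must sell during donation
# shortfalls (near market bottoms). Prices are the illustrative S&P levels
# mentioned above, not a claim about any actual charity's trades.

trades = [
    ("buy", 1400),   # ~2000: surplus donations, market near a top
    ("sell", 1100),  # ~2003: shortfall forces selling into a downturn
    ("buy", 1450),   # ~2007: surplus returns, market near a top again
]

def realized_multiples(trades):
    """Return the growth multiple on each completed buy/sell round trip."""
    multiples = []
    last_buy = None
    for action, price in trades:
        if action == "buy":
            last_buy = price
        elif action == "sell" and last_buy is not None:
            multiples.append(price / last_buy)
            last_buy = None
    return multiples

print(realized_multiples(trades))  # the 2000-2003 round trip lost ~21%
```

The point is that forced timing, not the assets themselves, produces the poor realized returns: each round trip buys high and sells low, so the charity captures much less than a buy-and-hold investor would over the same period.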

5
Grayden
4y
I think what you are referring to is an Anti-Nightingale. If you always sell after a market crash, you will most likely (as in mode, not mean) have poor returns, but that doesn't change the expected value from investing. The odds of a roulette wheel never change, but you can change your strategy to give you a >50% chance of coming away with a profit. My strategy will give you a >50% chance of coming away with an underperformance of the market, but will not change the underlying odds. Another trap some people (including professional investors) fall into is "buying the dip". It feels natural to expect that when the market is low, the future expectations must be higher and it must be a good time to invest. In a perfect market (not a given!) this is not the case. In fact due to government responses (lowering the interest rate), returns should actually be lower. In very practical terms, this time last year you might have expected a 6% return from investing in the S&P 500 for one year. Right now, that 6% might be 5.5% because interest rates are lower.
  1. Cash sitting in a charity bank account costs money, so if you have lots of it, invest some;

But the obvious ways to invest (i.e. stocks) work poorly when combined with countercyclical spending. Charities are normally risk-averse about investments because they have plenty of money to invest when stocks are high, but need to draw down reserves when stocks are low.

1
Grayden
4y
That's a good point and I don't think I was particularly clear in my post. I will have a think about whether I can rephrase in a way that keeps it concise. I'd like to separate my response into two issues: (1) liquidity (cash vs. Treasuries) and (2) risk tolerance (Treasuries vs. stocks). On liquidity, I think it's a good idea to keep a few months of expenditure in cash to ensure you can access it in an instant. Depending on your size, you may get some interest paid by the bank, but it's very unlikely to keep pace with inflation. However, anything you don't need at short notice can be invested in risk-free assets (e.g. short-dated US Treasuries), which have a better chance at keeping pace with inflation (with the usual caveat that the benefits have to outweigh the added admin). Risk tolerance, i.e. whether to invest in stocks (maybe even with leverage) rather than Treasuries, is another topic and lots of smart people have written previous stuff on this, e.g. here. This is where the practical difficulties I mention come in. You need to be willing for income (including potential gains and losses on investments) and expenditure to be going in opposite directions, potentially over a number of years. Certainly, if a charity has 6 months' expenditure in the bank, I wouldn't recommend putting 3 months worth in stocks. But if a charity has 10 years' expenditure in the bank, I think it needs to realise how much that is costing it. If it puts 9 years' expenditure in stocks, then with even a bad market crash, it will still have 5 years' expenditure.

When I tell people that prisons and immigration should use a similar mechanism, they sometimes give me a look of concern. This concern is based on a misconception

I'll suggest that some people's concerns are due to an accurate intuition that your proposal will make it harder to hide the resemblance between prisons and immigration restrictions. Preventing immigration looks to me fairly similar to imprisoning would-be immigrants in their current country.

1
FCCC
4y
No, I think they imagined I wanted to imprison all immigrants (which, as you can see, is not what I'm suggesting). To be clear, I'm not talking about preventing anyone from leaving your country; I'm talking about how to select which non-citizens can become permanent residents in your country. As for non-open borders being equal to imprisonment, that's incorrect. The fact that I can't live in North Korea does not mean that I'm imprisoned in my country. By this definition of imprisonment, everyone in the world is imprisoned. I believe this system would allow for more immigrants than there are currently. Government is slow at determining which people can enter (at least in my country). This might fix that. And knowing that every immigrant family is a positive contribution to the government's balance sheet (and unlikely to commit crimes) will probably help society see immigration as a good thing, which may help gain political support for more immigration.

It would be much easier to make a single, more generic policy statement. Something like:

When in doubt, assume that most EAs agree with whatever opinions are popular in London, Berkeley, and San Francisco.

Or maybe:

When in doubt, assume that most EAs agree with the views expressed by the most prestigious academics.

Reaffirming this individually for every controversy would redirect attention (of whatever EAs are involved in the decision) away from core EA priorities.

Another risk is that increased distrust impairs the ability of authorities to do test and trace in low-income neighborhoods, which seem to now be key areas where the pandemic is hardest to control.

8
Will Bradshaw
4y
True, though the loss of trust seems to fall more on the authorities than the protesters to me.

EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk

EA has been a niche cause, and changing that seems harder than solving climate change. Increased popularity would be useful, but shouldn't become a goal in and of itself.

If EAs should focus on climate change, my guess is that it should be a niche area within climate change. Maybe altering the albedo of buildings?

How about having many locations that are open only to people who are running a tracking app?

I'm imagining that places such as restaurants, gyms, and airplanes could require that people use tracking apps in order to enter. Maybe the law should require that as a default for many locations, with the owners able to opt out if they post a conspicuous warning?

How hard would this be to enforce?

Hmm. Maybe you're right. I guess I was thinking there was an important difference between "constant leverage" and infrequent rebalancing. But I guess that's a more complicated subject.

9
Brian_Tomasik
4y
That seems to be a common view, but I haven't yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.

Thanks! From my reading of the post, that critique is not really specific to leveraged ETFs? Volatility drag is inherent to leverage in general (and even to non-leveraged investing to a smaller degree).
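The volatility-drag point can be illustrated with a toy simulation of daily rebalancing. This is a sketch with made-up numbers, not a model of any actual ETF:

```python
# Volatility drag sketch: compare a 2x daily-rebalanced position with an
# unleveraged one over a price path that ends flat. Numbers are illustrative.

def grow(returns, leverage=1.0):
    """Compound a sequence of daily returns at a fixed daily-rebalanced leverage."""
    value = 1.0
    for r in returns:
        value *= 1.0 + leverage * r
    return value

# A market that alternates +10% and -1/11 (~-9.09%) ends exactly where it started.
path = [0.10, -1.0 / 11.0] * 10

print(round(grow(path), 4))       # 1.0: the unleveraged index is flat
print(round(grow(path, 2.0), 4))  # ~0.8324: the 2x position loses ~17%
```

The leveraged position loses money on a flat path because each rebalanced down-move is applied to a larger base than the preceding up-move recovered; the same arithmetic drags on unleveraged returns too, just more weakly.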

He says: "In my next post, I’m going to dive into more detail on what is to distinguish between good and bad uses of leverage." So I found his next post on leverage, which coincidentally is one mentioned in the OP: "The Line Between Aggressive and Crazy". There he clarifies why he doesn't like leveraged ETFs:

From this we start to see the problem with lever

... (read more)

I like this post a good deal.

However, I think you overstate the benefits.

I like the idea of shorting the S&P and buying global ex-US stocks, but beware that past correlations between markets only provide a rough guess about future correlations.

I'm skeptical that managed futures will continue to do as well as backtesting suggests. Futures are new enough that there's likely been a moderate amount of learning among institutional investors that has been going on over the past couple of decades, so those markets are likely more efficient now than history su

... (read more)
2
MichaelDickens
4y
I was reviewing my notes and I found this paper on managed futures: https://www.aqr.com/Insights/Research/White-Papers/Trend-Following-in-Focus

The paper has a section on why they don't think managed futures (a.k.a. trendfollowing) will stop working in the near future. Here's the summary I wrote in my notes (of the relevant section):

* Assets invested in trendfollowing peaked in mid-2008 at $210B, and have declined to $120B
* All systematic hedge fund strategies have $500B AUM, or 17% of all hedge fund assets
* The futures market has grown since 2008, so trendfollowing as a % of futures markets has decreased by more than half

I don't find this super convincing; it's definitely still conceivable that trendfollowing strategies could basically stop working, but it's evidence that trendfollowing is not over-subscribed.
4
MichaelDickens
4y
Thanks for the comments, Peter! Me too, I did adjust the return estimate way down from the backtest I quoted, but I can see an argument that managed futures will provide zero excess return in the future. Regarding momentum, see AQR's Fact, Fiction and Momentum Investing—specifically, "Myth No. 4: Momentum Does Not Survive, Or Is Seriously Limited By, Trading Costs."

Nearly all of CFAR's activity is motivated by their effects on people who are likely to impact AI. As a donor, I don't distinguish much between the various types of workshops.

There are many ways that people can impact AI, and I presume the different types of workshop are slightly optimized for different strategies and different skills, and differ a bit in how strongly they're selecting for people who have a high probability of doing AI-relevant things. CFAR likely doesn't have a good prediction in advance about whether any individual person will prioritize AI, and we shouldn't expect them to try to admit only those with high probabilities of working on AI-related tasks.

4
Misha_Yagudin
4y
Thank you, Peter. If you are curious Anna Salamon connected various types of activities with CFAR's mission in the recent Q&A.