keller_scholl

Comments

Technocracy vs populism (including thoughts on the democratising risk paper and its responses)

Two points, but I want to start with praise. You noticed something important and provided a very useful writeup. I agree that this is an important issue to take seriously.

"While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making."

I don't think that this is an accurate representation of how policymakers operate, either for elected officials or bureaucrats. My view comes from a gestalt of years of talking with congressional aides, bureaucrats in and around DC, and working at a think tank that does policy research. Simply put, there are so many people trying to make their point in any rich democracy that being "available" is largely equivalent to being ignored.

There are exceptions, particularly academics who publish extensively on a topic and gain publicity for it, but most experts who don't actively attempt to participate in governance simply won't influence it. Policymakers don't have enough spare time or energy to reliably seek out outside points of view and ideas on their own.

More importantly, I think that marginal expert influence mostly crowds out other expert influence, not populist impulses. Here I am more speculative, but my sense is that elected officials form an impression of the expert/academic view as one input in a decision-making process that also includes stakeholders, public opinion (based on polling, voting, and focus groups), and party attitudes (activists, other elected officials, aligned media, etc.). Hence an EA org that attempts to change views mostly displaces others occupying a similar social/epistemic/political role, rather than displacing public opinion.

On the bureaucracy side, expert input, lawmaker input, and stakeholder input are typically the primary influences when considering policy change. Occasionally the public will notice something and apply pressure, but the Federal Register is very boring, and as the punctuated equilibrium model of politics suggests, most of the time the public isn't paying attention. Bureaucrats also usually don't have the extra time and energy to go out and find people whose work might be relevant but who aren't actively presenting it. Add that most exciting claims are false, so decision-makers would have to read through entire literatures to be confident in a claim, and influence ceded by experts goes primarily not to populist impulses but to existing stakeholders.

Democratising Risk - or how EA deals with critics

Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.

Democratising Risk - or how EA deals with critics

The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential." Personally, I consider a long-term future with a 48.6% child and infant mortality rate  abhorrent and opposed to human potential, but the authors don't seem bothered by this. But they have little enough space to explain how their implied society would handle the issue, and I will not critique it excessively.

There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable:

  • "Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated"
  • "The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
  • "regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option", which implies to me that one of those three options is feasible, or at least worth investigating.

While they don't explicitly advocate degrowth, I think it is reasonable to read them as doing so, as John does.

Making large donation decisions as a person focused on direct work

I came here to say this: your work position is unusual relative to most EAs', and you are likely to be unusually good at identifying opportunities in the countries where Wave operates.

FTX EA Fellowships

Should we have received a confirmation that our application was successfully received? 

What's the role of donations now that the EA movement is richer than ever?

I was parsing your comment here as saying that the marginal impact of a GiveWell donation was pretty close to GiveDirectly. Here it seems like you don't endorse that interpretation?

Issues with Futarchy

I found myself unconvinced by a number of your factual points, though I agree with your overall conclusion for very different reasons. I've included three that I think are particularly key.

1. 

"Traders who don't account for their lack of understanding of things like poverty-related policies (by, say, polling poor people on their policy preferences), will lose money to traders who do.

  • I think this solves part of the problem, but the problem will remain as long as the futarchy markets are not perfect, and as long as bettors whose wealth is mostly independent of the futarchy markets are influential for futarchy."

The right comparison here seems to be the stock market: obviously markets are imperfect, and some people whose wealth is mostly independent of the stock market are influential. But the overall result is that, once in a long while, you get an extraordinary event like the recent GameStop/AMC/etc. rise, representing a tiny fraction of the total stock market. This source suggests that such events are, at most, on the order of 2.6% of the specific markets that include them, and that is for a highly unusual event. Since the objection is presented without numbers, I do not think it is at all a stretch to suggest that this is mostly a non-problem.

More broadly, you argue that the rich having more influence than the poor over policy relevant to addressing poverty is bad, but surely that equally implies that the rich having more influence over policy related to wealth is good? While I agree that's a little extreme and positionality is not equivalent, I generally expect wealthier individuals to be better educated, to have more spare time to devote to politics, and to be more cosmopolitan. While I am sympathetic to the specific case you bring up, not addressing this seems like a weak point.

"Many people care about policy decisions, so I don’t think we can expect that bettors whose wealth is mostly independent of the futarchy markets (i.e. the futarchy is not their chief source of income) will have no or little influence. So while wealth may end up slightly correlated with policy assessment skills, I don’t think we can expect that correlation to be strong."

You argue that the correlation is negative! That is the crux of your point!

2.

"Hanson does not account for the possibility that the wealth landscape could change drastically in the next 10 years (in the near future, there could conceivably be individuals who are orders of magnitude richer than anyone is today)."

I don't... think that's particularly plausible? Elon Musk is currently "worth" about 200 billion dollars (standard caveats about why that's an overestimate aside), so even a single order of magnitude would imply something closer to two trillion dollars, and two orders would imply twenty trillion. You don't cite any source to defend the likelihood of this claim, so I am not sure how to disagree with it.

3. 

"It seems possible that futarchy might make us more efficiently pursue whatever metrics most people today genuinely think are good, but which ignore many or almost all moral patients that currently exist or that will exist in future."

Lots of things are possible: is there any reason to expect this problem to be worse under futarchy than under democracy?

2020 AI Alignment Literature Review and Charity Comparison

Thank you for this! I'm not sure if this was intentional or not, but it seems worth noting that my work with Robin was funded under a grant from OpenPhil, including my salary as a research assistant and some course buyouts for him.

How effective are financial incentives for reaching D&I goals? Should EA orgs emulate this practice?

Using the links you provide, 50% of cash incentives comes from Strategic Performance Goals in three categories (product & strategy, customers & stakeholders, culture & organizational leadership), and within one of those categories diversity and inclusion (D&I) is one of three parts listed, so at a rough guess about 5% of annual cash incentives is tied to D&I. Cash incentives at Microsoft for the executives analyzed are about a fifth of total compensation, so about 1% of executive compensation is tied to D&I.

I think that having a headline of "base 50% of executive compensation", when the actual fraction seems to be 1%, is actively deceptive, and I think that this question should be rewritten.

I would hope that, if EA orgs gave bonuses to leadership for success in diversity and inclusion, it would be more than 1% of total pay.

At Intel, about 7% of total compensation (50% of the cash incentive is "operational performance", and the cash incentive is about a seventh of total pay for the CEO) is adjusted by D&I, but how much adjustment there is is not made clear. Given that the operational performance goals include many other targets, I would be surprised if Intel were substantially different from Microsoft here.
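
To make the arithmetic behind both estimates explicit, here is a rough back-of-envelope sketch; the pay-mix fractions and one-in-three splits are the approximations described above, not exact figures from the proxy statements.

```python
# Rough back-of-envelope estimate of how much executive pay is actually tied to D&I.
# The fractions below are the approximations cited above, not exact proxy-statement figures.

# Microsoft
cash_share_of_pay = 1 / 5        # cash incentive is roughly a fifth of total compensation
strategic_goals_share = 0.5      # 50% of the cash incentive comes from Strategic Performance Goals
category_share = 1 / 3           # D&I sits in one of the three goal categories
dandi_share_of_category = 1 / 3  # and is one of three parts listed within that category

msft_dandi_of_cash = strategic_goals_share * category_share * dandi_share_of_category
msft_dandi_of_pay = msft_dandi_of_cash * cash_share_of_pay
print(f"Microsoft: ~{msft_dandi_of_cash:.1%} of cash incentive, ~{msft_dandi_of_pay:.1%} of total pay")
# -> Microsoft: ~5.6% of cash incentive, ~1.1% of total pay

# Intel (upper bound: the whole operational-performance bucket, which D&I merely adjusts)
intel_cash_share_of_pay = 1 / 7  # cash incentive is roughly a seventh of the CEO's total pay
operational_share = 0.5          # 50% of the cash incentive is operational performance
intel_adjusted_of_pay = operational_share * intel_cash_share_of_pay
print(f"Intel: at most ~{intel_adjusted_of_pay:.1%} of total pay is adjusted by D&I")
# -> Intel: at most ~7.1% of total pay is adjusted by D&I
```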
