Habryka

8226 · Joined Sep 2014

Bio

Project lead of LessWrong 2.0, often helping the EA Forum with various technical issues. If something is broken on the site, there's a good chance it's my fault (Sorry!).

Comments (632)

I think the site needs a dark mode. More and more people are favoring it. 

The site already has one! Or more precisely LessWrong has one, and it probably wouldn't be too hard to adapt it to the EA Forum (which shares a codebase).
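
To gesture at what "adapting it" could involve, here is a rough sketch (not the actual LessWrong/ForumMagnum theming code; all names and color values below are made up): if components read their colors from a small set of shared theme tokens, a dark mode is mostly a matter of supplying a second set of token values.

```ts
type ThemeName = "light" | "dark";

interface ThemeTokens {
  pageBackground: string;  // the grey (or near-black) behind everything
  panelBackground: string; // the white (or dark grey) boxes content sits on
  text: string;
}

// Illustrative values only.
const themes: Record<ThemeName, ThemeTokens> = {
  light: { pageBackground: "#f6f6f6", panelBackground: "#ffffff", text: "#111111" },
  dark:  { pageBackground: "#121212", panelBackground: "#1e1e1e", text: "#e8e8e8" },
};

// Components reference the CSS variables instead of hard-coded colors,
// so switching themes is a single call rather than a restyle of every page.
function applyTheme(name: ThemeName): void {
  const t = themes[name];
  const root = document.documentElement.style;
  root.setProperty("--page-background", t.pageBackground);
  root.setProperty("--panel-background", t.panelBackground);
  root.setProperty("--text-color", t.text);
}

applyTheme("dark");
```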

The font I used could be one size larger; I made an alternate screenshot to compare. Yet research suggests the current font size, not the one from my script, is ideal. I still favor higher density, as I can analyze the content faster.

I am generally skeptical of research in this space, but yeah, the current font size is what seems to work pretty well in the user tests I've done. I do also think it sometimes makes sense to have more density and smaller font sizes (and like, comment text is already almost that small).

I can't get behind the gray background though. I mean, how many sites do that? I find it harder to read.

I mean, how about Reddit? 

Or how about YouTube (the background behind the videos)?

Or how about Facebook?

The pattern of "grey background with white boxes in front, occasional header or nav element on the grey background" is, as far as I can tell, the standard pattern for reducing eye fatigue while also ensuring high text contrast. I actually can't think of a content-heavy site that doesn't do this.
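
To make the pattern concrete, here is a minimal sketch (with made-up color values and a hypothetical component name, not the EA Forum's actual styles or components) of a grey page rendering its content in white panels:

```tsx
import React from "react";

// Soft grey behind everything: less glare than a pure-white page.
const pageStyle: React.CSSProperties = {
  backgroundColor: "#f0f0f0",
  minHeight: "100vh",
  padding: "24px 0",
};

// White panels keep body-text contrast high where it matters.
const panelStyle: React.CSSProperties = {
  backgroundColor: "#ffffff",
  color: "#111111",
  maxWidth: "720px",
  margin: "0 auto 16px",
  padding: "16px 24px",
  borderRadius: "4px",
};

// Hypothetical component, purely for illustration.
export const GreyPageWithPanels: React.FC<{ posts: string[] }> = ({ posts }) => (
  <div style={pageStyle}>
    {posts.map((body, i) => (
      <div key={i} style={panelStyle}>
        {body}
      </div>
    ))}
  </div>
);
```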

It does sadly look very broken for me.

It does look better on the all-posts page.

Some thoughts

  • I like the idea of making the text smaller and increasing the density of the post list. Seems good to experiment with.
  • I think getting rid of the grey background really breaks a lot of the recent-discussion section, as well as the overall navigability of the UI (and we've also gotten tons of user feedback that people find pure white as the whole background quite straining on their eyes).
  • I do overall think the font is just too small to read comfortably. I expect most users would zoom in a decent amount in order to actually make it comfortable to skim.
  • I think having line breaks in the post titles is quite bad for skimming, and it also gives undue attention to posts with longer titles, which seems bad (see the sketch after this list for one way to keep titles on a single line).
  • While I do find it easier to skim when the post icons are moved to the left of the items, I think that gets the information hierarchy wrong: the type of post (link post, curated, personal blog) is at best a secondary piece of information, and the design you proposed gives it too much prominence.
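
On the line-break point, here is a small hypothetical sketch (not the Forum's real code; the component name and font size are made up) of keeping every title on a single line, so long titles get an ellipsis instead of extra vertical space:

```tsx
import React from "react";

// Long titles are clipped with an ellipsis rather than wrapping,
// so every item in the list takes up the same vertical space.
const titleStyle: React.CSSProperties = {
  whiteSpace: "nowrap",
  overflow: "hidden",
  textOverflow: "ellipsis",
  fontSize: "13px", // smaller font for a denser list
};

export const PostListItem: React.FC<{ title: string }> = ({ title }) => (
  // The title attribute shows the full text on hover even when clipped.
  <div style={titleStyle} title={title}>
    {title}
  </div>
);
```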

I think we should generally have a prior that the social dynamics of large groups of people end up pushing heavily towards conformity, and that those pressures towards conformity can cancel out many orders of magnitude of growth in the number of people who could theoretically explore different directions.

As a concrete case study, I like this Robin Hanson post, "The World Forager Elite":

The world has mostly copied bad US approaches to over-regulating planes as well. We also see regulatory convergence in topics like human cloning; many had speculated that China would defy the consensus elsewhere against it, but that turned out not to be true. Public prediction markets on interesting topics seem to be blocked by regulations almost everywhere, and insider trading laws are most everywhere an obstacle to internal corporate markets.

Back in February we saw a dramatic example of world regulatory coordination. Around the world public health authorities were talking about treating this virus like they had treated all the others in the last few decades. But then world elites talked a lot, and suddenly they all agreed that this virus must be treated differently, such as with lockdowns and masks. Most public health authorities quickly caved, and then most of the world adopted the same policies. Contrarian alternatives like variolation, challenge trials, and cheap fast lower-reliability tests have also been rejected everywhere; small experiments have not even been allowed.

One possible explanation for all this convergence is that regulators are just following what is obviously the best policy. But if you dig into the details you will quickly see that the usual policies are not at all obviously right. Often, they seem obviously wrong. And having all the regulatory bodies suddenly change at once, even when no new strong evidence appeared, seems especially telling.

It seems to me that we instead have a strong world culture of regulators, driven by a stronger world culture of elites. Elites all over the world talk, and then form a consensus, and then authorities everywhere are pressured into following that consensus. Regulators most everywhere are quite reluctant to deviate from what most other regulators are doing; they’ll be blamed far more for failures if they deviate. If elites talk some more, and change their consensus, then authorities must then change their policies. On topic X, the usual experts on X are part of that conversation, but often elites overrule them, or choose contrarians from among them, and insist on something other than what most X experts recommend.

The number of nations, communities, and researchers capable of doing innovative things in response to COVID was vastly greater in 2020 than for any previous pandemic. But what we saw was much less global variance and innovation in pandemic responses. I think there was scientific innovation, and that innovation was likely greater than for previous pandemics, but overall, despite the vastly greater number of nations and people in the international community of 2020, what this produced was mostly more risk-aversion about stepping out of line with elite consensus.

I think by default we should expect similar effects in fields like AI Alignment. Maintaining a field that is open to new ideas and approaches is actively difficult. If you grow the field without trying to preserve the concrete and specific mechanisms that allow innovation to happen, more people will not result in more innovation; it will result in less, even from the people who have previously been part of the same community.

In the case of COVID, the global research community spent a substantial fraction of its effort on actively preventing people from performing experiments like variolation or challenge trials, and we see the same in fields like psychology research, where a substantial fraction of energy is spent on ever-increasing ethical review requirements.

We see the same in the construction industry (a recent strong interest of mine), which, despite its quickly growing size, is performing substantially fewer experiments than it did 40 years ago, and is spending most of its effort actively regulating what other people in the industry can do, limiting the allowable construction materials and approaches to smaller and smaller sets.

By default, I expect fast growth of the AI Alignment community to reduce innovation for the same reasons. I expect a larger community will increase the pressure towards forming an elite consensus, and that consensus will be enforced via various legible and illegible means. Most of the world is really not great at innovation; the default outcome for large groups of people, even when pointed towards a shared goal, is not innovation but conformity, and if we grow recklessly, I think we will default towards that same common outcome.

I think this is still in the framework of thinking that large groups of people having to coordinate leads to stagnation. To change my mind, you'd have to make the case that having a larger number of startups leads to less innovation, which seems like a hard case to make. 

I think de-facto right now people have to coordinate in order to do work on AI Alignment, because most people need structure and mentorship and guidance to do any work, and want to be part of a coherent community. 

Separately, I also think many startup communities are indeed failing to be innovative because of their size and culture. Silicon Valley is a pretty unique phenomenon, and I've observed "startup communities" in Germany that felt to me like they harmed innovation more than they benefited it. The same is true for almost any "startup incubator" that large universities are trying to start up. When I visit them, I feel like the culture there primarily encourages conformity and chasing the same proxy metrics as everyone else.

I think actually creating a startup ecosystem is hard, though I think it's still easier than creating a similar ecosystem for something as ill-defined as AI Alignment. The benefit that startups have is that you can very roughly measure success by money, at least in the long run, and this makes it pretty easy to point many people at the problem (and, like, creates strong incentives for people to point themselves at the problem).

I think we have no similar short pointer for AI Alignment, and most people who start working in the field seem to me to be quite confused about what the actual problem to be solved is. They then often just end up doing AI capabilities research while slapping an "AI Alignment" label on it, and I think scaling that up mostly just harms the world.

There is a Cambrian explosion of research groups, but basically no new agendas as far as I can tell? Of the agendas listed on that post, I think basically all are 5+ years old (some have morphed; ELK, for example, is a different take on scalable oversight than Paul had 5 years ago, but I would classify it as the same agenda).

There is a giant pile of people working on the stuff, though the vast majority of new work can be characterized as "let's just try to solve some near-term alignment problems and hope that it somehow informs our models of long-term alignment problems," plus a large pile of different types of transparency research. I think there are good cases for that work, though I am not very optimistic about it helping with existential risk.

I think more people in a worldwide population generally leads to more innovation, but primarily in domains where there are strong returns to scale and strong incentives for people to make progress. If you want to get people to explore a specific problem, I think more people rarely helps (because the difficulty lies in aiming people at the problem, not in the firepower you have).

I think adding more people also rarely causes more exploration to happen. Large companies are usually much less innovative than small companies. Coordinating large groups of people usually requires conformity, and because of asymmetries in how easy it is to cause harm to a system versus to produce value, it requires widespread conservatism in order to function. I think similar things are happening in EA: the larger EA gets, the more people are concerned about someone "destroying the reputation of the community," and the more people have to push on the brakes to prevent anyone from taking risky action.

I think there exist potential configurations of a research field that can scale substantially better, but I don't think we are currently configured that way, and I expect by default that exploration will go down as scale goes up (in general, the number of promising new research agendas and directions seems to me to have gone down a lot over the last 5 years as EA has grown, and this is a sentiment I've heard mirrored by most people who have been engaged for that long).

I want to push back on the conception of the progress of a research field being well-correlated with "the number of people working in that field". 

I think the heuristic of "a difficult problem is best solved by having a very large number of people working on it" is not a particularly successful heuristic when predicting past successes in science, nor is it particularly successful if you are trying to forecast business success. When a company is trying to solve a difficult technical or scientific problem, they don't usually send 10,000 people to work on it (and doing so would almost never work). They send their very best people to work on it, and spend substantial resources supporting them.

Right now, we don't have traction on AI Alignment, and indeed, many if not most of the people who I think have the best chance of finding traction are instead busy dealing with all the newcomers to the field. When I do interviews with top researchers, they often complain that the quality of their research environment has gotten worse over time as more and more people with less context fill their social environment, and that they have found the community a worse place to make intellectual progress in over time (this is not universally reported, but it's a pretty common thing I've heard).

I don't think the right goal should be to have 10,000 people work on our current confused models of AI Alignment. I think we don't currently know how to have 10,000 people work on AI Alignment, and if we tried, I expect that group of people would end up optimizing for proxy variables that have little to do with research progress, the way cognitive psychology ended up optimizing extremely hard for p-values and, as a field, produces less useful insight than (as far as I can tell) Daniel Kahneman himself did while he was actively publishing.

I think it's good to get more smart people thinking about the problem, but it's easy to find examples of extremely large efforts, with thousands of people working on a problem, that were vastly less effective than a group of 20 people. Indeed, I think that's the default for most difficult problems in the world (I think FTX would be less successful if it had 10,000 employees, as would most startups and, I argue, most research fields).

I don't have anything great, but the best thing I could come up with was definitely "I feel most stuck because I don't know what your cruxes are". 

I started writing a case for why I think AI X-Risk is high, but I really didn't know whether the things I was writing would hit at your biggest uncertainties. My sense is you have probably read most of the same arguments that I have, so our difference in final opinion is probably generated by some other belief that you have and I don't, and I don't really know how to address that preemptively.

I might give it a try anyways, and this doesn't feel like a defeater, but in this space it's the biggest thing that came to mind.

I think it's not obvious in this case what is better, though I mildly prefer doing it publicly. Sending it privately keeps the conversation less tense and has less risk of making people feel embarrassed, but sending it publicly is better for helping newcomers orient to the culture (99% of people never post or comment, so private norm enforcement is a losing battle, especially if you hope that EA Forum norms expand to the in-person realm).

I would prefer it quite a lot if this post didn't make me read multiple paragraphs (plus a title) that feel kind of clickbaity and don't give me any information beyond "this one opportunity that Effective Altruists ignore that's worth billions of dollars". I prefer titles on the EA Forum to be descriptive and distinct, whereas this title could be written about probably hundreds of posts here.

A better title might be "Why aren't EAs spending more effort on influencing individual donations?" or "We should spend more effort on influencing individual donations".
