
I would like to gain mastery in the domain of alignment research. Deliberate practice is a powerful sledgehammer for gaining mastery. But unlike in domains like chess or piano, it's not clear to me how to apply this sledgehammer here. The feedback loops are extremely long, and the "correct action" is almost never known ahead of time, or even right after taking the action.

What are some concrete ways I could apply deliberate practice to alignment research?

One way would be to apply it to skills that are sub-components of research, rather than trying to rapidly practice research end-to-end.

The sub-skill I've thought of that best fits deliberate practice is solving math and physics problems, à la Thinking Physics or other textbook exercises. Being better at this would certainly make me a better researcher, but it might not be worth the opportunity cost; if I ask myself, "Is this cutting the enemy with every strike?" I get back a no.

Another option is to deliberately practice writing, which is a big part of my research. I could try to be more like John and write a post every week, to get lots of quick feedback. But is that fast enough for deliberate practice? I get the sense that the feedback cycle has to be almost real-time. Maybe writing tweet-length explanations is the minimal version of this?

I'd appreciate any other concrete ideas! (Note that my research style is much more mathy/agent-foundations flavored, so programming is not really a sub-skill of my research.)

Comments

Not directly relevant to the OP, but another post covering research taste: An Opinionated Guide to ML Research. Also see Rohin Shah's advice about PhD programs (search "Q. What skills will I learn from a PhD?") for some commentary.

"a la Thinking Physics or other textbook exercises."

I think this is very much the wrong move, for the reason you mention: it doesn't even have a clear intended path to cutting the enemy. For projects where there's an imaginable, highly detailed endstate you're trying to reach (as opposed to chess, where there are a million different checkmate patterns with few shared features to guide your immediate next moves), I would advise starting by mapping out the endstate. From there, you can backchain until you see a node you could plausibly forward-chain to, aka "opportunistic search".
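A minimal sketch of that backchaining loop, assuming you can enumerate predecessors of a state and test forward-reachability (`predecessors` and `reachable` here are hypothetical stand-ins for your actual domain knowledge):

```python
from collections import deque

def opportunistic_search(start, goal, predecessors, reachable):
    """Backchain from `goal` until hitting a node we can forward-chain to.

    predecessors(node) yields states one step before `node`;
    reachable(node) says whether known forward moves from `start` reach it.
    """
    parent = {goal: None}   # maps each node to its successor toward the goal
    frontier = deque([goal])
    while frontier:
        node = frontier.popleft()
        if node == start or reachable(node):
            plan = []        # reconstruct path: attainable node -> ... -> goal
            while node is not None:
                plan.append(node)
                node = parent[node]
            return plan
        for prev in predecessors(node):
            if prev not in parent:
                parent[prev] = node
                frontier.append(prev)
    return None  # backward search never met a forward-reachable node
```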

I think the greatest bottleneck to producing more competent alignment researchers is basically self-confidence. People are too afraid of embarrassment, so they don't trust their own judgment, so they won't try to follow it, so they never develop better judgment by making embarrassing mistakes and correcting themselves. It's socially frowned upon to innocently take your own impressions seriously when smarter people than you exist, and that reflects an oppressive "thou shalt fall in line" group mentality that I find really unkind.

It's like a GAN that wants to produce art but doesn't trust its own discriminator, so the discriminator atrophies and the only feedback left for the generator is the extremely slow loop of outside, low-bandwidth opinion. Or like the pianist who's forgotten how to listen, and looks to their parent after every key press to infer whether it was beautifwl or not.

I think researchers who intend to produce something should forget about probability. You're not optimising for accurate forecasts; you're optimising for building new models that can be tested and iteratively modified or abandoned until you have something that seems robust to all the evidence it catches. It's the difference between searching for sources of Bayesian evidence related to specific models you already know about, versus searching for the information that maximises the expected Kullback-Leibler divergence between your prior and posterior intuitions, in order to come up with new models no one's thought of before.
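As a rough formalisation of that second mode (my gloss, borrowing the standard expected-information-gain objective from Bayesian experimental design): rather than updating $p(M \mid y)$ for a fixed model $M$, you pick the observation or experiment $e$ that maximises

$$\mathrm{EIG}(e) \;=\; \mathbb{E}_{y \sim p(y \mid e)}\!\left[ D_{\mathrm{KL}}\!\big(\, p(\theta \mid y, e) \,\big\|\, p(\theta)\,\big) \right],$$

where $\theta$ ranges over all your intuitions and hypotheses, not just the models you've already named.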

That means you have to just start trying to make your own models at some point, and you have to learn to trust your impressions so you're actively motivated to build them. It also means you'll probably suffer in terms of forecasting ability for a while, until you get good enough. But if you're always greedily following the estimated-truth-gradient at every step, you have no momentum to escape local optima.
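To spell out the analogy in optimisation terms (my illustration, not the commenter's): gradient descent with momentum updates

$$v_{t+1} = \mu\, v_t - \eta\, \nabla f(x_t), \qquad x_{t+1} = x_t + v_{t+1},$$

and setting the momentum coefficient $\mu = 0$ recovers the purely greedy step, which is exactly the variant that gets trapped in the nearest local optimum.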

I realise you were asking for concrete advice, but I usually don't think people are bottlenecked by lack of ideas for concrete options. I think the larger problem is upstream, in their generator, and resolving it lets them learn to generate and evaluate-but-not-defer-to ideas on their own.[1]

  1. ^

    Of course, this whole ramble lacks nuance and disclaimers, and doesn't apply to everything it looks like I'm saying it applies to. But I'm not expecting you to defer to me; I'm revealing patterns that I hope people will steal and apply for themselves. Whether the lack of nuance makes me literally wrong is irrelevant. I'm not optimising for being judged "right" or "wrong"--this isn't a forecasting contest--I'm just trying to be helpfwl by revealing tools that may be used.
