Adam Jermyn

33 karmaJoined


I'd also add that "Research Engineer" is an extremely important profile for a lot of scaling-lab work, and it feels more important to me than this post suggests (at least judging by how interested Anthropic Interpretability is in these roles). E.g. our recent paper Scaling Monosemanticity would have been completely, 100% impossible without research engineers tackling seriously hard engineering challenges. [See the Author Contributions "Infrastructure, Tooling, and Core Algorithmic Work" section for a flavor of the sorts of work involved.]

I don't think it's quite right that scaling labs universally need more "Iterators" than other archetypes. For instance, per Anthropic's Interpretability April update (https://transformer-circuits.pub/2024/april-update/index.html), management is currently the most important role our team is hiring for:

  • Managers - We see this as the most important role that we’re hiring for right now.
    • Our growth is likely to be bottlenecked on management capacity, and finding the right fit for the team could make a huge difference to our long-term success.
    • Filling this role has been challenging because we’re looking for someone with experience in a research or engineering environment, who is excited about and experienced with people and project management, and who is enthusiastic about our research agenda and mission.

Seconding this: I think food at EAG is really high-impact:

  • It keeps people together in one place (rather than dispersing to restaurants).
  • It gives a natural informal social time.
  • It stops people from wasting time/energy finding food.

That last one is particularly valuable if, like me, you find one-on-one meetings both valuable and draining, and end up with ~no energy by the end of the day.

In terms of tradeoffs, I'd much prefer (full catering + worse venue) over (reduced catering + nicer venue), and I think that applies even if the venue is like, a tent in a field 20 miles from Heathrow. 

The problem is that spending is dominated by a narrow focus on technical solutions, including carbon capture, improving currently existing energy technologies and infrastructure, and the clean energy transition.

Just to clarify: As far as I can tell, the money is mostly not spent on developing technologies, and e.g. carbon capture development is a tiny fraction of spend. Rather, it's mostly going towards deploying technologies we already have (which is rather different). My guess is that climate change is more neglected than the $640 billion figure suggests if you focus on "what technologies would be most impactful to develop?".

the widely marketed message that we can ‘technology’ our way out of the climate crisis is misleading, and highly improbable.

It seems like the options are "develop and deploy the technology" or "convince everyone on Earth to take a massive lifestyle hit", and the latter just seems implausible to me (as well as undesirable)?

annual deaths related to fuel combustion alone (i.e., outside air pollution) are estimated to be 8.7 million

This is not the same thing as climate change, and would be happening even if e.g. we had perfect carbon capture. So it seems wrong to lump these together?

One approach I have not seen addressed in EA fund literature and offers the potential to be a game-changer, is degrowth. The principles of degrowth critique the global capitalist system which pursues growth at all costs, resulting in human exploitation and ecological destruction.

I can only speak for myself, but degrowth seems (1) politically untenable (as noted above), and (2) requires that people accept much lower living standards. That seems bad to me? I generally want people to have higher living standards, not lower.

Nice post! One question: what are some things that could happen that you would view as requiring a move to low-trust?

Just to mention that with sufficiently good simulation technology, experimental data may not be necessary, and if experimental data sets your timescale then things could happen a lot faster than you're estimating. We don't have that tech now, but in at least some domains it has the shape of a problem that could be solved by throwing lots of cognitive resources at it.

I'm thinking specifically about simulating systems of large (but still microscopic) numbers of atoms, where we know the relevant physical laws and mostly struggle to approximate them in realistic ways.

My intuition here is rough, but I think the core factors driving it are:

  1. Current R&D structures really don't incentivize building good simulation tools outside of narrow domains.
    1. In academia, physical simulation tools are often only valued for the novel results they produce, and it can be hard to fund simulation-development efforts (particularly those involving multiple people, which is often what's needed).
    2. In industry, there's no reason to develop a tool with broader applicability than the domain you need it for, so you get more narrowly-tailored tooling than you'd need to do totally-transformative R&D.
  2. It's often not that hard to devise approximations that work in one domain or another, but it is very tedious to "stitch" the different approximations together into something that works over a broader domain. This further incentivizes people away from building broadly useful simulation tools.
  3. There has been a fair amount of success using neural networks to directly approximate physical systems (by training them on expensive brute-forced simulations, or by framing the simulation as an optimization problem and using the neural network as the ansatz), e.g. for the quantum many-body problem, turbulence closures, and cosmology simulations.
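As a toy illustration of the first approach in point 3 (everything here is a hypothetical stand-in, not taken from any of the work mentioned above): run an "expensive" brute-force solver to generate training data, fit a small neural network to it, and then use the network as a cheap surrogate for the solver.

```python
# Hypothetical toy sketch: a neural-network surrogate for an "expensive"
# physical simulation. The "simulator" here is just the peak amplitude of
# a damped oscillator as a function of its damping coefficient.
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(gamma):
    # Stand-in for a brute-force solver.
    t = np.linspace(0.0, 10.0, 1000)
    return np.max(np.exp(-gamma * t) * np.sin(2 * np.pi * t))

# Generate training data by calling the "simulator".
X = rng.uniform(0.1, 2.0, size=(200, 1))
y = np.array([expensive_simulation(g) for g in X[:, 0]])[:, None]

# One-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # surrogate prediction
    err = pred - y
    loss = np.mean(err ** 2)
    # Backpropagation by hand.
    dpred = 2 * err / len(X)
    dW2 = h.T @ dpred; db2 = dpred.sum(0)
    dh = (dpred @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The trained surrogate is now far cheaper to evaluate than the simulator.
surrogate_at_1 = np.tanh(np.array([[1.0]]) @ W1 + b1) @ W2 + b2
```

This is nothing like the scale of the real applications, of course; the point is just the shape of the workflow (expensive solver → dataset → cheap learned surrogate), which is why better simulation tooling could compound with abundant cognitive resources.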