Jan_Kulveit

Comments

How to succeed as an early-stage researcher: the “lean startup” approach

I would guess the 'typical young researcher fallacy' also applies to Hinton - my impression is he is basically advising his past self, similarly to Toby. As a consequence, the advice is likely sensible for people much like past Hinton, but not good general advice for everyone.

In ~3 years most people are able to re-train their intuitions a lot (which is part of the point!). This seems particularly dangerous in cases where expertise in the thing you are actually interested in does not exist, but expertise in something somewhat close does - instead of following your curiosity, you 'substitute the question' with a different question, for which a PhD program exists, or senior researchers exist, or established directions exist. If your initial taste/questions were better than the experts', you run the risk of overwriting your taste with something less interesting/impactful.

Anecdotal illustrative story:

Arguably, a large part of what are now the foundations of quantum information theory / quantum computing could have been discovered much sooner, together with taking more sensible interpretations of quantum mechanics than the Copenhagen interpretation seriously. My guess at what was happening during multiple decades (!) is that many early-career researchers were curious about what's going on, dissatisfied with the answers, interested in thinking about the topic more... but they were given advice along the lines of 'this is not a good topic for PhDs or even undergrads; don't trust your intuition; problems here are a distasteful mix of physics and philosophy; shut up and calculate, that's how real progress happens'... and they followed it; they acquired a taste telling them that solving difficult scattering-amplitude integrals using advanced calculus techniques is tasty, and that thinking about deep things formulated using the tools of high-school algebra is for fools. (Also, if you had run a survey in year 4 of their PhDs, a large fraction of quantum physicists would probably have endorsed the learned update from thinking about young, foolish questions about QM interpretations to the serious and publishable thinking they had learned.)




How to succeed as an early-stage researcher: the “lean startup” approach

Let's start with the third caveat: maybe the real crux is what we think are the best outputs; what I consider some of the best outputs by young researchers in AI alignment is easier to point at via examples - e.g. the mesa-optimizers paper, or multiple LW posts by John Wentworth. As far as I can tell, none of these seems to follow the proposed 'formula for successful early-career research'.

My impression is that PhD students in AI at Berkeley need to optimise, and actually do optimise, a lot for success in an established field (ML/AI), so the advice should be more applicable to them. I would even say part of what makes a field "established" actually is something like "a somewhat clear direction in the space of the unknown in which people are trying to push the boundary" and "a shared taste in what is good, according to that direction". (The general direction, or at least the taste, seems to be ~ self-perpetuating once the field is "established", sometimes beyond the point of usefulness.)

In contrast to your experience with AI students in Berkeley, in my experience about ~20% of ESPR students have generally good ideas even while in high school or their first year of college, and I would often prefer these people to think about ways in which their teachers, professors, or seniors are possibly confused, as opposed to learning that their ideas are now generally bad and that they should seek someone senior to tell them what to work on. (OK - the actual advice would be more complex and nuanced, something like "update on the idea taste of people who are better or comparable and have spent more time thinking about something, but be sceptical and picky about your selection of people".) (ESPR is also very selective, although differently.)

With hypothetical surveys, the conclusion (young researchers should mostly defer to seniors in idea taste) does not seem to follow from estimates like "over 80% of them would think their initial ideas were significantly worse than their later ideas". The relevant comparison is something like "over 80% of them would think they should have spent marginally more time thinking about the ideas of more senior AI people at Berkeley, more time on problems they were given by senior people, and less time thinking about their own ideas and working on projects based on them". Would you guess the answer would still be 80%?



Announcing the launch of EA Impact CoLabs (beta) + request for projects, volunteers and feedback

It's good to see a new enthusiastic team working on this! My impression, based on working on the problem ~2 years ago, is that this has a good chance to provide value in global health and poverty, animal suffering, or parts of the meta cause areas; in the case of x-risk-focused projects, something like a 'project platform' seems almost purely bottlenecked by vetting. In the current proposal this seems to mostly depend on the "Evaluation Commission"; as a result, the most important part for x-risk projects seems to be the judgement of the members of this commission and/or its ability to seek external vetting.

How to succeed as an early-stage researcher: the “lean startup” approach

In my view this text should come with multiple caveats.

- Beware the 'typical young researcher fallacy'. Young researchers are very diverse, and while some of them will benefit from the advice, some of them will not. I do not believe there is a general 'formula for successful early-career research'. Different people have different styles of doing research, and even different metrics for what 'successful research' means. While certainly many people would benefit from the advice 'your ideas are bad', some young researchers actually have great ideas, should work on them, and should avoid generally updating towards the research taste of most of the "senior researchers".

- Beware 'generalisation out of training distribution' problems. Compared to some other fields, AI governance as studied by Allan Dafoe is relatively well decomposed into a hierarchy of problems, and you can meaningfully scale it by adding junior people and telling them what to do (work on sub-problems senior people consider interesting). This seems more typical of research fields with established paradigms than of fields which are pre-paradigmatic, or fields in need of a change of paradigm.

- A large part of the described formula for success seems to be optimised for getting the attention of senior researchers, writing something well received, or similar. This is highly practical, and likely good for many people in fields like AI governance; at the same time, it seems the best research outputs by early-career researchers in e.g. AI safety do not follow this generative pattern, and seem to be motivated more by curiosity, reasoning from first principles, and ignoring authority opinions.

EA Group Organizer Career Paths Outside of EA

Contrary to what seems to be an implicit premise of this post, my impression is:

- most EA group organizers should have this as a side-project, and should not think about "community building" as a "career path" in which they could possibly continue at a company like Salesforce
- the label "community building" is an unfortunate one for what most EA group organizing work should consist of
- most of the tasks in "EA community building" involve skills which are pretty universal and generally usable in most other fields, like "strategizing", "understanding people", "networking" or "running events"
- for example: in my view, what an EA group organizer on a research career path can get from organizing an EA group as a side-project are skills like "organizing events", "explaining complex ideas to people" or even "thinking clearly in groups about important topics". Often the benefits of improving/practicing such skills for a research career are similar to or larger than e.g. learning a new programming language

There are exceptions to this, such as people who want to work on large groups full-time, build national groups, or similar. In my view these projects are often roughly of the scope of founding or leading a startup or an NGO, and should be attempted by people who, in general, have a lot of optionality in what to do, both before working on an EA group and eventually after it.

Vint Cerf actually seems more of a counterexample to "community building and evangelism" as a career objective: anyone who wants to follow this path should note that he first co-designed TCP, the protocol the internet is still running on, later co-founded one of the entities governing the internet, and worked for Google on community building only after all these experiences.

Another reason I'm sceptical of the value of this argument is my guess that the people who would be convinced by it ("previously I was hesitant about organizing an EA group because the career path seemed too narrow and tied to EA; now I see career paths in the for-profit world") are mostly people who should not lead or start EA groups. In most cases EA group organizing involves a significant amount of talking to people about careers, and whoever has such a limited understanding of careers as to benefit from this advice seems likely to have a non-trivial chance of giving people harmful career advice.

How much does performance differ between people?

1.

For a different take on a very similar topic, check this discussion between me and Ben Pace (my reasoning was based on the same Sinatra paper).


For practical purposes, in the case of scientists, one of my conclusions was:

Translating into the language of digging for gold: the prospectors differ in their speed and their ability to extract gold from the deposits (Q). The gold in the deposits actually is randomly distributed. To extract exceptional value, you have to both have high Q and be very lucky. What is encouraging for selecting talent is that Q seems relatively stable over a career and can be usefully estimated after ~20 publications. I would guess you can predict even with less data, but the correct "formula" would be trying to disentangle the interestingness of the problems the person is working on from the interestingness of the results.
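To make this concrete, here is a minimal simulation sketch of the Q-model as I understand it from the Sinatra et al. paper; the lognormal parameters and sample sizes are made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Q-model: the impact of scientist i's paper j is c_ij = Q_i * p_ij,
# where Q_i is a stable per-scientist ability factor and p_ij is
# paper-specific "luck", drawn independently for every paper.
# All distribution parameters here are made up for illustration.
n_scientists, n_papers = 1000, 20
Q = rng.lognormal(mean=0.0, sigma=0.5, size=n_scientists)
p = rng.lognormal(mean=0.0, sigma=1.0, size=(n_scientists, n_papers))
impact = Q[:, None] * p

# The best single paper mixes ability with luck...
best_paper = impact.max(axis=1)
# ...while the geometric mean over ~20 papers mostly recovers Q.
Q_hat = np.exp(np.log(impact).mean(axis=1))

print(np.corrcoef(Q, best_paper)[0, 1])  # noticeably noisier
print(np.corrcoef(Q, Q_hat)[0, 1])       # close to 1
```

Even in this toy version, the most exceptional single results come from high-Q prospectors who also drew unusually lucky deposits, while Q itself is estimable from a modest publication record.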


2.

For practical purposes, my impression is that some EA recruitment efforts could more often be at risk of over-filtering on ex-ante proxies and being bitten by the tails coming apart, rather than at risk of not being selective enough.

Also, often the practical optimization question is how much effort you should spend on how extreme a tail of the ex-ante distribution.

3. 

A meta-observation: someone should really recommend that more EAs join the complex systems / complex networks community.

Most of the findings from this research project seem to be based on research originating in the complex networks community, including research directions such as the "science of success", and there is more which can be readily used, "translated", or distilled.

Some thoughts on EA outreach to high schoolers

The first EuroSPARC was in 2016. Given it targets 16-19 year olds, my prior would be that participants should still mostly be studying, and not working full-time on EA, or doing so only exceptionally.

Long feedback loops are certainly a disadvantage.

Also, in the meantime ESPR has undergone various changes, and is actually not optimising for something like "conversion rate to an EA attractor state".

The case of the missing cause prioritisation research

Quick reaction:

I. I did spend a considerable amount of time thinking about prioritisation (broadly understood).

My experience so far is:

  • some of the foundations / low-hanging sensible fruits have been discovered
  • when moving beyond that, I often run into questions which are some sort of "crucial consideration" for prioritisation research, but the research/understanding is often just not there
  • often, work on these "gaps" seems more interesting and tractable than trying to make some sort of "let's ignore this gap and move on" move

A few examples, where in some cases I got as far as writing something:

  • Nonlinear perception of happiness - if you try to add utility across time-person-moments, it's plausible you should log-transform it (or non-linearly transform it) first. Sums and exponentiation do not commute, so this is plausibly a crucial consideration for the part of utilitarian calculations trying to be based on some sort of empirical observation like "pain is bad" (a minimal worked example follows this list).
  • Multi-agent minds and predictive processing - while this is framed as being about AI alignment, the super-short version of why it is relevant for prioritisation is: theories of human values depend on what mathematical structures you use to represent those values. If your prioritisation depends on your values, this is possibly important.
  • Another example could be the style of thought explained in Eliezer's "Inadequate Equilibria". While you may not count it as "prioritisation research", I'm happy to argue the content is crucially important for prioritisation work on institutional change or policy work. I spent some time thinking about "how to overcome inadequate equilibria", which leads to topics from game theory, complex systems, etc.
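As referenced in the first example above, here is the "sums and exponentiation do not commute" point as a minimal worked sketch; the utility numbers are made up purely for illustration:

```python
import math

# Two policies, raw utilities of three person-moments (made-up numbers).
# A spreads utility evenly; B concentrates it in one person-moment.
A = [10, 10, 10]
B = [1, 1, 100]

# Summing raw utilities ranks B above A...
print(sum(A), sum(B))  # 30 vs 102

# ...but log-transforming each person-moment first (treating the raw
# numbers as intensities on a multiplicative scale) flips the ranking:
print(sum(math.log(u) for u in A))  # ~6.9
print(sum(math.log(u) for u in B))  # ~4.6
```

Which aggregation is right depends on what the underlying measurements mean, which is exactly why this is a crucial consideration rather than a technicality.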

II. My guess is there are more people who work in a similar mode: trying to basically 'build as good a world model as you can', diving into the problems you run into, and in the end prioritising informally based on such a model. Typically I would expect such a model to be partly implicit / some sort of multi-model ensemble / ...

While this may not create visible outcomes labeled as prioritisation, I think it's an important part of what's happening now.

'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

I posted a short version of this, but I think people found it unhelpful, so I'm trying to post a somewhat longer version.

  • I have seen some number of papers and talks broadly in the genre of "academic economics"
  • My intuition based on that is that they often consist of projecting complex reality into a space of a single-digit number of real dimensions plus a bunch of differential equations
  • The culture of the field often signals that solving the equations is profound/important, while how you do the projection "world -> 10d" is less interesting
  • In my view, for practical decision-making and world-modelling it's usually the opposite: the really hard and potentially profound part is the projection. Solving the maths is often in some sense easy, at least in comparison to the best maths humans are doing
  • While I overall think the enterprise is worth pursuing, people should in my view have a relatively strong prior that for any conclusion which depends on the "world -> reals" projection there could be many alternative projections leading to different conclusions; while I like the effort in this post to dig into how stable the conclusions are, in my view people who do not have cautious intuitions about the space of "academic economics models" could still easily over-update or trust the robustness too much
  • If people are not sure, an easy test could be something like "try to modify the projection in any way such that the conclusions do not hold". This will usually not lead to an interesting or strong argument - it's just trying some semi-random moves in the model space - but it can lead to better intuition
  • I tried a few such tests in a cheap and lazy way (e.g. what would this model tell me about running at night on a forested slope?) and my intuition was:
  • I agree with the caution that the work in the paper represents very weak evidence for the conclusions that follow only from the detailed assumptions of the model in the present post. (At the same time it can be an excellent academic economics paper)
  • I'm more worried about other writing about the results, such as the linked post on Phil's blog, which in my reading signals more of "these results are robust" than is safe
  • Harder and more valuable work is to point to something like "some of the most significant ways in which the projection fails" (aspects of reality you ignored etc.). In this case this was done by Carl Shulman, and it's worth discussing further
  • In practice I do have some worries about some meme like 'ah, we don't know, but given we don't know, speeding up progress is likely good (as proved in this good paper)' being created in the EA memetic ecosystem. (To be clear, I don't think the meme would reflect what Leopold or Ben believe)