All of eca's Comments + Replies

Yeah, idk. I was thinking of the quotes where you explicitly mentioned Van der Waals forces. To be clear, my preference would be to not be forced to pick a single force.

eca · 4mo

Hi, computational protein engineer and person who-thinks-biology-can-do-amazing-stuff here.

Just wanted to report that while "Proteins are like Folded Spaghetti Held Together By Static Cling" is obviously incorrect as a matter of fact, I immediately thought it was a pretty good analogy for capturing some critical and often under-appreciated aspects of the functionally important character of proteins. When I read the sentences you've quoted him saying about proteins held together by covalent bonds, I (think) I understood what he was pointing at with this and... (read more)

titotal · 4mo
Hey, thanks for replying! I explained my issues with the wording he used in a different comment. I would rather know more about what you think about the subject.

What do you mean by "protein-like" here? Like, a stitched-together chain of molecules that also folds up, but has much stronger cross-links? Or like a 2D layering system? Or the gears and manipulators that Drexler proposed and wrote up here? Do any of these sound plausible, or easily built off of regular biological systems? Are you aware of any potential alternative designs for biology compared to the DNA/RNA approach?

Sorry if these are too many questions; I'm very interested in this subject but have reached the limit of my expertise.
EliezerYudkowsky · 4mo
I'm not sure what's a truer analogy than static cling for hydrophobicity as a force holding things together which the general audience has any experience with. Macroscopic experience of hydrophobicity is, like, oil collecting on the surface of water, which isn't experienced as a binding force the way that static cling is.

Sorry I'm late to the party- as per the OP's request for short takes with no explanation, mine is that this is probably not worth doing, fwiw.

One impression I could imagine having after reading this post for the first time is something like: "eca would prefer fewer connections to people and doesn't value that output of community building work" or even more scandalously, "eca thinks community builders are wasting their time".

I don't believe that, and would have edited the draft to make that more clear if I had taken a different approach to writing it.

A quick amendment to that vibe.

  1. Community building is mission critical. It's also complicated, and not something I expect to have good opinions abo
... (read more)

Meta note: this was an experiment in jotting something down. I've had a lot of writer's block on forum posts before and thought it would be good to try erring on the side of not worrying about the details.

As I'm rereading what I wrote late last night I'm seeing things I wish I could change. If I have time, I'll try writing these changes as comments rather than editing the post (except for minor errors).

(Curious for ideas/approaches/recommendations for handling this!)

eca · 2y

This seems like a great idea. I actually woke up this morning realizing I forgot it from my list!

One part of my perspective which is possibly worth reemphasizing: IMO, what you choose to work on together does not need to be highly optimized or particularly EA. At least to make initial progress in this direction, it seems plausible that you should be happy with anything that is challenging/without an existing playbook, collaborative, and "real" in the sense of requiring you to act like you would if you were solving a real problem instead of playing a toy game.

So in ... (read more)

Thanks for the kind words! I agree that we didn't have much good stuff for people to do 4 years ago when I started in bio, but I don't feel like my model matches yours regarding why.

But I also want to confirm I've understood what you are looking for before I ramble.

How much would you agree with this description, which fills in what I could imagine from what you said re "why it took so long":

"well I looked at this list of projects, and it didn't seem all that non-obvious to me, and so the default explanation of 'it just took a long time to work out these project... (read more)

Ozzie Gooen · 2y
That sounds like much of it. To be clear, it's not that the list is obvious, but more that it seems fairly obvious that a similar list was possible. It seemed pretty clear to me a few years ago that there must be some reasonable lists of non-info-hazard countermeasures that we could work on, for general-purpose bio safety. I didn't have these particular measures in mind, but figured that roughly similar ones would be viable.

Another part of my view is, "Could we have hired a few people to work full-time coming up with a list about this good, a few years earlier?" I know a few people who were discouraged from working in the field earlier on because there was neither the list, nor the go-ahead to try to make a list.

Meta note: it's super cool to see all this activity! But the volume is making me a bit stressed, and I probably won't be trying to respond to lots, even if I do one sporadically. Does not mean I am ignoring you!

Well I hope it works out for ya! Thanks haha

In case you are looking for content and have interests similar to mine, I like the following for audio:

  • Institute for Advanced Study lectures (random fun science)
  • Yannic Kilcher (ML paper summaries)
  • Wendover Productions / Kurzgesagt (random, probably not as useful, but interesting science and econ fun facts)
  • LiveOverflow (Security)

And I find that searching for random academics' names is more likely to turn up lectures/convos than podcasts.

BrianTan · 2y
Thanks for the recommendations! I don't think I'll listen to these, though Wendover Productions looks cool. I might try listening to some EA Global videos or GPI lectures via audio on YouTube Premium.
eca · 2y

Are you looking for shovel-ready bounties (e.g. write them up and you are good to go) or things which might need development time (e.g. figuring out exactly what to reward, working out the strategy of why the bounty might be good, etc.)?

Shovel-ready bounties are preferred, but to avoid premature exploitation I'd just like to hear as many ideas as possible at this point. Some ideas might require back and forth, but that's ok!

Seeing the ideas coming in is already giving me lots of ideas for ways to potentially scale this.

eca · 2y

FWIW this seems like a reasonable idea to me, and I would be pretty sad if no one at e.g. GiveWell had even considered it.

Answer by eca · Nov 02, 2021
  • Order groceries online! Maybe this is obvious, but I have the impression that not as many people do this as they should. Saves me at least 1 hour (usually closer to 2) for < $20.
  • Pay for a bunch of disk space. I find it generates a lot of overhead to have files in different places. For me, the solution has been a high performance workstation plus remote desktop forwarding to my laptop when I travel so I can always have the same disk and workspace
  • Buy more paid apps/premium upgrades/digital subscriptions. I haven’t done the math on this so might not be as good as
... (read more)
Quinn McHugh (he/him) · 2y
+1 for ordering groceries online - tried this for the first time last week. For me, not having to expend the time & mental energy searching for the things I need was well worth the small financial cost. For those in the US and Canada who still like to get out of the house, Instacart offers grocery pickup.
BrianTan · 2y
50% of why I got YouTube Premium just now is because of your recommendation. Thanks!

Quest: see the inside of an active bunker

N N · 2y
Why, if you don't mind me asking?

Seems like a good idea if it were easy


What a cool project! I listen to the vast majority of my reading these days and am perpetually out of good things to read.

The linked audio is reasonably high quality, and more importantly, it doesn't have some of the formatting artifacts that other TTS programs have. Well done.

Your story for why this is a potentially high impact project is plausible to me, especially given how much you've automated. I have independently been thinking about building something similar, but with a very different story for why it could be worth my time to do it. That means th... (read more)

eca · 3y
(Sorry, when I said your story for impact was "plausible", in my head I was comparing it to my own idea for why this would be good, and I meant that it was plausibly better than my story. I actually buy your pitch as written, seems like a solidly good thing; apologies)

And there are various things one could probably do to make it not illegal but still messed up and the wrong thing to do! Like making it mandatory to check a box saying you waive your copyright for audio on a thing before you post on the forum. I think if, like some of the tech companies, you made this box really little and hard to find, most people would not change their posting behavior very much, and it would now be totally legal (by assumption).

But it would still be a bad thing to do.

eca · 3y

This is a reason to fix the system! My point is that it reduces to "make all the authors happy with how you are doing things"; there is not some spooky extra thing having to do with illegality.

TBC I do not endorse using people's content in a way they aren't happy with, but I would still have that same belief if it wasn't illegal at all to do so.

I use Speechify. Its voices are quite good, but it has the same formatting issues as all the rest (reading junk text), which I think is the real bottleneck here.

eca · 3y

FWIW I think I endorse Kat's reasoning here. I don't think it matters if it is illegal if I'm correct in suspecting that the only people who could bring a copyright claim are the authors, and assuming the authors are happy with the system being used. This is analogous to the way it is illegal, by violating minimum wage laws, to do work for your own company without paying yourself, but the only person who has standing to sue you is AFAIK yourself.

Not a lawyer, not claiming to know the legal details of these cases, but I think this standing thing is real and an appropriate way to handle it.

I've seen this reasoning a lot, where EA organisations assume they won't get sued because the only people whose data they're illegally using are other EAs, and as someone whose data has been misused with this reasoning, I don't love it!

Empirical differential tech development?

Many longtermist questions related to dangers from emerging tech can be reduced to "what interventions would cause technology X to be deployed before / N years earlier than / instead of technology Y".

In biosecurity, my focus area, an example of this would be something like "how can we cause DNA synthesis screening to be deployed before desktop synthesizers are widespread?"

It seems a bit cheap to say that AI safety boils down to causing an aligned AGI to arrive before an unaligned one, but it kind of basically does, and I suspect ... (read more)

eca · 3y

I wonder how these compare with fitting a Beta distribution and using one of its statistics? I’m imagining treating each forecast (assuming they are probabilities) as an observation, and maximizing the Beta likelihood. The resulting Beta is your best guess distribution over the forecasted variable.

It would be nice to have an aggregation method which gave you info about the spread of the aggregated forecast, which would be straightforward here.
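A minimal sketch of what this could look like in Python (using scipy's generic maximum-likelihood fit; the forecast values are made up):

```python
import numpy as np
from scipy import stats

# Hypothetical probability forecasts for the same binary question
forecasts = np.array([0.05, 0.10, 0.12, 0.20, 0.40])

# Maximum-likelihood fit of a Beta distribution to the forecasts;
# floc/fscale pin the support to [0, 1] so only alpha and beta are fitted.
alpha, beta, _, _ = stats.beta.fit(forecasts, floc=0, fscale=1)

# Candidate aggregate forecasts from the fitted distribution
beta_mean = alpha / (alpha + beta)
beta_median = stats.beta.median(alpha, beta)

# The fitted distribution also gives a notion of spread, e.g. a 90% interval
low, high = stats.beta.interval(0.9, alpha, beta)

print(f"alpha={alpha:.2f}, beta={beta:.2f}")
print(f"fitted-beta mean={beta_mean:.3f}, median={beta_median:.3f}, 90% interval=({low:.3f}, {high:.3f})")
```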

It's not clear to me that "fitting a Beta distribution and using one of its statistics" is different from just taking the mean of the probabilities.

I fitted a beta distribution to Metaculus forecasts and looked at:

  • Median forecast
  • Mean forecast
  • Mean log-odds / Geometric mean of odds
  • Fitted beta median
  • Fitted beta mean

Scattering these 5 values against each other I get:

We can see fitted values are closely aligned with the mean and mean-log-odds, but not with the median. (Unsurprising when you consider the ~parametric formula for the mean / median).

The performan... (read more)

Jaime Sevilla · 3y
Hmm, good question. For a quick foray into this we can see what would happen if we use as our estimate the mean of the max likelihood beta distribution implied by the sample of forecasts $p_1, \ldots, p_N$. The log-likelihood to maximize is then

$$\log L(\alpha, \beta) = (\alpha - 1)\sum_i \log p_i + (\beta - 1)\sum_i \log(1 - p_i) - N \log B(\alpha, \beta)$$

The Wikipedia article on the Beta distribution discusses this maximization problem in depth, pointing out that albeit no closed form exists, if $\alpha$ and $\beta$ can be assumed to be not too small the max likelihood estimate can be approximated as

$$\hat{\alpha} \approx \frac{1}{2} + \frac{\hat{G}_X}{2\left(1 - \hat{G}_X - \hat{G}_{1-X}\right)} \qquad \text{and} \qquad \hat{\beta} \approx \frac{1}{2} + \frac{\hat{G}_{1-X}}{2\left(1 - \hat{G}_X - \hat{G}_{1-X}\right)},$$

where $G_X = \prod_i p_i^{1/N}$ and $G_{1-X} = \prod_i (1 - p_i)^{1/N}$. The mean of a beta with these max likelihood parameters is

$$\frac{\hat{\alpha}}{\hat{\alpha} + \hat{\beta}} = \frac{1 - G_{1-X}}{(1 - G_X) + (1 - G_{1-X})}.$$

By comparison, the geometric mean of odds estimate is:

$$p = \frac{\prod_{i=1}^N p_i^{1/N}}{\prod_{i=1}^N p_i^{1/N} + \prod_{i=1}^N (1 - p_i)^{1/N}} = \frac{G_X}{G_X + G_{1-X}}$$

Here are two examples of how the two methods compare aggregating five forecasts. [Comparison plots not shown.] I originally did this to convince myself that the two aggregates were different. And they seem to be! The method seems to be close to the arithmetic mean in this example.

Let's see what happens when we extremize one of the predictions. [Plot not shown.] We have made $p_3$ one hundred times smaller. The geometric mean is suitably affected. The maximum likelihood beta mean stays close to the arithmetic mean, unperturbed.

This makes me a bit less excited about this method, but I would be excited about people poking around with this method and related ones!
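A small numerical sketch of this comparison (Python; the forecasts are made up, and the beta mean uses the closed-form approximation above rather than a full maximum-likelihood fit):

```python
import numpy as np

def beta_ml_mean(p):
    """Mean of the (approximate) max-likelihood Beta fit, via the geometric
    means G_X and G_{1-X} -- the approximation quoted above."""
    g_x = np.exp(np.mean(np.log(p)))
    g_1mx = np.exp(np.mean(np.log(1 - p)))
    return (1 - g_1mx) / ((1 - g_x) + (1 - g_1mx))

def geo_mean_odds(p):
    """Geometric mean of odds, mapped back to a probability."""
    g_x = np.exp(np.mean(np.log(p)))
    g_1mx = np.exp(np.mean(np.log(1 - p)))
    return g_x / (g_x + g_1mx)

p = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # five hypothetical forecasts
print(beta_ml_mean(p), geo_mean_odds(p), p.mean())

p_ext = p.copy()
p_ext[2] /= 100  # make p_3 one hundred times smaller
print(beta_ml_mean(p_ext), geo_mean_odds(p_ext), p_ext.mean())
```

The extremized forecast drags the geometric mean of odds down sharply, while the approximate beta mean stays near the arithmetic mean, matching the observation above.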

I’m vulnerable to occasionally losing hours of my most productive time “spinning my wheels”: working on sub-projects I later realize don’t need to exist.

Elon Musk gives the most lucid naming of this problem in the below clip. He has a 5-step process which nails a lot of best practices I've heard from others and more. It sounds kind of dull and obvious to write down, but somehow I think staring at the steps will actually help. It's also phrased somewhat specifically for building physical stuff, but I think there is a generic version of each step. I'm going to try ... (read more)

eca · 3y

One more unsolicited outreach idea while I’m at it: high school career / guidance counselors in the US.

I’m not sure how idiosyncratic this was to my school, but we had this person whose job it was to give advice to older high school kids about what to do for college and career. My counselor's advice was really bad, and I think a number of my friends would have glommed onto 80k-type stuff if it was handed to them at this time (when people are telling you to figure out your life all of a sudden). This probably hits the 16-year-old demographic pretty well.

Could look like addi... (read more)

Adam Steinberg · 2y
This could lead to quite a bit of cost-effective positive impact on students, especially those who already have an interest in choosing a career that has positive social consequences. Many students, in my experience, would be very happy to consider higher-impact careers if they had a little wisely-presented encouragement at the right juncture. Such materials would not have to be extensive, and they could be tied to online content that goes deeper into the topic or even provides some interaction.

That said, the above OP call for proposals seems highly oriented towards students at elite schools, or elite students at other schools, and specifically is aimed at students heading for a university education. I might suggest that we should be considering how young people likely to enter other professions, be they white- or blue-collar, might benefit from an understanding of these topics (e.g., those listed above: EA, rationality, longtermism, and global catastrophic risk reduction).

I will start a discussion on this in the proper forum... but this much larger group of future consumers/workers/influencers/voters should not be ignored. Charities need staff at many levels, and people in many vocations can incorporate these ideas into their work, giving, volunteering, and political activities. Is it too soon for EA to open up to a broader audience?

Exciting!

This is probably not the best place to post this, but I’ve been learning recently about the success of hacking games in finding and training computer security people (https://youtu.be/6vj96QetfTg for a discussion; also this game I got excited about in high school: https://en.m.wikipedia.org/wiki/Cicada_3301).

I think there might be something to an EA/rationality game. Like something with a save-the-world (but realistic) plot and game mechanics built around useful skills like Fermi estimation. This is a random gut feeling I’ve had for a while ... (read more)

Linch · 3y
I thought Decision Problem: Paperclips introduced a subset of AI risk arguments fairly well in gamified form, but I'm not aware of anybody for whom the game made them interested enough in AGI alignment/risk/safety to work on it. Does anybody else on this forum have data/anecdata?

I appreciate the answers so far!

One thing I realized I'm curious about in asking this is something about how many groups of people/ governing bodies are actually crazy enough to use nuclear weapons even if self-annihilation is assured. This seems like an interesting last check against horrible mutual destruction stuff. The hypothesis to invalidate is: maybe the types of people assembled into the groups we call "governments" are very unlikely to carry an "activate mutual destruction" decision all the way through. To be clear, I don't believe this, and I th... (read more)

Great set of links, appreciate it. Was especially excited to see lukeprog's review and the author's presentation of Atomic Obsession.

I'm inclined toward answers of the form "seems like they would have been used more or some civilizational factor would need to change" (which is how I interpret Jackson's answer on strong global policing). Which is why I'm currently most interested in understanding the Atomic Obsession-style skeptical take.

If anyone is interested, the following are some of the author's claims which seem pertinent, at least as far as I can te... (read more)

Re direct military conflicts between nuclear weapons states: this might not exactly fit the definition of "direct", but I enjoyed skimming the mentions of nuclear weapons in this Wikipedia article on the Yom Kippur War, which saw a standoff between Israel (nuclear) and Egypt (not nuclear, but reportedly delivered warheads by the USSR). There is some mention of Israel "threatening to go nuclear", possibly as a way of forcing the US to intervene with conventional military resources.

Max_Daniel · 3y
Interesting, thank you! I hadn't been aware of this case.

Interesting! For (1) how do you expect the economic superpowers to respond to smaller nations using nuclear weapons in this world? It sounds like because of MAD between the large nations, your model is that they must allow small nuclear conflicts, or alternatively pivot into your scenario 2 of increased global policing, is that correct?

Jackson Wagner · 3y
Yes, that's what I'm thinking. As I'm continuing to develop this thought (sorry for being a bit repetitive in my posts), perhaps the main things that determine where the world might fall between scenarios (1) and (2) are:

  • How hard it is to establish stricter global governance: Is there an easy proliferation bottleneck that can be controlled, like ICBM technology or uranium mines? Can the leading nations get along well enough to cooperate on the shared goals of global governance? When everyone has nukes, how easy is it to boss around small countries? If the leading nations don't have the state capacity to pull off global governance, then we'll be stuck in a multipolar anything-goes world no matter what we think is preferable.
  • The "contagiousness" of nuclear conflict helps determine the value of strict global governance: if conflicts are extremely contagious (such that something like the real-world Syrian Civil War ends up with the superpowers at DEFCON 1), then small-scale wars are still extremely dangerous, and global policing is very desirable. If nuclear conflict isn't contagious at all and it's easy to stay out of a dispute, then it would be a lot more acceptable for the leading nations to just let nuclear wars happen, in the same way that the modern world often lets civil wars happen without intervening too much. Just play defense by being really paranoid about your ports/borders, and threatening to first-strike anyone who develops suspicious new long-range capabilities.

I really don't know much about the question of contagiousness. Is there something special about nuclear weapons and the "nuclear taboo" that affects contagiousness? (Maybe nations feel like they have to "use or lose" their ICBMs before they are destroyed by opponents.) Or does all war seem contagious because it naturally erupts at the center of complex knots of geopolitical tensions and alliances? (Like the rapid domino-like declarations of war that set off WW1, or the agglomerati

Thanks for this post Luisa! Really nice resource and I wish I caught it earlier. A couple methodology questions:

  1. Why do you choose an arithmetic mean for aggregating these estimates? It seems like there is an argument to be made that in this case we care about order-of-magnitude correctness, which would imply taking the average of the log probabilities. This is equivalent to the geometric mean (I believe) and is recommended for Fermi estimates, e.g. [here](https://www.lesswrong.com/posts/PsEppdvgRisz5xAHG/fermi-estimates).

  2. Do you have a sense for how

... (read more)
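A toy illustration of the difference in question 1, with made-up numbers:

```python
import numpy as np

# Two hypothetical probability estimates differing by two orders of magnitude
estimates = np.array([1e-3, 1e-1])

arith = estimates.mean()                   # 0.0505 -- dominated by the larger estimate
geom = np.exp(np.mean(np.log(estimates)))  # 0.010  -- the order-of-magnitude midpoint

print(arith, geom)
```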
Pablo · 3y
FYI, this post by Jaime has an extended discussion of this issue.

"Why do you choose an arithmetic mean for aggregating these estimates?"

This is a good point.

I'd add that, as a general rule, when aggregating binary predictions one should default to the average log odds, perhaps with an extremization factor as described in Satopää et al. (2014).

The reasons are a) empirically, it seems to work better, b) the way Bayes' rule works seems to suggest very strongly that log odds are the natural unit of evidence, c) apparently there are some complex theoretical reasons ("external Bayesianism") why this is better (the ... (read more)
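A minimal sketch of that rule (Python; the extremization factor here is purely illustrative, not the value recommended in the paper):

```python
import numpy as np

def pool_log_odds(probs, extremize=1.0):
    """Aggregate binary-event probabilities by averaging their log odds,
    optionally scaling by an extremization factor before mapping back."""
    probs = np.asarray(probs, dtype=float)
    log_odds = np.log(probs / (1 - probs))
    pooled = extremize * log_odds.mean()
    return 1 / (1 + np.exp(-pooled))  # inverse logit

forecasts = [0.6, 0.7, 0.8]
print(pool_log_odds(forecasts))                 # plain average log odds (geometric mean of odds)
print(pool_log_odds(forecasts, extremize=2.5))  # pushed further away from 0.5
```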

Stumbling on this today: did this article ever get published? Would be keen to read it.

Owen Cotton-Barratt · 3y
I'm not certain I remember what I was referring to here, but my best guess is that it was this article: https://forum.effectivealtruism.org/posts/HENbwrDYnTktRtNdE/report-allocating-risk-mitigation-across-time

Strong +1 to this. I think I have observed people who have really good academic research taste but really bad EA research taste.

Taste is huge! I was trying to roll this under my "Process" category, where taste manifests in choosing the right project, choosing the right approach, choosing how to sequence experiments, etc. Alas, not a lossless factorization.

These exercises look quite neat, thanks for sharing!

Thanks Seb. I don't think I have energy to fully respond here, possibly I'll make a separate post to give this argument its full due.

One quick point relevant to Crux 2: "I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science)."

I think there are many-fold differences in impact/dollar between the tech you build if you are trying to actually solve the problem a... (read more)

I bet it is! The example categories I think I had in mind at the time of writing would be 1) people in ML academia who want to be doing safety but instead do work that almost entirely accelerates capabilities, and 2) people who want to work on reducing biological risk but instead publish on tech which is highly dual-use or broadly accelerates biotechnology without differentially accelerating safety technology.

I know this happens because I've done it. My most successful publication to date (https://www.nature.com/articles/s41592-019-0598-1) is pretty much entirely c... (read more)

This is interesting and also aligns with my experience depending on exactly what you mean!

  • If you mean that it seems less difficult to get tenure in CS (thinking especially about deep learning) than the vibe I gave (which is, again, speaking about the field I know, bioeng), I buy this strongly. My suspicion is that this is because, relative to bioengineering, there is a bunch of competition for top research talent by industrial AI labs. It seems like even the profs who stay in academia also have joint appointments in companies, for the most part. There isn't
... (read more)
AdamGleave · 3y
To clarify, I don't think tenure is guaranteed, more that there's a significant margin of error. I can't find much good data on this, but this post surveys statistics gathered from a variety of different universities, and finds anywhere from 65% of candidates getting tenure (Harvard) to 90% (Cal State, UBC). Informally, my impression is that top schools in CS are at the higher end of this: I'd have guessed 80%. Given this, the median person in the role could divert some of their research agenda to less well-received topics and still get tenure. But I don't think they could work on something that no one in the department or elsewhere cared about.

I've not noticed much of a research-agenda switch at tenure in CS, but have never actually studied this; would love to see hard data here. I do think there's a significant difference in research agendas between junior and senior professors, but it's more a question of what was in vogue when they were in grad school and shaped their research agenda, than tenured vs non-tenured per se. I do think pre-tenure professors tend to put their students under more publication pressure, though.

"Working backwards" type thinking is indeed a skill! I find it plausible a PhD is a good place to do this. I also think there might be other good ways to practice it, like for example seeking out the people who seem to be best at this and trying to work with them.

+1 on this same type of thinking being applicable to gathering resources. I don't see any structural differences between these domains.

This is an excellent comment, thanks Adam.

A couple impressions:

  • Totally agree there are bad incentives lots of places
  • I think figuring out which existing institutions have incentives that best serve your goals, and building a strategy around those incentives, is a key operation. My intent with this article was to illustrate some of that type of thinking within planning for grad school. If I was writing a comparison between working in academia and other possible ways to do research, I would definitely have flagged the many ways academic incentives are better
... (read more)

Sorry for the (very) delayed reply here. I'll start with the most important point first.

"But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned including academia look extremely misaligned."

I think overall the incentives set up by EA funders are somewhat better than run-of-the-mill academic incentives, but I think the difference is smaller than you seem to believe, and I think we're a long way from cracking it. I think this is something we can get better at, but ... (read more)

Yeah this is great; I think Ed probably called them sleeping beauties and I was just misremembering :)

Thanks for the references!

Appreciate your comment! I probably won't be able to give my whole theory of change in a comment :P but if I were to say a silly version of it, it might look like: "Just do the thing"

So, what are the constituent parts of making scientific progress? Off the cuff, maybe something like:

  1. You need to know what questions are worth asking / problems are worth solving
  2. You need to know how to decompose these questions in sub-questions iteratively until a subset are answerable from the state of current knowledge
  3. You need to have good research project management ski
... (read more)

Thanks Charles! Of your two options, I most closely mean (1). For evidence that I don't mean (2): "Optimize almost exclusively for compelling publications; for some specific goals these will need to be high-impact publications."

My attempt to restate my position would be something like: "Academic incentives are very strong, and it's not obvious from the inside when they are influencing your actions. If you're not careful, they will make you do dumb things. To combat this, you should be very deliberate and proactive in defining what you want and how you want i... (read more)

Publishing good papers is not the problem; deluding yourself is.

Big +1 to this. Doing things you don't see as a priority but which other people are excited about is fine. You can view it as kind of a trade: you work on something the research community cares about, and the research community is more likely to listen to (and work on) things you care about in the future.

But to make a difference you do eventually need to work on things that you find impactful, so you don't want to pollute your own research taste by implicitly absorbing incentives or others' opinions unquestioningly.

I am doing (1). (2) is incidental from the perspective of this post, but is indeed something I believe (see my response to bhalperin). I think my attempt to properly flag my background beliefs may have led to the wrong impression here. Or, alternatively, my post doesn't cover very much on pursuing academia, when the expected post would have been almost entirely focused on this, thereby seeming like it was conveying a strong message?

In general I don't think about pursuing "sectors" but instead about trying to solve problems. Sometimes this involves trying to g... (read more)

DirectedEvolution · 3y
That makes sense. I like your approach of self-diagnosing what sort of resources you lack, then tailoring your PhD to optimize for them.

One challenge with the "work backwards" approach is that it takes quite a bit of time to figure out what problems to solve and how to solve them. As I attempted this planning for my own imminent journey into grad school, my views gained a lot of sophistication, and I expect they'll continue to shift as I learn more. So I view grad school partly as a way to pursue the ideas I think are important/good fits, but also as a way to refine those ideas and gain the experience/network/credentials to stay in the game. The "work backwards" approach is equally applicable to resource-gathering as finding concrete solutions to specific world problems.

I think it's important for career builders to develop gears-level models of how a PhD or tenured academic career gives them resources + freedom to work on the world problems they care about, and also how it compares to other options. Often, people really don't seem to do that. They go by association: scientists solve important problems, and most of them seem to have PhDs and academic careers, so I guess I should do that too. But it may be very difficult to put the resources you get from these positions to use in order to solve important problems, without a gears-level model of how those scientists use those resources to do so.
eca · 3y

Ugh. Shrug. That isn't supposed to be the point of this post. All my comments on this are to alert the reader that I happen to believe this and haven't tried to stop it from seeping into my writing. It felt disingenuous not to.

But since you raised, I feel like making it clear, if it isn't already, that I do not recommend reversing this advice. At least if you are considering cause areas/ academic domains that I might know about (see my preamble). I have no idea how applicable this is outside of longtermist technical-leaning work.

If you think you might be a... (read more)

I'm not convinced that academia is generally a bad place to do useful technical work. In the simplest case, you have the choice between working in academia, industry or a non-profit research org. All three have specific incentives and constraints (academia - fit to mainstream academic research taste; industry - commercial viability; non-profit research - funder fit, funding stability and hiring). Among these, academia seems uniquely well-suited to work on big problems with a long (10-20 year) time horizon, while having access to extensive expertise and col... (read more)

Anthony DiGiovanni · 3y
Could you be a bit more specific about this point? This sounds very field-dependent.

"You approximately can't get directly useful things done until you have tenure."

At least in CS, the vast majority of professors at top universities in tenure-track positions do get tenure. The hardest part is getting in. Of course all the junior professors I know work extremely hard, but I wouldn't characterize it as a publication rat race. This may not be true in other fields and outside the top universities.

The primary impediment to getting things done that I see is that professors are also working as administrators and teaching, and that remains a problem post-tenure.


Neat. I'd be curious if anyone has tried blinding the predictive algorithm to prestige: i.e. no past citation information or journal impact factors, and instead strictly using paper content (sounds like a project for GPT-6).

It might be interesting also to think about how talent- vs. prestige-based models explain the cases of scientists whose work was groundbreaking but did not garner attention at the time. I'm thinking, e.g., of someone like Kjell Kleppe, who basically described PCR, the foundational molbio method, a decade early.

If you look at natural  ... (read more)

Interesting! Many great threads here. I definitely agree that some component of scientific achievement is predictable, and the IMO example is excellent evidence for this. Didn't mean to imply any sort of disagreement with the premise that talent matters; I was instead pointing at a component of the variance in outcomes which follows different rules.

Fwiw, my actual bet is that to become a top-of-field academic you need both talent AND to get very lucky with early career buzz. The latter is an instantiation of preferential attachment. I'd guess for each top-... (read more)
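A toy preferential-attachment simulation makes the "early buzz" dynamic concrete (all numbers here are made up; this is an illustration of the mechanism, not a claim about the citation data discussed in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_citations(n_papers=200, n_citations=5000, boost=1.0, early_cutoff=500):
    """Toy preferential attachment: each new citation picks a paper with
    probability proportional to (current citations + boost). All papers are
    identical in quality; only early luck differs."""
    counts = np.zeros(n_papers)
    early = None
    for t in range(n_citations):
        weights = counts + boost
        paper = rng.choice(n_papers, p=weights / weights.sum())
        counts[paper] += 1
        if t == early_cutoff:
            early = counts.copy()
    return early, counts

early, final = simulate_citations()

# Early counts strongly predict final counts even though no paper is
# intrinsically better: early buzz compounds under preferential attachment.
print(np.corrcoef(early, final)[0, 1])
```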

Max_Daniel · 3y
No, they considered the full distribution of scientists with long careers and sustained publication activity (which themselves form the tail of the larger population of everyone with a PhD).  That is, their analysis includes the right tail but wasn't exclusively focused on it. Since by its very nature there will only be few data points in the right tail, it won't have a lot of weight when fitting their model. So it could in principle be the case that if we looked only at the right tail specifically this would suggest a different model. It is certainly possible that early successes may play a larger causal role in the extreme right tail - we often find distributions that are mostly log-normal, but with a power-law tail, suggesting that the extreme tail may follow different dynamics.
eca · 3y
Sorry meant to write "component of scientific achievement is predictable from intrinsic characteristics" in that first line
eca · 3y

Great post! Seems like the predictability question is important given how much power laws surface in discussion of EA stuff.

"More precisely, future citations as well as awards (e.g. Nobel Prize) are predicted by past citations in a range of disciplines."

I want to argue that things which look like predicting future citations from past citations are at least partially "uninteresting" in their predictability, in a certain important sense. 

(I think this is related to other comments, and have not read your google doc, so apologies if I'm restating. But I think it... (read more)

Max_Daniel · 3y
A related phenomenon has been studied in the scientometrics literature under the label 'sleeping beauties'. Here is what Clauset et al. (2017, pp. 478f.) say in their review of the scientometrics/'science of science' field: [See doc linked in the OP for full reference.]
Max_Daniel · 3y
Relatedly, you might be interested in these two footnotes discussing how impressive it is that Sinatra et al. (2016) - the main paper we discuss in the doc - can predict the evolution of the Hirsch index (a citation measure) over a full career based on the Hirsch index after the first 20 or 50 papers:

Thanks! I agree with a lot of this.

I think the case of citations / scientific success is a bit subtle:

  • My guess is that the preferential attachment story applies most straightforwardly at the level of papers rather than scientists. E.g. I would expect that scientists who want to cite something on topic X will cite the most-cited paper on X rather than first looking for papers on X and then looking up the total citations of their authors.
  • I think the Sinatra et al. (2016) findings which we discuss in our relevant section push at least slightly against a story
... (read more)

To operate in the broad range of cause areas Open Phil does, I imagine you need to regularly seek advice from external advisors. I have the impression that cultivating good sources of advice is a strong suit of both yours and Open Phil's.

I bet you also get approached by less senior folks asking for advice with some frequency.

As advisor and advisee: how can EAs be more effective at seeking and making use of good advice?

Possible subquestions: What common mistakes have you seen early-career EAs make when soliciting advice, e.g. on career trajectory? When do you s... (read more)

Interesting point. Note that a requirement for retaliation is knowledge of the actor to retaliate against. This is called "attribution" and is a historically hard problem for bioweapons, which is maybe getting easier with modern ML (COI: I am a coauthor: https://www.nature.com/articles/s41467-020-19149-2).
