I mostly agree with that, with the further caveat that I tend to think the low value reflects not that ML is useless but the inertia of a local optimum, where the gains from automation are low because so little else is automated, and vice-versa ("automation as colonization wave"). This is part of why, I think, we see the broader macroeconomic trends like big tech productivity pulling away: many organizations are just too incompetent to meaningfully restructure themselves or their activities to take full advantage. Software is surprisingly hard from a social and organizational point of view, and ML more so. A recent example is coronavirus/remote-work: it turns out that remote is in fact totally doable for all sorts of things people swore it couldn't work for - at least when you have a deadly global pandemic solving the coordination problem...
As for my specific tweet, I wasn't talking about making $$$ but just doing cool projects and research. People should be a little more imaginative about applications. Lots of people angst about how they can possibly compete with OA or GB or DM, but the reality is, as crowded as specific research topics like 'yet another efficient Transformer variant' may be, as soon as you add on a single qualifier like 'DRL for dairy herd management' or 'for anime', you suddenly have the entire field to yourself. There's a big lag between what you see on Arxiv and what's out in the field. Even DL from 5 years ago, like CNNs, can be used for all sorts of things for which it is not currently used. (Making money or capturing value is, of course, an entirely different question; as fun as This Anime Does Not Exist may be, there's not really any good way to extract money. So it's a good thing we don't do it for the money.)
Lousy paper, IMO. There is much more relevant and informative research on compute scaling than that.
I think your confusion with the genetics papers is because they are talking about _effective_ population size (N~e~), which is not at all close to 'total population size'. Effective population size is a highly technical genetic statistic which has little to do with total population size except under conditions which definitely do not obtain for humans. It's vastly smaller for humans (on the order of 10^4) because populations have expanded so much, there are various demographic bottlenecks, and reproductive patterns have changed a great deal. It's entirely possible for effective population size to drop drastically even as the total population is growing rapidly. (For example, if one tribe with new technology genocided a distant tribe and replaced it: the total population might be growing rapidly due to the new tribe's superior agriculture, but the effective population size would have just shrunk drastically as a lot of genetic diversity gets wiped out. Ancient DNA studies indicate there has been an awful lot of population replacement going on during human history, and this is why effective population size has dropped so much.) I don't think you can get anything useful out of effective population size numbers for economics purposes without making so many assumptions and simplifications as to render the estimates far more misleading than whatever direct estimates you're trying to correct; they just measure something irrelevant but misleadingly similar sounding to what you want.
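To see how drastically the two can diverge: long-run N~e~ is approximately the *harmonic* mean of the per-generation census sizes, so a single bottleneck or replacement event dominates it even while the census count stays large. A toy calculation (my numbers are purely illustrative, not from any real dataset):

```python
# Long-run effective population size N_e is approximately the *harmonic*
# mean of per-generation census sizes, so one bottleneck generation
# dominates it even while the census population is large or growing.

def harmonic_mean(sizes):
    return len(sizes) / sum(1.0 / n for n in sizes)

# 9 generations at a census size of 10,000, plus one bottleneck
# generation of 100 (e.g. a population-replacement event):
census = [10_000] * 9 + [100]

n_e = harmonic_mean(census)
arith = sum(census) / len(census)

print(f"census (arithmetic) mean: {arith:,.0f}")  # 9,010
print(f"effective size N_e:       {n_e:,.0f}")    # ~917
```

One generation at 1% of the usual size drags N~e~ down an order of magnitude below the census mean, which is why serial replacements leave N~e~ ~10^4 for a species numbering in the billions.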
This seems like a retread of Bostrom's argument that, despite astronomical waste, x-risk reduction is important regardless of whether it comes at the cost of growth. Does any part of this actually rely on Roodman's superexponential growth? It seems like it would be true for almost any growth rates (as long as it doesn't take like literally billions or hundreds of billions of years to reach the steady state).
“Recent GWASs on other complex traits, such as height, body mass index, and schizophrenia, demonstrated that with greater sample sizes, the SNP h2 increases. [...] we suspect that with greater sample sizes and better imputation and coverage of the common and rare allele spectrum, over time, SNP heritability in ASB [antisocial behavior] could approach the family based estimates.”
I don't know why Tielbeek says that, unless he's confusing SNP heritability with PGS: a SNP heritability estimate is unconnected to sample size. Increasing n will reduce the standard error but assuming you don't have a pathological case like GCTA computations diverging to a boundary of 0, it should not on average either increase or decrease the estimate... Better imputation and/or sequencing more will definitely yield a new, different, larger SNP heritability, but I am really doubtful that it will reach the family-based estimates: using pedigrees in GREML-KIN doesn't reach the family-based Neuroticism estimate, for example, even though it gets IQ close to the IQ lower bound.
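The unbiasedness point can be checked with a toy simulation (all parameters here are illustrative choices of mine, and this is a crude Haseman-Elston-style estimator, not GCTA's GREML): the heritability estimate averages the same at small and large n, and only its spread shrinks.

```python
# Toy simulation: a Haseman-Elston-style SNP-heritability estimator
# (regressing phenotype cross-products y_i*y_j on relatedness G_ij)
# is roughly unbiased at any n; increasing n only shrinks the SE.
import numpy as np

rng = np.random.default_rng(0)
H2, M = 0.5, 200  # true SNP heritability; number of independent SNPs

def he_estimate(n):
    Z = rng.standard_normal((n, M))            # standardized genotypes
    beta = rng.standard_normal(M) * np.sqrt(H2 / M)
    y = Z @ beta + rng.standard_normal(n) * np.sqrt(1 - H2)
    G = Z @ Z.T / M                            # genomic relatedness matrix
    iu = np.triu_indices(n, k=1)               # distinct pairs i < j
    x, p = G[iu], np.outer(y, y)[iu]
    return np.cov(x, p)[0, 1] / np.var(x)      # regression slope = h2-hat

for n in (100, 400):
    ests = [he_estimate(n) for _ in range(50)]
    print(f"n={n}: mean h2-hat={np.mean(ests):.2f}, SD={np.std(ests):.2f}")
```

Both sample sizes recover ~0.5 on average; the larger n just has a much smaller SD around it, which is exactly the 'smaller standard error, same expectation' behavior.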
For example, the meta-analysis by Polderman et al. (2015, Table 2) suggests that 93% of all studies on specific personality disorders “are consistent with a model where trait resemblance is solely due to additive genetic variation”. (Of note, for “social values” this fraction is still 63%).
Twin analysis can't distinguish between rare and common variants, AFAIK.
The SNP heritabilities I'm referring to are https://en.wikipedia.org/w/index.php?title=Genome-wide_complex_trait_analysis&oldid=871623331#Psychological There are quite low heritabilities across the board, and https://www.biorxiv.org/content/10.1101/106203v2 shows that the family-specific rare variants (which are still additive, just rare) are almost twice as large as the common variants. A common SNP heritability of 10% is still a serious limit, as it upper bounds the PGS which will be available anytime soon, and also hints at very small average effects making it even harder. Actually, 10% is much worse than it seems even if you compare to the quoted IQ's 30%, because personality is easy to measure compared to IQ, and the UKBB has better personality inventories than IQ measures (at least, substantially higher test-retest reliabilities IIRC).
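To put numbers on how the SNP heritability caps the PGS: the Daetwyler et al 2008 approximation gives expected PGS r^2 as a function of n, h^2, and the effective number of independent loci M_e. (The M_e = 50,000 below is a conventional guess, not a measured value; a rough sketch, not a forecast.)

```python
# Daetwyler et al 2008 approximation for expected PGS accuracy:
#   r^2 ~= h2 / (1 + M_e / (n * h2))
# where h2 = SNP heritability, M_e = effective number of independent loci.
def expected_pgs_r2(h2, n, m_e=50_000):
    return h2 / (1 + m_e / (n * h2))

h2 = 0.10  # a typical common-SNP heritability for a personality trait
for n in (100_000, 1_000_000, 10_000_000):
    print(f"n={n:>10,}: expected PGS r^2 <= {expected_pgs_r2(h2, n):.3f}")
# However large n grows, r^2 asymptotes to - and can never exceed - h2.
```

So even 10 million samples leaves a 10%-heritability trait with a PGS under 0.10, while the same n on a 30% trait would be several times more useful; low h^2 hurts twice, via the ceiling and via the slower approach to it.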
Dominance...And what about epistasis? Is it just that there are quadrillions of possible combinations of interactions and so you would need astronomical sample sizes to achieve sufficient statistical power after correcting for multiple comparisons?
Yes. It is difficult to foresee any path towards cracking a reasonable amount of the epistasis, unless you have faith in neural net magic starting to work when you have millions or tens of millions of genomes, or something. So for the next decade, I'd predict, you can write off any hopes of exploiting epistasis to a degree remotely like we already can additivity. (Epistasis does make it a little harder to plan interventions: do you wind up in local optima? Does the intervention fall apart in the next generation after recombination? etc. But this is minor by comparison to the problem that no one knows what the epistasis is.) I'm less familiar with how well dominance can work.
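The back-of-the-envelope arithmetic on why exhaustive epistasis scans are so punishing (my illustrative numbers):

```python
# Why exhaustive pairwise-epistasis scans are statistically punishing:
# with ~1 million genotyped SNPs, count the interaction tests and the
# resulting Bonferroni-corrected significance threshold.
m = 1_000_000                      # genotyped SNPs
pairs = m * (m - 1) // 2           # distinct SNP pairs to test
alpha = 0.05 / pairs               # Bonferroni-corrected threshold

print(f"pairwise tests:      {pairs:.2e}")   # ~5.0e11
print(f"per-test threshold:  {alpha:.1e}")   # ~1.0e-13
# vs. the usual single-SNP genome-wide threshold of 5e-8 - and that is
# pairs only; 3-way and higher interactions explode combinatorially.
```

Half a trillion tests at a 10^-13 threshold, for interaction effects that are individually tiny: hence the need for either astronomical n or some method (neural net magic or otherwise) that sidesteps the per-pair testing entirely.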
So to summarize: the SNP heritabilities are all strikingly low, often <10%, and pretty much always <20%. These are real estimates, not anomalies driven by sampling error, nor largely deflated by measurement error. The PGSes, accordingly, are often near-zero and have no hits. The affordable increases in sample sizes using common SNP genotyping will, hopefully, push them up to the SNP heritability limit; but for perspective, recall that IQ PGSes 2 years ago were *already* up to 11% (Allegrini et al 2018) and still have at least 20% to go, and IQ isn't even that big a GWAS success story (eg height is >40%). The 'huge success' story for personality research is that with another few million samples, years and years from now, they can reach where a modestly successful trait was years ago - before they hit a hard dead end and will need much more expensive sequencing technology in generally brand-new datasets, at which point the statistical power issues become far more daunting (because rare variants by definition are rare), and other sources of predictive power like epistatic variants will remain inaccessible (barring considerable luck in someone coming up with a method which can actually handle epistasis etc). The value of the possible selection for the foreseeable future will be very small, and is already exceeded by selection on many other traits, which will continue to progress more rapidly, increasing the delta and making selection on personality traits an ever harder sell to parents, since it will largely come at the expense of larger gains on other traits.
Could you select for personality traits? A little bit, yeah. But it's not going to work well compared to things selection does work well for, and it will continue not working well for a long time.
How do you plan to deal with the observation that GWASes on personality traits have largely failed, that the SNP heritabilities are often near-zero, and that this fits with balancing-selection models of how personality works in humans?
Also, how mature is the concept of Iterated Embryo Selection?
The concept itself dates back to 1998, as far as I can tell, based on similar ideas dating back at least a decade before that.
There has been enormous progress in various parts of the hypothetical process, like just yesterday Tian et al 2019 reported taking ovarian cells (not eggs) and converting them into mouse eggs and fertilizing and yielding live healthy fertile mice. This is a big step towards 'massive embryo selection' (do 1 egg harvesting cycle, create hundreds or thousands of eggs from the collected egg+non-egg cells, fertilize, and select, yielding >1SD gains), and of course, the more control you have over gametogenesis in general, the closer you are to a full IES process.
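Where does the '>1SD gains' figure come from? Selecting the best of n embryos by PGS yields an expected gain of roughly E[max of n N(0,1)] * sqrt(r^2) trait SDs. A Monte Carlo sketch (the r^2 = 0.10 and embryo counts below are my illustrative assumptions, not a forecast):

```python
# Expected gain from selecting the best of n embryos on a PGS that
# explains r2 of trait variance: ~ E[max of n std normals] * sqrt(r2),
# estimated here by simulation.
import numpy as np

rng = np.random.default_rng(0)
r2 = 0.10      # assumed PGS variance explained (in trait SD terms)
reps = 20_000  # Monte Carlo replicates

for n_embryos in (10, 100, 300):
    # expected maximum of n_embryos standard normals, by simulation
    e_max = rng.standard_normal((reps, n_embryos)).max(axis=1).mean()
    gain = e_max * np.sqrt(r2)
    print(f"n={n_embryos:>3}: E[max]~{e_max:.2f}, expected gain ~{gain:.2f} SD")
```

The key feature is the diminishing logarithmic return to n: going from 10 to 300 embryos roughly doubles the gain rather than multiplying it 30-fold, but with hundreds or thousands of eggs (and a better PGS, or selection on multiple traits), the gains clear 1SD.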
The animal geneticists are excited about IES, to the point of reinventing it like 3 times over the past few years, and are actively discussing implementing it for cattle. Humans, of course, who knows? But I wouldn't want to bet against IES happening during the 2020s for some species, at least in lab demonstrations. (For comparison, think about the state of the art for GWASes, editing, gametogenesis, and cloning in 2010 vs now.)
So I would phrase it as, much more obscure an idea than it deserves to be, with lots of challenging technical & engineering work still to be done, but well within current foreseeability; and will likely happen quite soon on the scale of 1-3 decades (being highly conservative) even without any particularly focused research efforts or 'Manhattan projects', because the required technologies are either far too useful in general (stem cell creation, gametogenesis), or have constituencies who want it a lot (animal breeders/geneticists, wealthy gay couples).
One of the amusing things about the 'hinge of history' idea is that some people make the mediocrity argument about their present time - and are wrong.
Isaac Newton, for example, 300 years ago appears to have made an anthropic argument that claims he lived in a special time - one which could be considered any kind of, say, 'Revolution' - were wrong, the visible acceleration of progress and recent invention of technologies notwithstanding: in reality, there was an ordinary rate of innovation, and the recent invention of many things merely showed that humans had a very short past and were still making up for lost time (because comets routinely drove intelligent species extinct).
And Lucretius ~1800 years before Newton (probably relaying older Epicurean arguments) made his own similar argument, arguing that Greece & Rome were not any kind of exception compared to human history - certainly humans hadn't existed for hundreds of thousands or millions of years! - and if Greece & Rome seemed innovative compared to the dark past, it was merely because "our world is in its youth: it was not created long ago, but is of comparatively recent origin. That is why at the present time some arts are still being refined, still being developed."
One could read these mistakes in a very Kurzweilian fashion: if progress is accelerating or even just stable, every era *can* be (much) more innovative and influential on the future than every preceding era was, and the mediocrity argument wrong every time.
On the other hand, in that same talk, Hamming pointed out the importance of abundant computing resources:
One lesson was sufficient to educate my boss as to why I didn't want to do big jobs that displaced exploratory research and why I was justified in not doing crash jobs which absorb all the research computing facilities. I wanted instead to use the facilities to compute a large number of small problems. Again, in the early days, I was limited in computing capacity and it was clear, in my area, that a "mathematician had no use for machines." But I needed more machine capacity. Every time I had to tell some scientist in some other area, "No I can't; I haven't the machine capacity," he complained. I said "Go tell your Vice President that Hamming needs more computing capacity." After a while I could see what was happening up there at the top; many people said to my Vice President, "Your man needs more computing capacity." I got it!
Both the hover-over and sidenotes on gwern.net are pure JS, and require no modifications to the original Markdown or generated HTML footnotes; they just run and modify the appearance clientside and degrade to the original footnotes if JS is disabled. (Obormot says feel free to contact him if you want/need any help integrating stuff.) For more on sidenotes, see https://www.gwern.net/Sidenotes