Abby Hoskin

977 · Princeton, NJ, USA · Joined Sep 2019

Comments (68)

Very few of my peers are having kids. At 31, my husband and I are the youngest parents at the Princeton University daycare. The next youngest parent is 3 years older than us, and his kid is a year younger than ours. Considering the national median age of first birth is 30 years old, it seems like a potential problem that the national median is the Princeton minimum. 

I wonder what the birth rate is specifically among American parents with (or pursuing) STEM PhDs. I'm guessing it's extremely low for people under the age of 45. Possibly low enough to raise concerns that scientists are no longer reproducing.

Most birth rate statistics I've seen group doctorates in with every professional degree beyond a master's, so it's hard to tell what's going on outside of anecdotal evidence. For example: https://www.cdc.gov/nchs/data/nvsr/nvsr70/nvsr70-05-508.pdf

Princeton is raising annual stipends to about $45,000. Two graduate student parents now have a reasonable combined household income, especially if they can live in subsidized student housing. I wonder if this will make a big difference in Princeton fertility rates. 

On the other hand, none of my NYC friends making way over $90,000 have kids, so this might be a deeper cultural problem. 

To be clear, I don't think people who don't want kids should have them, or that they're being "selfish" or whatever. But societies without children will literally die out, so it's concerning that American society has such strong anti-natal sentiment. It's especially concerning when that sentiment is strongest in the part of American society with some of the smartest people, who are motivated more by truth-seeking than by money. 

A lot of people (myself very much included) don't know how to talk about loss in a way that provides comfort to the person experiencing it. Thank you so much for this extremely well-articulated set of suggestions and framework for implementing them! 

Makes sense! Thanks again for writing such a comprehensive report!

Definitely!!!! A lot of journalists seem to cover topics they don't really understand (mainstream media coverage of things like nuclear power or cryptocurrency can be particularly painful), so it was awesome to read something written by a person who gets the basic philosophy. 

I think this is a really comprehensive report on this space! Nothing against the report itself, I think you did a great job. 

As somebody who has spent the last ~10 years studying neuroscience, I'm basically pretty cynical about current brain imaging/BCI methods. I plan to pivot out of neuro into higher impact fields once I graduate. I just wanted to add my 2 cents as somebody who has spent time doing EEG, CT, MRI, fMRI, TMS, and tDCS research (in addition to being pretty familiar with MEG and fNIRS):

+ I don't think getting high quality structural images of the brain is useful from an EA perspective, though it has substantial medical benefits for the people who need brain scans/can afford to get them. This just doesn't strike me as one of the most effective cause areas, the same way a cure for Huntington's disease would be a wonderful thing, but might not qualify as a top EA cause area. 

+ I don't think getting measures of brain activity via EEG or fMRI has yet produced results that I would consider worth funding from an EA perspective. Again, I'm not saying some results aren't useful (I'm especially impressed with how EEG helped us understand sleep). But I don't think any of this research is substantially relevant to preventing civilizational or existential risks. 

+ I don't think our current brain stimulation methods (e.g., TMS, tDCS) have any EA relevance. The stimulation provided by these procedures (in healthy subjects) just doesn't seem to have large cognitive effects compared to more robust interventions (education, diet, exercise, sleep, etc.). Brain stimulation might have much bigger impacts for chronically depressed patients and Parkinson's patients via deep brain stimulation (DBS). But again, I don't think this stuff is relevant to civilizational or existential risks, and I think there are probably much more cost-effective ways of improving welfare. 

There may still be useful neurotechnology research to be done. But I think the highest impact will be in computational/algorithmic stuff instead of things that directly probe the human brain. 

I thought this was a surprisingly good article! Many journalists get unreasonably snarky about EA topics (e.g., insinuate that people who work in technology are out of touch awkward nerds who could never improve the world; suggest EA is cult-like; make fun of people for caring about literally anything besides climate change and poverty). This journalist took EA ideas seriously, talked about the personal psychological impact of being an EA, and correctly (imo) portrayed the ideas and mindsets of a bunch of central people in the EA movement. 

Voted, it was surprisingly painless. Fingers crossed for Will, although he was buried in the middle of the pack of names due to unfortunate lack of alphabetical prominence. New cause area: renaming our thought leaders Aaron Aaronson. 

Spicy takes, but I think these are good points people should consider! 

I'm also doing a PhD in Cognitive Neuroscience, and I would strongly agree with your footnote that: 

"Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds."

A bunch of people in my program have gone into research at DeepMind, but these were all people who specifically focused on ML and algorithm development in their research. There's a wide swath of cognitive neuroscience (and the other neuro sub-disciplines you list) where you can avoid serious ML research. I've spoken to about a dozen EA neuroscientists who didn't focus on ML and have become pretty pessimistic about how useful their research is to AI development/alignment. This is a bummer for EAs who want to use their PhDs to help with AI safety. So please take this into consideration if you're an early stage student weighing different career paths!

This is cool; I often think about how much better the UK system is than the US system when it comes to educating doctors. 

I think my biggest quibble with your post is: "I assume the odds of a successful campaign are 50%." 

I would maybe revise that down to 5%. Professional organizations like the American Medical Association have their professions in a stranglehold: they have financial incentives to keep their profession difficult to access (e.g., scarcity allows them to demand higher wages), and they can easily manipulate the public by saying things like "Don't you want a FULLY trained doctor? Not somebody who skipped undergraduate and went straight to medical school?"

A substantially more skeptical success probability obviously lowers the expected ROI of this effort. But I wonder whether other people who know more about politics are as skeptical as I am. 
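To make the intuition concrete, here's a minimal back-of-envelope sketch. All the payoff and cost figures are invented for illustration (the original post doesn't give them); the only point is that expected value scales linearly with the assumed success probability, so cutting 50% to 5% cuts the upside term by a factor of ten.

```python
def expected_value(p_success: float, value_if_success: float, cost: float) -> float:
    """Expected net value of a campaign: probability-weighted payoff minus
    the cost, which is paid whether or not the campaign succeeds."""
    return p_success * value_if_success - cost

# Purely illustrative, made-up numbers (arbitrary units).
payoff = 1_000_000  # hypothetical value of a successful campaign
cost = 40_000       # hypothetical campaign cost, paid regardless of outcome

optimistic = expected_value(0.50, payoff, cost)  # the post's 50% assumption
skeptical = expected_value(0.05, payoff, cost)   # the revised 5% estimate

print(optimistic)  # 460000.0
print(skeptical)   # 10000.0
```

Under these toy numbers the campaign still has positive expected value at 5%, but the margin is thin: because the cost is sunk either way, a low success probability can easily push the expected value negative.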

All that being said, I would vote for your campaign if it came up on my state's ballot!

Thanks! I was just curious, and didn't expect a super in-depth analysis. Although that would be super cool to see too :)
