Alex Mallen

I'm a sophomore Computer Science student at the University of Washington. I'm particularly interested in longtermism.





Open Thread: November 2021

Hey everyone, I'm also new to the forum and to EA as of summer 2021. I found EA mostly through Lex Fridman's old podcast with Will MacAskill, which I watched after a friend reminded me of EA. I then read some articles on 80,000 Hours and was pretty convinced.

I'm a sophomore computer science student at the University of Washington, currently doing research with UW Applied Math on machine learning for science and engineering. My most likely career path seems to be research in AI or brain-computer interfacing, but I'm still deciding and have an advising appointment with 80,000 Hours.

Something else I'm interested in is joining (and possibly helping rebuild) an EA community at UW. To my knowledge, the group has mostly gone dormant since COVID, but there may still be some remaining UW EAs to connect with.

Looking forward to engaging in discussion on the forum!

A New X-Risk Factor: Brain-Computer Interfaces

I wonder whether L = 90% overestimates the likelihood of the regime lasting indefinitely. It seems plausible that the regime could end because of a global catastrophe that is not existential but that reverts us to a preindustrial society. For example, nuclear war could end regimes if/while there are multiple states, or climate change could cause massive famine. On the other hand, is it reasonable to think that BCI would make a regime so stable that even the death of a significant proportion of its populace could not end it?