Ben Stewart

Sydney NSW, Australia · Joined Feb 2020


Hi, I'm Ben. I just graduated as a medical doctor from the University of Sydney. I'm currently working on forecasting bioterrorist groups, supported by Open Phil, and going through the Charity Entrepreneurship Incubation Program.

I studied an undergraduate double degree (BA, BSc), triple-majoring in philosophy, international relations, and neuroscience. I've spent my MD doing bits and bobs in global health and health security. I've also conducted research projects at the Future of Humanity Institute, the Stanford Existential Risk Initiative, the vaccine patch company Vaxxas, and the Lead Exposure Elimination Project.


I appreciate this point, but personally I am probably more like 70-30 for general thinking, with variance depending on the topic. So much of thinking about the world is trust-based. My views on historical explanations virtually never depend on my reading of primary documents - they depend on my assessment of what the proportional consensus of expert historians thinks. Same with economics, or physics, or lots of things. 

When I'm dealing directly with an issue, like biosecurity, it makes sense to have a higher split - 80-20 or 90-10 - but it's still much easier to navigate if you know the landscape of views. For something like AI, I just don't trust my own take on many arguments - I really rely a lot on the different communities of AI experts (such as they are). 

I think most people, most of the time, don't know enough about an issue to justify a 90-10 split in issue vs. view thinking. However, I should note that all this concerns the right split of personal attention; for public debate, I can understand wanting a greater focus on the object level (since the view level should hopefully be served well by good object-level work anyway).

Sure, but my impression of both their number and their competence has decreased. It's still moderately high. And meritocracy cuts both ways: it would push harder on judging current leaders by their past success - i.e. harshly - and not be as beholden to contingent or social reasons for believing they're competent.

Personally, this solidifies my negative update over the past six months on the judgment and trustworthiness of the bulk of senior EAs. I mean trustworthiness in the sense of competence, not motive.

I think a common maladaptive pattern is assuming that the rationality community and/or EA is unusually good at 'increasing our rationality, comprehending big problems', and I really, really, really doubt that 'the most "epistemically rigorous" people are writing blog posts'.

Thanks, I appreciate this kind of public review! And congratulations on the impressive growth. I was wondering: do you have figures for how many people altered their career plans, or landed a related job, as a result of your work? That's much harder to measure, but it's much more closely connected to your end goals and to what's actually valuable, including potential cost-effectiveness. Apologies if this is in the full report, which I only glanced at (though if it is there, I'd suggest adding it to the summary).

Thanks for writing this! It's a well-written introduction, and it's an approach that deserves to be more widely known and more highly rated in EA.

Another useful application of the capability approach I've encountered is in health. While saving lives is straightforward to value under many approaches, it's harder to know how to weigh disability and disease. The QALY/DALY approach is a useful measurement tool, but I find it helpful to have a theory of why we should care about disability and disease beyond the QALY/DALY lens alone. Venkatapuram (2011) defends a notion of health as a cluster of basic capabilities, and I find that a really useful foundation to think from.

What proportion of the incidents described was the team unaware of?

I can try! But apologies, as this will be vague - there'll be lots of authors this doesn't apply to, and this is my gestalt impression, given that I avoid reading much of it. And as I say, I don't know how beneficial LW was to EA's development, so I'm not confident about how future exchange should go.

I tend to be frustrated by the general tendencies towards over-confidence, in-group jargon, and overrating the abilities and insights of their community/influences relative to others (especially expert communities and traditional academic sources). Most references I see to 'epistemics' seem under-specified and not useful, and are usually a shorthand way to dismiss a non-conforming view. I find it ironic that denigrating others' 'epistemics' is a common LW refrain, given my impression that LW's own epistemic quality is poor.

There's a kind of Gell-Mann amnesia effect I get: the LW discourse on things I know decently well (medical advice, neuroscience, global health) I can easily see is wrong, poorly conceived and argued, and over-confident. I don't have a clear personal view on the LW discourse on things I don't know well, like AI, but I have occasionally seen ~similar takes to mine from some people who do know AI well.

There are definitely writers/thinkers I admire from LW, but I usually admire them despite their LW-like tendencies. Losing their input would be a true loss. But for overall effect on EA, I doubt (with weak confidence) these exemplars outweigh the subpar majority.


Personally, I'm not a fan of LessWrong's thinking style, writing style, or intellectual products. As such, I think EA would be better off with less LW influence in the near-to-medium term.
However, I'm not familiar enough with EA's intellectual history to judge how useful LW has been to it, and I certainly can't predict EA's intellectual future. It seems possible that future exchange would be useful, if only for viewpoint diversity. On balance, though, I'd lean against heavy exchange.
