Ben Stewart

468 · Sydney NSW, Australia · Joined Feb 2020

Bio

Hi, I'm Ben, currently a final year medical student at the University of Sydney. In 2023 I'll be going through the Charity Entrepreneurship Incubation Program.

I studied an undergraduate double degree (BA, BSc), triple-majoring in philosophy, international relations, and neuroscience. I've spent my MD doing bits and bobs in global health. I've also conducted research projects at the Future of Humanity Institute, the Stanford Existential Risk Initiative, the vaccine patch company Vaxxas, and the Lead Exposure Elimination Project.

Comments (65)

Prioritisation should consider potential for ongoing evaluation alongside expected value and evidence quality

Nice. I think we could model this to see how ease/cost of evaluation interacts with other terms when assessing overall choice-worthiness. In your example the intuition sails through because A is only marginally cheaper to implement, while B is much cheaper to evaluate. I'd like to figure out precisely when lower evaluative costs outweigh lower implementation costs, and what that depends on.
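For concreteness, here's a minimal sketch of how that comparison might be set up. The numbers and the value-of-information term are hypothetical, not anything from your post:

```python
# Toy model: an option's choice-worthiness includes a value-of-information term,
# because cheap evaluation lets you notice failure and redirect resources.
def choice_worthiness(expected_benefit, implementation_cost, evaluation_cost,
                      p_detect_failure, value_of_redirecting):
    value_of_information = p_detect_failure * value_of_redirecting
    return (expected_benefit - implementation_cost - evaluation_cost
            + value_of_information)

# Option A: marginally cheaper to implement, expensive to evaluate.
A = choice_worthiness(expected_benefit=100, implementation_cost=40,
                      evaluation_cost=20, p_detect_failure=0.1,
                      value_of_redirecting=30)
# Option B: slightly dearer to implement, much cheaper to evaluate.
B = choice_worthiness(expected_benefit=100, implementation_cost=45,
                      evaluation_cost=2, p_detect_failure=0.5,
                      value_of_redirecting=30)
print(A, B)  # 43.0 vs 68.0: B wins once cheap evaluation is priced in
```

The crossover point then falls out of how large the value-of-information term is relative to the implementation-cost gap.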

Your post also amounts to something like a preference for good feedback loops when evaluating projects, which some orgs value highly.

How do independent researchers get access to resources?

And if that fails, you can usually contact the authors directly. Most academics are happy to have people interested in their work, and papers will have a corresponding author with an email address. Though obviously this method is only worth the bother if it's a really valuable paper for you.

Cause area: Developmental Cognitive Neuroepidemiology

This is excellent! I wrote an entry for the competition focused on the organophosphate pesticides you mention, here. In that report I gesture vaguely and briefly at a wider cause area of developmental neurotoxicants. However, your proposed cause area of 'developmental cognitive neuroepidemiology' is much more systematic and ambitious. It strikes an excellent balance: precise in approach while remaining agnostic as to areas of focus and intervention.

Really well done!

 

Cause: Reducing Judicial Delay in India

Really cool, well done! I like the explicit, quick labelling of the level of evidence for each citation - I haven't seen it used outside of clinical guidelines, but it seems like a nice feature given most readers aren't going to look into the citations themselves.

Improving incentives in research - could we topple the h-index?

I think secondary citations would be easier, like you say. And you wouldn't have to stop there - once you have the citation data, you could probably do a lot of creative things analysing the resulting graphs (graphs in the mathematical sense). I expect it's where the input data is harder to reach and scrape (like whole text) that logistical worries enter.
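As a toy illustration of the sort of graph analysis that opens up once the citation data is in hand - the paper IDs and edges below are made up:

```python
# Build a small directed citation graph and compute a few candidate metrics.
import networkx as nx

# Edge (a, b) means "paper a cites paper b".
citations = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p1"), ("p5", "p1")]
G = nx.DiGraph(citations)

direct = dict(G.in_degree())  # direct citation counts
# Secondary citations: citations received by the papers that cite you.
secondary = {n: sum(G.in_degree(c) for c in G.predecessors(n)) for n in G.nodes}
influence = nx.pagerank(G)    # a recursive, whole-graph notion of influence

print(direct, secondary, influence, sep="\n")
```

Anything along these lines only needs the citation edges rather than full text, which fits your point about where the logistical worries enter.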

Yeah, I don't know! I'm sure there are some folks who have thought about meta-science/improving science etc. who might have good ideas.

New cause area: maternal morbidity

This is interesting, thanks! It's surprising that morbidity hasn't changed much despite progress on mortality, given significant overlap in their prevention/treatment. I think progress on maternal mortality could increase morbidity estimates because women are surviving with near-miss or chronic complications rather than dying. How big/real do you think this effect is?

Improving incentives in research - could we topple the h-index?

Thanks, this is interesting! Two questions and a comment:

1) Would a novelty-focused metric trade off against replication work?

2) Would resource constraints matter for the choice of metric? I'm thinking that some metrics are computationally/logistically easier to gather and maintain (e.g. from pre-existing citation databases), and the cost/bother of performing textual analysis at any depth across the volume of relevant literature might be non-negligible.

Comment: 
My impression from reading some Wikipedia articles (https://en.wikipedia.org/wiki/H-index, https://en.wikipedia.org/wiki/Citation_impact, https://en.wikipedia.org/wiki/Citation_analysis) is that there are lots of proposals for different metrics, but a common theme of criticism is the difficulty of comparing between disciplines, where field-dependent factors are critical to a metric being meaningful/useful. If this is the case, maybe a smaller version of this project would be to pick a field particularly important to EAs, and see if targeted analysis/work can propose a more relevant metric for it.
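As a sketch of what field-sensitive normalisation could look like - the papers, fields, and scoring rule here are hypothetical, just to show the shape of the idea:

```python
# Score a paper by its citations relative to the mean for its field and year,
# so raw counts become comparable across disciplines with different citation norms.
from statistics import mean

papers = [
    {"id": "a", "field": "global health", "year": 2020, "citations": 40},
    {"id": "b", "field": "global health", "year": 2020, "citations": 10},
    {"id": "c", "field": "philosophy",    "year": 2020, "citations": 4},
    {"id": "d", "field": "philosophy",    "year": 2020, "citations": 2},
]

def field_normalised(paper, corpus):
    peers = [p["citations"] for p in corpus
             if p["field"] == paper["field"] and p["year"] == paper["year"]]
    return paper["citations"] / mean(peers)

for p in papers:
    print(p["id"], round(field_normalised(p, papers), 2))
# 'c' (4 citations in philosophy) now outscores 'b' (10 in global health).
```

A field-specific version of the project could start from something like this and then ask what the right peer set and weighting are for that particular field.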

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I don't have a deep model of AI - I mostly defer to some bodged-together aggregate of reasonable-seeming approaches/people (e.g. Carlsmith/Cotra/Karnofsky/Ord/surveys).

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I'm currently involved in the UPenn tournament, so to maintain experimental conditions I can't communicate my forecasts or rationales, but it's at least substantially higher than 1/10,000.

And yeah, I agree that complicated plans where an info-hazard makes the difference are unlikely, but info-hazards also preclude much activity and open communication about scenarios more generally.

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I'm not sure what you mean - I agree the aggregate probability of collapse is an important parameter, but I was talking about the kinds of bio-risk scenarios that simeon_c was asking for above? 
Do I understand you right that overall risk levels should be estimated/communicated even though their components might involve info-hazards? If so, I agree, and it's tricky. There'll likely be some progress on this over the next 6-12 months with Open Phil's project to quantify bio-risk, and (to some extent) the results of UPenn's hybrid forecasting/persuasion tournament on existential risks.
 
