Madhav Malhotra

Teaching Assistant @ Centre for AI Safety
641 karma · Joined Jul 2021 · Pursuing an undergraduate degree · Working (0-5 years) · Toronto, ON, Canada
madhavmalhotra.com

Bio

Is helpful/friendly :-) Loves to learn. Wants to solve neglected problems. See website for current progress.

How others can help me

I'm very interested in talking to biosecurity experts about neglected issues with: microneedle array patches, self-spreading animal/human vaccines, paper-based (or other cheap) microfluidic diagnostics, and/or massively-scalable medical countermeasure production via genetic engineering.

I'm also interested in talking to experts on early childhood education and/or positive education!

How I can help others

Reach out if you have questions about: 

I'll respond fastest on LinkedIn :-)

Comments (113)

Here's a summary of the report from Claude-1 if someone's looking for an 'abstract':

There are several common misconceptions about biological weapons that contribute to underestimating the threat they pose. These include seeing them as strategically irrational, not tactically useful, and too risky for countries to pursue.

In reality, biological weapons have served countries' strategic goals in the past, such as deterrence and intimidation. Their use could also provide tactical advantages in conflicts.

Countries have historically accepted substantial risks in pursuing weapons programs when they believed the strategic benefits outweighed the costs. Accidents and blowback would not necessarily deter such programs.

Decisions around biological weapons activities are not always top-down or known to all national leaders. Bureaucratic and individual interests can influence programs apart from formal policy.

International norms and laws alone are insufficient to deter or discover clandestine biological weapons work, given the lack of verification mechanisms. COVID-19 has exposed existing vulnerabilities.

Dispelling these misconceptions is important for strengthening defenses against the real biological weapons threat, which the pandemic has shown remains serious despite decades of effort. More investment is needed.

"There are many other things that could have been done to prevent Russia’s unprovoked, illegal attack on Ukraine. Ukraine keeping nuclear weapons is not one of them."

  • Could you explain your thinking for those unfamiliar with the military strategy involved? Why wouldn't keeping nuclear weapons have deterred an invasion? Which specific alternatives would have been more useful in preventing the attack, and why?

Context: I'm hoping to learn lessons in nuclear security that are transferable to AI safety and biosecurity. 

Question: Would you have any case studies or advice to share on how regulatory capture and lobbying were mitigated in US nuclear security regulations and enforcement?

Are there any misconceptions, stereotypes, or tropes that you commonly see in academic literature around nuclear security or biosecurity that you could correct given your perspective inside government?

Could you share the top three constraints and benefits you faced in improving global nuclear security while working for the US DoD, compared to now that you're working as an academic?

Context: I'm hoping to find lessons from nuclear security that are transferable to the security of bioweapons and transformative AI. 

Question: Are there specific reports you could recommend on preventing these nuclear security risks:

  • Insider threats (including corporate/foreign espionage)
  • Cyberattacks
  • Arms races
  • Illicit / black market proliferation
  • Fog of war

Any updates on how the event went? :-) Any cause priorities or research questions identified to mitigate existential cybersecurity risks?

A lot of people have gotten the message "Direct your career towards AI safety!" from EA. Yet there seem to be far too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others' comments on the forum and having personally applied to 5+ fellowships where there were 500-3000% more applicants than spots.)

What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? How is 80,000 Hours working to better (though perhaps not entirely) balance the supply of and demand for AI safety mentorship and jobs?

For what it's worth, I run an EA university group outside the U.S. (at the University of Waterloo in Canada). I haven't observed any of the points you mentioned in my experience with our group:

  • We don't run intro to EA fellowships because we're a smaller group. We're not trying to convert more students to be 'EA'. Instead, we focus on supporting whoever's interested in working on EA-relevant projects (ex: a cheap air purifier, a donations advisory site, a cybersecurity algorithm, etc.), whether they identify with the EA movement or not.
  • Since we're not trying to get people to become EA members, we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else. 
  • No one's getting paid here. We have grant money that we've used for things like hosting an AI governance hackathon, but it goes towards marketing, catering, prizes, etc. - not salaries.

Which university EA groups, specifically, did you talk to before proclaiming "University EA Groups Need Fixing"? Based only on what I read in your article, a more accurate title seems to be "Columbia EA Needs Fixing".

Out of curiosity @LondonGal, have you received any follow-ups from HLI in response to your critique? I understand you might not be at liberty to share all details, so feel free to respond as you feel appropriate.
