All of Madhav Malhotra's Comments + Replies

The UX has improved so much since the 2022 version of this :-) It feels concise, and the scrolling to each new graph makes learning each new thing interesting. Kudos to whoever designed it this way!

3
Sarah Cheng
4mo
The credit goes to our super talented designer @agnestenlund!

Just to play devil's advocate (without harmful intentions :-), what are the largest limitations or disclaimers that we should keep in mind regarding your results or methods?

2
zdgroff
6mo
See my reply to Neil Dullaghan—I think that gives somewhat of a sense here. Some other things:

  • I don't have a ton of observations on any one specific policy, so I can't say much about whether some special policy area (e.g., pollution regulation) exhibits a different pattern.
  • I look at whether this policy, or a version of it, is in place. This should capture anything that would be a direct and obvious substitute, but there might be looser substitutes that end up passing if you fail to pass an initial policy. The evidence I do have on this suggests it's small, but I still wonder about it.
  • My method is about close votes. I try to think about what it means for things that are less close, and I think it basically generalizes, but it gets tricky to think about the impact of, e.g., funding a campaign to move a policy from being unpopular and neglected to popular and on the ballot.

Sorry if I missed this in your post, but how many policies did you analyse that were passed via referendum vs. by legislation? How many at the state level vs. federal US vs. international?

@trevor1 Thank you for the detailed response!

RE: Crossposting to LessWrong

  • I've crossposted it now. If there are other forums relevant to cybersecurity topics in EA in particular, I'd appreciate suggestions :-)

RE: Personal Cybersecurity and IoT

  • Yes, I agree that the best way to improve cybersecurity with personal IoT devices is to avoid them. I'll update the wording to be more clear about that. 

Here's a summary of the report from Claude-1 if someone's looking for an 'abstract':

There are several common misconceptions about biological weapons that contribute to underestimating the threat they pose. These include seeing them as strategically irrational, not tactically useful, and too risky for countries to pursue.

In reality, biological weapons have served strategic goals for countries in the past like deterrence and intimidation. Their use could also provide tactical advantages in conflicts.

Countries have historically taken on substantial risks in p

... (read more)

"There are many other things that could have been done to prevent Russia’s unprovoked, illegal attack on Ukraine. Ukraine keeping nuclear weapons is not one of them."

  • Could you explain your thinking more for those not familiar with the military strategy involved? Why would an invasion still have been viable even if Ukraine had kept nuclear weapons? Which specific alternatives would have been more useful in preventing the attacks, and why?
1
Andy Weber
7mo
See my comments above on Iran. A tougher response to Putin’s attack on Georgia in 2008 and the illegal occupation of Crimea and Eastern Ukraine in 2014 might have prevented Putin’s terrible decision to invade in 2022. We could have provided more military assistance and training to Ukraine after 2014. Perhaps we should have been more receptive to Ukraine and Georgia’s NATO membership aspirations in the late 1990s and early 2000s.

Context: I'm hoping to learn lessons in nuclear security that are transferable to AI safety and biosecurity. 

Question: Would you have any case studies or advice to share on how regulatory capture and lobbying were mitigated in US nuclear security regulations and enforcement?

Are there any misconceptions, stereotypes, or tropes that you commonly see in academic literature around nuclear security or biosecurity that you could correct given your perspective inside government?

7
Andy Weber
7mo
Some of our amazing former Council on Strategic Risks Ending Bioweapons Fellows wrote this outstanding paper debunking common misconceptions about biological weapons: https://councilonstrategicrisks.org/wp-content/uploads/2020/12/Common-Misconceptions-About-Biological-Weapons_BRIEFER-12_2020_12_7.pdf

Could you share the top 3 constraints and benefits you had in improving global nuclear security while you were working for the US DoD compared to now, when you're working as an academic?  

1
Andy Weber
7mo
I believe significant changes to U.S. nuclear weapons policy and posture only occur when the President personally intervenes. This was also true of the U.S. decision to eliminate its biological weapons program in 1969. President Nixon demanded it.

Context: I'm hoping to find lessons from nuclear security that are transferable to the security of bioweapons and transformative AI. 

Question: Are there specific reports you could recommend on preventing these nuclear security risks:

  • Insider threats (including corporate/foreign espionage)
  • Cyberattacks
  • Arms races
  • Illicit / black market proliferation
  • Fog of war

Any updates on how the event went? :-) Any cause priorities or research questions identified to mitigate existential cybersecurity risks?

A lot of people have gotten the message "Direct your career towards AI Safety!" from EA. Yet there seem to be way too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others' comments on the forum and having personally applied to 5+ fellowships where there were 500-3000% more applicants than spots.)

What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? How is 80,000 Hours working to better (though perhaps not entirely) balance the supply and demand for AI safety mentorship/jobs?

It would be awesome if there were more mentorship/employment opportunities in AI Safety! Agree this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. Definitely the mentorship bottleneck makes it less valuable to try to enter technical AI safety on the margin, although we still think it's often a good move to try, if you have the right personal fit. I'd also add this bottleneck is way lower if you: 1. enter via more traditional academic or software engineer routes rather than via 'EA fellowshi... (read more)

9
Huon Porteous
7mo
My guess is that in a lot of cases, the root cause of negative feelings here is going to be something like perfectionism. I certainly felt disenchanted when I wasn’t able to make as much progress on AI as I would have liked. But I also felt disenchanted when I wasn’t able to make much progress on ethics, or being more conscientious, or being a better dancer. I think EA does some combination of attracting perfectionists and exacerbating their tendencies. My colleagues have put together some great material on this and other mental health issues:

  1. Howie’s interview on having a successful career with depression and anxiety
  2. Tim Lebon on how altruistic perfectionism is self-defeating
  3. Luisa on dealing with career rejection and imposter syndrome

That said, even if you have a healthy relationship with failure/rejection, feeling competent is really important for most people. If you’re feeling burnt out, I’d encourage you to explore more and focus on building aptitudes. When I felt AI research wasn’t for me, I explored research in other areas, community building, earning to give, and others. I also kept building my fundamental skills, like communication, analysis and organisation. I didn’t know where I would be applying these skills, but I knew that they’d be useful somewhere.
1
[comment deleted]
7mo
2
alex lawsen (previously alexrjl)
7mo
Hey, it's not a direct answer but various parts of my recent discussion with Luisa cover aspects of this concern (it's one that frequently came up in some form or other when I was advising), in particular, I'd recommend skimming the sections on 'trying to have an impact right now', 'needing to work on AI immediately', and 'ignoring conventional career wisdom'.

For what it's worth, I run an EA university group outside of the U.S. (at the University of Waterloo in Canada). I haven't observed any of the points you mentioned in my experience with the EA group:

  • We don't run intro to EA fellowships because we're a smaller group. We're not trying to convert more students to be 'EA'. We focus more on supporting whoever's interested in working on EA-relevant projects (ex: a cheap air purifier, a donations advisory site, a cybersecurity algorithm, etc.), whether they identify with the EA movement or not.
  • Since we're
... (read more)

...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else. 

I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.

Out of curiosity @LondonGal, have you received any followups from HLI in response to your critique? I understand you might not be at liberty to share all details, so feel free to respond as you feel appropriate.

4
LondonGal
9mo
Nope, I've not heard from any current HLI members regarding this in public or private.

Context: I work as a remote developer in a government department. 

Practices that help:

  • Show up at least 3 minutes early to every meeting. Change your clocks to run 3 minutes ahead if you can't discipline yourself to do it. Shows commitment.
    • On a related note, take personal time to reflect before a meeting. Think of questions you want to ask or what you want to achieve, even if you're not hosting the meeting and you just do it for 5 minutes. 
    • Try scheduling a calendar reminder with an intention before the meeting. Ex: Say back what others said
... (read more)

It takes courage to share such detailed stories of goals not going right! Good on you for doing so :-)

It seems that two kinds of improvements within EA might be helpful to reduce the probability of other folks having similar experiences. 

Proactively, we could adjust the incentives promoted (especially by high-visibility organisations like 80,000 Hours). Specifically, I think it would be helpful to:

  • Recommend that early-career folks try out university programs with internships/coops in the field they think they'd enjoy. This
... (read more)
3
zekesherman
1y
Thanks for the kind words Madhav, but I do disagree: I imagine that's already suggested somewhere in the career guides; in any case, it's exactly what I did. As I pivoted my goals in the final year of undergrad, I became a computer science research assistant and took courses like linear algebra and intro to machine learning, then did a data science bootcamp over the summer. I believed I knew from experience that these were tough but survivable experiences. I think most people would have error-corrected in the same situation; few people would be as stubborn/selfless as I was. My impression of public EA career advice is that it is mostly fine. At the time, I sometimes derided it for being too meek, and consciously held myself to a stricter standard than the vibe of 80k Hours. Had I read your rewrites, I would have ignored them. I believed in utilitarianism long before I read 80k Hours.

Thank you for your thoughtful questions! 

RE: "I guess the goal is to be able to run models on devices controlled by untrusted users, without allowing the user direct access to the weights?"

You're correct that these techniques are useful for preventing models running on untrusted devices from being used in unintended ways! However, I think of the goal a bit more broadly; the goal is to add another layer of defence behind a cybersecure API (or another trusted execution environment) to prevent a model from being stolen a... (read more)

Hi! 

As I mentioned in the post, I'd delete the database a month after posting for privacy reasons. My apologies for the inconvenience :/

1
Tristan Williams
1y
Ah, understood - sorry for missing that. I think you might want to consider deleting the post in that case, though.

This is certainly a useful resource for those who live in areas without the effective altruism groups around them! Thank you for sharing :-)

Could you please share more details on which parts of the curriculum would be inaccessible to recent graduates? From the outline of the book alone, it's hard to estimate the level of technical depth needed.

6
Jason Clinton
1y
Unfortunately, all of it. The discussion will be fast-moving and will talk about reifying the abstract ideas into concrete production systems and organization structure. It will be beyond the skill set of anyone who hasn't worked with real production systems and technical orgs for a few years.

I'd look forward to seeing you post the results of the in-depth survey on the forum :-) 

I'm not sure this is a good idea. 

  • It seems possible that the individual interventions you're linking to research on are not representative of every possible skill-development intervention.
  • Also, it seems possible that future interventions may integrate building both human and economic capital to enable recipients to make changes in their lives, i.e., skill-building + direct cash transfers.
  • Also, it's generally uncertain whether GiveDirectly will continue to be the most effective or endorsed donation recommendation. I say this given
... (read more)
8
NickLaing
1y
Thanks Madhav - you make some good points; I hadn't thought about it that way! There's even mixed evidence already that cash transfers + skills training might be just as good as cash alone, so your point has current evidence behind it, not only future evidence. I think the media world moves so fast, though, that I doubt GiveDirectly will damage future ideas through this campaign. Personally, being in the development world, the "teach a man to fish" mantra drives me crazy, so I'm broadly in support of it getting dismantled even if it does hold some truth: "Who cares if you give the fish, the fishing class, the rod or the boat - what matters is that it works." This is GiveDirectly crowdsourcing free advertising.

I'm surprised to see how the book giveaway is more expensive than the costs of actually placing the ads to get eyes on the sites! Why did you decide to give away a physical book? What do you think the cost-effectiveness of that is compared to ebooks or not having a giveaway?

Nice, thanks for your question!

One relevant thing here is that I'm not thinking about the book giveaway as just (or even primarily) an incentive to get people to view our site — I think most of the value is probably in getting people in our target audience to read the books we give out, because I think the books contain a lot of important ideas. I think I'd be potentially excited about doing this without the connection to 80k at all (though the connection to 80k/incentive to engage with us seems like an added bonus).

Re: physical books versus ebook:

  • We do of
... (read more)

If you're interested in supporting education, scholarships to next-generation education companies might be worth considering (example - disclaimer: I've gone through this particular company's program).

Regarding investments in environmental causes, more neglected causes are more valuable to invest in. For instance, supporting NOVEL carbon capture companies (i.e., not tree planting).

Given the high-tech industry in Canada, it might be relatively advantageous to support neglected research priorities. 

  • For instance, you might be ab
... (read more)

It would be helpful to hear more details (including sources) about the problem you've found:

  • What has the NSA publicly announced in its position on AGI? 
  • What has the external academic community or relevant nonprofits assessed their likely plans to be? 
  • Which decision-makers are involved in determining the NSA's policies on AGI development and/or safety?

Also, please add a more specific call to action describing:

  • The action you want to be taken
  • Which kinds of people are best suited to do this 
3
JonCefalu
1y
Wonderful questions Madhav, thank you. The primary issue is simply lack of knowledge and wisdom - the people who allocate military funding have never heard of AGI x-risk as a serious thing beyond the movies. The person laughing in the video I linked was the head of AI R&D for the entire Pentagon. I will endeavor to answer all of your questions when I write the next post on this topic. Thank you very much for the kind advice & interest.

"I'm not sure I buy the fourth point - while there will be some competition between plant-based and cell-based meat, they also both compete with the currently much larger traditional meat market, and I think there are some consumers who would eat plant-based but not cell-based and vice versa."

  • How confident are you in your reasoning here? 
  • What kind of empirical evidence do you think would disprove/prove this argument? 

The evidence I've seen (Source) suggests that consumers are largely confused about the difference between cell-based and lab-based ... (read more)

3
Brad West
1y
I do not! But thanks for thinking of me.

I'm curious, how do you think about the relative importance of promoting cell-based (cultivated) vs. plant-based meat? 

  • From an animal suffering perspective, they both displace animals that might suffer. 
  • From an environmental perspective, plant-based meat is currently much better. (Source)
  • Economically, one could argue that more competition will lead to more product choice, winning over more consumers. 
  • But one could also argue that the competition between plant-based and animal-based meats will keep traditional meats being consumed for l
... (read more)
1
Zoe Williams
1y
Interesting question, thanks for adding this! I don't have any background in animal welfare research or the plant/cell based meat area beyond reading & chatting with people, but popped some thoughts below regardless:

My leaning would be that having both is better than just one, to provide increased choice and options to move away from traditional meats. I'm not sure I buy the fourth point - while there will be some competition between plant-based and cell-based meat, they also both compete with the currently much larger traditional meat market, and I think there are some consumers who would eat plant-based but not cell-based and vice versa. Not only taste, look, feel, and cost are relevant but also the optics and cultural connotations of each, which are quite different.

In terms of proportion of promotion efforts to each, I'm really not sure. A strategy there should probably look at how developed each tech is (so more plant-based meat promotion earlier on), uptake rates and effect of promotion (and if there's a ceiling hit where we struggle to get further uptake in a population, suggesting a new option is needed for those remaining), populations promoted to and their unique concerns / likelihood to uptake one or the other, and any tipping points or opposition that needs to be countered in a timely way for something to remain viable in a location or to get past legislative hurdles.

(Also sorry for the late reply! I was on vacation last week)

I appreciate you formatting the post summary with brevity in mind :-) It makes it easy to quickly understand the main points, and I can see you put deliberate thought into formatting it as a table.

I'd be interested in hearing someone from Anthropic discuss the upsides or downsides of this arrangement. From an entirely personal standpoint, it seems odd that Anthropic gave up equity AND had restrictions on how the investment could be used. That said, I imagine there are MANY other details I'm not aware of, since I wasn't involved in the decision.

5
Erich_Grunewald
1y
My assumption is (and I'm definitely not sure about this) that restricting funding to compute is not very restrictive at all, given that (a) Anthropic probably does and will spend large sums of money on compute, likely more than this investment covers, and (b) the money they're currently spending on compute can easily be shifted to other areas now that it's freed up. (If Anthropic aren't currently using Google Cloud Platform, I guess it's more restrictive in that it forces Anthropic to migrate to another cloud service provider.) But yeah, I'd also be curious to hear an insider's view on this.

For anyone seeking more information on this, feel free to search for the key terms 'data poisoning' and 'Trojans.' The Center for AI Safety has somewhat accessible content and notes on this under lecture 13 here.

Key takeaway: "He preferred to be good, rather than to seem so."

Where can we get more information on projects done in the past fellowship?

1
Dušan D. Nešić (Dushan)
1y
The penultimate link shows the retrospective on the last year. Mostly, fellows are still working on publishable results, and without their permission we do not want to share specifics of what they worked on beyond what is in the retrospective. We are hoping in the long term to have a page on our website showing all the published works of our alumni that started during PIBBSS.
3
Victor Warlop
1y
Hey Ben, thank you for noticing this. The issue has been fixed now!

Appreciate you summarising these resources! Still helping people years later :-)

Update: lesson learned - read the fine print effectively

  • My donation wasn't matched. I didn't do enough due diligence to read a final clause on their website that said we had to email our donation receipt somewhere.
  • One 'little' mistake on my part is the difference between 50 people not stuck in poverty vs. 18 people not stuck in poverty. 
  • Doing good effectively = reading the fine print effectively.
2
Jason
1y
Worth emailing anyway, I say -- the instructions say "for the best chance" which is not a conclusive statement that the matching donor won't be flexible with a technical fault.

I'm interested in building a career around technological risks. Sometimes, I think about all the problems we're facing. From biosecurity risks to AI safety risks to cybersecurity risks to ... And it all feels so big and pressing and hopeless and whatnot. 

Clearly, me being existential about these issues wasn't helping anyone. So I've had to accept that I have to pick a smaller, very specific problem that my skillset could be useful in. Even if it's not solving everything, I won't solve anything if I don't specialise in that way. 

Maybe some spirit ... (read more)

Thank you for clarifying :-) I wasn't trying to be pedantic; I was just choosing between donating to StrongMinds with a 100% match or here.

Update: the Double Up Drive donation matching is no longer available. 

Donations to Animal Charity Evaluators, Helen Keller International, and StrongMinds are still being matched.

For Canadian donors, donations to GiveDirectly are being matched at a rate of at least 50%.

Sorry, does the 1.5x match mean we donate $X and 50% of that will be matched? Or 150% of our donation will be added to the amount we donated? 

3
Jendayi
1y
Hey Madhav! The 1.5X match means that for every dollar donated, the match will provide an additional $1.50. So if I donate $10, the match fund will provide an additional $15 for a total of $25 donated to GiveDirectly. Hope this helps!
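To make that arithmetic concrete, here is a minimal sketch in Python (the function and parameter names are illustrative, not from any real donation-matching API):

```python
def total_to_charity(donation: float, match_rate: float = 1.5) -> float:
    """Total the charity receives: your donation plus the match.

    A 1.5x match adds $1.50 for every $1.00 donated, so the charity
    ends up with 2.5x the original donation.
    """
    return donation * (1 + match_rate)

# Jendayi's example: a $10 donation triggers $15 in matching, $25 total.
assert total_to_charity(10) == 25.0
```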

@Tessa - thank you for introducing me to Dr. Millet's course in your reading list on biosecurity!

Good on you for taking on more work and trying to figure out how you can best contribute to the world :-) It might be easier for us to share opportunities with you if we know what cause areas are important to you and which skills/prior experience you have. Feel free to let us know!

1
Fabien Le Guillarm
2y
Hello! Somehow I missed that. Apologies! I have skills in: team building, remote work, startup building, software development, and a bit on mental health / men's groups. My important cause areas are mental health and food-related impact. Thanks! 🙏

Tell us more :-) There are lots of people on the forum who can help triage through them to find the most effective ones to work on :-)

Ex: @Tessa might have thoughts

4
Elika
2y
Ooh sorry! I meant to add more and not be so vague, and then I forgot I published this and didn't edit it. I'll update it over the next few days, hopefully.

Brief comment, but it is GREAT to see Kurzgesagt making more EA-aligned videos! I just watched their videos on how helping others lead prosperous lives is good for your own interest.

  • It's great to see EA content in other languages. When I watched the videos, they weren't yet released in English, though I'll comment a link to the English video later.
  • The simple explanations and cute visuals are quite a relief compared to complex/endless posts on the forum. I'd never heard of this line of reasoning on the forum and I'm pretty glad I got to learn it like
... (read more)

To elaborate on the point that I think Arjun is making, the general tip seems self-evidently good. It's not very valuable to state it, relative to the value of precise tips on HOW to get a mentor or how good this is relative to other good things (to figure out how much it should be prioritised compared to something else). 

3
Larks
2y
At a previous job, HR would tell all the new hires to try to find a mentor. However, what they did not mention was that going up to a random senior person and saying 'hello would you be my mentor' was seen as cringe and annoying by many such people!

Useful context: I'm 19. I stopped reading after the "Use your brainspace wisely" section.

Overall impression: boring as stated :D

More specific feedback: 

  • The tips seem very diverse (tips on relationships, mental health, physical environment, and learning skills were all under "Use your brainspace wisely"). It's unclear how they relate to each other, so it's confusing to read and to figure out where you can find what tip.
    • This could be addressed by having very clear headings. Ex: "Tips on Where You Live." Ex: "Tips on the Relationships You Develop." Ex: "T
... (read more)

I appreciate you taking the time to read and encourage :-)

My aim in this article wasn't to be technically precise. Instead, I was trying to be as simple as possible. 

If you'd like to let me know the technical errors, I can try to edit the post if: 

  1. The correction seems useful for a beginner trying to understand AI Safety. 
  2. I can find feasible ways to explain the technical issues simply.

Again, I agree with you regarding the reality that every civilisation has eventually collapsed. I personally also agree that it doesn't currently seem likely that our 'modern globalised' civilisation will avoid collapse, though I'm no expert on the matter.

I have no particular insight about how comparable the collapse of the Roman Empire is to the coming decades of human existence.

I agree that amidst all the existential threats to humankind, the content of this article is quite narrow. 

2
Phil Tanny
2y
Apologies, it's really not my intent to hijack your thread. I do hope that others will engage you on the subject you've outlined in your article. I agree I should probably write my own article about my own interests. I can't seem to help it. Every time I see an article about managing AI or genetic engineering etc., I feel compelled to point out that the attempt to manage such emerging technologies one by one by one is a game of whack-a-mole that we are destined to lose. Not only is the scale of powers involved in these technologies vast, but ever more of them, of ever greater power, will come online at an ever accelerating pace. What I hope to see more of are articles which address how we might learn to manage the machinery creating all these multiplying threats: the ever accelerating knowledge explosion. Ok, I'll leave it there, and get to work writing my own articles, which I hope you might challenge in turn.