Madhav Malhotra

Research Fellow @ Happier Lives Institute
Pursuing an undergraduate degree
Working (0-5 years experience)

Bio


Is helpful/friendly :-) Loves to learn. Wants to solve neglected problems. See website for current progress.

How others can help me

I'm very interested in talking to biosecurity experts about neglected issues with: microneedle array patches, self-spreading animal/human vaccines, paper-based (or other cheap) microfluidic diagnostics, and/or massively-scalable medical countermeasure production via genetic engineering.

Also, interested in talking to experts on early childhood education and/or positive education!

How I can help others

Reach out if you have questions about: 

I'll respond fastest on LinkedIn :-)

Comments

@Tessa - thank you for introducing me to Dr. Millet's course in your reading list on biosecurity!

Good on you for taking on more work and trying to figure out how you can best contribute to the world :-) It might be easier for us to share opportunities with you if we know what cause areas are important to you and which skills/prior experience you have. Feel free to let us know!

Tell us more :-) There are lots of people on the forum who can help triage them to find the most effective ones to work on :-)

Ex: @Tessa might have thoughts

Brief comment, but it is GREAT to see Kurzgesagt making more EA-aligned videos! I just watched their videos on how helping others lead prosperous lives is good for your own self-interest.

  • It's great to see EA content in other languages. When I watched the videos, they weren't yet released in English though I'll comment a link to the English video later.
  • The simple explanations and cute visuals are quite a relief compared to complex/endless posts on the forum. I'd never heard of this line of reasoning on the forum and I'm pretty glad I got to learn it like this first :-)
  • My father is blaring state-sponsored, war-filled news in the background, and I really appreciated a more positive vision of the future from Kurzgesagt's videos for once :D

To elaborate on the point that I think Arjun is making: the general tip seems self-evidently good, so it's not very valuable to state it on its own. It would be more valuable to give precise tips on HOW to get a mentor, or to explain how valuable this is relative to other good things (so readers can figure out how much to prioritise it compared to something else).

Useful context: I'm 19. I stopped reading after the "Use your brainspace wisely" section.

Overall impression: boring as stated :D

More specific feedback: 

  • The tips seem very diverse (tips on relationships, mental health, physical environment, and learning skills were all under "Use your brainspace wisely"). It's unclear how they relate to each other, so it's confusing to read and hard to figure out where to find which tip.
    • This could be addressed by having very clear headings. Ex: "Tips on Where You Live." Ex: "Tips on the Relationships You Develop." Ex: "Tips on Skills to Learn."
  • Tips don't seem valuable without stories/examples. This is especially true for a young person who doesn't have experience related to each tip. Ex: If you say "Get a mentor" - that goes in one ear and out the other. A more helpful way to say that might be: "Get a mentor. When I was working on a startup to do X, my mentor Y helped me figure out that doing Z was better. I was down to my last thousand dollars, and changing course helped me save the company."
  • I like when there were links to specific actionables. Ex: You can read this post if you're having mental health troubles, that post if you're looking for friend advice, etc. I'd love to see these links wherever you're aware of resources :-)
  • I don't know why you're telling me these things. That is to say, the intention seems unclear. It's worth putting some kind of statement about the purpose of each category of tips under the headings. Ex: Before a heading on "Mental Health Tips," you might say "Young people are the most vulnerable to mental health problems. If we learn to work on these problems early, it makes them a lot less severe later in life. Here are some helpful actions you could take if you're experiencing mental health issues:"

I hope this feedback is constructive enough to give practical ideas on how to improve this post. Please feel free to let me know if something seems unclear. I'll do my best to give a timely response :-)

I appreciate you taking the time to read and encourage :-)

My aim in this article wasn't to be technically precise. Instead, I was trying to be as simple as possible. 

If you'd like to let me know the technical errors, I can try to edit the post if: 

  1. The correction seems useful for a beginner trying to understand AI Safety. 
  2. I can find feasible ways to explain the technical issues simply.

Again, I agree with you regarding the reality that every civilisation has eventually collapsed. I personally also agree that our 'modern globalised' civilisation doesn't currently seem likely to be an exception, though I'm no expert on the matter.

I have no particular insight about how comparable the collapse of the Roman Empire is to the coming decades of human existence.

I agree that amidst all the existential threats to humankind, the content of this article is quite narrow. 

I agree with what you've said about how AI safety principles could give us a false sense of security. Risk compensation theory has shown that we become less cautious with several technologies when we think we've added more safety mechanisms.

I also agree with what you've said about how it's likely that we'll continue to develop more and more technological capabilities, even if this is dangerous. That seems pretty reasonable given the complex economic system that funds these technologies. 

That said, I don't agree with the dystopian/doomsday connotation of some of your words: "Given that we've proven incapable of understanding this in the seventy years since Hiroshima, it's not likely we will learn it" or "Human beings are not capable of successfully managing ever more power at an ever accelerating rate without limit forever. Here's why. We are not gods."

In particular, I don't believe that communicating in such a tone is very useful, compared to more specific analysis of (for example) the safety benefits vs. the risk-compensation costs of particular AI safety techniques.
