Just to play devil's advocate (without harmful intentions :-), what are the largest limitations or disclaimers that we should keep in mind regarding your results or methods?
Sorry if I missed this in your post, but how many policies did you analyse that were passed via referendum vs. by legislation? How many at the state level vs. federal US vs. international?
@trevor1 Thank you for the detailed response!
RE: Crossposting to LessWrong
RE: Personal Cybersecurity and IoT
Here's a summary of the report from Claude-1 if someone's looking for an 'abstract':
...There are several common misconceptions about biological weapons that contribute to underestimating the threat they pose. These include seeing them as strategically irrational, not tactically useful, and too risky for countries to pursue.
In reality, biological weapons have served strategic goals for countries in the past like deterrence and intimidation. Their use could also provide tactical advantages in conflicts.
Countries have historically taken on substantial risks in p
"There are many other things that could have been done to prevent Russia’s unprovoked, illegal attack on Ukraine. Ukraine keeping nuclear weapons is not one of them."
Context: I'm hoping to learn lessons in nuclear security that are transferable to AI safety and biosecurity.
Question: Would you have any case studies or advice to share on how regulatory capture and lobbying were mitigated in US nuclear security regulations and enforcement?
Are there any misconceptions, stereotypes, or tropes that you commonly see in academic literature around nuclear security or biosecurity that you could correct given your perspective inside government?
Could you share the top 3 constraints and benefits you had in improving global nuclear security while you were working for the US DoD compared to now, when you're working as an academic?
Context: I'm hoping to find lessons from nuclear security that are transferable to the security of bioweapons and transformative AI.
Question: Are there specific reports you could recommend on preventing these nuclear security risks:
Any updates on how the event went? :-) Any cause priorities or research questions identified to mitigate existential cybersecurity risks?
A lot of people have gotten the message: "Direct your career towards AI Safety!" from EA. Yet there seem to be way too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others' comments on the forum and having personally applied to 5+ fellowships where there were 5-30x more applicants than spots.)
What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? How is 80K hours working to better (though perhaps not entirely) balance the supply and demand for AI safety mentorship/jobs?
It would be awesome if there were more mentorship/employment opportunities in AI Safety! Agree this is a frustrating bottleneck. Would love to see more senior people enter this space and open up new opportunities. Definitely the mentorship bottleneck makes it less valuable to try to enter technical AI safety on the margin, although we still think it's often a good move to try, if you have the right personal fit. I'd also add this bottleneck is way lower if you: 1. enter via more traditional academic or software engineer routes rather than via 'EA fellowshi...
For what it's worth, I run an EA university group outside the U.S. (at the University of Waterloo in Canada). I haven't observed any of the points you mentioned in my experience with the EA group:
...we're not hosting any discussions where a group organiser could convince people to work on AI safety over all else.
I feel it is important to mention that this isn't supposed to happen during introductory fellowship discussions. CEA and other group organizers have compiled recommendations for facilitators (here is one, for example), and all the ones I have read quite clearly state that the role of the facilitator is to help guide the conversation, not overly opine or convince participants to believe in x over y.
Out of curiosity @LondonGal, have you received any followups from HLI in response to your critique? I understand you might not be at liberty to share all details, so feel free to respond as you feel appropriate.
Context: I work as a remote developer in a government department.
Practices that help:
It takes courage to share such detailed stories of goals not going right! Good on you for having the courage to do so :-)
It seems that two kinds of improvements within EA might be helpful to reduce the probability of other folks having similar experiences.
Proactively, we could adjust the incentives promoted (especially by high-visibility organisations like 80K hours). Specifically, I think it would be helpful to:
Thank you for your thoughtful questions!
RE: "I guess the goal is to be able to run models on devices controlled by untrusted users, without allowing the user direct access to the weights?"
You're correct in understanding that these techniques are useful for preventing models from being used in unintended ways where models are running on untrusted devices! However, I think of the goal a bit more broadly; the goal is to add another layer of defence behind a cybersecure API (or another trusted execution environment) to prevent a model from being stolen a...
Hi!
As I mentioned in the post, I planned to delete the database a month after posting it, for privacy reasons. My apologies for the inconvenience :/
This is certainly a useful resource for those who live in areas without the effective altruism groups around them! Thank you for sharing :-)
Could you please share more details on which parts of the curriculum would be inaccessible to recent graduates? From the outline of the book alone, it's hard to estimate the level of technical depth needed.
I'm not sure this is a good idea.
I'm surprised to see that the book giveaway costs more than actually placing the ads that get eyes on the site! Why did you decide to give away a physical book? What do you think its cost-effectiveness is compared to ebooks, or to having no giveaway at all?
Nice, thanks for your question!
One relevant thing here is that I'm not thinking about the book giveaway as just (or even primarily) an incentive to get people to view our site — I think most of the value is probably in getting people in our target audience to read the books we give out, because I think the books contain a lot of important ideas. I think I'd be potentially excited about doing this without the connection to 80k at all (though the connection to 80k/incentive to engage with us seems like an added bonus).
Re: physical books versus ebook:
If you're interested in supporting education, scholarships to next generation education companies might be worth supporting (example - disclaimer, I've gone through the program of this particular company).
Regarding investments in environmental causes, more neglected causes are more valuable to invest in. For instance, supporting NOVEL carbon capture companies (i.e., not tree planting).
Given the high-tech industry in Canada, it might be relatively advantageous to support neglected research priorities.
It would be helpful to hear more details (including sources) about the problem you've found:
Also, please add a more specific call to action describing:
"I'm not sure I buy the fourth point - while there will be some competition between plant-based and cell-based meat, they also both compete with the currently much larger traditional meat market, and I think there are some consumers who would eat plant-based but not cell-based and vice versa."
The evidence I've seen (Source) suggests that consumers are largely confused about the difference between cell-based and plant-based ...
I'm curious, how do you think about the relative importance of promoting cell-based (cultivated) vs. plant-based meat?
I appreciate you formatting the post summary with brevity in mind :-) It makes it easy to quickly understand the main points, and I can see you put deliberate thought into formatting it as a table.
I'd be interested in hearing someone from Anthropic discuss the upsides or downsides of this arrangement. From an entirely personal standpoint, it seems odd that Anthropic gave up equity AND accepted restrictions on how the investment could be used. That said, I imagine there are MANY other details I'm not aware of, since I wasn't involved in the decision.
Update: lesson learned - read the fine print effectively
I'm interested in building a career around technological risks. Sometimes, I think about all the problems we're facing. From biosecurity risks to AI safety risks to cybersecurity risks to ... And it all feels so big and pressing and hopeless and whatnot.
Clearly, my despairing over these issues wasn't helping anyone. So I've had to accept that I have to pick a smaller, very specific problem where my skillset could be useful. Even if that's not solving everything, I won't solve anything if I don't specialise in that way.
Maybe some spirit ...
Thank you for clarifying :-) I wasn't trying to be pedantic, I was just choosing between donating to StrongMinds with a 100% match or here.
Update: the Double Up Drive donation matching is no longer available.
Donations to Animal Charity Evaluators, Helen Keller International, and StrongMinds are still being matched.
For Canadian donors, donations to GiveDirectly are being matched at a rate of at least 50%.
Sorry, does the 1.5x match mean we donate $X and 50% of that will be matched? Or 150% of our donation will be added to the amount we donated?
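For concreteness, here's a tiny sketch of the two readings I have in mind (the $100 donation figure and function names are purely illustrative, not anything from the drive's actual terms):

```python
# Two possible readings of a "1.5x match" (illustrative assumptions only).

def total_reading_a(donation):
    """Reading A: 50% of the donation is added on top (1.5x total)."""
    return donation + 0.5 * donation

def total_reading_b(donation):
    """Reading B: 150% of the donation is added on top (2.5x total)."""
    return donation + 1.5 * donation

donation = 100
print(total_reading_a(donation))  # 150.0 reaches the charity under reading A
print(total_reading_b(donation))  # 250.0 reaches the charity under reading B
```

Which of these is correct makes a big difference to the effective multiplier, hence the question.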
@Tessa - thank you for introducing me to Dr. Millet's course in your reading list on biosecurity!
Good on you for taking on more work and trying to figure out how you can best contribute to the world :-) It might be easier for us to share opportunities with you if we know what cause areas are important to you and which skills/prior experience you have. Feel free to let us know!
Tell us more :-) There's lots of people on the forum that can help triage through them to find the most effective ones to work on :-)
Ex: @Tessa might have thoughts
Brief comment, but it is GREAT to see Kurzgesagt making more EA-aligned videos! I just watched their videos on how helping others lead prosperous lives is in your own interest.
To elaborate on the point I think Arjun is making: the general tip seems self-evidently good, so stating it isn't very valuable compared to precise tips on HOW to get a mentor, or on how valuable mentorship is relative to other good options (so readers can figure out how much to prioritise it).
Useful context: I'm 19. I stopped reading after the "Use your brainspace wisely."
Overall impression: boring as stated :D
More specific feedback:
My aim in this article wasn't to be technically precise. Instead, I was trying to be as simple as possible.
If you'd like to let me know the technical errors, I can try to edit the post if:
Again, I agree with you that every civilisation has eventually collapsed. I also personally agree that it currently seems unlikely that our 'modern globalised' civilisation will avoid the same fate, though I'm no expert on the matter.
I have no particular insight about how comparable the collapse of the Roman Empire is to the coming decades of human existence.
I agree that amidst all the existential threats to humankind, the content of this article is quite narrow.
The UX has improved so much since the 2022 version of this :-) It feels concise, and the scrolling to each new graph makes it engaging to learn each new thing. Kudos to whoever designed it this way!