
Quick takes

Potential Megaproject: 'The Cooperation Project' (or the like)

This is a very loose idea, based on observations like these:

* We have ongoing geopolitical tensions (e.g. China-US, China-Taiwan, Russia-Ukraine), with a lot of resources and attention spent on them.
* We face (increasing?) risks from emerging technology that potentially threaten everyone. Risk levels are difficult to estimate, but there seems to be an emerging consensus that we are on a reckless path, even from perspectives concerned purely with individual or national self-interest.

The project would essentially seek to make a clear case for broad cooperation toward avoiding widely agreed-upon bad outcomes from emerging technologies — outcomes that are in nobody's interest. The work could, among other things, consist in reaching out to key diplomats as well as doing high-visibility public outreach that emphasizes cooperation as key to addressing risks from emerging technologies.

Reasons it might be worth pursuing:

* The degree of cooperation between major powers, especially with respect to tech development, is plausibly a critical factor in how well the future will go. Even marginal improvements might be significant.
* A strong self-interested case can seemingly be made for increasing cooperation, but that case's relatively low salience, along with primitive status and pride psychology, may prevent it from being acted on.
* Even if the case is fairly compelling to people, other motivations might nevertheless feel more compelling and motivating; slight pushes in how salient certain considerations are, in the minds of both the public and leaders, could tip the scales in terms of which paths end up being pursued.
* The broader goal seems quite commonsensical and like something few people would outright oppose (though see the counter-considerations below).
* The work might act as a lever or catalyst of sorts: one can make compelling arguments regarding specific tec

I've now spoken to ~1,400 people as an advisor with 80,000 Hours, and if there's a quick thing I think is worth more people doing, it's a short reflection exercise about one's current situation. Below are some (clusters of) questions I often ask in an advising call to facilitate this. I'm often surprised by how much purchase one can get simply from this -- noticing one's own motivations, weighing one's personal needs against a yearning for impact, identifying blind spots in current plans that could be triaged and easily addressed, etc.

A long list of semi-useful questions I often ask in an advising call

1. Your context:
   1. What’s your current job like? (or, for the roles you’ve had in the last few years…)
      1. The role
      2. The tasks and activities
      3. Does it involve management?
      4. What skills do you use? Which ones are you learning?
      5. Is there something in your current job that you want to change, that you don’t like?
2. Default plan and tactics
   1. What is your default plan?
   2. How soon are you planning to move? How urgently do you need to get a job?
   3. Have you been applying? Getting interviews, offers? Which roles? Why those roles?
   4. Have you been networking? How? What is your current network?
   5. Have you been doing any learning, upskilling? How have you been finding it?
   6. How much time can you find to do things to make a job change? Have you considered e.g. a sabbatical or going down to a 3/4-day week?
   7. What are you feeling blocked/bottlenecked by?
3. What are your preferences and/or constraints?
   1. Money
   2. Location
   3. What kinds of tasks/skills would you want to use? (writing, speaking, project management, coding, math, your existing skills, etc.)
   4. What skills do you want to develop?
   5. Are you interested in leadership, management, or individual contribution?
   6. Do you want to shoot for impact? H

As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same. Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I have unique insight into what our group specifically needs, how to meet those needs effectively, and which actions are most conducive to genuine impact.

Of course, seeking funding from organizations like OpenPhil remains highly valuable—they've dedicated extensive thought to effective community building. Yet don't underestimate the power and efficiency of your intimate knowledge of your group's immediate requirements. Direct donations can streamline processes, enable quick responses to pressing needs, and ultimately enhance the impact of your local EA community.

👋 I have joined the modern world and am writing a Substack about research on ending factory farming 😃 Here's a post on a strong study about the effects of watching an especially upsetting documentary.

Would a safety-focused breakdown of the EU AI Act be useful to you?

The Future of Life Institute published a great high-level summary of the EU AI Act here: https://artificialintelligenceact.eu/high-level-summary/

What I’m proposing is a complementary, safety-oriented summary that extracts the parts of the Act most relevant to AI alignment researchers, interpretability work, and long-term governance thinkers. It would include:

* Provisions related to transparency, human oversight, and systemic risks
* Notes on how technical safety tools (e.g. interpretability, scalable oversight, evals) might interface with conformity assessments, or the compliance exemptions available for research work
* Commentary on loopholes or compliance dynamics that could shape industry behavior
* What the Act doesn't currently address from a frontier-risk or misalignment perspective

Target length: 3–5 pages, written for technical researchers and governance folks who want signal without wading through dense regulation.

If this sounds useful, I’d love to hear what you’d want to see included, or what use cases would make it most actionable. And if you think this is a bad idea, no worries. Just please don’t downvote me into oblivion; I only just got to decent karma :). Thanks in advance for the feedback!