I guess orgs need to be more careful about who they hire as forecasting/evals researchers in light of a recently announced startup.
Sometimes things happen, but three people at the same org...
This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is also valuable for people to be able to share information freely with those at such forecasting orgs without having to worry about them going off and doing something like this.
However, this only works if those less worried about AI risks who join such a collaboration don't use the knowledge they gain to cash in on the AI boom in an acceleratory way. Doing so undermines the very point of such a project, namely, to try to make AI go well. Doing so is incredibly damaging to trust within the community.
Now let's suppose you're an x-risk funder considering whether to fund their previous org. This org does really high-quality work, but the argument for them being net-positive is now significantly weaker. This is quite likely to make finding future funding harder for them.
This is less about attacking those three folks and more just noting that we need to strive to avoid situations where things like this happen in the first place. This requires us to be more careful in terms of who gets hired.
There have been some discussions on the EA Forum along the lines of: "why do we care about value alignment? Shouldn't we just hire whoever can best do the job?" My answer is that it's myopic to only consider what happens whilst they're working for you. Hiring someone or offering them an opportunity empowers them, so you need to consider whether they're someone who you want to empower[1].
Admittedly, this isn't quite the same as value alignment. Suppose someone were diligent, honest, wise and responsible. You might want to empower them even if their views were extremely different from yours. Stronger: even if their views were the opposite in many ways. But in the absence of this, value alignment matters.
I'd like to suggest a little bit more clarity here. The phrases you use refer to some knowledge that isn't explicitly stated here. "in light of a recently announced startup" and "three people at the same org" make sense to someone who already knows the context of what you are writing about, but it is confusing to a reader who doesn't have the same background knowledge that you do.
Once upon a time, some people were arguing that AI might kill everyone, and that EA resources should address that problem instead of fighting malaria. So OpenPhil poured millions of dollars into orgs such as Epoch AI (they received $9 million). Now three people from Epoch AI have created a startup to provide training data to help AI replace human workers. Some people are worried that this startup increases AI capabilities, and therefore increases the chance that AI will kill everyone.
I tend to agree; better to be explicit especially as the information is public knowledge anyway.
It refers to this: https://forum.effectivealtruism.org/posts/HqKnreqC3EFF9YcEs/
Also, it is worrying if the optimists easily find financial opportunities that depend on them not changing their minds. Even if they are honest and have the best of intentions, the disparity in returns to optimism is epistemically toxic.
I agree that we need to be careful about who we are empowering.
"Value alignment" is one of those terms which has different meanings to different people. For example, the top hit I got on Google for "effective altruism value alignment" was a ConcernedEAs post which may not reflect what you mean by the term. Without knowing exactly what you mean, I'd hazard a guess that some facets of value alignment are pretty relevant to mitigating this kind of risk, and other facets are not so important. Moreover, I think some of the key factors are less cognitive or philosophical than emotional or motivational (e.g., a strong attraction toward money will increase the risk of defecting, a lack of self-awareness increases the risk of motivated reasoning toward goals one has in a sense repressed).
So, I think it would be helpful for orgs to consider what elements of "value alignment" are of particular importance here, as well as what other risk or protective factors might exist outside of value alignment, and focus on those specific things.
If you only hire people who you believe are intellectually committed to short AGI timelines (and who won’t change their minds given exposure to new evidence and analysis) to work in AGI forecasting, how can you do good AGI forecasting?
One of the co-founders of Mechanize, who formerly worked at Epoch AI, says he thinks AGI is 30 to 40 years away. That was in this video from a few weeks ago on Epoch AI’s YouTube channel.
He and one of his co-founders at Mechanize were recently on Dwarkesh Patel's podcast (note: Dwarkesh Patel is an investor in Mechanize). I didn't watch all of it, but it seemed like they were both arguing for longer AGI timelines than Dwarkesh believes in.
I also disagree with the shortest AGI timelines and found it refreshing that within the bubble of people who are fixated on near-term AGI, at least a few people expressed a different view.
I think if you restrict who you hire to do AGI forecasting based on strong agreement with a predetermined set of views, such as short AGI timelines and views on AGI alignment and safety, then you will just produce forecasts that re-state the views you already decided were the correct ones while you were hiring.
I wasn't suggesting only hiring people who believe in short timelines. I believe that my original post adequately lays out my position, but if any points are ambiguous, feel free to request clarification.
I don’t know how Epoch AI can both "hire people with a diversity of viewpoints in order to counter bias" and ensure that your former employees won’t try to "cash in on the AI boom in an acceleratory way". These seem like incompatible goals.
I think Epoch has to either:
• only hire people who are committed to not accelerating AI capabilities, at some cost to diversity of viewpoints,
or
• accept that some of the people it hires and empowers may go on to cash in on the AI boom.
Is there a third option?
Presumably there are at least some people who have long timelines, but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean like at least 1 in 1000, not 1 in 1x10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)
I think you are pointing at a real tension though. But maybe try to see it a bit from the point of view of people who think X-risk is real enough and raised enough by acceleration that acceleration is bad. It's hardly going to escape their notice that projects at least somewhat framed as reducing X-risk often end up pushing capabilities forward. They don't have to be raging dogmatists to worry about this happening again, and it's reasonable for them to balance this risk against risks of echo chambers when hiring people or funding projects.
*I'm less sure that merely catastrophic biorisk from human misuse is low, sadly.
If this were a story, there'd be some kind of academy taking in humanity's top talent and skilling them up in alignment.
Most of the summer fellowships seem focused on finding talent that is immediately useful. And I can see how this is tempting given the vast numbers of experienced and talented folks seeking to enter the space. I'd even go so far as to suggest that the majority of our efforts should probably be focused on finding people who will be useful fairly quickly.
Nonetheless, it does seem as though there should be at least one program that aims to find the best talent (even if they aren't immediately useful) and which provides them with the freedom to explore and the intellectual environment in which to do so.
I wish I could articulate my intuition behind this more clearly, but the best I can say for now is this: continuing to scale existing fellowships would likely provide decreasing marginal returns, whereas such an academy wouldn't be subject to this because it would be providing a different kind of talent.
For the record, I see the new field of "economics of transformative AI" as overrated.
Economics has some useful frames, but it also tilts people towards being too "normy" on the impacts of AI and it doesn't have a very good track record on advanced AI so far.
I'd much rather see multidisciplinary programs/conferences/research projects, including economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI.
(I'd be more enthusiastic about building economics of transformative AI as a field if we were starting five years ago, but these things take time and it's pretty late in the game now, so I'm less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames).
Things in AI have been moving fast; most economists seem to have expected them to move more slowly. Sorry, I don't really want to get into more detail, as writing a proper response would take more time than I want to spend defending this quick take.
As an example, I expect political science and international relations to be better for looking at issues related to power distribution rather than economics (though the economic frame adds some value as well). Historical studies of coups seems pretty relevant as well.
When it comes to predicting future progress, I'd be much more interested in hearing the opinions of folks who combine knowledge of economics with knowledge of ML or computer hardware, rather than those who are solely economists. Forecasting seems like another relevant discipline, as are futures studies and the history of science.
I think "economics of transformative AI" only matters in the narrow slice of worlds (maybe 20% of my probability?) where AI is powerful enough to transform the economy, but not powerful enough to kill everyone or to create a post-scarcity utopia. So I think you're right.
It has some relevance to strategy as well, such as in terms of how fast we develop the tech and how broadly distributed we expect it to be. However, there's a limit to how much additional clarity we can expect to gain over a short time period.
EA needs more communications projects.
Unfortunately, the EA Communications Fellowship and the EA Blog prize shut down[1]. Any new project needs to be adapted to the new funding environment.
If someone wanted to start something in this vein, what I'd suggest would be something along the lines of AI Safety Camp: people would apply with a project to be project leads, and then folks could apply to join these projects. Projects would likely run over a few months, part-time and remote[2].
Something like this would be relatively cheap as it would be possible for someone to run this on a volunteer basis, but it might also make sense for there to be a paid organiser at a certain point.
I'm pretty bullish on having these kinds of debates. While EA is doing well at having an impact in the world, the Forum has started to feel intellectually stagnant in some ways. These debates provide a way to move the community forward intellectually, which is something I've felt has been missing for a while.
Let Manifest be Manifest.
Having a space that is intellectually edgy, but not edge-lord maxing, seems extremely valuable, especially given how controversial some EA ideas were early on (and how controversial wild animal welfare and AI welfare still are).
In fact, I'd go further and suggest that it would be great if they were to set up their own forum. This would allow us to nudge certain discussions into an adjacent, not-explicitly EA space instead of discussing it here.
Certain topics are a poor fit for the forum because they rate high on controversy and low-but-non-zero on relevance to EA. It's frustrating having these discussions on the forum, as they may turn some people off, but at the same time declaring them off-topic risks being intellectually stifling. Sometimes things turn out to be more important than you thought once you dive into the details. So I'd really love to see another, non-EA space end up being the first port of call for such discussions, with the hope that only the highest-quality and most relevant ideas would make it over to the EA Forum.
Although I have mixed feelings on the proposal, I'm voting insightful because I appreciate that you are looking toward an actual solution that at least most "sides" might be willing to live with. That seems more insightful than what the Forum's standard response soon ends up as: rehashing fairly well-worn talking points every time an issue like this comes up.
Considering how much skepticism there is in EA about forecasting being a high priority cause area anyway, this seems like an ok idea :)
In fact, I'd go further and suggest that it would be great if they were to set up their own forum.
Manifold already has a highly active discord, where they can discuss all the manifold-specific issues. This did not prevent the EA Forum from discussing the topic, and I doubt it would be much different if Manifold had a proper forum instead of a discord.
This is annoying because many of these discussions rate high on controversy but low on importance for EA.
It might seem low on importance for EA to you, but I suspect some people who are upset about Manifest inviting right-wing people do not consider it low-importance.
Oh, I wasn't referring to redirecting the discussions about Manifest onto a new forum. More discussions about pro-natalism or genetic engineering to improve welfare. To be clear, I was suggesting a forum associated with Manifest rather than one more narrowly associated with Manifold.
I'd love to see the EA forum add a section titled "Get Involved" or something similar.
There is the groups directory, but that's only one of many ways that folks can get more involved, from EAGx conferences, to Virtual Programs, to 80,000 Hours content/courses, to donating.
Thanks for the suggestion Chris! I'd be really excited for the Forum (or for EA.org) to have a nice page like that, and I think others at CEA agree. We did a quick experiment in the past by adding the "Take action" sidebar link that goes to the Opportunities to take action topic page, and the link got very few clicks. We try not to add clutter to the site without good reason so we removed that link for logged in users (it's still visible for logged out users since they're more likely to get value from it). Since then we've generally deprioritized it. I would like us to pick it back up at some point, though first we'd need to decide where it should live (EA.org or here) and what it should look like, design-wise.
For now, I recommend people make updates to the Opportunities to take action wiki text to help keep it up-to-date! I've done so myself a couple times but I think it would be better as a team effort. :)
Have the forum team considered running an online event to collaborate on improving wikis? I think wikis are a deeply underrated forum feature and a fantastic way for people who aren't new but aren't working in EA to directly contribute to the EA project.
I wrote a quick take a while ago about how it's probably too hard for people to edit wikis atm - I actually can't link to it but here are my quick takes: Gemma Paterson's Quick takes — EA Forum (effectivealtruism.org)
I'm glad that you like the wiki! ^^ I agree that it's a nice way for people in the community to contribute.
I believe no one on the team has focused on the wiki in a while, and I think before we invest time into it we should have a more specific vision for it. But I do like the idea of collaborative wiki editing events, so thanks for the nudge! I'll have a chat with @Toby Tremlett🔹 to see what he thinks. For reference, we do have a Wiki FAQ page, which is a good starting point for people who want to contribute.
About your specific suggestion, thank you for surfacing it and including detailed context — that's quite helpful. I agree that ideally people could contribute to the wiki with lower karma. I'll check if we can lower the minimum at least. Any more substantive changes (like making a "draft" change and getting it approved by someone else) would take more technical work, so I'm not sure when we would prioritize it.
(It looks like your link to a specific quick take did work, but if you think there's a bug then let me know!)
Interesting. I still think it could be valuable even with relatively few clicks. You might only need someone to click on it once for it to pay off.
Yeah I agree, it does feel like a thing that should exist, like there's some obvious value to it even though I got some evidence that there was low demand for it on the Forum. I think it would be faster to add to EA.org instead so perhaps we should just add a static page there.
I like that we have a list in the wiki, so that people in the EA community can help us keep the info up-to-date by editing it, but practically speaking people don't spend much time doing that.
I'll post some extracts from the commitments made at the Seoul Summit. I can't promise that this will be a particularly good summary, I was originally just writing this for myself, but maybe it's helpful until someone publishes something that's more polished:
Frontier AI Safety Commitments, AI Seoul Summit 2024
The major AI companies have agreed to Frontier AI Safety Commitments. In particular, they will publish a safety framework focused on severe risks: "internal and external red-teaming of frontier AI models and systems for severe and novel threats; to work toward information sharing; to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights; to incentivize third-party discovery and reporting of issues and vulnerabilities; to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated; to publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use; to prioritize research on societal risks posed by frontier AI models and systems; and to develop and deploy frontier AI models and systems to help address the world’s greatest challenges"
"Risk assessments should consider model capabilities and the context in which they are developed and deployed" - I'd argue that the context in which it is deployed should account take into account whether it is open or closed source/weights as open-source/weights can be subsequently modified.
"They should also be accompanied by an explanation of how thresholds were decided upon, and by specific examples of situations where the models or systems would pose intolerable risk." - always great to make policy concrete"
"In the extreme, organisations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds." - Very important that, when this is applied, the ability to iterate on open-source/weights models is taken into account
https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024
Seoul Declaration for safe, innovative and inclusive AI by participants attending the Leaders' Session
Signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America.
"We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations" - guess we should now go full-throttle and push for the creation of national AI Safety institutes
"We recognise the importance of interoperability between AI governance frameworks" - useful for arguing we should copy things that have been implemented overseas.
"We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments." - Important as Frontier AI needs to be treated as different from regular AI.
https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-declaration-for-safe-innovative-and-inclusive-ai-by-participants-attending-the-leaders-session-ai-seoul-summit-21-may-2024
Seoul Statement of Intent toward International Cooperation on AI Safety Science
Signed by the same countries.
"We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or developing guidance to advance AI safety for commercially and publicly available AI systems" - similar to what we listed above, but more specifically focused on AI Safety Institutes which is a great.
"We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to AI safety" - Really good! We don't just want AIS Institutes to run current evaluation techniques on a bunch of models, but to be actively contributing to the development of AI safety as a science.
"We articulate our shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety" - very important for them to share research among each other
https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-statement-of-intent-toward-international-cooperation-on-ai-safety-science-ai-seoul-summit-2024-annex
Seoul Ministerial Statement for advancing AI safety, innovation and inclusivity
Signed by: Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America, and the representative of the European Union
"It is imperative to guard against the full spectrum of AI risks, including risks posed by the deployment and use of current and frontier AI models or systems and those that may be designed, developed, deployed and used in future" - considering future risks is a very basic, but core principle
"Interpretability and explainability" - Happy to interpretability explicitly listed
"Identifying thresholds at which the risks posed by the design, development, deployment and use of frontier AI models or systems would be severe without appropriate mitigations" - important work, but could backfire if done poorly
"Criteria for assessing the risks posed by frontier AI models or systems may include consideration of capabilities, limitations and propensities, implemented safeguards, including robustness against malicious adversarial attacks and manipulation, foreseeable uses and misuses, deployment contexts, including the broader system into which an AI model may be integrated, reach, and other relevant risk factors." - sensible, we need to ensure that the risks of open-sourcing and open-weight models are considered in terms of the 'deployment context' and 'foreseeable uses and misuses'
"Assessing the risk posed by the design, development, deployment and use of frontier AI models or systems may involve defining and measuring model or system capabilities that could pose severe risks," - very pleased to see a focus beyond just deployment
"We further recognise that such severe risks could be posed by the potential model or system capability or propensity to evade human oversight, including through safeguard circumvention, manipulation and deception, or autonomous replication and adaptation conducted without explicit human approval or permission. We note the importance of gathering further empirical data with regard to the risks from frontier AI models or systems with highly advanced agentic capabilities, at the same time as we acknowledge the necessity of preventing the misuse or misalignment of such models or systems, including by working with organisations developing and deploying frontier AI to implement appropriate safeguards, such as the capacity for meaningful human oversight" - this is massive. There was a real risk that these issues were going to be ignored, but this is now seeming less likely.
"We affirm the unique role of AI safety institutes and other relevant institutions to enhance international cooperation on AI risk management and increase global understanding in the realm of AI safety and security." - "Unique role", this is even better!
"We acknowledge the need to advance the science of AI safety and gather more empirical data with regard to certain risks, at the same time as we recognise the need to translate our collective understanding into empirically grounded, proactive measures with regard to capabilities that could result in severe risks. We plan to collaborate with the private sector, civil society and academia, to identify thresholds at which the level of risk posed by the design, development, deployment and use of frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France" - even better than above b/c it commits to a specific action and timeline
https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024
I just created a new Discord server for AI safety reports generated using Deep Research or other such tools. Would be excited to see you join (p.s. OpenAI now provides users on the Plus plan with 10 Deep Research queries per month).
https://discord.gg/bSR2hRhA
There is a world that needs to be saved. Saving the world is a team sport. All we can do is to contribute our part of the puzzle, whatever that may be and no matter how small, and trust in our companions to handle the rest. There is honor in that, no matter how things turn out in the end.
What could principles-first EA look like?
Zachary Robinson recently stated that CEA would choose to emphasize a principles-first approach to EA. Here are my thoughts on the kinds of strategic decisions that naturally synergise with this high-level strategy:
Additional comments:
I'm not really focused on animal rights nor do I spend much time thinking about it, so take this comment with a grain of salt.
However, if I wanted to make the future go well for animals, I'd be offering free vegan meals in the Bay Area or running a Bay Area conference on how to ensure that the transition to advanced AI systems goes well for animals.
Reality check: sorry for being harsh, but you're not going to end factory farming before the transition to advanced AI technologies. At most a 1-2% chance of that happening. So the best thing to do is to ensure that this transition goes well for animals and not just humans.
Anyway, that concludes my hot-take.
There is an AI, Animals, & Digital Minds conference that's being planned in the Bay Area for earlyish 2025! Updates will be announced in the AI & Animals newsletter.
Maybe I'm missing something, but I think it's a negative sign that mirror bacteria seem to have pretty much not been discussed within the EA community until now (that said, what really matters is the percentage of biosecurity folks in the community who have heard of this issue).
To Community Build or Not
One underrated factor in whether to engage in community-building[1] is how likely you are to move to a hub.
I suspect that in most cases people can achieve more when they are part of a group, rather than when they are by themselves. Let's assume that your local community doesn't already provide what you need. Let's further assume that an online community isn't sufficient for your needs either:
Then you have two main options:
• If there's already a hub that provides the community that you need, then you could move there
• You could try to build up the local community
There are a lot of advantages to the former. It can be quicker than trying to build up a community yourself, and being in the hub will probably lead to you having more direct impact than you could have even if you managed to build up your local community quite a bit. So while either option could end up being more impactful, there are a lot of reasons why it might make sense for people who are willing to move to just focus on figuring out how to set themselves up in a hub as soon as possible.
However, there are some people who are just not going to move to a hub, because they're too rooted in their current location. My suspicion is that more of these people should be focusing on building up the community.
Since there are fewer opportunities outside of a hub, the opportunity cost is lower; but more importantly, someone who is planning to stay in the same location over the longer term is likely to capture more of the value from their own community-building efforts.
Obviously, this doesn't apply to everyone and there are definitely people who can have far more impact through direct work, even whilst outside of a hub, than through community building. I would just like to see more people who are planning to stay put pick up this option.
Here I'm using community-building in a broad sense.
If we run any more anonymous surveys, we should encourage people to pause and consider whether they are contributing productively or just venting. I'd still be in favour of sharing all the responses, but I have enough faith in my fellow EAs to believe that some would take this to heart.
Is anyone doing broad AI Safety outreach to techies in the Bay Area?
It seems very important to have a group doing this given how much opinions within Bay Area tech influence how AI is developed.
If SB 1047 doesn't pass, this ball being dropped may be partially to blame.
Maybe EA should try to find a compromise on the unpaid internship issue? For example, unpaid internships of up to a maximum of 2 days/week being considered acceptable within the community?
This would provide additional opportunities for people to skill up, whilst ensuring that these opportunities would still be broadly accessible.
(In countries where this is legally allowed)
You say "find a compromise" as if this is a big and contentious issue, but I... don't really see it coming up a lot? I know Kat Woods has recently posted elsewhere about how lots of unpaid internships are being suppressed because random bystanders on the internet object to them, but I just don't actually see that happening. I would imagine that often management capacity is more of a bottleneck than pay anyway?
I think I posted in one of the threads that I have no knowledge of what private evidence Nonlinear may have, but I just realised that I actually do. I don't think it's a big enough deal for me to go back and try to track down the actual comments and edit them, but I thought it was good practice to note this on short form nonetheless.
One of the vague ideas spinning around in my head is that maybe, in addition to EA (a fairly open, loosely coordinated, big-tent movement with several different cause areas), there would also be value in a more selective, tightly coordinated, narrow movement focused just on the long-term future. Interestingly, this would be an accurate description of some EA orgs, with the key difference being that these orgs tend to rely on paid staff rather than volunteers. I don't have a solid idea of how this would work, but just thought I'd put this out there...
Oh, I would've sworn that was already the case (with the understanding that, as you say, there is less volunteering involved, because with the "inner" movement being smaller, more selective, and with tighter/more personal relationships, there is much less friction in the movement of money, either in the form of employment contracts or grants).
I suspect that it could be impactful to study say a masters of AI or computer science even if you don't really need it. University provides one of the best opportunities to meet and deeply connect with people in a particular field and I'd be surprised if you couldn't persuade at least a couple of people of the importance of AI safety without really trying. On the other hand, if you went in with the intention of networking as much as possible, I think you could have much more success.
Someone needs to be doing mass outreach about AI Safety to techies in the Bay Area.
I'm generally more of a fan of niche outreach over mass outreach, but Bay Area tech culture influences how AI is developed. If SB 1047 is defeated, I wouldn't be surprised if the lack of such outreach ended up being a decisive factor.
There's now enough prominent supporters of AI Safety and AI is hot enough that public lectures or debates could draw a big crowd. Even though a lot of people have been exposed to these ideas before, there's something about in-person events that make ideas seem real.