All of MichaelA's Comments + Replies

Hi Richard, quick reactions without having much context:

  • If you mean this is all one company, this sounds like putting too many eggs in one basket, and insufficiently exploring. 
    • I think it's generally good to apply to many different types of roles and organizations
    • Sometimes it makes sense to focus mostly on one role type or one org. But probably not entirely. And not once one has already gotten some evidence that that's not the right fit. (Receiving a few rejections isn't much negative info, but if it's >5 for one particular org or type of
... (read more)
1
Richard_Leyba_Tejada
1mo
One company. You are right about too many eggs in one basket. I'm expanding my search to more companies and focusing on operations roles. I learned recently that my resume is too generic, not targeted enough to the roles, and needs quantifiable accomplishments. I'm updating... Thank you.

The AI Safety Fundamentals opportunities board, filtered for "funding" as the opportunity type, is probably also useful. 

Oh wow, thanks for flagging that, fixed! Amazing that a whole extra word in the title itself survived a whole year, and survived me copy-pasting the title in various other places too 😬

Thanks for making this!

What do the asterisks before a given resource mean? (E.g. before "Act of Congress: How America’s Essential Institution Works, and How It Doesn’t".) Maybe they mean you're especially strongly recommending that? 

AI Safety Support have a list of funding opportunities. I'm pretty sure all of them are already in this post + comments section, but it's plausible that'll change in future. 

Yeah, the "About sharing information from this report" section attempts to explain this. Also, for what it's worth, I approved all access requests, generally within 24 hours.

That said, FYI I've now switched to the folder being viewable by anyone with the link, rather than requiring requesting access, though we still have the policies in "About sharing information from this report". (This switch was partly because my sense of the risks vs benefits has changed, and partly because we apparently hit the max number of people who can be individually shared on a folder.)

AI Safety Impact Markets

Description provided to me by one of the organizers: 

This is a public platform for AI safety projects where funders can find you. You shop around for donations from donors that already have a high donor score on the platform, and their donations will signal-boost your project so that more donors and funders will see it. 

See also An Overview of the AI Safety Funding Situation for indications of some additional non-EA funding opportunities relevant to AI safety (e.g. for people doing PhDs or further academic work). 

FYI, if any readers want just a list of funding opportunities and to see some that aren't in here, they could check out List of EA funding opportunities.

(But note that that includes some things not relevant to AI safety, and excludes some funding sources from outside the EA community.)

$20 Million in NSF Grants for Safety Research

After a year of negotiation, the NSF has announced a $20 million request for proposals for empirical AI safety research.

Here is the detailed program description.

The request for proposals is broad, as is common for NSF RfPs. Many safety avenues, such as transparency and anomaly detection, are in scope.

Announcing Manifund Regrants

Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.

Yeah, this seems to me like an important question. I see it as one subquestion of the broader, seemingly important, and seemingly neglected questions "What fraction of importance-adjusted AI safety and governance work will be done or heavily boosted by AIs? What's needed to enable that? What are the implications of that?"

I previously had a discussion focused on another subquestion of that, which is what the implications are for government funding programs in particular. I wrote notes from that conversation and will copy them below. (Some of this is also re... (read more)

Palisade Research

"At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems."

Jeffrey Ladish is the Executive Director.

Admond

"Admond is an independent Danish think tank that works to promote the safe and beneficial development of artificial intelligence."

"Artificial intelligence is going to change Denmark. Our mission is to ensure that this change happens safely and for the benefit of our democracy."

Senter for Langsiktig Politikk

"A politically independent organisation aimed at creating a better and safer future"

A think tank based in Norway.

Tentative suggestion: Maybe try to find a way to include info about how much karma the post has near the start of the episode description, in the podcast feed?

Reasoning:

  • This could help in deciding what to listen to, at least for the "all audio" feed. (E.g. I definitely don't have time for even just all AI-related episodes in there.) 
  • It could also lead to herd-like behavior or to ignoring good content that didn't get lots of karma right away. But I think that that is outweighed by the above benefit.
  • OTOH this may just be infeasible to do in a non-misleading wa
... (read more)
1
Sharang Phadke
11mo
Thanks Michael, karma and author name do seem reasonable to add if we can easily keep episodes up to date from a technical perspective. Will put this on our list and work out how to prioritize it.

Thanks! This seems valuable.

One suggestion: Could the episode titles, or at least the start of the descriptions, say who the author is? 

Reasoning:

  • I think that's often useful context for the post, and also useful info for deciding whether to read it (esp. for the feed where the bar is "just" >30 karma). 
  • I guess there are some upsides to nudging people to decide just based on topic or the start of the episode rather than based on the author's identity. But I think that's outweighed by the above points.
2
peterhartree
10mo
Thanks Michael! This was a strange oversight on our part—now fixed.

Confido Institute

Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.

We design tools, workshops and materials to support this mission. This is the first in

... (read more)

Epistea

We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.

Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biologica

... (read more)

This seems like a useful topic to raise. Here's a pretty quickly written & unsatisfactory little comment: 

  • I agree that there's room to expand and improve the pipeline to valuable work in AI strategy/governance/policy. 
  • I spend a decent amount of time on that (e.g. via co-leading RP's AI Governance & Strategy team, some grantmaking with the EA Infrastructure Fund, advising some talent pipeline projects, and giving lots of career advice).
  • If a reader thinks they could benefit from me pointing you to some links or people to talk to, or via us having
... (read more)

One specific thing I'll mention in case it's relevant to some people looking at this post: The AI Governance & Strategy team at Rethink Priorities (which I co-lead) is hiring for a Compute Governance Researcher or Research Assistant. The first application stage takes 1hr, and the deadline is June 11. @readers: Please consider applying and/or sharing the role! 

We're hoping to open additional roles sometime around September. One way to be sure you'd be alerted if and when we do is filling in our registration of interest form.  

Nonlinear Support Fund: Productivity grants for people working in AI safety

Get up to $10,000 a year for therapy, coaching, consulting, tutoring, education, or childcare

[...]

You automatically qualify for up to $10,000 a year if:

  • You work full time on something helping with AI safety
    • Technical research
    • Governance
    • Graduate studies
    • Meta (>30% of beneficiaries must work in AI safety)
  • You or your organization received >$40,000 of funding to do the above work from any one of these funders in the last 365 days
... (read more)

SaferAI

SaferAI is developing the technology that will allow auditing and mitigating potential harms from general-purpose AI systems such as large language models.

EffiSciences

EffiSciences is a collective of students founded at the Écoles Normales Supérieures (ENS), acting for research more engaged with the problems of our world. [translated from French]

See also this detailed breakdown of potential funding options for EA (community-building-type) groups specifically.

Apart Research

A*PART is an independent ML safety research and research facilitation organization working for a future with a benevolent relationship to AI.

We run AISI, the Alignment Hackathons, and an AI safety research update series.

Also the European Network for AI Safety (ENAIS)

TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!
 

...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!

The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we're able to hire in most countries.

People with a wide range of backgrounds could turn out to be the best fit for the role. As such, if you'... (read more)

Riesgos Catastróficos Globales

Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world. 

There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos

... (read more)

Epoch

We’re a team of researchers investigating and forecasting the development of advanced AI.

International Center for Future Generations

The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.

As of today, their website lists their priorities as:

  • Climate crisis
  • Technology [including AI] and democracy
  • Biosecurity

I appreciate you sharing this additional info and reflections, Julia. 

I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have war... (read more)

Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)

These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.

Labour for the Long Term

Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.

Our vision
A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK can play a larger role in areas such as in reducing international tensions and in becomin

... (read more)

Policy Foundry

an Australian-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]

The Collective Intelligence Project

We are an incubator for new governance models for transformative technology.

Our goal: To overcome the transformative technology trilemma.

Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.

Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.

Collective fl

... (read more)

Just remembered that Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration was written and published after I initially drafted this, so Will's and my post doesn't draw on or reference it, but it's of course relevant too.

Glad to hear that!

Oh also, just noticed I forgot to add info on how to donate, in case you or others are interested - that info can be found at https://rethinkpriorities.org/donate 

An in-our-view interesting tangential point: It might decently often be the case that a technological development initially increases risk but later increases risk by a smaller margin or even reduces risk overall.

  • One reason this can happen is that developments may be especially risky in the period before states or other actors have had time to adjust their strategies, doctrine, procedures, etc. in light of the development.
  • Another possible reason
... (read more)

Rethink Priorities' AI Governance & Strategy team (which I co-lead) has room for more funding. There's some info about our work and the work of RP's other x-risk-focused team* here and elsewhere in that post. One piece of public work by us so far is Understanding the diffusion of large language models: summary. We also have a lot of work that's unfortunately not public, either because it's still in progress or e.g. due to information hazards. I could share some more info via a DM if you want.

We also have yet to release a thorough public overview of the... (read more)

4
vincentweisser
1y
Big fan of RT, thanks for sharing!

Thanks - I only read this linkpost and Haydn's comment quoting your summary, not the linked post as a whole, but this seems to me like probably useful work.

One nitpick: 

It seems likely to me that the US is currently much more likely to create transformative AI before China, especially under short(ish) timelines (next 5-15 years) - 70%.

I feel like it'd be more useful/clearer to say "It seems x% likely that the US will create transformative AI before China, and y% likely if TAI is developed in short(ish) timelines (next 5-15 years)". Because:

  • At the mome
... (read more)
4
JulianHazell
1y
Yeah, fair point. When I wrote this, I roughly followed this process:
  • Write article
  • Summarize overall takes in bullet points
  • Add some probabilities to show roughly how certain I am of those bullet points, where this process was something like “okay I’ll re-read this and see how confident I am that each bullet is true”
I think it would’ve been more informative if I wrote the bullet points with an explicit aim to add probabilities to them, rather than writing them and thinking after “ah yeah, I should more clearly express my certainty with these”.

Thanks, this seems right to me.

Are the survey results shareable yet? Do you have a sense of when they will be? 

4
Sam Clarke
1y
Finally posted
6
Sam Clarke
1y
Will get them written up this month—sorry for the delay!

Also Cavendish Labs:

Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.

We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.

