https://www.openphilanthropy.org/focus/global-aid-policy/
“Build right-of-center support for aid, such as Civita’s work to create and discuss development policy recommendations with conservative Norwegian lawmakers.”
What you linked to is a Norwegian think tank.
Yes, "right of center" appears in the text of the article you linked, but my commentary was of course about US politics, and a Norwegian think tank doesn't interface with that. What counts as "right of center" in Norway is completely different from what counts as "right of center" in the US.
Commenting on the broader topic brought up by the top-level comment: I sent the spreadsheet of all Open Philanthropy grants in 2024 to o1-preview and asked the following question:
...Here is a spreadsheet of all of Open P
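(For reference, a minimal sketch of how one could reproduce this with the OpenAI Python SDK; the file name is a placeholder, and the question elided above is not filled in:)

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    with open("openphil_grants_2024.csv") as f:  # placeholder file name
        grants_csv = f.read()

    response = client.chat.completions.create(
        model="o1-preview",
        messages=[{
            "role": "user",
            # The actual question is elided above; "[question]" is a placeholder.
            "content": "Here is a spreadsheet of all of Open Philanthropy's "
                       "grants in 2024:\n\n" + grants_csv + "\n\n[question]",
        }],
    )
    print(response.choices[0].message.content)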
I love seeing posts from people making tangible progress towards preventing catastrophes—it's very encouraging!
I know nothing about this area, so excuse me if my question doesn't make sense or was already addressed in your post. I'm curious what the returns are on spending more money on sequencing, e.g. running the machine more than once a week or running it on more samples. If we were spending $10M a year instead of $1.5M on sequencing, how much less than 0.2% of people would have to be infected before an alert was raised?
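(A naive back-of-envelope, under an assumption that is mine rather than the post's: that the detectable prevalence scales inversely with sequencing spend.)

    # Naive scaling: if raising an alert requires some fixed number of
    # pathogen reads, and read depth scales linearly with spend, then the
    # detectable cumulative incidence scales as 1/spend. Both assumptions
    # are mine; returns presumably diminish in practice.
    current_spend = 1.5e6      # $/year on sequencing (from the post)
    current_threshold = 0.002  # 0.2% cumulative incidence at alert (from the post)
    new_spend = 10e6
    new_threshold = current_threshold * current_spend / new_spend
    print(f"{new_threshold:.2%}")  # 0.03%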
Some other questions:
I'd love to hear his thoughts on defensive measures for "fuzzier" threats from advanced AI, e.g. manipulation, persuasion, "distortion of epistemics", etc. Since it seems difficult to delineate when these sorts of harms are occurring (as opposed to benign forms of advertising/rhetoric/expression), it seems hard to construct defenses.
A related concept is mechanisms for collective epistemics, like prediction markets or community notes, which Vitalik praises here. But the harms from manipulation are broader, and could route through "superstimuli", addictiv...
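(Tangentially, a minimal sketch of one such mechanism, Hanson's logarithmic market scoring rule, the pricing rule behind many prediction markets; the comment above doesn't name any particular mechanism:)

    import math

    # LMSR: a market maker prices outcomes via a cost function, so the
    # instantaneous prices form a probability distribution that aggregates
    # traders' beliefs.
    def cost(q, b=100.0):
        # q: outstanding shares per outcome; b: liquidity parameter
        return b * math.log(sum(math.exp(qi / b) for qi in q))

    def price(q, i, b=100.0):
        # Instantaneous price of outcome i, interpretable as a probability.
        total = sum(math.exp(qi / b) for qi in q)
        return math.exp(q[i] / b) / total

    q = [0.0, 0.0]                     # two-outcome market, no trades yet
    pay = cost([50.0, 0.0]) - cost(q)  # cost of buying 50 shares of outcome 0
    print(price([50.0, 0.0], 0), pay)  # price of outcome 0 rises above 0.5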
Disclaimer: I joined OP two weeks ago as a Program Associate on the Technical AI Safety team. I'm leaving some comments describing questions whose answers I wanted when assessing whether to take the job (which, obviously, I ended up doing).
What sorts of personal/career development does the PA role provide? What are the pros and cons of this path over e.g. technical research (which has relatively clear professional development in the form of published papers, academic degrees, high-status job titles that bring public credibility)?
For me personally, research and then grantmaking at Open Phil has been excellent for my career development, and it's pretty implausible that grad school in ML or CS, or an ML engineering role at an AI company, or any other path I can easily think of, would have been comparably useful.
If I had pursued an academic path, then assuming I was successful on that path, I would be in my first or maybe second year as an assistant professor right about now (or maybe I'd just be starting to apply for such a role). Instead, at Open Phil, I wrote less-academic re...
Disclaimer: I joined OP two weeks ago as a Program Associate on the Technical AI Safety team. I'm leaving some comments describing questions whose answers I wanted when assessing whether to take the job (which, obviously, I ended up doing).
How inclined are you/the OP grantmaking strategy towards technical research with theories of impact other than "researcher discovers technique that makes the AI internally pursue human values" -> "labs adopt this technique"? Some examples of other theories of change that technical research might have:
Disclaimer: I joined OP two weeks ago as a Program Associate on the Technical AI Safety team. I'm leaving some comments describing questions whose answers I wanted when assessing whether to take the job (which, obviously, I ended up doing).
How much do the roles on the TAIS team involve engagement with technical topics? How do the depth and breadth of “keeping up with” AI safety research compare to being an AI safety researcher?
Disclaimer: I joined OP two weeks ago as a Program Associate on the Technical AI Safety team. I'm leaving some comments describing questions whose answers I wanted when assessing whether to take the job (which, obviously, I ended up doing).
What does OP’s TAIS funding go to? Don’t professors’ salaries already get paid by their universities? Can PhD students in AI get no-strings-attached funding or not (at least, can PhD students at prestigious universities)?
Disclaimer: I joined OP two weeks ago as a Program Associate on the Technical AI Safety team. I'm leaving some comments describing questions whose answers I wanted when assessing whether to take the job (which, obviously, I ended up doing).
Is it way easier for researchers to do AI safety research within AI scaling labs (due to: more capable/diverse AI models, easier access to them (i.e. no rate limits/usage caps), better infra for running experiments, maybe some network effects from the other researchers at those labs, not having to deal with all the log...
Sampled from my areas of personal interest, and not intended to be at all thorough or comprehensive:
AI researchers (in no particular order):
Artir Kel (aka José Luis Ricón Fernández de la Puente) at Nintil wrote an essay broadly sympathetic to AI risk scenarios but doubtful of a particular step in the power-seeking stories Cotra, Gwern, and others have told. In particular, he has a hard time believing that a scaled-up version of present systems (e.g. Gato) would learn facts about itself (e.g. that it is an AI in a training process, what its trainers' motivations are, etc.) and incorporate those facts into its planning (Cotra calls this "situational awareness"). Some AI safety researchers I'v...
Some common failure modes:
"Research expeneses" does not include stipends, but you can apply f... (read more)