Girish_Sastry


The case for becoming a black-box investigator of language models

I'd also be interested in funding activities like this. This could inform how much we can learn about models without distributing weights.

The Future Fund’s Project Ideas Competition

A center applying epistemic best practices to predicting & evaluating AI progress

Artificial Intelligence and Epistemic Institutions

Forecasting and evaluating AI progress is difficult and important. Current work in this area is distributed across multiple organizations and individual researchers, not all of whom possess (a) the technical expertise, (b) knowledge and skill in applying epistemic best practices, and (c) institutional legitimacy (or who otherwise suffer from cultural constraints). Activities of the center could include providing services to AI groups (e.g. offering superforecasting training or prediction services), producing bottom-line reports on "How capable is AI system X?", hosting adversarial collaborations, pointing out deficiencies in academic AI evaluations, and generally pioneering "analytic tradecraft" for AI progress.

Policy and International Relations - What Are They? (Primer for EAs, Part 2)

On policy analysis, you write:

I will argue that despite the fact that there is overlap, and many of the ideas are well known, the knowledge and experience of policy analysis has much to offer effective altruism in achieving the goals of improving the world. Not only that, but it offers a paradigm for how to reasonably pull from multiple disciplines in helping make decisions - exactly what this series of posts is trying to help with.


Did you ever end up writing up those thoughts? I skimmed the rest of the posts in the series but didn't find it.

On AI and Compute

I don't think I quite follow your criticism of FLOP/s; can you say more about why you think it's not a useful unit? It seems like you're saying that a linear extrapolation of FLOP/s isn't accurate to estimate the compute requirements of larger models. (I know there are a variety of criticisms that can be made, but I'm interested in better understanding your point above)

SHOW: A framework for shaping your talent for direct work

How'd you decide to focus on going into research, even before you decided that developing technical skills would be helpful for that path?

SHOW: A framework for shaping your talent for direct work

Thanks for the great post. Ryan, I'm curious how you figured this at an early stage:

I figured that in the longer term, my greatest chance at having a substantial impact lay in my potential as a researcher, but that I would have to improve my maths and programming skills to realize that.

Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy

What key metrics do research analysts pay attention to in the course of their work? More broadly, how do employees know that they're doing a good job?

Ask MIRI Anything (AMA)

By (3), do you mean the publications that are listed under "forecasting" on MIRI's publications page?

Why I'm donating to MIRI this year

I agree that this makes sense in the "ideal" world, where potential donors have better mental models of this sort of research pathway; I've found this sort of thinking useful as a potential donor myself.

From an organizational perspective, I think MIRI should put more effort into producing visible explanations of their work (depending, of course, on their strategy for getting funding). As worries about AI risk become more widely known, there will be a larger pool of potential donations to research in the area. MIRI risks being out-competed by others who are better at explaining how their work decreases risk from advanced AI (I think this concern applies to both talent and money, but here I'm specifically talking about money).

High-touch, extremely large donors will probably get better explanations, progress reports, etc. from organizations, but the pool of potential $ from donors who just read what's available online may be very large, and heavily influenced by clear explanations of the work. This pool of donors is also more subject to network effects, cultural norms, and memes. Given that MIRI is running public fundraisers to close funding gaps, it seems that they do rely on these sorts of donors for essential funding. Ideally, they'd just have enough unrestricted funding to keep them secure forever (including allaying the risk of potential geopolitical crises and macroeconomic downturns).
