All of ts's Comments + Replies

ts · 2y

Good question. A few possible strategies:

(1) Make it really easy. Have accessible software tools out there, so labs don't have to build everything from scratch.
(2) Sponsor relevant technical research. I'm especially thinking of research falling under "AI security". E.g. how easy is model-stealing, given different forms of access? (A toy sketch of this question follows the list.)
(3) Have certain labs act as early adopters. They experiment with the best setup and set an example for other labs.
(4) More public advocacy in favour of structured access.
(5) Set up a conference track where there's a specific role ...
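
For what it's worth, here is a toy sketch of the empirical question in (2). It is a hypothetical illustration in Python; the victim model, surrogate model, and access levels are all assumptions of mine rather than anything from the comment, and real model-stealing research targets large models behind real APIs with much stronger attacks.

```python
# Toy sketch: how well can an attacker clone a model, given query access?
# All modelling choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Train a "victim" model that the attacker can query but not inspect.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])

queries, held_out = X[1000:1500], X[1500:]  # attacker's queries vs. evaluation set
victim_on_held_out = victim.predict(held_out)

# Access level 1: hard labels only.
hard_labels = victim.predict(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, hard_labels)
fidelity_labels = (surrogate.predict(held_out) == victim_on_held_out).mean()

# Access level 2: class probabilities. One crude way to use the richer
# signal: weight each query by the victim's confidence in its answer.
confidence = victim.predict_proba(queries).max(axis=1)
surrogate2 = LogisticRegression(max_iter=1000).fit(
    queries, hard_labels, sample_weight=confidence)
fidelity_probs = (surrogate2.predict(held_out) == victim_on_held_out).mean()

print(f"extraction fidelity, label-only access:  {fidelity_labels:.2f}")
print(f"extraction fidelity, probability access: {fidelity_probs:.2f}")
```

Even in this toy setting, the thing to measure is how extraction fidelity scales with the query budget and with the richness of access, which is exactly the question structured-access design has to answer.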

machinaut · 2y
(1) seems worth funding to the extent that it's fundable (e.g. if it were an open-source software project).

I'm less optimistic about public advocacy. As ML models have had a greater impact on people's lives, there's already been more of a public movement looking for transparency and accountability for these models (which could include structured access). Even so, this doesn't seem to be a very strong incentive for existing companies' products.

(5) I like a lot, and it would fit well with structured evaluation programmes like BIG-Bench.
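
To make "structured evaluation programme" a bit more concrete, here is a minimal sketch of the shape such a harness might take. The Example/exact-match interface below is a simplified assumption of mine, not the actual BIG-Bench API; the point is only that any model exposed through a text-in/text-out endpoint (i.e. under structured access) can be scored on a shared task.

```python
# Minimal sketch of a structured evaluation harness in the spirit of
# BIG-Bench. The interface is a simplified assumption, not the real API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    prompt: str
    target: str

def exact_match_accuracy(generate: Callable[[str], str],
                         examples: List[Example]) -> float:
    """Score a text-in/text-out model by exact match against targets."""
    hits = sum(generate(ex.prompt).strip() == ex.target for ex in examples)
    return hits / len(examples)

# A toy arithmetic task and a toy "model" standing in for an API call.
task = [Example("2 + 2 =", "4"), Example("3 + 5 =", "8")]
toy_model = lambda prompt: str(sum(int(t) for t in prompt.split() if t.isdigit()))
print(exact_match_accuracy(toy_model, task))  # -> 1.0
```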
ts · 3y

Thanks for the caveats, Jan; I think that's helpful.

It's true that my views have been formed from within the field of AI governance, and I am open to the idea that they won't fully generalise to other fields. I have inserted a line in the introduction that clarifies this.

ts · 3y

Thanks for the comments!

Speaking from my experience in AI governance: there are some opportunities to work on projects that more experienced people have suggested. At GovAI, we have recently compiled a list of ideas for people to work on, and people on the GovAI fellowship programme have been given suggestions.

Overall, yes, I do think there are fewer such opportunities than it sounds like there are in technical areas. That makes sense to me, because for AI governance research projects, the vast majority of junior people don't yet have the skills necessary to execute ...

Linch · 3y
Hmm, taking a step back, I wonder if the crux here is that you believe(?) that the natural output for research is paper-shaped^, whereas I would guess that this is the exception rather than the norm, especially for a field that does not have many very strong non-EA institutions/people (which I naively would guess to be true of EA-style TAI governance).

This might be a naive question, but why is it relevant/important to get papers published if you're trying to do impactful research? From the outside, it seems unlikely that all or most good research is in paper form, especially in a field like (EA) AI governance where (if I understand it correctly) the most important path to impact (other than career/skills development) is likely through improving decision quality for <10(?) actors.

If you are instead trying to play the academia/prestige game, wouldn't it make more sense to optimize for that over direct impact? That is, instead of focusing on high-quality research on important topics, write the highest-quality (by academic standards) paper you can on a hot/publishable/citable topic and direction.

^ This is a relevant distinction because originality matters much more in journal articles than in other publication formats: you absolutely can write a blog post that covers the same general idea as somebody else but better, and AFAIK there's nothing stopping a think tank from "revising" a white paper covering the same general point but with much better arguments.