Have many people committed to come? What background do the organizers have with AI safety or research events?
This sounds really great in principle, and I'm tentatively interested in joining. From an initial look, though, it seems worryingly vague and last-minute, so I'd want to see more evidence that there'll be a critical mass of interested people there before I commit.
This is great! Agree that this looked like an extremely promising idea based on what was publicly knowable in spring, and that it's probably not the right move now.
+1! I updated on this a lot over the past few months from working with Surge, and it's really great to see this reflected so quickly in others' thinking here.
My colleagues have often been way too nice about reading group papers, rather than the opposite. (I’ll bet this varies a ton lab-to-lab.)
I like the TruthfulQA idea/paper a lot, but I think incentivizing people to optimize against it probably wouldn't be very robust: non-alignment-relevant tricks could wind up making a big difference to the scores.
Just one of several issues: The authors selected questions adversarially against GPT-3—i.e., they oversampled exactly the questions GPT-3 got wrong—so simply replacing GPT-3 with a different but equally misaligned model, like Gopher, should yield significantly better performance. That's really not something you want to see in an alignment benchmark.
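A toy sketch of the point (my own illustration, not the paper's actual methodology): if a benchmark keeps only the questions one model answered wrong, any other model—even one no more truthful—will automatically score better on it.

```python
# Two equally (un)truthful models, each answering independently at 50% accuracy.
# Filtering the benchmark to questions model A got wrong drives A's score to 0
# while leaving B's near 50%, with no real difference in truthfulness.
import random

random.seed(0)

N_QUESTIONS = 10_000
P_CORRECT = 0.5  # both models answer truthfully 50% of the time, independently

# Simulate per-question correctness for two models.
model_a = [random.random() < P_CORRECT for _ in range(N_QUESTIONS)]
model_b = [random.random() < P_CORRECT for _ in range(N_QUESTIONS)]

# "Adversarial" filtering: keep only the questions model A answered wrong.
kept = [i for i in range(N_QUESTIONS) if not model_a[i]]

acc_a = sum(model_a[i] for i in kept) / len(kept)  # 0.0 by construction
acc_b = sum(model_b[i] for i in kept) / len(kept)  # ~0.5

print(f"A on filtered set: {acc_a:.2f}")  # 0.00
print(f"B on filtered set: {acc_b:.2f}")  # roughly 0.50
```

So a new model "beating" GPT-3 on the filtered set tells you very little by itself.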
Related question: EAG now requires you to have a lateral flow test result within 48h of the start of the event. Am I correct in understanding that lateral flow tests in the UK are often DIY kits, where you don't get any formal documentation of the results? If so, does anyone know what kind of documentation/evidence the EAG staff will be looking for?
This is great!
Naïve question: What's the deal with the cheapest CO2 offset prices?
It seems, though, that the current price of credible offsets is much lower than the social cost of carbon, and possibly so low that just buying offsets starts to look competitive with GiveWell top charities.
I'm not an expert on this. (I run an offsetting program for a small organization, but that takes about 4h/year. Otherwise I don't think about this much.) I'm also not anywhere near advocating that we should sink tons of money into offsets. But this observation strikes me as unintuitive ...
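To make the unintuitive part concrete, here's the back-of-envelope arithmetic. All numbers are illustrative assumptions I'm supplying, not figures from the thread; the point is only the size of the wedge between the two prices.

```python
# Back-of-envelope only; both figures are assumed for illustration.
OFFSET_PRICE = 5.0            # $/tCO2 -- an assumed price for a cheap-but-credible offset
SOCIAL_COST_OF_CARBON = 51.0  # $/tCO2 -- an assumed SCC estimate

# Dollars of estimated social harm averted per dollar spent on offsets,
# taking both numbers at face value.
leverage = SOCIAL_COST_OF_CARBON / OFFSET_PRICE
print(f"~${leverage:.1f} of social cost averted per $1 of offsets")
```

Under these assumed numbers, each offset dollar averts ~$10 of estimated social harm, which is the kind of multiplier that makes the comparison to top charities start to look non-crazy.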
Update: It seems like the new VOTE ETF could be significantly cheaper and significantly higher-impact than the alternatives I'd mentioned, though still not overtly EA-oriented. Any thoughts?
My quick notes, in tweet form: https://twitter.com/sleepinyourhat/status/1438571062967611405
Riffing on this, there's an academic format that I've seen work well that doesn't fit too neatly into this rubric:
At each meeting, several people give 15-30m critical summaries of papers, with no expectation that the audience looks at any of the papers beforehand. If the summaries prompt anyone in the audience to express interest or ask good questions, the discussion can continue informally afterward.
This isn't optimized at all for producing new insights during the meeting, but I think it works well in areas (like much of AI) where (i) there's an extremely...
This is probably overstated—at most major US research universities, tenure outcomes are fairly predictable, and tenure is granted in 80-95% of cases. This obviously depends on your field and your sense of your fit with a potential tenure-track job, though.
That said, it is much easier to do research when you're at an institution that is widely considered to be competitive/credible in your field and subfield, and the set of institution...
Academic here:
Thanks, Wayne!
This looks like a good starting point for further research, but it's hard to take much that's actionable from this without more background in finance. Is there anything you'd take away as advice to a smallish-scale individual investor?
Thanks! This is helpful, and nudging me away from this approach.
Do you know of any good primers to get a better sense of how/when these levers get used on socially relevant issues?
Hrm, this is useful context, but I think you may be getting at a different issue. The mutual funds I'm looking at seem to view shareholder activism as a potential avenue for prosocial (ESG) impact on the companies they invest in, such that their activism strategy likely increases fees a bit without affecting returns either way.
Personal update: The flight that I'd need in order to make the timing work is sold out, so I can't make it in any case. :(