
sbowman

107 karma · Joined Aug 2020

Bio

NYU faculty member, working on AI/cognitive science/crowdsourcing issues involving language understanding. Newish to EA.

Posts
2


Comments
15

Personal update: The flight that I'd need in order to make the timing work is sold out, so I can't make it in any case. :(

Have many people committed to come? What background do the organizers have with AI safety or research events?

This sounds really great in principle, and I'm tentatively interested in joining, but from an initial look it seems worryingly vague and last-minute, so I'd want to see more evidence that there'll be a critical mass of interested people there before I commit.

This is great! Agree that this looked like an extremely promising idea based on what was publicly knowable in spring, and that it's probably not the right move now.

My colleagues have often been way too nice about reading group papers, rather than the opposite. (I’ll bet this varies a ton lab-to-lab.)

I like the TruthfulQA idea/paper a lot, but I think incentivizing people to optimize against it probably wouldn't be very robust, and non-alignment-relevant ideas could wind up making a big difference. 

Just one of several issues: The authors selected questions adversarially against GPT-3—i.e., they oversampled the exact questions GPT-3 got wrong—so, simply replacing GPT-3 with something equally misaligned but different, like Gopher, should yield significantly better performance. That's really not something you want to see in an alignment benchmark.
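
To make that concrete, here's a toy simulation (my own illustration with made-up numbers, not the paper's actual procedure): if you keep only the questions that model A answered wrong, then any different model B with the same underlying accuracy will beat A on the filtered set by construction.

```python
# Toy illustration of adversarial filtering (not the TruthfulQA methodology
# itself): keeping only the questions model A failed guarantees that a
# different-but-equally-accurate model B looks better on the filtered set.
import random

random.seed(0)
N = 10_000   # candidate questions
ACC = 0.6    # both models answer correctly with p = 0.6, independently
             # (a deliberately unrealistic simplification)

a_correct = [random.random() < ACC for _ in range(N)]
b_correct = [random.random() < ACC for _ in range(N)]

# Filter adversarially against model A: keep only the questions A got wrong.
filtered = [i for i in range(N) if not a_correct[i]]

score_a = sum(a_correct[i] for i in filtered) / len(filtered)  # 0.0 by construction
score_b = sum(b_correct[i] for i in filtered) / len(filtered)  # ~0.6

print(f"Model A on the filtered set: {score_a:.2f}")
print(f"Model B on the filtered set: {score_b:.2f}")
```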

Related question: EAG now requires you to have a lateral flow test result within 48h of the start of the event. Am I correct in understanding that lateral flow tests in the UK are often DIY kits, where you don't get any formal documentation of the results? If so, does anyone know what kind of documentation/evidence the EAG staff will be looking for?

This is great!

  • Why the Randox test in particular?
  • Does it seem viable to use the Day 2 test as the US return test? (I'll only be there Thursday to Monday, so a test on Saturday satisfies both requirements, if there's no other catch.)

Naïve question: What's the deal with the cheapest CO2 offset prices?

It seems, though, that the current price of credible offsets is much lower than the social cost of carbon, and possibly so low that just buying offsets starts to look competitive with GiveWell top charities.

I'm not an expert on this. (I run an offsetting program for a small organization, but that takes about 4h/year. Otherwise I don't think about this much.) I'm also not anywhere near advocating that we should sink tons of money into offsets. But this observation strikes me as unintuitive enough that I suspect I'm missing something.

Cost of offsetting:
I've generally seen the UN FCCC's carbon offset market presented as credible, if not the biggest or most scalable. They make it pretty easy to direct money to specific projects, and most of them pass the smell test as being both verifiable and additional. One project that popped up last year involved converting the operations of a platinum mining company in Bihar from burning coal to burning another fossil fuel in a slightly-lower-emissions way. That's easy to verify, and there was a clear argument for why the transition wouldn't have made economic sense without the offset money.

Current prices on that market are around $1.75/tonne, and at other points in the past year I've seen prices dip as low as ~$0.33/tonne.

Value of offsetting:
My impression is that the social cost of carbon is somewhere in the $30–300/tonne range, suggesting that cheap credible offsets are plausibly pretty high-leverage, but I don't have a good framework for thinking about this kind of impact.

I found this easier to visualize in light of this line from a recent Future Perfect piece:

[...] Bressler found that adding 4,434 metric tons of carbon dioxide into the atmosphere would result in one heat-related death this century.

Combining that with the numbers above yields a price-per-life-saved in the $1k-10k range, which is within the same order of magnitude as charities like AMF, IIRC.
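
Spelling that arithmetic out explicitly (a back-of-envelope sketch using the offset prices I quoted above, which are the speculative part):

```python
# Back-of-envelope cost to avert one heat-related death via offsets, using
# Bressler's 4,434 tCO2/death figure and the UN FCCC prices quoted above.
TONNES_PER_DEATH = 4_434

for price_per_tonne in (0.33, 1.75):  # observed offset prices, $/tonne
    cost = TONNES_PER_DEATH * price_per_tonne
    print(f"At ${price_per_tonne:.2f}/tonne: ~${cost:,.0f} per death averted")
# ~$1,463 at $0.33/tonne and ~$7,760 at $1.75/tonne, i.e. the $1k-10k range.
```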

So?
What am I missing? Is there something fishy with that use of dollars per life saved? Are these cheap UN-monitored offsets actually bullshit? Or is it actually just very impactful to buy this kind of offset while they're still cheap (...even if it's still not the most effective way to spend money, or even the most effective way to spend money on climate impacts)?

Update: It seems like the new VOTE ETF could be significantly cheaper and higher-impact than the alternatives I'd mentioned, though still not overtly EA-oriented. Any thoughts?

My quick notes, in tweet form: https://twitter.com/sleepinyourhat/status/1438571062967611405

Riffing on this, there's an academic format that I've seen work well that doesn't fit too neatly into this rubric:

At each meeting, several people give 15-30m critical summaries of papers, with no expectation that the audience looks at any of the papers beforehand. If the summaries prompt anyone in the audience to express interest or ask good questions, the discussion can continue informally afterward.

This isn't optimized at all for producing new insights during the meeting, but I think it works well in areas (like much of AI) where (i) there's an extremely large/dense literature, (ii) most papers make a single point that can be summarized relatively briefly, and (iii) it's possible to gather a fairly large group of people with very heavily overlapping interests and vocabulary.
