All of Kai Williams's Comments + Replies

Thanks for the post! This was super interesting and helpful. A lot of this advice seems geared toward the case where one already has a network, but I'm curious whether networking to be a connector is significantly different from networking for other reasons. Do you have any thoughts?

2
Constance Li
It depends on what you find yourself wanting to do more... it definitely helps to come into a network trying to do something specific so that you can get a "lay of the land" and know who/what is helpful or not. Knowing how resources pan out for you is useful for knowing how they would pan out for others.

I was just about to reply mentioning "well actually" as well! Strong +1 on this.

So to be clear, are we able to apply past the end date of August 31 in our initial grant application, or are we capped at August 31 with the ability to extend the grant later? The application form still shows the August 31 date. Thanks!

2
BrianTan
I'm not from LTFF but I believe you can apply for funding with an end date past Aug 31! The application form now mentions it's only the EAIF that has that restriction.

Thanks for the post! This may not be helpful, but one thing I would be curious to see is how the dispersion coefficient k (discussed here; I'm sure there's a better reference source) affects the importance of having many sites. With COVID, a lot of transmission came from superspreader events, which intuitively would increase the variance of how quickly it spread across different sites. On the other hand, the flu has a low proportion of superspreader events, so testing in a well-connected site might explain more of the variance?

4
Jeff Kaufman 🔸
I haven't done or seen any modeling on this, but intuitively I would expect the variance due to superspreading to have most of its impact in the very early days, when single superspreading events can meaningfully accelerate the progress of the pandemic in a specific location, and to be minimal by the time you get to ~1% cumulative incidence?
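A toy branching-process simulation can make this intuition concrete (all parameter values here are illustrative assumptions, not figures from the discussion): draw each case's secondary infections from a negative binomial distribution with mean R0 and dispersion k, and compare how much outbreak sizes vary across sites for small versus large k.

```python
import numpy as np

def outbreak_sizes(r0=2.0, k=0.1, generations=6, trials=2000, seed=0):
    """Simulate cumulative outbreak size over a few generations, with
    secondary cases drawn from a negative binomial of mean r0 and
    dispersion k. Smaller k means more superspreading: most chains
    fizzle, a few explode, so early-outbreak variance is larger."""
    rng = np.random.default_rng(seed)
    sizes = np.empty(trials)
    for t in range(trials):
        infected, total = 1, 1
        for _ in range(generations):
            if infected == 0:
                break
            # Sum of `infected` NB(k, p) draws is NB(k * infected, p),
            # with p = k / (k + r0) giving mean r0 per case.
            infected = rng.negative_binomial(k * infected, k / (k + r0))
            total += infected
        sizes[t] = total
    return sizes

low_k = outbreak_sizes(k=0.1)    # COVID-like superspreading
high_k = outbreak_sizes(k=10.0)  # flu-like, more homogeneous spread
```

Under these assumed parameters, the small-k runs mix many immediate die-outs with a few large chains, while the large-k runs cluster around a typical size — consistent with the intuition that superspreading matters most in the very early generations and washes out by ~1% cumulative incidence.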

Thank you for writing out this argument! I had a quick thought about #2: the earlier a pandemic would be caught by naive screening, the faster its spread is likely to be. So even though early detection might buy less calendar time, it could still buy plenty of value, because each doubling of transmission occurs so quickly.

This still depends on mitigating the concerns you raised in #1 and #3, though.
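To illustrate the arithmetic behind this (the numbers are hypothetical, not from the post): if detection triggers after some fixed number of doublings, the calendar warning time bought scales directly with the doubling time.

```python
import math

def days_to_threshold(initial_cases, threshold_cases, doubling_days):
    """Days for cumulative cases to grow from initial to threshold,
    assuming clean exponential growth (a deliberate simplification)."""
    doublings = math.log2(threshold_cases / initial_cases)
    return doublings * doubling_days

# Detection at 100 cases, response threshold at 1,600 cases: 4 doublings.
fast_pathogen = days_to_threshold(100, 1600, doubling_days=3.0)   # 12 days of warning
slow_pathogen = days_to_threshold(100, 1600, doubling_days=14.0)  # 56 days of warning
```

So a fast-spreading pathogen yields fewer calendar days of warning, but each of those days covers more of the epidemic's growth — which is the trade-off the comment points at.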

Quick question on the intuition pump about 7 minutes of the worst conceivable experience every day. Would you be aware, after the fact, that it had happened? For me at least, a lot of what would make the randomly tortured day so terrible is how it affected the rest of my day, rather than the excruciating moments of pain themselves.

3
Vasco Grilo🔸
Thanks for the question, Kai! In my example, I was assuming the 7 min of the worst conceivable experience were added on top of the baseline welfare, but you are right that the 7 min would affect the baseline welfare. It is hard to make the thought experiment realistic, because 7 min of the worst conceivable experience would likely lead to permanent effects or death.

For reference, this is how WFP describes excruciating pain (the worst type of pain): [quoted definition and chart not reproduced in this archive]

As further context, according to WFP, hens experience excruciating pain in mostly fatal situations. The bar in purple ("acute peritonitis (fatal)") and the thickest bar in red ("vent wound (fatal)") correspond to fatal situations.[1]

[1] The bar in grey concerns "fractures (depop/transport)", and the one in orange "keel bone fractures".

Thanks for releasing this. I'm curious which is the more interesting sample here: somewhat established alignment researchers (measured by the proxy that they have published a paper), or the general population of those who filled out the survey (including those with briefer prior engagement)?

I filled out this survey because it got signal-boosted in the AI Safety Camp slack. At the time, there were questions about the funding viability of AI Safety Camp, so I was strongly motivated to fill it out for the $40 donation. That said, I'm not sure that I have... (read more)

3
Cameron B
Good points, and thanks for the question. One point to consider is that AISC publicly noted that they need more funding, which may be a significant part of the reason they were the most common donation recipient in the alignment survey. We also found that a small subset of the sample explicitly indicated they were involved with AISC (7 out of 124 participants). This is just to provide some additional context/potential explanation for what you note in your comment.

As we note in the post, we were generally cautious about excluding data from the analysis and opted to prioritize releasing the visualization/analysis tool that enables people to sort and filter the data however they please. That way, we do not have to choose between findings like the ones you report about pause support x quantity of published work; both statistics you cite are interesting in their own right and should be considered by the community. We generally find, though, that the key results reported are robust to these sorts of filtering perturbations (let me know if you discover anything different!).

Overall, ~80% of the alignment sample is currently receiving funding of some form to pursue their work, and ~75% have been doing this work for >1 year, which is the general population we are intending to sample.

I don't have a good answer to this, but I did read a blog post recently which might be relevant. In it, two philosophers summarize their paper, which argues against concluding that longtermists should hasten extinction rather than prevent it. (The impetus for their paper was this paper by Richard Pettigrew, which argued that longtermism should be highly risk-averse. I realize that this is a slightly separate question, but the discussion seems relevant.) Hope this helps!

Thanks for the piece. I think there's an unexamined assumption here about the robustness of non-Earth settlement. It may be that one can maintain a settlement on another world for a long time, but unless we get insanely lucky, it seems unlikely to me that people could live on another planet without sustaining technology at or above our current capabilities. It may also be that in the medium term these settlements are dependent on Earth for manufacturing, resources, etc., which reduces their independence.

This isn't fatal to your thesis (especially in the long-long term), but I think having a high minimum technology threshold does undercut it to some extent.

2
Arepo
I don't think anyone's arguing current technology would allow self-sufficiency. But part of the case for offworld settlements is that they very strongly incentivise technology that would. In the medium term, an offworld colony doesn't have to be fully independent to afford a decent amount of security. If it can a) outlast some globally local catastrophe (e.g. a nuclear winter or airborne pandemic) and b) get back to Earth once things are safer, it still makes your civilisation more robust.

tldr: A mathematics major graduating in May. Looking for next steps, in AI or elsewhere, but unsure of what exactly I want. Happy in general quantitative, policy, or operations roles.

Skills: Strong math background (+ familiar with stats). Research skills (in math, AI safety), including some coding (esp. Python) and clear writing (won an outstanding poster award at a math conference). Project management during an Amazon operations internship; ran a painting business for two summers and managed finances for an independent debate club at my school.

Location/remote: Currently in Philade... (read more)

A man in a hole needs a ladder, not climbing skills?

I generally like the innovation-as-mining hypothesis with regard to the sciences and, to some extent, the arts, but I think there is one issue with the logical chain.

You said that "[i]f not for this phenomenon [that ideas get harder to find], sequels should generally be better than the original," but I don't think this is necessarily true. I think the more likely reason sequels aren't generally better than the originals is regression to the mean and selection effects, with two main causes:

  1. Pure quality: Presumably, an author or a screenwri
... (read more)