This should be possible by adding noindex meta tags, which indicate to search engines that a page shouldn't appear in their results. They don't have to honor that, but the major ones do, which is probably all we'd care about. I'm not sure how quickly pages that are already in their index would be removed, but there might be a way to trigger removal manually.
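For reference, a minimal sketch of what that would look like, assuming we can edit the page's head template (the X-Robots-Tag response header is an alternative if it's easier to set at the server level):

```html
<!-- Tells crawlers not to index this page -->
<meta name="robots" content="noindex">

<!-- Server-side alternative, sent as an HTTP response header
     (also works for non-HTML resources):
     X-Robots-Tag: noindex -->
```

As for already-indexed pages, search engines do offer manual removal tools (e.g. the Removals tool in Google Search Console), though those generally require verifying ownership of the site.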
I like the idea, but it probably wouldn't/shouldn't change how much one should self-censor based on the possibility of things being quoted out of context by journalists. Any journalist worth their salt would have no trouble coming here to use the forum search, or creating an account.
Awesome! Let's keep in touch and when you guys are up and running we can provide you a proper welcome :)
Thank you for the feedback! I agree that it's not the best one I've reposted. I haven't had much time for digging through the LW archives lately, though, and when I came across this one it actually helped me make some concrete improvements to my productivity, so I thought it could possibly help others. I am realizing now that I may be more excited about the productivity-hack genre than most, so I will keep that in mind moving forward.
If you haven't checked out the ~30 earlier reposts, you can find them by clicking on the tag. I would be surprised if you didn't find that stuff higher quality, as they are mostly older, higher-karma posts. Feel free to tag your own reposts as well. I think it would be great to have a collection of stuff that doesn't just reflect my tastes/interests, and I am not sure how frequently I will be able to keep posting at this point.
I have been using Focusmate* for a while now, and this post helped me realize that one of my biggest failure modes with it was not having sessions set up to begin each day. I would use it a lot for a while, then get out of the habit; my productivity would gradually start to suffer, and it took me some time to notice and get back into it.
Now I have been booking first sessions out two weeks in advance. It's flexible in that I can always cancel if something comes up, but I usually won't, and it's elastic in that even if I bail for the good part of a day, my first session is set up by default for the next day. Usually that's enough to get me on track and keep me there.
*A service that matches you with video co-working partners for accountability. More about it and EA group details here.
Otherwise, they will be effectively alone in the middle of nowhere, totally dependent on the internet to exchange and verify ideas with other EA-minded people (and all the risks entailed by filtering most of your human connection through the internet).
The first part of this sentence seems fine to me, and living in the country can be isolating and is not for everyone, but just because there aren't other EAs around doesn't mean you have to get all your human connection through the internet. Having interests and relationships outside of EA/AI Safety circles is probably beneficial for mental health.
FWIW I live in Vermont, on the NH border, and it's about 2 hours to Boston. Not sure where this group house is, but Burlington is 90 minutes from here on the opposite side of the state, so 3.5 hours from Boston, and it could take longer if you aren't near a highway.
I know of 2 or 3 (not sure if one of them is still there) people in Burlington working on AI safety and there are ~8 of us Vermont EAs that have been getting together sporadically for the last year or so. Would love to expand that group if anyone wants to get in touch. Don’t be a stranger :)
Is it not possible that, if it became public knowledge that grants were insured against clawback, lawyers would try harder to get them? If the money is already spent and it’s a bunch of broke individuals, it may not be worth the expense of trying to claw it back. I guess that would just be something Bill would have to account for.
I agree with most of this - the clusters are probably not very accurate, the religious terminology is divisive, and he identifies with one of the camps while describing them.
Can you elaborate a bit more on why you think binary labels are harmful for further progress? Would you say they always are? How much of your objection here is these particular labels and how Scott defines them, and how much of it is that you don't think the shape can be usefully divided into two clusters?
I find that, on topics I understand well, I often object intuitively to labels on the grounds that they aren't very accurate or don't capture enough nuance, but on topics I'm not expert in, I sometimes find it useful to be able to gesture at the general shape of things.
I guess I'm still interested in possible paths to understanding AI risk that don't require accepting some of the "weirder" arguments up front, but might eventually get you there.