
gwern

639 karma · Joined Apr 2017

Posts: 1 · Comments: 45 (sorted by new)

Maybe? I can't easily appreciate such a usecase because I always want to save any excerpts I find worth excerpting. Are there a lot of people who want that? If that's the idea, I guess the "About Highlights" dialogue needs a bit of documentation to explain the intended (and unintended) uses. At least, anyone who doesn't realize that the annotations are ephemeral (because they aren't enough of a web dev to understand that 'what you save is stored only on your specific browser locally' is as much of a bug as it is a feature) is in for a bad time when their annotations inevitably get deleted...

I like it overall.

But I have a lot of questions about the 'highlight' feature: aside from the many teething problems Said has already documented, which bugs doubtless will be fixed, I don't understand what the usecase is compared to other web annotation systems like Hypothesis - it stores arbitrary ranges of text, saving them to, I assume, dangerously ephemeral browser LocalStorage, where they will be unpredictably erased in a few hours / days / weeks and will presumably be unavailable on any other device. Why do I want this? They aren't clipped to something like Evernote, they aren't synced to my phone, they aren't saved anywhere safe, they aren't posted to social media or visible to anyone else in any way... Is the idea to highlight various parts and then manually copy-paste them?
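For anyone unfamiliar with why LocalStorage makes annotations this fragile, here is a minimal sketch of how a client-side-only highlights feature typically works; the storage key and data shape are hypothetical illustrations, not the site's actual implementation:

```typescript
// Sketch of a client-side-only "highlights" feature (hypothetical names).
// Everything lives in window.localStorage, so highlights are tied to this
// one browser profile: clearing site data, private windows, or switching
// devices silently loses them. No server sync is involved.

interface SavedHighlight {
  text: string;    // the selected excerpt itself
  savedAt: string; // ISO timestamp
  pageUrl: string; // which page it came from
}

const STORAGE_KEY = "highlights"; // hypothetical key name

function loadHighlights(): SavedHighlight[] {
  const raw = window.localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as SavedHighlight[]) : [];
}

function saveCurrentSelection(): void {
  const selection = window.getSelection();
  if (!selection || selection.isCollapsed) return; // nothing selected

  const highlights = loadHighlights();
  highlights.push({
    text: selection.toString(),
    savedAt: new Date().toISOString(),
    pageUrl: window.location.href,
  });
  // Persisted only in this browser's localStorage -- hence "ephemeral".
  window.localStorage.setItem(STORAGE_KEY, JSON.stringify(highlights));
}
```

The point of the sketch is just that nothing ever leaves the browser: there is no account, no export, and no copy anywhere else to recover from.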

I agree. I've already started to develop a bit of an instinctive 'ugh' reaction to random illustrations, even ones without obvious generative model tell-tales or the DALL-E watermark.

It's comparable to how you feel when you notice that little '© Getty Images' or a Memphis-style image, and realize your time & (mental) bandwidth was wasted by the image equivalent of an emoji. It's not that they look bad, necessarily, but they increasingly signify 'cheap' and 'tacky'. (After all, if this monkeypox image can be generated by a prompt of 12 redundant words, then that's another way of saying that the image is worth far less than a thousand words - it's worth less than 12...)

It gets worse if you can spot any flaws or they do look bad overall: "so you couldn't even take another minute to generate a variant without blatant artifacting? That's how slovenly and careless your work is, and how little you respect your readers?"

For those wondering why we needed a stylish magazine for provocative rationalist/EA nonfiction when Works In Progress is pretty good too, Scott Alexander says:

Works In Progress is a Progress Studies magazine, I'm sure these two movements look exactly the same to everyone on the outside, but we're very invested in the differences between them.

What disease would you seek FDA approval for? "I sleep more than 4 hours a day" is not a recognized disease under the status quo. (There is the catch-all of 'hypersomnia', but things like sleep apnea or neurodegenerative disorders or damage to clock-keeping neurons would not plausibly be treated by some sort of knockout-mimicking drug.)

One downside you don't mention: having a Wikipedia article can be a liability when editors are malicious, for all the same reasons (like its popularity and mutability) that it is a benefit when it is high-quality. A zealous attacker or deletionist destroying your article for jollies is bad, but at least it merely undoes your contribution and you can mirror it; an article being hijacked (which is what a real attacker will do) can cause you much more damage than you would ever have gained, as it creates a new reality which will echo everywhere.

My (unfortunately very longstanding) example of this is the WP article on cryonics: you will note that the article is surprisingly short for a topic on which so much could be said, and reads like it's been barely touched in half a decade. Strikingly, while having almost no room for any information on minor topics like how cryonics works, how current cryonics orgs operate, the background on why it should be possible in principle, or remarkable research findings like the progress on bringing pigs back from the dead, the introduction, and an entire section, instead harp on how corporations go bankrupt, how it is unlikely that a corporation today will be around in a century, how ancient pre-1973 cryonics companies have all gone bankrupt, and so on. These claims are mostly true, but you will then search the article in vain for any mention that the myriad of cryonics bankruptcies alluded to is like 2 or 3 companies; that cryonics for the past 50 years hasn't been done solely by corporations precisely because of that (when it became apparent that cryonics was going to need to be a long-term thing & families couldn't be trusted to pay, the arrangements were restructured as trusts - the one throwaway comma mentioning trusts is actively misleading by implying that they are optional and unusual, rather than the status quo); and that there have been few or no bankruptcies or known defrostings since. All attempts to get any of this basic information into the article are blocked by editors.

Anyone who comes away with an extremely negative opinion of cryonics can't be blamed when so much is omitted to put it in the worst possible light. You would have to be favorably disposed to cryonics already to be reading this article and critically thinking to yourself, "did cryonicists really learn nothing from the failures? how do cryonicists deal with these criticisms when they are so obvious - it doesn't seem to say? if cryonics orgs go bankrupt so often, why doesn't it name any of the many bankruptcies in the 49 years between 1973 and 2022, and how are any of these orgs still around?" etc.

More recently, the Scott Alexander/NYT fuss: long-time WP editor & ex-LWer David Gerard finally got himself outright topic-banned from the SA WP article when he overreached by boasting on Twitter about how he was feeding claims to the NYT journalist so the journalist could print them in the NYT article in some form and Gerard could then cite them in the WP article (and, safe to say, any of the context or butt-covering caveats in the NYT version would be sanded away and simplified in the WP version to the most damaging possible version, which would then be defended as obviously relevant and clearly WP:V to an unimpeachable WP:RS). Gerard and activists also have a similar 'citogenesis' game going with RationalWiki and friendly academics laundering into WP proper: make allegations there, watch them eventually show up in a publication of some sort, however tangential, and now you can add to the target article "X has been described as a [extremist / white supremacist / racist / fringe figure / crackpot] by [the SPLC / extremism researchers / the NYT / experts / the WHO]<ref></ref>". Which will be true - there will in fact be a sentence, maybe even two or three, about it in the ref. And there the negative statements will stay forever if they have anything to say about it (which they do), while everything else positive in the article dies the death of a thousand cuts. This can then be extended: do they have publications in some periodicals? Well, extremist periodicals are hardly WP:RSes now, are they, and shouldn't be cited (WP:NAZI)...

Scott's WP article may not be too bad right now, but one is unlikely to be so lucky elsewhere as to get such crystal-clear admissions of bad-faith editing, plus a large audience of interested editors, going beyond the usual suspects of self-selected activist-editors, who are unwilling to make excuses for the behavior - and despite all that, who knows how the article will read a year or a decade from now?

Note: most of the discussion of this is currently on LW.

The Wall Street Journal article "How a Public School in Florida Built America’s Greatest Math Team" (non-paywalled version) describes how a retired Wall Street bond trader built a math team that has won 13 of the last 14 national math championships at an otherwise unremarkable high school. His success is not based on having a large budget, but rather on thinking differently and building an ecosystem.

The otherwise unremarkable high school has its pick of the litter from everyone living around one of the largest universities in the country, which is <5 miles away. ("Many of the gifted kids in his program have parents who work at the nearby University of Florida and push to get on Mr. Frazer’s radar.") That the school has unremarkably low average scores says little about their tails. (Note all the Asian names.)

The above seems voluminous and I believe this is the written output with the goal of defending a person.

Yes, much like the OP is voluminous and is the written output with the goal of criticizing a person. You're familiar with such writings, as you've written enough criticizing me. Your point?

Yeah, no, it's the exact opposite.

No, it's just as I said, and your Karnofsky retrospective strongly supports what I said. (I strongly encourage people to go and read it, not just to see what's before and after the part he screenshots, but because it is a good retrospective which is both informative about the history here and an interesting case study of how people change their minds and what Karnofsky has learned.)

Karnofsky started off disagreeing that there is any problem at all in 2007, when he was introduced to MIRI via EA, and merely thought there were some interesting points - interesting, but certainly not worth sending any money to MIRI or looking for better alternative ways to invest in AI safety. These ideas kept developing, and Karnofsky kept having to engage, steadily moving from 'there is no problem' to intermediate points like 'but we can make tool AIs and not agent AIs' (a period in his evolution I remember well because I wrote criticisms of it), which he eventually abandoned. You forgot to screenshot the part where Karnofsky writes that he assumed 'the experts' had lots of great arguments against AI risk and the Yudkowsky paradigm and that was why they just didn't bother talking about it, and then moved to SF and discovered 'oh no': not only did those not exist, the experts hadn't even begun to think about it. Karnofsky also agrees with many of the points I make about Bostrom's book & intellectual pedigree ("When I'd skimmed Superintelligence (prior to its release), I'd felt that its message was very similar to - though more clearly and carefully stated than - the arguments MIRI had been making without much success.", just below where you cut off). And so here we are today, where Karnofsky has not just overseen donations of millions of dollars to MIRI and AI safety NGOs and the recruitment of MIRI staffers like ex-MIRI CEO Muehlhauser, but AI safety remains a major area for OpenPhil (and philanthropies imitating it like FTX). It all leads back to Eliezer. As Karnofsky concludes:

One of the biggest changes is the one discussed above, regarding potential risks from advanced AI. I went from seeing this as a strange obsession of the community to a case of genuine early insight and impact. I felt the community had identified a potentially enormously important cause and played a major role in this cause's coming to be taken more seriously. This development became - in my view - a genuine and major candidate for a "hit", and an example of an idea initially seeming "wacky" and later coming to seem prescient.

Of course, it is far from a settled case: many questions remain about whether this cause is indeed important and whether today's preparations will look worthwhile in retrospect. But my estimate of the cause's likely importance - and, I believe, conventional wisdom among AI researchers in academia and industry - has changed noticeably.

That is, Karnofsky explicitly attributes the widespread changes I am describing to the causal impact of the AI risk community around MIRI & Yudkowsky. He doesn't say it happened regardless or despite them, or that it was already fairly common and unoriginal, or that it was reinvented elsewhere, or that Yudkowsky delayed it on net.

I'm really sure even a median thought leader would have better convinced the person who wrote this.

Hard to be convincing when you don't exist.
