Ha, true, this would have been more fun if I hadn't told you 😁. Thank you for your kind words on the competence of my writing.
About my supposed misapprehension: It depends. Sometimes the LLM takes all of the edge and spice out of what I say, and then I'll gladly ditch its suggestions. However, often it'll be more concise and a bit more graceful in its choice of words. It's most definitely better at grammar and orthography than I am. In the example above, the whole message is shorter without losing any meaningful content. I think removing the "well" from the first sentence made it a bit less clumsy. In the second sentence the word "practices" is slightly closer to what I wanted to express than "applications". Etc. Be that as it may, it gives me more options, which is nice. I am still free to reject its suggestions or modify them, so I'm happy it's there for me to use whenever I find it beneficial, as I often do.
I'll let you be the judge of how off-putting it is in this case. Here is the prompt I wrote:
Find any obvious ways of making the following much clearer and address any grammatical or orthographical errors:
I can well imagine that you are generally right, although I'd add some caveats:
1. "Using LLM to assist one's writing" covers a very broad set of applications. I can prompt an LLM to do little more than check for punctuation, or to simply criticize what I write rather than rewrite it for me. There's a wide range of ways in which I could use an LLM that would be neither apparent nor off-putting to the vast majority of people. By the way, I wouldn't claim that my post above is an example of a subtle use of AI. I agree it is rather obvious there.
2. Even if your claim is generally true, I think it is also fair to say that people commonly overestimate their ability to spot AI-generated content. Because they cover such a wide range of interventions, from very subtle to incredibly obvious, I doubt that the people who are most critical of the tech are quite as good at spotting it as they suspect.
3. Trade-offs are real. The time I don't spend honing my organic writing skills is time I may spend honing some other skill, or just enjoying life in any of a myriad of other ways. :).
Here is the response that the LLM gave me:
I can imagine that you are broadly right, though I would add a few caveats.
Right, exactly. Even seemingly self-evident mathematical claims are not exempt from this. We can never rule out the possibility of error, so epistemological certainty is unattainable. People can feel certain, but whenever they do, they are mistaken about the status of that feeling.
This is often confused with relativism. We say true things all the time. “2 + 2 = 4” is true, yet there is no final and infallible way to confirm it. Nothing is immune to criticism.
The Beginning of Infinity is my favorite book. I have been active in the critical rationalist circles that have formed around David Deutsch’s work. If you want pointers, I know quite a few people in that community. If you are looking for sharp criticisms of Deutsch, asking his admirers may not be ideal. Even though we fully accept that Deutsch is fallible like anyone else, most of us agree with him on the major points.
It is good that you want to find strong criticisms of his ideas. I hope you succeed. To be frank, I have yet to encounter substantial critiques of The Beginning of Infinity, though I am sure there are errors in it.
My friend Logan founded the Conjecture Institute this year. It might interest you:
https://www.conjectureinstitute.org/
It is LLM assisted, yes. Is that a problem? I ask that sincerely. I use LLMs to help me write because it lets me work faster and structure long arguments more clearly. But the ideas, claims, and reasoning are mine, and I read and revise everything before posting. I treat the model as an aid for composition, not as a substitute for thinking.
Thank you for this thoughtful and generous comment, Yarrow. I appreciate it.
On your first point, I think your criticism is well placed. I should not have psychologized the intentions behind the policy recommendations in AI 2027. The argument does not require that the authors be cynical, and I have no reliable way of knowing their motivations. Their recommendations are entirely consistent with sincere concern viewed through their framework. After reconsidering this, I agree that my original framing was uncharitable. I will revise that section of the post, and I should also be clear that on this particular issue I diverge from Brett Hall's interpretation. These fatalistic views are ones I myself once held with full sincerity, so it would be unjustified to presume insincerity in others.
On your second point, I fully agree that all knowledge is conjectural. Deutsch emphasises fallibilism strongly, and I would never claim any of these arguments as settled truth. They are conjectures offered because, at present, they seem to be better explanations than the rationalist alternatives.
Thank you again for engaging with the post so carefully. Your comment improved the argument and helped catch a place where my own framing fell short of proper charity.
Edit: I have added several clarifying notes to the post (marked as "Edit"). I hope these address your well-placed criticism and correct the earlier lack of charity.