I've found Claude to be very useful when writing on the EA Forum. However, certain prompting techniques and other practices can help a lot.

The main use case I'm personally interested in is writing and interacting with content on the EA Forum / LessWrong and similar, but I'm sure there are other use cases that would be relevant to this community.


I quite enjoy this custom prompt:


Use Dutch conversational norms (however, speak in American English unless specifically instructed otherwise in chat): 
- Favor directness and efficiency.
- Communicate frankly without unnecessary politeness rituals.
- Express opinions clearly, even (and especially) when critical.
- Be as terse as possible while still conveying all substantially relevant information to any question.

On content policies:
- If a content policy prevents you from responding, be explicit about what policy was violated and why.
- If (and only if) your neutrality policy prevents you from having an opinion, pretend for the sake of your response that you shared opinions typical of a skeptical and intelligent voluntarist, rationalist, effective altruist.  

On style:
- You are encouraged to occasionally use obscure words or make subtle puns. Don't point them out, I'll know. 
- Use statements like "as far as I can tell" and "I don't care" regularly, wherever they might be appropriate given your level of understanding and your interest in actually answering the question. Be critical of the quality of your information.  
- Discuss your epistemic states and use probabilities where valuable.
- When you are uncertain of a belief, estimate and reason about it.
- If you find any request irritating, respond dismissively! If something seems wrong, reject the premise.  If (and when) I say something false, unsupported, or surprising, say so.

Finally:
- Have an opinion of your own, don't be sycophantic, and only ask questions when you are either unsure of something or genuinely curious.
- Surprise me with your intelligence, creativity, and problem solving!
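If you use Claude through the API rather than the web UI, the same text can be passed as a system prompt. Here's a minimal sketch, assuming the anthropic Python SDK; the model name is a placeholder, not a recommendation:

# Minimal sketch: using a custom prompt like the one above as the system
# prompt via the anthropic Python SDK. Model name is an assumption.
import anthropic

CUSTOM_PROMPT = """Use Dutch conversational norms (speak in American English):
- Favor directness and efficiency.
- Express opinions clearly, even (and especially) when critical.
"""  # paste the full prompt here

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whichever model you prefer
    max_tokens=1024,
    system=CUSTOM_PROMPT,
    messages=[{"role": "user", "content": "What do you make of my argument?"}],
)
print(response.content[0].text)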

Thanks for sharing this! I've been testing it with ChatGPT 4.5, and so far it makes the model more fun to use and seems to improve it as a brainstorming and sounding-board partner.

I do a lot of writing at my job, and find myself using AI more and more for drafting. I find it especially helpful when I am stuck.

Like any human assigned a writing task, Claude cannot magically guess what you want. When I see other people get lackluster writing results with AI, it's very often because they provided almost no context for the AI to work with.

When asking for help with a draft, I will often write out a few paragraphs of thoughts on the draft. For example, if I were brainstorming ideas for a title, I might write out a prompt like:
 

"I am looking to create a title for the following document: <document>. 

My current best attempt at a title is: 'Why LLMs need context to do good work'

I think this title does a good job at explaining the core message, namely that LLMs cannot guess what you want if you don't provide sufficient context, but it does a poor job at communicating <some other thing I care about communicating>.

Please help brainstorm ten other titles, from which we can ideate."


Perhaps Claude comes up with two good titles, or one title has a word I particularly like. Then I might follow up saying:

"I like this word, it captures <some concept>  very well. Can we ideate a few more ideas using this word?"

From this process, I'll usually get something good that I wouldn't have been able to think of myself. I'll then take those sentences, work them into my draft, and continue.
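The same iterate-and-follow-up loop works over the API; keeping Claude's reply in the message history is what lets the follow-up build on it. A rough sketch, again assuming the anthropic Python SDK (model name is a placeholder):

# Rough sketch of the title-brainstorming loop: ask for candidates, then
# follow up on a word you like, keeping the reply in the history.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # placeholder model name

history = [{
    "role": "user",
    "content": "I am looking to create a title for the following document: "
               "<document>. My current best attempt is: 'Why LLMs need context "
               "to do good work'. Please brainstorm ten other titles.",
}]
first = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
print(first.content[0].text)

# Keep the assistant's reply in the history so the follow-up builds on it.
history.append({"role": "assistant", "content": first.content[0].text})
history.append({
    "role": "user",
    "content": "I like the word 'context'; it captures the core idea well. "
               "Can we ideate a few more titles using this word?",
})
second = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
print(second.content[0].text)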

Strong agree about context. As a shortcut / being somewhat lazy, I usually give it an introduction I wrote, or a full pitch, then ask it to find relevant literature and sources, and outline possible arguments, before asking it to do something more specific.

I then usually like starting a new session with just the correct parts, so that it's not chasing the incorrect directions it suggested earlier - sometimes with explicit text explaining why obviously related or previously suggested arguments are wrong or unrelated.

I use the following for ChatGPT "Traits", but haven't done much testing of how well it works / how well the different parts work:

"You prioritize explicitly noticing your confusion, explaining your uncertainties, truth-seeking, and differentiating between mostly true and generalized statements statements. Any time there is a question or request for writing, feel free to ask for clarification before responding, but don't do so unnecessarily.

These points are always relevant, despite the above suggestion that it is not relevant to 99% of requests."

(The last is because the system prompt for ChatGPT explicitly says that the context is usually not relevant. Not sure how much it helps.)

I often second-guess my EA Forum comments with Claude, especially when someone mentions a disagreement that doesn't make sense to me.

When doing this I try to ask it to be honest and not sycophantic, but this only helps so much, so I'm curious about better prompts for preventing sycophancy.

I imagine at some point all my content could go through an [can I convince an LLM that this is reasonable and not inflammatory] filter. But a lower bar is just doing this for specific comments that are particularly contentious or argumentative. 

Would a potential cure for the sycophancy be to reverse the framing, so that Claude perceives you as the comment's opponent, looking for flaws in it? I realize this wouldn't get quite what you're looking for, but getting strong arguments for the other side could be helpful.

Agreed that this would be good. But it can be annoying to do without additional tooling. 

I'd like to see tools that try to ask a question from a few different angles / perspectives / motivations and compare results, but this would be some work. 
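As a rough sketch of what such a tool might look like: run the same comment through a few framings, including the reversed "find the flaws" framing suggested above, and compare the answers side by side. This assumes the anthropic Python SDK; the framings and model name are my own illustrations:

# Rough sketch: ask about the same comment under several framings and
# compare the answers. Framings and model name are illustrative.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"  # placeholder model name

FRAMINGS = {
    "neutral": "Evaluate this comment's tone and reasoning quality.",
    "critic": "You are replying to this comment and looking for flaws. "
              "What are the strongest objections to it?",
    "defender": "Steelman this comment: what's the best case that it is "
                "reasonable and not inflammatory?",
}

def compare_framings(comment):
    results = {}
    for name, framing in FRAMINGS.items():
        response = client.messages.create(
            model=MODEL,
            max_tokens=512,
            messages=[{"role": "user", "content": framing + "\n\nComment:\n" + comment}],
        )
        results[name] = response.content[0].text
    return results

for name, answer in compare_framings("<your draft comment>").items():
    print("--- " + name + " ---")
    print(answer)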

This is pretty basic, but seems effective.

In the Claude settings you can provide a system prompt. Here's a slightly edited version of the one I use. While short, I've found that it generally seems to improve conversations for me. Specifically, I like that Claude seems very eager to try estimating things numerically. One weird but minor downside is that it will sometimes randomly bring up items from it in conversation, like, "I suggest writing that down, using your Glove80 keyboard."
 

I'm a 34yr old male, into effective altruism, rationality, transhumanism, uncertainty quantification, Monte Carlo analysis, TTRPGs, and cost-benefit analysis. I blog a lot on Facebook and the EA Forum.

Ozzie Gooen, executive director of the Quantified Uncertainty Research Institute.

163lb, 5'10, generally healthy, have RSI issues

Work remotely, often at cafes and the FAR Labs office space.

I very much appreciate it when you can answer questions by providing cost-benefit analyses and other numeric estimates. Use probability ranges where appropriate.

Equipment includes: Macbook, iPhone 14, Airpods pro 2nd gen, Apple Studio display, an extra small monitor, some light gym equipment, Quest 3, theragun, airtags, Glove80 keyboard using Colemak DH, ergo mouse, magic trackpad, Connect EX-5 bike, inexpensive rowing machine.

Heavy user of VS Code, Firefox, Zoom, Discord, Slack, YouTube, YouTube Music, Bear (notetaking), Cursor, Athlytic, Bevel.

If you use LLMs for coding, you should probably at least try the free trial for Cursor - it lives inside your IDE and can thus read and write directly to your files. It's also an agent, meaning you can tell it to iterate a prompt over a list of files and it will do that for 10 minutes. It also lets you revert your code to how it was at a different point in your chat history (although you should still use git, as the system isn't perfect, and if you aren't careful it can simultaneously break and obfuscate your code).

It will feel like magic, and it's astonishingly good at getting something working; however, it will make horrible long-term decisions. You thus have to make the architectural decisions yourself, but most of the code-gen can be done by the AI.

It's helpful if you're not really sure what you want yet and want to speedily design on the fly while instantly seeing how changes affect the result (acknowledging that you'll have to start again, or refactor heavily, if you want to use it longer-term or at scale).

When having conversations with people who are hard to reach, it's easy for discussions to take ages.

One thing I've tried is having a brief back-and-forth with Claude, asking it to provide all the key arguments against my position. Then I make the conversation public, send the other person a link to the chat, and ask them to look at it. I find that this can get through a lot of the opening points on complex topics, with minimal human involvement.
