I agree with you that people should be much more willing to disagree, and we need to foster a culture that encourages this. No disagreement is a sign of insufficient debate, not of a well-mapped landscape. That said, I think EAs in general should think way less about who said what and focus much more on whether the arguments themselves hold water.
I find it striking that all the examples in the post are about some redacted entity, when all of them could just as well have been rephrased to be about object-level reality itself. For example:
[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,
could, to me, be rephrased as
Why I believe <stance on topic> is incorrect.
To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org's thinking on topic sloppy>. Thinking a lot about the views of specific persons or organizations has its time and place, but the right split of thinking about reality versus social reality is probably closer to 90/10 than 10/90.
Sure, unfortunately GPT-4 doesn't seem to save chat histories properly, but here are the three most recent from memory (topics obfuscated):
Write out paragraph showing how <intervention> will help <target country> <target org's priorities>.
Failure: GPT replied with bloated text that makes the argument but is too weasel-worded. It would have been more work to rewrite than to write from scratch.
Format the following into a list with:
[messy content I had copied from website including the names and occupations along with other html stuff between]
Success: GPT replied with all names in the right format, easy to copy-paste into Google Sheets.
What are the top ten newspapers in <target country>, ranked by political influence?
Success: GPT replied with a reasonable-looking top-ten list, including a description of each paper's political orientation.
One I often find myself asking and getting great answers to is:
Write a Sheets function that <does thing I need to do>
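To give a flavor of what I mean (this particular prompt and formula are made up for illustration, not one of my actual chats):

```
Prompt: Write a Sheets function that sums column B only for rows where column A says "Completed"

Reply: =SUMIF(A:A, "Completed", B:B)
```

The nice part is that it returns something I can paste straight into a cell and then tweak, rather than me digging through documentation for the right function.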
I also often use GPT to get brainstorms started.
My org is trying to achieve <thing>, list ten ways we could go about this.
As a side note, I've just written a shortform about how I believe more people should be integrating new AI tools into their workflows. For people worried about giving data and money to Microsoft, I think offsetting is likely a great way to ensure you capture the benefits, which I expect to be higher than the price of the offset.
Spreadsheets are in many ways a force multiplier for all the other work one does. For that reason, I am very happy to have invested significant time into becoming good at using spreadsheets in my work.
Over the past months, I’ve increasingly started using GPT in my workflow and am starting to see it as a tool that, similar to spreadsheets, can make one better across a wide variety of tasks.
It wasn’t immediately useful, however! Only with continuous practice did it start generating actual value.
It took me a while to get good at noticing when a task I was doing could be sped up by involving GPT, but especially for brainstorming or listing things, it does in seconds what would take me hours. I highly recommend investing the time it takes to get it into your workflow; it takes time to build an intuition for what it can and cannot do well.
For example, my org spent some hours creating a list of organizations that currently attempt to influence aid spending in our target country. I asked GPT what organizations we had missed, and within seconds I was able to add 15 organizations we had overlooked to the list.
The number of tasks we can outsource to AI will only increase going forward, and I think those who invest time into getting good at using the new wave of AI tools will be able to multiply their productivity significantly and will have an advantage over those who don't.
Thank you for writing this, even after hearing your perspective I still can’t let go of the same feeling you initially described, that surely people wouldn’t just make up arbitrary lies to hate someone.
I wonder to what extent untruths are exacerbated by telephone games. The whole Elon Musk emerald-mine nonsense, for example, seems to be repeated mostly by people who don’t know any better rather than by people intentionally trying to distort the truth.
Yes to some variant of this.
A simple directory of services and prices seems sufficient; there's no need for a platform that charges commission and unnecessarily complicates things. Those features are needed in non-EA work to make up for a lack of trust, but are unnecessary here.
I think there are two separate dynamics at play here:
First, I think we could do more to avoid punishing opinions perceived as wrong. An example of this punishing behavior is my own comment from two days ago: I made it while too upset for my own good and lashed out at someone for making a completely reasonable point.
I don't blame the user I replied to for wanting an anonymous account when that is the response they can expect.
Secondly, I suspect that people are vastly overrating just how much anybody cares about who says what on the forum.
While I understand why someone making a direct attack on some organization or person might want to stay anonymous, most things I see posted from anonymous accounts seem to be regular opinions phrased with more hostility than average.
It's a bit strange to me that somebody would think a few comments on the EA Forum would do all that much to one's reputation. At EA Globals, at most, I've had a few people say "oh, you're the one who wrote that snakebite post? Cool!" and that's about it.
It all feels very paranoid to me. I'm way too busy worrying about whether I look fat in this t-shirt to notice or care that somebody wrote a comment that was too woke/anti-woke.
Maybe there's a bunch of people smarter than me who think my opinions are mid and now think less of me, but they were going to realize that after speaking to me for five minutes anyway.
Very happy with the changes, especially with the performance improvements on mobile.