I agree with most of what you wrote here, but I think that the pledge, as a specific high-resolution effort, is not helpful.
This is quite possible, but that's why we will have M&E and are committing bounded amounts of time to this project - although neither of these is much help if there's a distinct externality or direct harm to the wider community.
Would you be able to explain why you think so? I can see you've linked to a post but it would take me >15 minutes to read and I thi...
starting from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA
This seems like very unfortunate zero-sum framing to me. Speaking personally, I've taken the 10% pledge, been heavily involved in Giveffektivt.dk, pushed for GWWC to have (the first) pledge table at EAGxNordics '24, and am excited to support 10% pledge communities.
When I work on expanding the 10% pledge community, that does not mean I am disparaging using one's career to do good, and vice versa.
...commitment by young adults into pledges to con
I'd think a better way to get feedback is to ask "What do you think of this pledge wording?" rather than encourage people to take a lifelong pledge before it's gotten much external feedback.
The idea of a Minimum Viable Product is that you're unsure which parts of your product provide value and which are sticking points. After you release the MVP the sticking points are much clearer, and you have a much better idea of where to focus your limited time and money.
This is such a great question. We considered a very limited pool of ideas, for a very limited amount of time. I think the closest competitor was Career for Good.
The thinking being that we can always get something up and test whether there's actually interest in this before spending significant resources on the branding side of things.
One con of the current name is that it could elicit some reactions
I agree that seems to be playing out here! This could be a good reason to change the name.
...It might be largely down to whet
I'd think a better way to get feedback is to ask "What do you think of this pledge wording?" rather than encourage people to take a lifelong pledge before it's gotten much external feedback.
For comparison, you could see when GWWC was considering changing the wording of its pledge (though I recognize it was in a different position as an existing pledge rather than a new one): Should Giving What We Can change its pledge?
Does the pledge commit me to pursuing high-impact work specifically, or could it also include earning to give if that turns out to be my best option for impact later down the line?
This is such a great question, and a vitally important consideration. With the current wording of the pledge, it states:
I commit to using my skills, time, and opportunities to maximize my ability to make a meaningful difference
I take this wording to include Earning To Give, when it's the most impactful option available to you.
I would be curious to hear what you ...
Part of what I think is so unique and inspiring about EA is that it's not just an approach to doing good, but also a community that helps others do good on their own journey. When we face setbacks—whether in animal welfare campaigns or in our own institutions—we have a choice. We can stay defeated by these difficulties, or we can choose to learn from our failures and help the community as a whole learn and improve.
I really do like it when the EA community, and posts like this, discuss this. On the current margin I think it increases my likelihood of embodying a growth mindset.
canonical arguments for focusing on cost-effectiveness involve GHW-specific examples, that don't clearly generalize to the GCR space.
I am not sure I understand the claim being made here. Do you believe this to be the case because of a tension between hits-based and cost-effective giving?
If so, I may disagree with the point. Fundamentally, if you're a "hits-based" grant-maker, you still care about (1) the amount of impact resulting from a hit, (2) the odds of getting a hit, (3) indicators which may lead up to getting a hit, and (4) the marginal impact of your gran...
Good job on highlighting this. While I very much understand GWWC's angle of approach here, I can see that there's essentially a dynamic that could be playing out whereby some areas (Animal Welfare and Global Development) get increasingly rigorous, while other areas (Longtermist problem-areas and Meta-EA) don't receive the same benefit.
I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all.
I recognise it's easy to stumble into these dynamics, but we must acknowledge that this is epistemically destructive.
Strictly speaking your salary is the wrong number here.
I don't think we should dismiss empirical data so quickly when it's brought to the table - that sets a bad precedent.
...other costs of employing you (and I've seen estimates of the other costs at 50-1
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task. So my salary would need t...
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
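To make the comparison concrete, here is a rough back-of-envelope sketch. The staff cost and candidate payment figures are purely illustrative assumptions on my part, not numbers from the thread; only the 10-minute marking time comes from the comment above.

```python
# Back-of-envelope comparison of two per-candidate costs:
#   (a) staff time spent marking a test task
#   (b) paying the candidate to complete it
# All figures are illustrative assumptions, except the 10-minute marking time.

marking_minutes = 10        # time to mark one 1-hour test task (from the comment above)
staff_hourly_cost = 60.0    # hypothetical fully loaded staff cost, $/hour
candidate_payment = 50.0    # hypothetical payment for completing the 1-hour task, $

marking_cost = (marking_minutes / 60) * staff_hourly_cost
print(f"Marking cost per candidate: ${marking_cost:.2f}")
print(f"Payment cost per candidate: ${candidate_payment:.2f}")
print(f"Payment dominates marking cost: {candidate_payment > marking_cost}")
```

Under these made-up numbers the candidate payment dominates, but the conclusion flips if marking takes much longer or the payment is small, which is exactly why it's an empirical question.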
potential employers, neighbors, and others might come across it
I think saying "I am against scientific racism" is within the Overton window, and it would be extraordinarily unlikely for someone to be "cancelled" as a result of that. This level of risk aversion is straightforwardly deleterious for our community and wider society.
While I'm cognizant of the downsides of a centralized authority deciding what events can and cannot be promoted here, I think the need to maintain sufficient distance between EA and this sort of event outweighs those downsides.
Can I also nudge people to be more vocal when they perceive there to be a problem? I find it's extremely common that when a problem is unfolding nobody says anything.
Even the post above is posted anonymously. I see this as part of a wider trend where people don't feel comfortable expressing their viewpoints openly, which I think is not super healthy.
I can't speak for the original poster, but the Forum is on the public internet. I can't blame someone in the OP's shoes for not wanting their name anywhere near a discussion of “scientific racism” where potential employers, neighbors, and others might come across it -- even if their post is critical of the concept.
Sentient AI ≠ AI Suffering.
Biological life forms experience unequal (asymmetrical) amounts of pleasure and pain. This asymmetry is important. It's why you cannot make up for starving someone for a week by giving them food for a week.
This is true for biological life, because a selection pressure was applied (evolution by natural selection). This selection pressure is necessitated by entropy, because it's easier to die than it is to live. Many circumstances result in death; only a narrow band of circumstances results in life. Incidentally, this ...
...you claim that it's relevant when comparing lifesaving interventions with life-improving interventions, but it's not quite obvious to me how to think about this: say a condition C has a disability weight of D, and we cure it in some people who also have condition X with disability weight Y. How many DALYs did we avert? Do they compound additively, and the answer is D? Or multiplicatively, giving D*(1-Y)? I'd imagine they will in general compound idiosyncratically, but assuming we can't gather empirical information for every single combination of conditions
Disclosure: I discussed this with the OP (Mikołaj) prior to it being posted.
Low confidence that what I am saying is correct; I am brand new to this area and trying to get my head around it.
Yes, we can fix this fairly easily. We should decrease the number of DALYs gained from interventions (or components of interventions) that save lives by roughly 10%.
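To spell out the arithmetic under a multiplicative model (the ~0.1 average background disability weight below is my illustrative assumption, not a figure from the post): if $\bar{D}$ is the average background disability weight among the people whose lives are saved, each year of life saved is worth $1 - \bar{D}$ DALYs rather than 1, so

$$\text{DALYs}_{\text{adjusted}} = \text{DALYs}_{\text{lifesaving}} \times (1 - \bar{D}) \approx \text{DALYs}_{\text{lifesaving}} \times 0.9 \quad \text{for } \bar{D} \approx 0.1,$$

which is where a "roughly 10%" discount would come from.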
I agree this is not a bad way to fix this post hoc. One concern I would have using this model going forward is that you may overweight interventions that leave the beneficiary with some sort of long ...
As you write:
The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we've crossed an event horizon beyond which the outcome is almost entirely unforeseeable.
If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000
If, a...
I feel this claim is disconnected from the definition of the singularity given in the paper:
...The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity
Intelligence Explosion: For a sustained period
[...]
Extraordinary claims require extraordinary evidence: Proposing that exponential or hyperbolic growth will occur for a prolonged period [Emphasis mine]
Just to help nail down the crux here, I don't see why more than a few days of an intelligence explosion is require...
Circuits’ energy requirements have massively increased—increasing costs and overheating.[6]
I'm not sure I understand this claim, and I can't see that it's supported by the cited paper.
Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect.
To identify one crux with the idea of using morality to motivate behaviour (e.g. "abolitionism"): the assumption that it needs to be completely grassroots. The argument often becomes: did slavery end because everyone found it to be morally bad, or because economic factors etc. changed the country fundamentally?
It becomes much more plausible that morality played an important role when you modify the claim: Slavery ended because a group of important people realised it was morally wrong, and displayed moral leadership in changing laws.
I would generally view reaching out to a reasonable number of active Forum participants individually as not brigading. This is less likely to create a sufficient mass effect to mislead observers about the community's range of views.
I think about it this way. If a post were written critically about me, I would expect 5-10% of the people who know me in the community to see it, and 0.5% to comment. If I reached out to everyone I have ever been friendly with, I expect these numbers would be 50% and 5%, respectively. In other words, there would be 10x more comments ...
Astroturfing and troll farms are different from friends and people on your side saying their opinion
This is correct. What I am talking about is brigading.
Astroturfing and troll farms are only similar in the mechanism behind their ability to distort public opinion. That mechanism is: People are influenced by the tone and volume of comments they read.
...Are you saying you're against people being allowed to tell their friends and supporters about something they consider to be unethical and encouraging them to vote and comment according to their conscience?
There are some grey areas here:
Why would it be bad if he was given advance warning about this report?
Some people - to be completely frank, like yourself - will use advance notice to schedule their friends, fans and colleagues to write defensive comments. A high concentration of these types of comments can distort the quality of the conversation. This is commonly referred to as brigading.
This strategy is so effective that foreign governments have set up "troll farms", and companies have set up "astroturfing" operations, to benefit from degrading the quality of certain conversa...
I would draw a distinction between giving someone a read of a draft ahead of time, and actively communicating the date and time something is posted.
Could you say more about that? The Board's post stated their factual findings and actions without giving much of Owen's side of the story. While I don't think that was inappropriate, it seems fair to give Owen at least some lead time to prepare a statement of his perspective on the matter.
There is a history of people on this Forum veering to one side when a post is published before the respondent has a fair chance to respond, then moving to the other side when the response is filed. It's better to avoid that dynamic when possible.
There have been some complaints from a banned EA Forum user that the timing of this post, and the timing of comments that bolster Owen's character, have been coordinated. Whilst I think it's unlikely this is the case, I would love to see the following:
- Confirmation from the OP (@EV UK Board) that Owen was not given advance warning of the posting of this report. Or, if he was, some discussion of the potential issues with doing so.
- Some further discussion in the EA Forum team, and perhaps rules set, on coordinated posting (AKA "brigading").
I was told approximately when the post would go up. In fact, I asked them to delay a few days so that somebody could write to the people who spoke to the investigation to give them an opportunity to fact-check or object to my detailed responses. (I made some minor updates following feedback there, but of course this shouldn't be taken as saying that everyone involved endorses what I've written; in particular, people may reasonably have chosen not to read it.)
I did not suggest anyone comment in my defence, something I'd regard as inappropriate. Nor did I le...
Why would it be bad if he was given advance warning about this report? There's nothing in here about him being retaliatory. It seems probably good to hear the other side and be given a chance to look at the post before it goes live.
Also, it does say in the document that Owen was given advance notice. His document says that he saw the draft and disagreed with aspects of it that they didn't address in the post.
In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company works in vs. buying one at a broad career fair at a top university. While the specialised university may bring more people who have trained in and specialised in your area, you might still go for the top university, as the talent there might have greater overall potential, be able to pivot more easily, or contribute in more general areas like leadership, entrepreneurship, communications or similar.
I think this is a spot-on analogy, and something we've discussed in our group a lot.
Meta note: I'm not going to spend much more time on Nonlinear threads, since I think it's among the poorer uses of my time. With this in mind, I hope people don't take unilateral actions (e.g. deanonymizing Chloe or Alice) after discussing in this thread, because I suspect at this point threads like these filter for specific people and are less representative of the EA community as a whole.
As we later received more screenshots, it seems like we actually received definitive confirmation that the conversation on that date did indeed not result in Alice getting food.
I'm waiting for Ben, or someone else, to make a table of claims, counter-claims, and what the evidence shows, because Nonlinear providing evidence that doesn't support their claims seems to be a common occurrence. Just to give a new example, Kat screenshots herself replying "mediating! Appreciate people not talking to loud on the way back [...]" here, to provide evidence suppor...
This sounds right, but the counterfactual (no social accountability) seems worse to me, so I am operating on the assumption it's a necessary evil.
I live in a high-trust country, which has very little of this social accountability, i.e. if someone does something potentially rude or unacceptable in public, they are given the benefit of the doubt. However, I expect this works because others are employed, full time, to hold people accountable, e.g. police officers, ticket inspectors, traffic wardens. I don't think we have this in the wider Effective Altruism community right now.
This is awesome, great job!