As you write:
The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents
The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we've entered an event horizon where the output is almost entirely unforeseeable.
If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000
If, a...
I feel this claim is disconnected from the definition of the singularity given in the paper:
...The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity
Intelligence Explosion: For a sustained period
[...]
Extraordinary claims require extraordinary evidence: Proposing that exponential or hyperbolic growth will occur for a prolonged period [Emphasis mine]
Just to help nail down the crux here, I don't see why more than a few days of an intelligence explosion is require...
Circuits’ energy requirements have massively increased—increasing costs and overheating.[6]
I'm not sure I understand this claim, and I can't see that it's supported by the cited paper.
Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect.
One crux with the idea of using morality to motivate behaviour (e.g. "abolitionism") is the assumption that it needs to be completely grassroots. The argument often becomes: did slavery end because everyone found it to be morally bad, or because economic factors etc. changed the country fundamentally?
It becomes much more plausible that morality played an important role when you modify the claim: slavery ended because a group of important people realised it was morally wrong, and displayed moral leadership in changing laws.
While I don't think that was inappropriate, it seems fair to give Owen at least some lead time to prepare a statement of his perspective on the matter.
I think you're right about this, and have changed my mind.
I would generally view reaching out to a reasonable number of active Forum participants individually as not brigading. This is less likely to create a sufficient mass effect to mislead observers about the community's range of views.
I think about it this way. If a post was written critically about me, I would suspect 5-10% of people that know me in the community to see it, and 0.5% to comment. If I reach out to everyone I have ever been friendly with, I expect these numbers would be 50% and 5%, respectively. In other words, there would be 10x more comments ...
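A minimal sketch of this back-of-the-envelope arithmetic (the network size of 200 is a hypothetical placeholder I've added; the see/comment rates are the ones above):

```python
def expected_comments(network_size, see_rate, comment_rate):
    """People in my network who see the post, and of those, who comment."""
    seen = network_size * see_rate
    comments = network_size * comment_rate
    return seen, comments

network = 200  # hypothetical number of people who know me in the community

# Baseline: the post goes up and I do no outreach (7.5% see it, 0.5% comment).
base_seen, base_comments = expected_comments(network, 0.075, 0.005)

# After reaching out to everyone I've ever been friendly with (50% and 5%).
out_seen, out_comments = expected_comments(network, 0.50, 0.05)

print(out_comments / base_comments)  # → 10.0: ten times more comments
```

The multiplier is independent of the network size, since it cancels out; only the change in comment rate matters.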
I wrote a report for CE on an AMR idea; the cost-effectiveness analyses will be released soon, and I will post here when they are!
Hey Akhil, is there any update here?
Astroturfing and troll farms are different from friends and people on your side saying their opinion
This is correct. What I am talking about is brigading.
Astroturfing and troll farms are only similar in the mechanism behind their ability to distort public opinion. That mechanism is: People are influenced by the tone and volume of comments they read.
...Are you saying you're against people being allowed to tell their friends and supporters about something they consider to be unethical and encouraging them to vote and comment according to their conscience?
There are some grey areas here:
Why would it be bad if he was given advance warning about this report?
Some people - to be completely frank, like yourself - will use advance notice to schedule their friends, fans and colleagues to write defensive comments. A high concentration of these types of comments can distort the quality of the conversation. This is commonly referred to as brigading.
This strategy is so effective that foreign governments have set up "troll farms", and companies have set up "astroturfing" operations, to benefit from degrading the quality of certain conversa...
I would create a distinction between giving someone a read of a draft ahead of time, and actively communicating the date and time something is posted.
Could you say more about that? The Boards' post stated their factual findings and actions without giving much of Owen's side of the story. While I don't think that was inappropriate, it seems fair to give Owen at least some lead time to prepare a statement of his perspective on the matter.
There is a history of people on this Forum veering to one side when a post is published before the respondent has a fair chance to respond, then moving to the other side when the response is filed. It's better to avoid that dynamic when possible.
There have been some complaints from a banned EA Forum user that the timing of this post, and the timing of comments that bolster the character of Owen, have been coordinated. Whilst I think it's unlikely this is the case, I would love to see the following:
- Confirmation from OP (@EV UK Board) that Owen was not given advance warning of the posting of this report. Or, if he was, some discussion around the potential issues with doing so.
- Some further discussion in the EA Forum team, and perhaps rules set, on coordinated posting (AKA "brigading").
I was told approximately when the post would go up. In fact, I asked them to delay a few days so that somebody could write to the people who spoke to the investigation to give them an opportunity to fact-check or object to my detailed responses. (I made some minor updates following feedback there, but of course this shouldn't be taken as saying that everyone involved endorses what I've written; in particular, people may reasonably have chosen not to read it.)
I did not suggest anyone comment in my defence, something I'd regard as inappropriate. Nor did I le...
Why would it be bad if he was given advance warning about this report? There's nothing in here about him being retaliatory. It seems probably good to hear the other side and be given a chance to look at the post before it goes live.
Also, it does say in the document that Owen was given advance notice. His document says that he saw the draft and disagreed with aspects of it that they didn't address in the post.
In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair at a top university. While the specialised university may bring more people who have trained in and are specialised in your area, you might still go for the top university, as the talent there might have greater overall potential, can more easily pivot, or can contribute in more general areas like leadership, entrepreneurship, communications or similar.
I think this is a spot-on analogy, and something we've discussed a lot in our group.
Meta note: I'm not going to spend much more time on nonlinear threads, since I think it's among the poorer uses of my time. With this in mind, I hope people don't take unilateral actions (e.g. deanonymizing Chloe or Alice) after discussing in this thread, because I suspect at this point threads like these filter for specific people and are less representative of the EA community as a whole.
As we later received more screenshots, it seems like we actually received definitive confirmation that the conversation on that date did indeed not result in Alice getting food.
I'm waiting for Ben, or someone else, to make a table of claims, counter-claims, and what the evidence shows, because Nonlinear providing evidence that doesn't support their claims seems to be a common occurrence. Just to give a new example, Kat screenshots herself replying "mediating! Appreciate people not talking to loud on the way back [...] " here, to provide evidence suppor...
Uh, the word in that screenshot is "meditating". She was asking people to not talk too loudly while she was meditating.
This sounds right, but the counterfactual (no social accountability) seems worse to me, so I am operating on the assumption it's a necessary evil.
I live in a high-trust country, which has very little of this social accountability; i.e. if someone does something potentially rude or unacceptable in public, they are given the benefit of the doubt. However, I expect this works because others are employed, full time, to hold people accountable: police officers, ticket inspectors, traffic wardens. I don't think we have this in the wider Effective Altruism community right now.
Thanks for providing context here. Similar to Vaipan, I wasn't sure why people were disagree-voting/downvoting me.
I've been thinking about whether there is some kind of informal court or arbitration system that would allow the social pressure here to be less driven by people trying to individually enact social enforcement.
My model has been there should be social enforcement for both poor epistemic practices and rude/unkind communication.
I have been an active commenter in both posts, with a goal of social pressure in mind (i.e. providing accountability and a social pressure to not behave inappropriately towards/with your employees).
I'd be interested to hear ...
To me it seems like everyone individually applying social pressure is hard to calibrate. Oli seems to be saying that he and Ben did not intend the level of social consequences NL has felt based on what they shared, but rather an update that NL shouldn’t be a trusted EA org. I think that it’s hard to control the impression that people will get when you provide a lot of evidence, even if it’s all relatively minor, and almost impossible to control snowballing dynamics in comment sections and on social media when people fear being judged for the wrong reaction, so it just might not be possible for a post like Ben’s to be received in a calibrated way.
We show Chloe’s work contract in the third row of the very first table.
Can I confirm I am seeing the correct image? I see a screenshot of a Google document, as opposed to a contract signed by both parties. Would you be able to confirm this contract was signed by both parties?
The link you share isn't saying that pharmacies are illegal, it’s saying that they sometimes sell counterfeit drugs, and that's illegal.
It indeed looks like the article I linked was related to counterfeit drugs, and not necessarily dispensing drugs without prescription. A...
Perhaps something missed from your list: the lack of moral seriousness regarding the value of the money being spent. I can imagine my global development and animal welfare colleagues would be pretty displeased to learn that Nonlinear has received over 500,000 USD in funding.
From reading into this discussion, including the linked appendix document, there's no reason for me to think that they were ready to receive this amount of money, or likely to use it effectively.
I agree with this though it is unfortunately much the same in lots of longtermism/AI safety. Also, if I am not mistaken, Emerson funds a lot of Nonlinear himself.
I agree with @Habryka that this comment is underspecified and likely written without proper review of the appendix linked. I suspect many readers are likely to conflate disputed with debunked, and this comment plays into that. This works so well, and its use is so widespread, that it has a name: FUD.
In the comments below, I have asked Spencer Greenberg to specify the most important claims he feels have been repudiated, and why he thinks so. I expect the answer will be genuinely elucidating to me.
ElliotJDavies - I had read earlier versions of the post and the appendix, which is why I felt somewhat confident in commenting on the quality of Ben Pace's fact-checking (or lack thereof).
(3) I didn't do a detailed look at every row in the "Short summary overview table", but for the ones I did look into in more detail, I found Nonlinear's counter evidence to be compelling. That table is organized by claim and is in an easy-to-navigate structure, so I suggest people take a look for themselves at the evidence Nonlinear provided regarding whatever claims they think are important.
I would have loved to hear in your own words the most important claims that you think have been rebutted, and why you think so. When I look through the appendix docume...
We show Chloe’s work contract in the third row of the very first table. We also link to interview transcripts showing that we paid her exactly what she was promised. This is a clear example of Chloe lying.
If you don’t, update based on that, I’m not sure what to say. She knowingly and clearly lied, despite knowing that we had a work contract and interview transcripts showing this. Please consider that you shouldn't trust somebody who has provably lied to you and the community multiple times.
For #2, you are saying you're worried about a peopl...
1) Do you have any concerns that the section above on Ben Pace could be considered an ad hominem attack, i.e. attacking someone's character rather than their claims? [1]
2) How long do you think it would have been reasonable for Ben Pace to wait? With the benefit of hindsight, we can see it has taken nonlinear 96 days to write a response to his post. [2]
3) What specific claims do you think have been rebutted? Perhaps you can quote Ben's original piece; link to the evidence which disproves it; and include your interpretation of what said evidence shows....
Hi Elliot. To respond to your questions:
(1) I interpreted the section "Sharing Information About Ben Pace" as making the point that it's quite easy to make very bad-sounding accusations that are not reliable and that are not something people should update to any significant degree on if one applies a one-sided and biased approach. It sounds like some people interpreted it differently, but I thought the point of the section was quite clear (to me, anyway) based on this part of it: "However, this is completely unfair to Ben. It’s written in the style of a hi...
In my opinion, including a photo section was surprising and came across as almost completely misunderstanding the nature of Ben's post. It is going to make it a bit hard to read any further with even-handed consideration.
In addition to the overall tone of this post being generally unprofessional.
Yeah, I don't necessarily mind an informal tone. But the reality is, I read [edit: a bit of] the appendix doc and I'm thinking, "I would really not want to be managed by this team and would be very stressed if my friends were being managed by them. For an organisation, this is really dysfunctional." And not in an, "understandably risky experiment gone wrong" kind of way, which some people are thinking about this as, but in a, "systematically questionable judgement as a manager" way. Although there may be good spin-off convos around, "how risky orgs should ...
Just to be clear, was this work contract signed by both parties? If one has made a verbal contract to do X, but before any work is done, a different written contract to do Y is drafted and signed, the written contract will take precedence over the verbal contract (Y > X). I.e. it wouldn't matter what was promised in interviews, as long as you have a written contract agreeing the compensation package. [1]
GPT-4 tells me this is often referred to as the "parol evidence rule", and identifies some exc
Wait, but it might actually have an opportunity cost? Like, those people could be doing something other than trying to get more medium-sized donors. There is a cost to pushing on this versus something else.
Most of the people working on giving platforms are pretty uniquely passionate about giving. The donation platform team we have isn't that excited about EA community building in general. This is a good, concrete example of one way a zero-sum model breaks down.
...But, we can also go for those benefits directly without necessarily getting more do
Thanks for clarifying, and I am certainly inclined to defer to you.
One concern I would have is the extent to which these subjective estimations are borne out by empirical data. Obviously you'd never deliberately hire the second-best candidate, so we never really test our accuracy. I suspect this is particularly bad with hiring, where the hiring process can be really comprehensive and it's still possible to make a bad hire.
In the US teaching jobs are incredibly stable, and stable things pay less. Unless the UK is very different, I expect that EA jobs would need to pay more just to have people end up with the same financial situation over time, because instability is expensive
I agree that there should be a premium for instability.
I'm not sure where CEA falls in this spectrum. In a sister comment Cait says that for CEA their number two candidate is often half as impactful as their top choice.
I agree this is a crux. To the extent you're paying more for better candidates, I...
My claim is that your intuitions are the opposite of what they would be if applied to the for-profit economy. Your response (if I understand correctly) is to question the veracity of the analogy, which seems not to really get at the heart of the efficient-market heuristic. I.e. you haven't claimed that bigger donors are more likely to be efficient; you've just claimed that efficiency in charitable markets is generally unlikely?
Besides this, shorting isn't the only way markets regulate (or deflate) prices. "Selling" is the more common pathway. In this contex...
I agree with this post. One mechanism that's really missing in this discussion is the value of marginal grants. Large grant makers claim to appreciate the value of marginal grants, but in practice I have rarely seen it. Instead, for small orgs at least, a binary decision is made to fund or not fund the project.
For smaller and newly starting projects, these marginal grants are really important (as highlighted in your story). A donor might not be able to fund an office space, but they can fund the deposit. Or they can't fund the deposit, but they can fund the furniture. These are small coordination problems, but they make up most of what it means to get small and medium-sized organizations off the ground.
To add to what Caitlin said, my experience as a hiring manager and as a candidate is that this often is not the case.
When I was hired at CEA I took roles on two different teams (Head of US Operations at CEA and the Events Team role at EAO, which later merged into CEA). My understanding at the time was that they didn’t have second-choice candidates with my qualifications, and I was told by the EAO hiring manager that they would not have filled the position if I hadn’t accepted (I don’t remember whether I checked this with the CEA role).
I should note that I was...
Thanks for responding, it has really helped me clarify my understanding of your views.
I do think the comparison to teacher wages is a little unfair. In the US, teaching jobs are incredibly stable, and stable things pay less. Unless the UK is very different, I expect that EA jobs would need to pay more just to have people end up with the same financial situation over time, because instability is expensive. But this is maybe a 10% difference, so not that important in the scheme of things.
I think a big area of contention (in all the salary disc...
If you lose your top choice due to insufficient salary, how good do you expect the replacement to be?
For CEA, I'd likely guess they'd be indistinguishable for most roles most of the time.
I'm CEA's main recruitment person and I've been involved with CEA's hiring for 6+ years. I've also been involved in hiring rounds for other EA orgs.
I don't remember a case where the top two candidates were "indistinguishable." The gap very frequently seems quite large (e.g. our current guess is the top candidate might be twice as impactful, by some definition o...
Getting (e.g.) 5000 new people giving 20k a year seems a huge lift to me. [...] A diffuse 'ecosystem wide' benefits of these additional funders struggles by my lights to vindicate the effort (and opportunity costs) of such a push.
One problem I have with these discussions, including past discussions about why national EA orgs should have a fundraising platform, is the reductionist and zero-sum thinking given in response.
I identified above how an argument stating that fewer donors results in more efficiency would never be made in the for-profit world. ...
I disagree with this (except the unilateralist's curse), because I suspect something like the efficient market hypothesis plays out when you have many medium-to-small donors. I think it's suspect that one wouldn't make the same argument as the above for the for-profit economy.
I think the crux for me is using EAG branding for an event that doesn't represent all of Effective Altruism. If, like last year, an event were run by CEA focusing on a particular area, I wouldn't be too concerned.
I'm a little late to this thread, but I think this is very regrettable. I feel quite strongly that CEA should be building and growing a "big tent" Effective Altruism, around the core principles of EA. I think this announcement is quite corrosive to that goal.
I strongly support cause-specific field building, but this is best suited for sister organisations and not the Centre for Effective Altruism.
A lot of organisations in the EA community building space are underperforming, including CEA and including the organisation that I run. That's okay. W...
Hey Andre, I can see the decks are now unavailable. Is there any chance of putting them back up?
One critique of this piece would be that perhaps Impactful | Attractive should be at 20° rather than 45°, since it seems like attractive ideas are more likely to be unimpactful. This is because (1) they fit preconceived biases about the world, and (2) they're less neglected.
So to some extent, I wonder if schlep ideas exist at all in EA.
I think a good question to ask is, what work do you wish someone else would do?
- Horrible career moves, e.g. investigating the corrupt practices of powerful EAs/orgs
- Boring to most people, e.g. compiling lists and data
- Low status outside EA, e.g. welfare of animals nobody cares about (e.g. shrimp)
- Low status within EA, e.g. global mental health
- Living in relatively low-quality-of-living areas, e.g. fieldwork in many African countries
[Summary by Claude 2] Here is a summary of the key points from the speech:
I thoroughly enjoyed this episode. I am not always sympathetic to tech-utopianism, as I feel enthusiasts don't always "read the room" regarding all of the challenges and suffering that are currently present. But I was impressed by how thoughtful, considerate and elucidating Anders was throughout.
That is precisely what I am looking for. I feel a bit silly now, because I hadn't noticed these previously.
FWIW, @80000_Hours I think the "[#Ep] [podcast title]" formatting is a much better title format than how episodes were linked prior to the last one.
Episode highlights tend to be shared under this account, which might be a good place to leave thoughts.
That's totally it, thanks for flagging, I have not seen these previously (not sure why).
80K podcast discussion threads.
Feature request: discussion threads on episodes of the 80k podcasts.
I consistently generate a lot of thoughts listening to the 80k podcast. I could imagine leaving these as comments, reading others' thoughts, and identifying cruxes. To further steelman the need for podcast threads: the 80k podcast is the most widely consumed, in-depth media produced in the EA sphere at the moment.
Quickly skimming the dashboards linked and this post, I feel the post above is hyperbolic and alarmist. At several points it reads as though the continuation of a trendline is being attributed to FTX, when more parsimonious explanations (e.g. there hasn't been as much media outreach post-FTX) could relax the reader quite a bit.
They seem potentially important/actionable, if true!
Had the same thought as you OP. It had been rattling around in my head for a few days now, so I appreciate you making this post.
Just to note: I have a COI in commenting on this subject.
I strong downvoted your comment, as it reads to me as making bold claims whilst providing little supporting evidence. References to "lots of people in this area" could be considered to be a use case of the bandwagon fallacy.