All of ElliotJDavies's Comments + Replies

I’ve linked a template at the end with questions I’d find helpful for channeling my short-term motivation into action - maybe you’d find them helpful too!


This is awesome, great job! 

I agree with most of what you wrote here, but think that the pledge, as a specific high resolution effort, is not helpful.
 

This is quite possible, but that's why we will have M&E and are committing bounded amounts of time to this project - although neither of these is much help if there's a distinct externality or direct harm to the wider community.
 

> You're confusing what zero-sum does and does not mean

Would you be able to explain why you think so? I can see you've linked to a post but it would take me >15 minutes to read and I thi... (read more)

starting from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA


This seems like very unfortunate zero-sum framing to me. Speaking personally, I've taken the 10% pledge, been heavily involved in Giveffektivt.dk, pushed for GWWC to have (the first) pledge table at EAGxNordics '24, and am excited to support 10% pledge communities. 

When I work on expanding the 10% pledge community, that does not mean I am disparaging using one's career to do good, and vice versa. 
 

commitment by young adults into pledges to con

... (read more)
3
Davidmanheim
I agree with most of what you wrote here, but think that the pledge, as a specific high resolution effort, is not helpful. You're confusing what zero-sum does and does not mean - I agree with the point that a community that acts the way the EA community has is unfortunately exclusionary, but I also think that making more pledges does the opposite of removing those dynamics. I also think that looking at the outcomes for those who made pledges and stuck around is selecting on the outcome variable; the damage that high expectations have may be on-net worthwhile, but it would be unreasonable to come to that conclusion on the basis of talking to those who stuck around.

I'd think a better way to get feedback is to ask "What do you think of this pledge wording?" rather than encourage people to take a lifelong pledge before it's gotten much external feedback.

 

The idea of a Minimum Viable Product (MVP) is that you're unsure which parts of your product provide value and which parts are sticking points. After you release the MVP, the sticking points are much clearer, and you have a much better idea of where to focus your limited time and money. 

Asking people to try out a minimum viable product, which they can abandon if they don't like it, seems fine. Asking people to take a minimum viable pledge about how they will orient their entire career seems very different to me.

Thanks for your feedback! I appreciate it and agree that "maximize" is a pretty strong word. Just to clarify the crux here: would you say that this project doesn't make sense overall, or would you say that the text of the pledge should be changed to something more manageable?

2
Davidmanheim
I think it's a problem overall, and I've talked about this a bit in two of the articles I linked to. To expand, I'm concerned on a number of levels, starting from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA, to the idea that we should be a community that encourages making often already unhealthy levels of commitment by young adults into pledges to continue that level of dedication for their entire careers. As someone who has spent most of a decade working in EA, I think this is worrying, even for people deciding on their own to commit themselves. People should be OK with prioritizing themselves to a significant extent, and while deciding to work on global priorities is laudable *if you can find something that fits your abilities and skill set*, committing to do so for your entire career, which may not follow the path you are hoping for, seems at best unwise. Suggesting that others do so seems very bad. So again, I applaud the intent, and think it was a reasonable idea to propose and get feedback about, but I also strongly think it should be dropped and you should move to something else.

Thanks for flagging this Arepo, I will reach out to them!

Thanks for flagging this! I didn't know this was the case - I will reach out to them.

This is such a great question. We considered a very limited pool of ideas, for a very limited amount of time. I think the closest competitor was Career for Good. 

The thinking being that we can always get something up and test whether there's actually interest in this, before spending significant resources on the branding side of things. 

 

 One con of the current name is that it could elicit some reactions


I agree that seems to be playing out here! This could be a good reason to change the name.
 

It might be largely down to whet

... (read more)

Thanks for flagging this Pablo! I added it to the post after I read your comment

Thanks for the feedback Neel! Obviously as noted above, we released this quickly (after <12 hours of work) to get feedback exactly like this. We will focus on rewording the pledge statement to try to reduce or, if we're especially lucky, nullify the concerns you've raised here. 

I'd think a better way to get feedback is to ask "What do you think of this pledge wording?" rather than encourage people to take a lifelong pledge before it's gotten much external feedback.

For comparison, you could see when GWWC was considering changing the wording of its pledge (though I recognize it was in a different position as an existing pledge rather than a new one): Should Giving What We Can change its pledge?

2
Neel Nanda
Glad to hear it!

Does the pledge commit me to pursuing high-impact work specifically, or could it also include earning to give if that turns out to be my best option for impact later down the line? 

This is such a great question, and a vitally important consideration. With the current wording of the pledge, it states: 

I commit to using my skills, time, and opportunities to maximize my ability to make a meaningful difference

I take this wording to include Earning To Give, when it's the most impactful option available to you. 

I would be curious to hear what you ... (read more)

Thank you for the kind words Joey! I can confirm that you are the first Better Career Pledger! 

Part of what I think is so unique and inspiring about EA is that it's not just an approach to doing good, but also a community that helps others do good on their own journey. When we face setbacks—whether in animal welfare campaigns or in our own institutions—we have a choice. We can stay defeated by these difficulties, or we can choose to learn from our failures and help the community as a whole learn and improve.

I really do like it when the EA community, and posts like this, discuss this. On the current margin I think it increases my likelihood of embodying a growth mindset. 

Very interesting! I love the graphs comparing awareness of different orgs/concepts. 

canonical arguments for focusing on cost-effectiveness involve GHW-specific examples, that don't clearly generalize to the GCR space.

I am not sure I understand the claim being made here. Do you believe this to be the case because of a tension between hits-based and cost-effective giving? 

If so, I may disagree with the point. Fundamentally, if you're a "hit" grant-maker, you still care about (1) the amount of impact as a result of a hit, (2) the odds of getting a hit, (3) indicators which may lead up to getting a hit, and (4) the marginal impact of your gran... (read more)

Good job on highlighting this. While I very much understand GWWC's angle of approach here, I can see that there's essentially a dynamic that could be playing out whereby some areas (Animal Welfare and Global Development) get increasingly rigorous, while other areas (Longtermist problem-areas and Meta-EA) don't receive the same benefit. 

3
Aidan Whitfield🔸
Thanks for the comment! While we think it could be correct that the quality of evaluations differs between our recommendations in different cause areas, my view is that the evaluating evaluators project applies pressure to increase the strength of evaluations across all cause areas. In our evaluations we communicate areas where we think evaluators can improve. Because we are evaluating multiple options in each cause area, if in future evaluations we find one of our evaluators has improved and another has not, then the latter evaluator is less likely to be recommended in future, which provides an incentive for both evaluators to improve their processes over time.

I see a dynamic playing out here, where a user has made a falsifiable claim, I have attempted to falsify it, and you've attempted to deny that the claim is falsifiable at all. 

I recognise it's easy to stumble into these dynamics, but we must acknowledge that this is epistemically destructive.

Strictly speaking your salary is the wrong number here.


I don't think we should dismiss empirical data so quickly when it's brought to the table - that sets a bad precedent. 
 

other costs of employing you (and I've seen estimates of the other costs at 50-1

... (read more)
6
Ben Millwood🔸
My claim is that the org values your time at a rate that is significantly higher than the rate they pay you for it, because the cost of employment is higher than just salary and because the employer needs to value your work above its cost for them to want to hire you. I don't see how this is unfalsifiable. Mostly you could falsify them by asking orgs how they think about the cost of staff time, though I guess some wouldn't model it as explicitly as this. They do mean that we're forced to estimate the relevant threshold instead of having a precise number, but a precise wrong number isn't better than an imprecise (closer to) correct number. No, if you're comparing the cost of doing 10 minutes of work at salary X and 60 minutes of work compensated by Y, but I argue that salary X underestimates the cost of your work by a factor of 2, your salary now only needs to be more than 3 times larger than the work trial compensation, not 5 times. When it comes to concretising "how much does employee value exceed employee costs", it probably varies a lot from organisation to organisation. I think there are several employers in EA who believe that after a point, paying more doesn't really get you better people. This allows their estimates of value of staff time to exceed employee costs by enormous margins, because there's no mechanism to couple the two together. I think when these differences are very extreme we should be suspicious if they're really true, but as someone who has multiple times had to compare earning to give with direct work, I've frequently asked an org "how much in donations would you need to prefer the money over hiring me?" and for difficult-to-hire roles they frequently say numbers dramatically larger than the salary they are offering. This means that your argument is not going to be uniform across organisations, but I don't know why you'd expect it to be: surely you weren't saying that no organisation should ever pay for a test task, but only that organisa

Completed this, but it was difficult! 

It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV). 

 

My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task

This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task. So my salary would need t... (read more)

6
Ben Millwood🔸
Strictly speaking your salary is the wrong number here. At a minimum, you want to use the cost to the org of your work, which is your salary + other costs of employing you (and I've seen estimates of the other costs at 50-100% of salary). In reality, the org of course values your work more highly than the amount they pay to acquire it (otherwise... why would they acquire it at that rate) so your value per hour is higher still. Keeping in mind that the pay for work tasks generally isn't that high, it seems pretty plausible to me that the assessment cost is primarily staff time and not money.
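To make the arithmetic in this exchange concrete, here is a minimal sketch in Python. The 10 minutes of marking per 1-hour task and the 50-100% overhead range come from the comments above; the salary and task-pay figures are purely hypothetical assumptions for illustration:

```python
# Illustrative only: compare an org's cost of marking one test task
# against the cost of paying the candidate to complete it.
# All specific figures below are assumptions, not data from any org.

def org_cost_of_marking(salary_per_hour: float, overhead: float,
                        marking_minutes: float) -> float:
    """Cost to the org of the staff time spent marking one submission.

    overhead: non-salary employment costs as a fraction of salary
    (the comment above cites estimates of 0.5-1.0).
    """
    hourly_cost = salary_per_hour * (1 + overhead)
    return hourly_cost * marking_minutes / 60


def cost_of_paying_candidate(task_hours: float,
                             task_pay_per_hour: float) -> float:
    """Direct cost of compensating one candidate for the task."""
    return task_hours * task_pay_per_hour


# Hypothetical numbers: $50/hr salary, 75% overhead, 10 min of marking,
# and a 1-hour task compensated at $25/hr.
marking = org_cost_of_marking(50, 0.75, 10)   # ~ $14.58
paying = cost_of_paying_candidate(1, 25)      # $25.00
print(f"marking: ${marking:.2f}, paying: ${paying:.2f}")
```

On these particular assumptions the candidate payment is the larger cost; if an org values staff time far above salary (as described above for hard-to-fill roles), the marking cost dominates instead. The disagreement in this thread is really about that multiplier.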
4
David_Moss
  Whether or not to use "credentialist and biased methods (like looking at someone's LinkedIn profile or CV)" seems orthogonal to the discussion at hand?  The key issue seems to be that if you raise the screening bar, then you would be admitting fewer applicants to the task (the opposite of the original intention). This will definitely vary by org and by task. But many EA orgs report valuing their staff's time extremely highly. And my impression is that both grading longer tasks and then processing the additional applicants (many orgs will also feel compelled to offer at least some feedback if a candidate has completed a multi-hour task) will often take much longer than 10 minutes total.

I'd be curious to know the marginal cost of an additional attendee - I'd put it at 5-30 USD, assuming they attend all sessions. 

Assuming you update your availability on Swapcard, and that you would get value out of attending a conference, I suspect attending is positive EV. 

Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.

5
David_Moss
  My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1]. So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine. I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants).

[1] I think that many applicants are highly motivated to complete tasks, in order to have a chance of getting the job.

I read Saul's comment as discussing two different events: one event he was uninvited from; the other he would have been able to attend had he so wished. 

potential employers, neighbors, and others might come across it

I think saying "I am against scientific racism" is within the Overton window, and one would be extraordinarily unlikely to be "cancelled" as a result of that. This level of risk aversion is straightforwardly deleterious for our community and wider society. 

The person who sees the post after Googling the commenter's name is still potentially left with the impression of the commenter as part of a community that tolerates "scientific racism." That imposes costs that some of us, especially those with non-EA professional lives, would prefer not to bear.

While I'm cognizant of the downsides of a centralized authority deciding what events can and cannot be promoted here, I think the need to maintain sufficient distance between EA and this sort of event outweighs those downsides.


Can I also nudge people to be more vocal when they perceive there to be a problem? I find it's extremely common that when a problem is unfolding, nobody says anything. 

Even the post above is posted anonymously. To me, I see this as being part of a wider trend where people don't feel comfortable expressing their viewpoint openly, which I think is not super healthy. 

Even the post above is posted anonymously. To me, I see this as being part of a wider trend where people don't feel comfortable expressing their viewpoint openly, which I think is not super healthy. 

I can't speak for the original poster, but the Forum is on the public internet. I can't blame someone in the OP's shoes for not wanting their name anywhere near a discussion of “scientific racism” where potential employers, neighbors, and others might come across it -- even if their post is critical of the concept.

Sentient AI / AI Suffering.  

Biological life forms experience unequal (asymmetrical) amounts of pleasure and pain. This asymmetry is important. It's why you cannot make up for starving someone for a week by giving them food for a week. 

This is true for biological life, because a selection pressure was applied (evolution by natural selection). This selection pressure is necessitated by entropy, because it's easier to die than it is to live. Many circumstances result in death, only a narrow band of circumstances results in life. Incidentally, this ... (read more)

you claim that it's relevant when comparing lifesaving interventions with life-improving interventions, but it's not quite obvious to me how to think about this: say a condition C has a disability weight of D, and we cure it in some people who also have condition X with disability weight Y. How many DALYs did we avert? Do they compound additively, and the answer is D? Or multiplicatively, giving D*(1-Y)? I'd imagine they will in general compound idiosyncratically, but assuming we can't gather empirical information for every single combination of conditions

... (read more)
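For concreteness, the two compounding conventions in the quoted question work out as follows; the weights D = 0.3 and Y = 0.2 are purely illustrative assumptions:

```latex
% Additive: curing C averts its full disability weight, regardless of X
\text{DALYs averted per year} = D = 0.3

% Multiplicative: health states combine as (1-D)(1-Y), so curing C
% moves the person from (1-D)(1-Y) to (1-Y)
\text{DALYs averted per year} = (1-Y) - (1-D)(1-Y) = D(1-Y) = 0.3 \times 0.8 = 0.24
```

The gap between the two conventions grows with Y, which is why the choice matters most for beneficiaries with severe comorbidities.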

Disclosure: I discussed this with the OP (Mikołaj) before it was posted. 

Low confidence in what I am saying being correct - I am brand new to this area and trying to get my head around it. 

Yes, we can fix this fairly easily. We should decrease the number of DALYs gained from interventions (or components of interventions) that saves lives by roughly 10%.

I agree this is not a bad way to fix things post hoc. One concern I would have using this model going forward is that you may overweight interventions that leave the beneficiary with some sort of long ... (read more)

Sounds like a very interesting intervention. I'd be keen to give it a try but I am only in the UK for 1-2 weeks a year. 

To a large extent I don't buy this. Academics and journalists could interview an arbitrary EA Forum user on a particular area if they wanted to get up to speed quickly. The fact they seem not to do this, in addition to not giving a right to reply, makes me think they're not truth-seeking. 

5
David Thorstad
I’d like to hope that academics are aiming for a level of understanding above that of a typical user on an Internet forum. All academic works have a right to reply. Many journals print response papers and it is a live option to submit responses to critical papers, including mine. It is also common to respond to others in the context of a larger paper. The only limit to the right of academic reply is that the response must be of suitable quality and interest to satisfy expert reviewers.

Just to note: I have a COI in commenting on this subject. 

I strong-downvoted your comment, as it reads to me as making bold claims whilst providing little supporting evidence. References to "lots of people in this area" could be considered an instance of the bandwagon fallacy. 

5
Michael St Jules 🔸
In my opinion, a strong downvote is too harsh for a plausibly good faith comment with some potentially valuable criticism, even if (initially) vague. 1. They elaborated on some of their concerns in the replies. 2. You could ask them to elaborate more if they can (without deanonymizing people without their consent) on specific issues instead of strong downvoting.

As you write: 

The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents

The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we've crossed an event horizon beyond which the outcome is almost entirely unforeseeable. 

If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000

If, a... (read more)

I feel this claim is disconnected from the definition of the singularity given in the paper: 

The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity

... (read more)
4
David Thorstad
Ah - that comes from the discontinuity claim. If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.  (The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that's harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement). 
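A toy numerical sketch of this point, with made-up parameters chosen only to show the shape of the argument (not estimates from the paper): the same accelerating-growth process is merely impressive if cut off early, and explosive if sustained.

```python
# Toy model of accelerating growth: the growth rate itself compounds.
# All parameters are illustrative assumptions.

def accelerating_growth(initial: float, rate: float,
                        acceleration: float, steps: int) -> float:
    """Compound `initial` for `steps` periods, multiplying the
    per-period growth rate by `acceleration` each period."""
    value = initial
    for _ in range(steps):
        value *= 1 + rate
        rate *= acceleration
    return value

# Cut off early: a large but bounded multiple (roughly 50x here).
print(accelerating_growth(1.0, 0.02, 1.05, steps=50))
# Sustained much longer: explosive, dozens of orders of magnitude.
print(accelerating_growth(1.0, 0.02, 1.05, steps=120))
```

Whether the first or the second regime is the right model is exactly the "sustained period" premise being debated here.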

Intelligence Explosion: For a sustained period

[...]

Extraordinary claims require extraordinary evidence: Proposing that exponential or hyperbolic growth will occur for a prolonged period [Emphasis mine]

 

  • I'm not sure why "prolonged period" or "sustained" was used here.
  • I am also not sure what is meant by "prolonged period" - 5 years? 100 years?
    • For the answer to the above, why do you believe this would be required?

Just to help nail down the crux here, I don't see why more than a few days of an intelligence explosion is require... (read more)

4
ElliotJDavies
I feel this claim is disconnected from the definition of the singularity given in the paper:  Further in the paper you write:  [Emphasis mine]. I can't see any reference for either the original definition or the later addition of "sustained". 

Circuits’ energy requirements have massively increased—increasing costs and overheating.[6]


I'm not sure I understand this claim, and I can't see that it's supported by the cited paper. 

Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect. 

3
David Thorstad
Here's a gentle introduction to the kinds of worries people have (https://spectrum.ieee.org/power-problems-might-drive-chip-specialization). Of the cited references "the chips are down for moore's law" is probably best on this issue, but a little longer/harder. There's plenty of literature on problems with heat dissipation if you search the academic literature. I can dig up references on energy if you want, but with Sam Altman saying we need a fundamental energy revolution even to get to AGI, is there really much controversy over the idea that we'll need a lot of energy to get to superintelligence? 

The joy in righteousness

 

This is a new one to me! Interesting!

One crux with the idea of using morality to motivate behaviour (e.g. "abolitionism") is the assumption that it needs to be completely grassroots. The argument often becomes: did slavery end because everyone found it to be morally bad, or because economic factors etc. changed the country fundamentally?

It becomes much more plausible that morality played an important role when you modify the claim: slavery ended because a group of important people realised it was morally wrong, and displayed moral leadership in changing laws. 

While I don't think that was inappropriate, it seems fair to give Owen at least some lead time to prepare a statement of his perspective on the matter. 

I think you're right about this, and have changed my mind. 

I would generally view reaching out to a reasonable number of active Forum participants individually as not brigading. This is less likely to create a sufficient mass effect to mislead observers about the community's range of views.

I think about it this way. If a post was written critically about me, I would suspect 5-10% of people that know me in the community to see it, and 0.5% to comment. If I reach out to everyone I have ever been friendly with, I expect these numbers would be 50% and 5%, respectively. In other words, there would be 10x more comments ... (read more)

4
Jason
I think "a reasonable number of active Forum participants individually" is doing some real work here -- "everyone I have ever been friendly with" would not count. I think there is usually value in having people who know the subject well participating in the comments, and by your math there is a good chance that zero or one of the ten people best positioned to provide a sympathetic perspective would even see the post organically. A reasonable number would depend on the circumstances, but I was thinking more ~3 acceptances? One could argue that these individuals should disclose their status as solicited commenters. But people comment on the Forum for any number of reasons, obvious and inscrutable, so I can't find a sufficient rationale for singling out a few solicited commenters. There's no norm, for instance, for friends of a post's author to self-identify themselves as such. I am relatively less worried about a few commenters skewing the course of discussion (as opposed to strongvoters) for two reasons. The first is that comments have substance that can be evaluated as convincing or non-convincing. The popularity of that substance can be evaluated via up/down and agree/disagree voting, which provides some check on unrepresentative comments appearing to be consensus. Second, at least regulars have a decent sense of who is who; if someone who is an infrequent commenter starts on a commenting spree defending person X, we have a pretty good idea that they are motivated by some sort of external reason and can adjust accordingly.
2
Jason
I think the qualifier "a reasonable number of active Forum participants" in my comment is doing some real work and wouldn't be met if you asked "everyone [you] have ever been friendly with" -- even if we add in an implied limitation to current Forum participants. Let's take a case in which I invited my hypothetical friends Abel, Baker, and Charlie to participate in a thread that was critical of me. I think there is value in having some people who know well the person who is subject to the controversy present in the conversation. An invitation increases the likelihood of having those voices present; if the base rate of people even seeing the post is 5-10% per your example above, there's a good chance that zero of the ten community members best situated to provide a favorable perspective on the subject will even see the post -- much less decide to comment.  On the whole, I think the presence of Abel, Baker, and Charlie in the comments would be net positive. I'm sure it is exhausting to feel the need to respond to a post that is critical of you and all the comments thereunto, and asking for help can be appropriate. Even if all accept, it's only three voices, and the community is capable of evaluating the substance of what they say and reacting accordingly. In contrast, with votes there is no ability to evaluate whether the votes are based on solid reasoning or instead represent a voter's predisposition toward the subject of the post. I see the point that Abel, Baker, and Charlie could say that I asked them to comment. However, I think they should be part of the conversation, and expecting them to flag themselves gives the impression that they are true brigaders. People have all sorts of incentives and motivations for posting, and I'm not convinced this motivation should be singled out for per se special disfavor. In this particular case, most active Forum participants would have seen the post given the prominence of the Owen situation and the Time article. And par

I wrote a report for CE on an AMR idea; the cost-effectiveness analyses of which will be released soon and I will post here when they are!

Hey Akhil, is there any update here? 

Astroturfing and troll farms are different from friends and people on your side saying their opinion

This is correct. What I am talking about is brigading.

Astroturfing and troll farms are only similar in the mechanism behind their ability to distort public opinion. That mechanism is: people are influenced by the tone and volume of the comments they read.

Are you saying you're against people being allowed to tell their friends and supporters about something they consider to be unethical and encouraging them to vote and comment according to their conscience?

... (read more)

There are some grey areas here:

  • Inviting participation from people who are not part of the relevant community is clearly brigading. Unless they abstain from voting and clearly disclose their origin, they would be masquerading as community members and giving a false impression of the community's views.
  • Inviting participation from people who are part of the relevant community presents a closer question. There's still a risk of creating a misleading impression of the community's views, but there isn't the astroturf-like presentment of inauthentic views as commu
... (read more)
7
Kat Woods 🔶 ⏸️
OK, that seems more reasonable. Not sure I agree, but at least this seems doable. Before, it just seemed like you were saying that people shouldn't be allowed to share a post with friends and tell them to vote and comment according to their conscience.  This is food for thought. I will think about it and may update my policy. 

Why would it be bad if he was given advance warning about this report?


Some people - to be completely frank, like yourself - will use advance notice to schedule their friends, fans and colleagues to write defensive comments. A high concentration of these types of comments can distort the quality of the conversation. This is commonly referred to as brigading.

This strategy is so effective that foreign governments have set up "troll farms", and companies have set up "astroturfing" operations, to benefit from degrading the quality of certain conversa... (read more)

26
Jason

I would create a distinction between giving someone a read of a draft ahead of time, and actively communicating the date and time something is posted. 

Could you say more about that? The Board's post stated their factual findings and actions without giving much of Owen's side of the story. While I don't think that was inappropriate, it seems fair to give Owen at least some lead time to prepare a statement of his perspective on the matter. 

There is a history of people on this Forum veering to one side when a post is published before the respondent has a fair chance to respond, then moving to the other side when the response is filed. It's better to avoid that dynamic when possible.

1
Kat Woods 🔶 ⏸️
Astroturfing and troll farms are different from friends and people on your side saying their opinion. Astroturfing is when it's people or fake people saying things they don't actually believe in exchange for pay.  Are you saying you're against people being allowed to tell their friends and supporters about something they consider to be unethical and encouraging them to vote and comment according to their conscience?

There have been some complaints from a banned EA Forum user that the timing of this post, and the timing of comments that bolster the character of Owen, have been coordinated. Whilst I think it's unlikely this is the case, I would love to see the following: 

- Confirmation from OP (@EV UK Board) that Owen was not given advance warning of the posting of this report. Or, if he was, some discussion around the potential issues with doing so. 

- Some further discussion in the EA Forum team, and perhaps rules set, on coordinated posting (AKA "brigading"). 

I was told approximately when the post would go up. In fact, I asked them to delay a few days so that somebody could write to the people who spoke to the investigation to give them an opportunity to fact-check or object to my detailed responses. (I made some minor updates following feedback there, but of course this shouldn't be taken as saying that everyone involved endorses what I've written; in particular, people may reasonably have chosen not to read it.)

I did not suggest anyone comment in my defence, something I'd regard as inappropriate. Nor did I le... (read more)

Why would it be bad if he was given advance warning about this report? There's nothing in here about him being retaliatory. It seems probably good to hear the other side and be given a chance to look at the post before it goes live. 

Also, it does say in the document that Owen was given advance notice. His document says that he saw the draft and disagreed with aspects of it that they didn't address in the post. 

In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair of a top university. While the specialised university may bring more people that have trained in and are specialised in your area, you might still go for the top university, as talent there might have greater overall potential, be able to pivot more easily, or contribute in more general areas like leadership, entrepreneurship, communications or similar.

 

I think this is a spot-on analogy, and something we've discussed in our group a lot.

Meta note: I'm not going to spend much more time on nonlinear threads, since I think it's among the poorer uses of my time. With this in mind, I hope people don't take unilateral actions (e.g. deanonymizing Chloe or Alice) after discussing in this thread, because I suspect at this point threads like these filter for specific people and are less representative of the EA community as a whole.

As we later received more screenshots, it seems like we actually received definitive confirmation that the conversation on that date did indeed not result in Alice getting food.

I'm waiting for Ben, or someone else, to make a table of claims, counter-claims, and what the evidence shows. Because Nonlinear providing evidence that doesn't support their claims seems to be a common occurrence. 

Just to give a new example, Kat screenshots herself replying "mediating! Appreciate people not talking to loud on the way back [...] " here, to provide evidence suppor... (read more)

Uh, the word in that screenshot is "meditating". She was asking people to not talk too loudly while she was meditating.

Excited to hear both of these announcements!

This sounds right, but the counterfactual (no social accountability) seems worse to me, so I am operating on the assumption it's a necessary evil. 

I live in a high-trust country, which has very little of this social accountability - i.e. if someone does something potentially rude or unacceptable in public, they are given the benefit of the doubt. However, I expect this works because others are employed, full-time, to hold people accountable: e.g. police officers, ticket inspectors, traffic wardens. I don't think we have this in the wider Effective Altruism community right now. 
