All of Ben Millwood's Comments + Replies

Yeah, sorry, when I said "unhinged" I meant "the US penal system is in general unhinged", not "this ruling in particular is unhinged". I also used "evil" as an illustrative / poetic example of something which I'd rather be inconsistent than consistent, and implied more than I intended that the sentencing judge was actually doing evil in this case.

It's possible that I'm looking at how the system treats e.g. poor people and racial minorities, where I think it's much more blatantly unreasonable, and transplanting that judgement into cases where it's less meri... (read more)

2
Jason
2m
I understood the gist in context as ~ "using US sentencing outcomes as a partial framework, or giving weight to consistency when many sentences are excessive or even manifestly so, poses significant problems." And I think that is a valid point.

Your last sentence raises another possible difference in how to approach the question: my reactions to how long he should serve are bounded by the options available under US law. I didn't check, but I think the maximum term of supervised release (the means of imposing certain post-sentence restrictions) is only a few years here. And there is no discretionary parole in the federal system, so I can only go off of SBF's lack of remorse (which requires acknowledgement of wrongdoing, not just mistakes-were-made) in assessing his future dangerousness. It's possible I would go down somewhat if I could maintain a tight leash on post-sentence conduct in exchange.

Finally, I think it's appropriate to consider a few other practical realities. It is practically essential to give defendants an incentive to plead guilty when they are actually guilty; that discount is commonly thought of as 25-33%. Likewise, we have to further punish defendants who tamper with witnesses and perjure themselves. The US system detains way too many people pre-trial, and if we're going to fix that then I think the additional sanction for abusing pre-trial release has to be meaningful. So I have almost a doubling of the sentence here compared to a version of SBF who pled guilty, didn't tamper, and didn't perjure. To me, then, saying 25 years was enough here implies ~12.5 would be enough for that version of SBF, with about 9-10 years of estimated actual incarceration.

yes, that's right.

I can think of grounds to disagree, though. Say for example you were able to disproportionately protect e.g. white people from being prosecuted for jaywalking. I think jaywalking shouldn't be illegal, so in a sense any person you protect from prosecution is a win. But there would be indirect effects of a racially unfair pattern of punishment, e.g. deepening resentment and disillusionment, enabling and encouraging racists in other aspects of their beliefs and actions. So even though there would be less direct harm, there might be more indirect harm.

I... (read more)

While I see what you're saying here, I prefer evil to be done inconsistently rather than consistently, and every time someone merely gets what they deserve instead of what some unhinged penal system (whether in the US or elsewhere) thinks they deserve seems like a good thing to me.

(I don't personally have an opinion on what SBF actually deserves.)

4
Brad West
19h
To clarify, you would sacrifice consistency to achieve a more just result in an individual case, right? But if just results could be applied consistently, that would be the ideal... I don't understand the disagree votes if I am understanding correctly.
2
Jason
20h
Totally fair! I think part of my reasoning here relies on the difference between "sentence I think is longer than necessary for the purposes of sentencing" (which I would not necessarily classify as an "evil" in the common English usage of that term) and an "unhinged" result. I would not support a consistent sentence if it were unhinged (or even a close call), and I would generally split the difference in various proportions if a sentence fell in places between those two points. It's a little hard to define the bounds of "unhinged," but I think it might be vaguely like "no reasonable person could consider this sentence to have not been unjustly harsh." Here, even apart from the frame of reference of US sentencing norms, I cannot say that any reasonable person would find throwing the book at SBF here to have been unjustly harsh in light of the extreme harm and culpability.

I think if there's no credible reason to assign responsibility to the intervention, there's no need to include it in the model. I think assigning the charity responsibility for the consequences of a crime they were the victim of is just not (by default) a reasonable thing to do.

It is included in the detailed write-up (the article even links to it). But without any reason to believe this level of crime is atypical for the context or specifically motivated by e.g. anger against the charity, I don't think anything else needs to be made of it.

I've been linked to The benefits of Novavax explained which is optimistic about the strengths of Novavax, suggesting it has the potential to offer longer-term protection, and protection against variants as well.

I think the things the article says or implies about pushback from mRNA vaccine supporters seem unlikely to me -- my guess is that in aggregate Wall Street benefits much more from eliminating COVID than it does from selling COVID treatments, though individual pharma companies might feel differently -- but they seem like the sort of unlikely thing th... (read more)

I tend to think there's an asymmetry between how good well-being is & how bad suffering is

This isn't relevant if you think GiveWell charities mostly act to prevent suffering. I think this is certainly true for the health stuff, and arguably still plausible for the economic stuff.

This is an important point. People often confuse harm/benefit asymmetries with doing/allowing asymmetries. Wenar's criticism seems to rest on the latter, not the former. Note that if all indirect harms are counted within the constraint against causing harm, almost all actions would be prohibited. (And on any plausible restriction, e.g. to "direct harms", it would no longer be true that charities do harm. Wenar's concerns involve very indirect effects. I think it's very unlikely that there's any consistent and plausible way to count these as having dispropo... (read more)

'Cause here' is an example of an ineffective cause, with an estimated cost of 'cost here' to save one life.

You might find it tricky to fill these in. In general cost estimates for less effective charities are, when they exist at all, much less developed and much lower quality, because it's laborious to develop an accurate estimate and there's not much demand for precision once something is unlikely to be a top charity.

The nature of the effective altruist project is mostly to distinguish between "known to be effective" and "not known to be effective", an... (read more)

  • Crypto prices in general also turned out in their favour, and without having looked into it closely I'd guess both of those bets paying off were necessary for people to get paid back,
  • If the bankruptcy hadn't forced dollarization of all of FTX's customer deposits, I'm guessing they still wouldn't be able to pay everyone back today,
  • Customer money wasn't supposed to be going into bets with any variance. Having a diversified portfolio reduces variance but doesn't eliminate it (and anyway I suspect FTX's portfolio wasn't in reality very diversified, given that tech stocks and crypto have historically been pretty correlated)
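To make the last point concrete, here's a minimal sketch (with illustrative numbers only, not FTX's actual holdings) of how correlation limits the variance reduction you get from diversification: with two assets whose returns are 80% correlated, a 50/50 split barely reduces portfolio volatility compared to holding either asset alone.

```python
import math

def portfolio_vol(w1, w2, sigma1, sigma2, rho):
    """Standard deviation of a two-asset portfolio with weights w1, w2,
    asset volatilities sigma1, sigma2, and return correlation rho."""
    variance = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 + 2 * w1 * w2 * sigma1 * sigma2 * rho
    return math.sqrt(variance)

# Hypothetical numbers for illustration: two assets, each with 30% annual volatility.
sigma = 0.30
for rho in (0.0, 0.8):
    vol = portfolio_vol(0.5, 0.5, sigma, sigma, rho)
    print(f"correlation {rho:.1f}: 50/50 portfolio volatility = {vol:.1%}")

# correlation 0.0: 50/50 portfolio volatility = 21.2%  (diversification helps a lot)
# correlation 0.8: 50/50 portfolio volatility = 28.5%  (barely below the 30% of a single asset)
```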

Gathering some notes on private COVID vaccine availability in the UK.

News coverage:

It sounds like there's been a licensing change allowing provision of the vaccine outside the NHS as of March 2024 (ish). Pharmadoctor is a company that supplies pharmacies and has been putting about the word that they'll soon be able to supply them with vaccine doses for private sale -- most media coverage I found... (read more)

2
Ben Millwood
1d
I've been linked to The benefits of Novavax explained which is optimistic about the strengths of Novavax, suggesting it has the potential to offer longer-term protection, and protection against variants as well. I think the things the article says or implies about pushback from mRNA vaccine supporters seem unlikely to me -- my guess is that in aggregate Wall Street benefits much more from eliminating COVID than it does from selling COVID treatments, though individual pharma companies might feel differently -- but they seem like the sort of unlikely thing that someone who had reasonable beliefs about the science but spent too much time arguing on Twitter might end up believing. Regardless, I'm left unsure how to feel about its overall reliability, and would welcome thoughts one way or the other.

Yeah I think this is quite sensible -- I feel like I noticed one thing missing from the normal doom scenario and didn't notice all of the implications of missing that thing, in particular that the reason the AI in the normal doom scenario takes over is because it is highly likely to succeed, and if it isn't, takeover seems much less interesting.

oh man, it's altruistically-good and selfishly-sad to see so many of the things I was thinking about pre-empted there, thanks for the link!

1
trevor1
9d
Yep, that's the way it goes! Also, figuring out what's original and what's memetically downstream is an art. Even more so when it comes to dangerous technologies that haven't been invented yet.

deworming seems to be beneficial for education (even if the magnitude might have been overstated)

Maybe a nitpick, but idk if this is suspicious convergence -- I thought the impact on economic outcomes (presumably via educational outcomes) was the main driver for it being considered an effective intervention?

Quoting this paragraph and bolding the bit that I want to discuss:

Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview. So we should be especially cautious of completely dismissing commonsense priorities in

... (read more)

However, if they anticipate trade 2 being offered after Alice is born, then I think they shouldn't make trade 1, since they know they'll make trade 2 and end up in World 3 minus some money, which is worse than World 1 for presently existing people and necessary people before Alice is born.

I think it's pretty unreasonable for an ethical system to:

  • change its mind about whether something is good or bad, based only on time elapsing, without having learned anything new (say, you're offered trade 2, and you know that Alice's mother has just gone into labour
... (read more)
2
MichaelStJules
9d
Only presentists have the problem in the first bullet with your specific example. There's a similar problem that necessitarians have if the identity of the extra person isn't decided yet, i.e. before conception. However, they do get to learn something new, i.e. the identity. If a necessitarian knew the identity ahead of time, there would be no similar problem. (And you can modify the view to be insensitive to the identity of the child by matching counterparts across possible worlds.)

The problem in the second bullet, basically against burning bridges or "resolute choice", doesn't seem that big of a deal to me. You run into similar problems with Parfit's hitchhiker and unbounded utility functions.

Maybe I can motivate this better? Say you want to have a child, but being a good parent (and ensuring high welfare for your child) seems like too much trouble and seems worse to you than not having kids, even though, conditional on having a child, it would be best. Your options are:

1. No child.
2. Have a child, but be a meh parent. You're better off than in 1, and the child has a net positive but just okay life.
3. Have a child, but work much harder to be a good parent. You're worse off than in 2, but the child is much better off than in 2, and this outcome is better than 2 in a pairwise comparison.

In binary choices:

1. 1 < 2, because 2 is better for you and no worse for your child (person-affecting).
2. 2 < 3, impartially by assumption.
3. 3 < 1, because 1 is better for you and no worse for your child (person-affecting).

With all three options available, I'd opt for 1, because 2 wouldn't be impartially permissible if 3 is available, and I prefer 1 to 3. 2 is not really an option if 3 is available. It seems okay for me to frustrate my own preference for 2 over 1 in order to avoid 3, which is even worse for me than 1. No one else is worse off for this (in a person-affecting way); the child doesn't exist to be worse off, so has no grounds for complaint. So it

I upvoted this comment for the second half about categories, but this part didn't make much sense to me:

I think the advantage of a label like "Global health and development" is that it doesn't require a super specific worldview: you make your own assumptions about what you value, then you can decide for yourself whether GHD works as a cause area for you, based on the evidence presented.

I can imagine either speciesism or anti-speciesism being considered "specific" worldviews, likewise person-affecting ethics or total ethics, likewise pure time discounti... (read more)

Should I take the fact that you have stopped donations as a signal that you no longer value further responses? Will you close the Google Form and/or update this post when you're beginning analysis and further responses won't be counted? Is there a specific deadline?

2
Cameron Berg
10d
Hi Ben, we are continuing to accept further responses and of course value any additional respondents. Stopping donations is more a function of our available budget for this project than how much value we put on the additional data. We are keeping the form open until the data analysis is complete (it is easy to just plug in new entries to the existing analysis pipeline), at which point we will close the form. No specific deadline, but we imagine the analysis will be complete in the next week or two.

As a quick comment, I think something else that distinguishes GHD and animal welfare is that the global non-EA GHD community seems to me the most mature and "EA-like" of any of the non-EA analogues in other fields. It's probably the one that requires the most modest departure from conventional wisdom to justify it.

Is it at least fair to say that in situations where the other main actors aren't explicitly coordinating with you and aren't aware of your efforts (and, to an approximation, weren't expecting your efforts and won't react to them), you should be thinking more like I suggested?

2
Owen Cotton-Barratt
14d
I think maybe yes? But I'm a bit worried that "won't react to them" is actually doing a lot of work. We could chat more about a concrete example that you think fits this description, if you like.

I think there are several different activities that people call "impact attribution", and they differ in important ways that can lead to problems like the ones outlined in this post. For example:

  1. if I take action A instead of action B, then the world will be X better off,
  2. I morally "deserve credit" in the amount of X for the fact that I took action A instead of B.

I think the fact that any action relies enormously on context, and on other people's previous actions, and so on, is a strong challenge to the second point, but I'd argue it's the first point t... (read more)

2
charlieh943
9d
Thank you for writing this piece, Sarah! I think the difference stated above between A) the counterfactual impact of an action or a person, and B) moral praise-worthiness, is important. You might say that individual actions, or lives, have large differences in impact, but remain sceptical of the idea of (intrinsic) moral desert/merit – because individuals' actions are conditioned by prior causes.

Your post reminded me a lot of Michael Sandel's book, The Tyranny of Merit. Sandel takes issue with the attitude of "winners" within contemporary meritocracy who see themselves as deserving of their success. This seems similar to your concerns about hubris amongst "high-impact individuals".
4
Owen Cotton-Barratt
15d
I mostly-disagree with this on pragmatic grounds. I agree that that's the right approach to take on the first point if/when you have full information about what's going on. But in practice you essentially never have proper information on what everyone else's counterfactuals would look like according to different actions you could take.

If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different.

Doing this properly might mean impact markets (where the "market" part works as a mechanism for distributing cognition, so that each market participant is responsible for thinking through their own alternative options, and feeding that information into the system via their willingness to do work for different amounts of pay), but I think that you can get some rough approximation to the benefits of impact markets without actual markets by having people do the things they would have done with markets -- and in this context, that means paying attention to the share of credit different parties would get.

I haven't thought about this carefully yet, but I believe this kind of thinking comes out differently depending on whether you say "the average cost per net is $1" or "the average number of nets I can make for $1 is 1". I think often when we say things like this, we imagine a neat symmetrical normal distribution around the average, but you can't simultaneously have a neat normal distribution around both of these numbers! Perhaps you'd need to look more into where the numbers are coming from to get a better intuition for which shape of distribution is more plausible.
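As a minimal sketch of this point (with made-up numbers): if cost per net is symmetrically distributed around $1, then nets-per-dollar is the reciprocal of that quantity, so its distribution is skewed rather than symmetric and its mean sits above 1 (Jensen's inequality). You can't treat both figures as having neat symmetric distributions around their quoted averages.

```python
import random
import statistics

random.seed(0)

# Hypothetical example: cost per net is symmetric around $1 (uniform on [0.5, 1.5]).
costs = [random.uniform(0.5, 1.5) for _ in range(100_000)]
nets_per_dollar = [1 / c for c in costs]  # the reciprocal quantity

print(f"mean cost per net:      {statistics.mean(costs):.3f}")             # ~1.000
print(f"mean nets per dollar:   {statistics.mean(nets_per_dollar):.3f}")   # ~1.099, not 1.000
print(f"median nets per dollar: {statistics.median(nets_per_dollar):.3f}") # ~1.000; mean != median => skewed
```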

2
EdoArad
12d
Exactly!

I think "human-level" is often a misleading benchmark for AI, because we already have AIs that are massively superhuman in some respects and substantially subhuman in others. I sometimes worry that this is leading people to make unwarranted assumptions of how closely future dangerous AIs will track humans in terms of what they're capable of. This is related to a different post I'm writing, but maybe deserves its own separate treatment too.

A problem with a lot of AI thoughts I have is that I'm not really in enough contact with the AI "mainstream" to know what's obvious to them or what's novel. Maybe "serious" AI people already don't say human-level, or apply a generous helping of "you know what I mean" when they do?

Many of the post ideas on my list of things I want to write would be basically as good if someone else wrote them (and they come with some existing prioritisation in agreevotes)

My understanding (for whatever it's worth) is that most of the reason why a full repayment looks feasible now is a combination of:

  • Creditors are paid back the dollar value of their assets at the time of bankruptcy. Economically it's a bit like everyone was forced to sell all their crypto to FTX at bankruptcy date, and then the crypto FTX held appreciated a bunch in the meantime.
  • FTX held a stake in Anthropic, and for general AI hype reasons that's likely to have appreciated a lot too.

I think it's reasonable to think of both of these as luck, and certainly a company relying on them to pay their debts is not solvent.

1
Ian Turner
8d
Well, regarding Anthropic at least, this particular bet may be lucky, but if you make a bunch of high-variance bets and one of them turns out in your favor, is that still just luck?
3
bern
1mo
Perhaps. But it sounds like many[1] have been treating the fact that FTX did in fact face a liquidity crisis as strong (conclusive?) evidence of SBF's excessive risk-taking in a way that's relevant for intent. And now they claim that the extent to which customers are made whole or FTX was insolvent is not relevant. It feels like people in general are happy to attribute good luck to his decisions but not bad luck.

1. ^ Including the prosecution: "its customers were left with billions of dollars in losses", "the defendant talked with his inner circle about...how customers could never be repaid", "Billions of dollars from thousands of people gone", "there is no serious dispute that around $10 billion went missing"...

This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who'd like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they're trying to promote a cause. "I want effective altruists to highly prioritise something that they currently don't" is in some sense how all our existing priorities got to where they are. I don't think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).

8
Ian Turner
2mo
Hi Ben,

It seems to me that one should draw a distinction between "I see this cause as offering good value for money, and here is my reasoning why" and "I have this cause that I like and I hope I can get EA to fund it". Sometimes the latter is masquerading as the former, using questionable reasoning. Some examples that seem like they might be in the latter category to me:

* https://forum.effectivealtruism.org/posts/Dytsn9dDuwadFZXwq/fundraising-for-a-school-in-liberia
* https://forum.effectivealtruism.org/posts/R5r2FPYTZGDzWdJEY/how-to-get-wealthier-folks-involved-in-mutual-aid
* https://forum.effectivealtruism.org/posts/zsLcixRzqr64CacfK/zzappmalaria-twice-as-cost-effective-as-bed-nets-in-urban

In any case though, I'm not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claims of evidence are missing, we can assume good faith and respond with questions like "why did you choose this program", "why did you conduct the analysis in this way", or "have you thought about these potentially offsetting considerations". In the examples above, the original posters generally haven't engaged with these kinds of questions.

If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven't seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.

Ian
2
Joseph Lemien
2mo
I think that to a certain extent that is right, but this context was less along the lines of "here is a cause that is going to be highly impactful" and more along the lines of "here is a cause that I care about." Less "mental health coaching via an app can be cost effective" and more like "let's protect elephants." But I do think that in a broad sense you are correct: proposing new interventions, new cause areas, etc., is how the overall community progresses.

Sure, it's easy to dismiss the value of unaligned AIs if you compare against some idealistic baseline; but I'm asking you to compare against a realistic baseline, i.e. actual human nature.

I haven't read your entire post about this, but I understand you believe that if we created aligned AI, it would get essentially "current" human values, rather than e.g. some improved / more enlightened iteration of human values. If instead you believed the latter, that would set a significantly higher bar for unaligned AI, right?

7
Matthew_Barnett
2mo
That's right, if I thought human values would improve greatly in the face of enormous wealth and advanced technology, I'd definitely be open to seeing humans as special and extra valuable from a total utilitarian perspective. Note that many routes through which values could improve in the future could apply to unaligned AIs too. So, for example, I'd need to believe that humans would be more likely to reflect, and be more likely to do the right type of reflection, relative to the unaligned baseline. In other words it's not sufficient to argue that humans would reflect a little bit; that wouldn't really persuade me at all.

It seems like you're just substantially more pessimistic than I am about humans. I think factory farming will be ended, and though it seems like humans have caused more suffering than happiness so far, I think their default trajectory will be to eventually stop doing that, and to ultimately do enough good to outweigh their ignoble past. I don't think this is certain by any means, but I think it's a reasonable extrapolation. (I maybe don't expect you to find it a reasonable extrapolation.)

Meanwhile I expect the typical unaligned AI may seize power for some ... (read more)

[edit: fixed] looks like your footnote didn't make it across from LW

2
Linch
2mo
ty fixed

A lot of these points seem like arguments that it's possible that unaligned AI takeover will go well, e.g. there's no reason not to think that AIs are conscious, or will have interesting moral values, or etc.

My stance is that we (more-or-less) know humans are conscious and have moral values that, while they have failed to prevent large amounts of harm, seem to have the potential to be good. AIs may be conscious and may have welfare-promoting values, but we don't know that yet. We should try to better understand whether AIs are worthy successors before tran... (read more)

My stance is that we (more-or-less) know humans are conscious and have moral values that, while they have failed to prevent large amounts of harm, seem to have the potential to be good.

I claim there's a weird asymmetry here where you're happy to put trust into humans because they have the "potential" to do good, but you're not willing to say the same for AIs, even though they seem to have the same type of "potential".

Whatever your expectations about AIs, we already know that humans are not blank slates that may or may not be altruistic in the future: we ac... (read more)

Something I'm trying to do in my comments recently is "hedge only once"; e.g. instead of "I think X seems like it's Y", you pick just one of "I think X is Y" or "X seems like it's Y". There is a difference in meaning, but often one of those on its own feels sufficient to convey what I wanted to say anyway.

This is part of a broader sense I have that hedging serves an important purpose but is also obstructive to good writing, especially concision, and the fact that it's a particular feature of EA/rat writing can be alienating to other audiences, even though I think it comes from a self-awareness / self-critical instinct that I think is a positive feature of the community.

2
MichaelDickens
3mo
I was just thinking about this a few days ago when I was flying for the holidays. Outside the plane was a sign that said something like "exposure to chemicals in this area may increase the risk of cancer", and I was thinking about whether this was a justified double-hedge. The author of that sign has a subjective belief that exposure to those chemicals increases the probability that you get cancer, so you could say "may give you cancer" or "increases the risk of cancer". On the other hand, perhaps the double-hedge is reasonable in cases like this because there's some uncertainty about whether a dangerous thing will cause harm, and there's also uncertainty about whether a particular thing is dangerous, so I suppose it's reasonable to say "may increase the risk of cancer". It means "there is some probability that this increases the probability that you get cancer, but also some probability that it has no effect on cancer rates."
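A minimal worked example of that two-level uncertainty, with made-up numbers: suppose you're only 30% confident the chemicals are carcinogenic at all, and that if they are, exposure raises a 1% baseline lifetime risk to 1.5%. The single number you'd act on is a mixture over the two hypotheses, which is roughly what "may increase the risk" is gesturing at.

```python
# Hypothetical numbers purely for illustration.
p_carcinogenic = 0.30         # uncertainty about whether the chemicals are dangerous at all
baseline_risk = 0.010         # lifetime cancer risk if the chemicals have no effect
risk_if_carcinogenic = 0.015  # lifetime cancer risk if the chemicals really do increase it

# Overall risk is a mixture over the two hypotheses.
overall_risk = (p_carcinogenic * risk_if_carcinogenic
                + (1 - p_carcinogenic) * baseline_risk)

print(f"risk if harmless:           {baseline_risk:.2%}")
print(f"risk if carcinogenic:       {risk_if_carcinogenic:.2%}")
print(f"risk all things considered: {overall_risk:.2%}")  # 1.15%: a "may increase the risk" situation
```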

I think the concern about jargon is misplaced in this context. Jargon is learned by native and non-native speakers alike as they engage with the community: it's specifically the stuff that already knowing the language doesn't help you with, which means not knowing the language doesn't disadvantage you. That's not to say jargon doesn't have its own problems, but I think that someone who attempts to reduce jargon specifically as a way to reach non-native speakers better has probably misdirected their focus.

But a core thing that you don't mention (maybe because you are a native speaker, and you have to not be one to realize that, not being mean here but simply stating what I think is a fact), is that jargon adds to the effort. 

Not only you have to speak a flawless English and not mull over potential mistakes that potentially make you look foolish in the eyes of your interlocutor and reduce the credibility of your discourse, but you also have to use the right jargon. Add saying something meaningful in top of it: you have to pay attention to how you say th... (read more)

I think if we deanonymise now, there's a strong chance that the next whistleblower will remember what happened as "they got deanonymised" and will be reluctant to believe it won't happen to them. It kind of doesn't matter if there's reasons why it's OK in this case, as long as they require digging through this post and all the comments to understand them. People won't do that, so they won't feel safe from getting the same treatment.

I think that posting that someone is banned and why they were banned is not mainly about punishing them. It's about helping people understand what the moderation team is doing, how rule-breaking is handled, and why someone no longer has access to the forum. For example, it helps us to understand if the moderation team are acting on inadequate information, or inconsistently between different people. The fact that publishing this information harms people is an unfortunate side effect, after the main effect of improving transparency and keeping people informe... (read more)

6
Jason
3mo
It's unclear to me that naming names materially advances the first two goals. As to the third, the suspended user could have the option of having their name disclosed. Otherwise, I don't think we're entitled to an explanation of why a particular poster isn't active anymore.

People often propose HR departments as antidotes to some of the harm that's done by inappropriate working practices in EA. The usual response is that small organisations often have quite informal HR arrangements even outside of EA, which does seem kinda true.

Another response is that it sometimes seems like people have an overly rosy picture of HR departments. If your corporate culture sucks then your HR department will defend and uphold your sucky corporate culture. Abusive employers will use their HR departments as an instrument of their abuse.

Perhaps the... (read more)

7
Joseph Lemien
3mo
I feel at least somewhat qualified to speak on this, having read a bunch about human resources, being active in an HR professionals chat group nearly every day, and having worked in HR at a few different organizations (so I have seen some of the variance that exists). I hope you'll forgive me for my rambling on this topic, as there are several different ideas that came to mind when reading your paragraphs.

The first thing is that I agree with you on at least one aspect: rather than merely creating a department and walking away, adopting and adapting best practices and relevant expertise would be more helpful. If the big boss is okay with [insert bad behavior here] and isn't open to the HR Manager's new ideas, then the organization probably isn't going to change. If an HR department is defending and upholding sucky corporate culture, that is usually because senior leadership is instructing them to do so. Culture generally comes from the top. And if the leader isn't willing to be convinced by or have his mind changed by the new HRO he hired, then things probably won't be able to get much better.[1]

"HR is not your friend" is normally used to imply that you can't trust HR, or that HR is out to get you, or something like that. Well, in a sense it is true that "HR is not your friend." If you are planning to jump ship, don't confide in the HR manager about it, trusting that they won't take action. If that person has a responsibility to take action on the information you provide, you should think twice before volunteering that information and consider if the action is beneficial to you or not.

The job of the people on an HR team (just like the job of everyone else employed by an organization) is to help the organization achieve its goals. Sometimes that means pay raises for everyone, because salaries aren't competitive and the company wants to have low attrition. Sometimes that means downsizing, because growth forecasts were wrong and the company over-hired. The acc
5
AnonymousTurtle
3mo
  I see a lot of this online, but it doesn't match my personal experience. People working in HR that I've been in contact with seem generally kind people, aware of tradeoffs, and generally care about the wellbeing of employees. I worry that the online reputation of HR departments is shaped by a minority of terrible experiences, and we overgeneralize that to think that HR cannot or will not help, while in my experience they are often really eager to try to help (in part because they don't want you and others to quit, but also because they are nice people). Maybe it's also related to minimum-wage non-skilled jobs vs higher paying jobs, where employment tends to be less adversarial and less exploitative.

It seems like we disagree on how bad it is to self-vote (I don't think it's anywhere near the level of "actual crime", but I do think it's pretty clearly dishonest and unfair, and for such a petty benefit it's hard for me to feel sympathetic to the temptation).

But I don't think it's the central point for me. If you're simultaneously holding that:

  • this information isn't actually a big deal, but
  • releasing this publicly would cause a lot of harm through reputational damage,

then there's a paternalistic subtext where people can't be trusted to come to the ... (read more)

4
lilly
3mo
I feel like this is getting really complicated and ultimately my point is very simple: prevent harmful behavior via the least harmful means. If you can get people to not vote for themselves by telling them not to, then just… do that. I have a really hard time imagining that someone who was warned about this would continue to do it; if they did, it would be reasonable to escalate. But if they’re warned and then change their behavior, why do I need to know this happened? I just don’t buy that it reflects some fundamental lack of integrity that we all need to know about (or something like this).

Just because something is true doesn’t mean you forfeit your rights to not have that information be made public.

I agree that not all true things should be made public, but I think when it specifically pertains to wrongdoing and someone's trustworthiness, the public interest can override the right to privacy. If you look into your neighbour's window and you see them printing counterfeit currency, you go to the police first, rather than giving them an opportunity to simply hide their fraud better.

8
lilly
4mo
Maybe the crux is: I think forum users upvoting their own comments is more akin to them Facetuning dating app photos than printing counterfeit currency. Like, this is pretty innocuous behavior and if you just tell people not to do it, they’ll stop.

I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there's a case to be made when the information is cherry-picked or biased, or there's no opportunity to hear a fair response. But goodness, if we've learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.

8
lilly
4mo
I would guess that most people engage in private behavior that would be reputationally damaging if the internet were to find out about it. Just because something is true doesn't mean you forfeit your rights to not have that information be made public.

I think people might reasonably (though wrongly) assume that forum mods are not monitoring accounts at this level of granularity, and thus believe that their voting behavior is private. Given this, I think mods should warn before publicly censoring. (Just as it would be better to inform your neighbor that you can see them doing something embarrassing through their window before calling the police or warning other people about them—maybe they just don't realize you can see, and telling them is all they need to not do the thing anymore, which, after all, is the goal.)

Frankly, I don't love that mods are monitoring accounts at this level of granularity. (For instance, knowing this would make me less inclined to put remotely sensitive info in a forum dm.)

"But does not everyone do that? I mean, they should think about effectiveness and all that! It is the only sensible thing to do."

If only! I think from the inside (and, it seems, some people on the outside), EA can seem "obvious", at least in the core / fundamentals. But I think most philanthropy is not done like this still, even among people who spend a lot of time and effort on it.

For example, the idea that we should compare between different cause areas, or that we should be neutral between helping those in our own country vs. those abroad, still seem to be minority positions to me.

Thanks for all the work you do :)

typo (edit: fixed): "The Global Fund is the world’s largest funder of maria control activities"

2
RobM
4mo
Thanks Ben

I would most naturally interpret it as "I also have this question" or "I agree with something implicitly expressed by this question"

I doubt anyone made a strategic decision to start fundraising orgs outside the Bay Area instead of inside it. I would guess they just started orgs while having personal reasons for living where they lived. People aren't generally so mobile or project-fungible that where projects are run is something driven mostly by where they would best be run.

That said, I half-remember that both 80k and CEA tried being in the Bay for a bit and then left. I don't know what the story there was.

"ask not what you can do for EA, but what EA can do for you"

like, you don't support EA causes or orgs because they want you to and you're acquiescing, you support them because you want to help people and you believe supporting the org will do that – when you work an EA job, instead of thinking "I am helping them have an impact", think "they are helping me have an impact"

of course there is some nuance in this but I think broadly this perspective is the more neglected one

I like the spirit of the reactions feature although the specific choice of reactions seems quite narrow / unnatural to me? I think two big missing ones from social media are laugh and sad – if you're concerned about people laughing at comments instead of with them, you could mitigate it by labelling it something more unambiguously complimentary, like "witty" or "enjoyable"?

I think "changed my mind" is a great one, though.

4
Linch
6mo
I would like an option to nonymously react to whether I perceive an argument is good or bad. This will be a good middle ground between "need to write 3 sentences every time I need to explain why an argument has holes or is missing information" and "authors feel like they're anonymously critiqued by people who they have no hope of learning the perspective of."

This is similar to how StackOverflow / StackExchange works, I think – any user can propose an edit (or there's some very low reputation threshold, I forget) but if you're below some reputation bar then your edit won't be published until reviewed by someone else.

Making this system work well though probably requires higher-karma users having a way of finding out about pending edits.
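As a minimal sketch of that kind of reputation-gated flow (hypothetical names and thresholds, not the actual StackExchange or Forum implementation): low-reputation users' edits go into a review queue that higher-reputation users can approve or reject, which is also where the "finding out about pending edits" problem shows up.

```python
from dataclasses import dataclass, field

AUTO_PUBLISH_KARMA = 100   # hypothetical threshold for publishing edits without review
REVIEWER_KARMA = 500       # hypothetical threshold for reviewing others' edits

@dataclass
class ProposedEdit:
    post_id: int
    author: str
    new_text: str
    published: bool = False

@dataclass
class EditQueue:
    pending: list = field(default_factory=list)

    def propose(self, edit: ProposedEdit, author_karma: int) -> str:
        if author_karma >= AUTO_PUBLISH_KARMA:
            edit.published = True          # trusted users publish immediately
            return "published"
        self.pending.append(edit)          # everyone else waits for review
        return "queued for review"

    def review(self, reviewer_karma: int, approve: bool) -> str:
        if reviewer_karma < REVIEWER_KARMA:
            return "not allowed to review"
        if not self.pending:
            return "nothing pending"
        edit = self.pending.pop(0)
        edit.published = approve
        return "approved" if approve else "rejected"

queue = EditQueue()
print(queue.propose(ProposedEdit(1, "new_user", "fix typo"), author_karma=10))  # queued for review
print(queue.review(reviewer_karma=800, approve=True))                           # approved
```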

I think this would be a good top-level post

OK, but this post is about drawing an analogy between the degrowth debate and the AI pause debate, and I don't see the analogy. Do you disagree with my argument for why they aren't analogous?

1
mikbp
6mo
If I understood you well, yes, I disagree. EAs at large basically do not enter the degrowth debate. They act a bit like the LeCuns of the degrowth debate, so to speak. Maybe what I mean is more meta than what you are referring to? EAs complain that many people just disregard the dangers of AI by saying something along the lines of "AI development is good and stopping it is anyway impossible", or "we will manage the issues", etc. And what I mean is that EAs do/have done the same kind of thing with growth.

Especially if you disagree, explain why or upvote a comment that roughly reflects your view rather than downvoting. Downvoting controversial views only hides them rather than confronting them.

As a meta-comment, please don't assume that anyone who downvotes does so because they disagree, or only because they disagree. A post being controversial doesn't mean it must be useful to read, any more than it means it must not be useful to read. I vote on posts like this based on whether they said something that I think deserves more attention or helped me understand something better, regardless of whether I think it's right or wrong.

5
alexherwix
6mo
One caution I want to add here is that downvoting when a post is fresh / not popular can have strong filter effects and lead to premature muting of discussion. If the first handful of readers simply dislike a post and downvote it, this makes it much less likely for a more diverse crowd of people to find it and express their take on it. We should consider that there are many different viewpoints out there and that this is important for epistemic health. Thus, I encourage anyone to be mindful when considering whether to further downvote posts that are already unpopular.

While I agree there are similarities in the form of argument between degrowth and AI pause, I don't think those similarities are evidence that the two issues should have the same conclusion. There's simply nothing at all inconsistent about believing all of these at the same time:

  • AI pause is desirable
  • AI pause is achievable
  • Degrowth is undesirable
  • Degrowth is not feasible

Almost the entire question, for resolving either of these issues, is working out whether these premises are really true or not. And that's where the similarities end, IMO: there's not m... (read more)

1
mikbp
6mo
Exactly. What I try to point to is that EA as a movement has not engaged in working out whether degrowth is desirable or not. I don't say anything about the conclusions -- in part because I myself am not clear. I actually believe it is extremely difficult to get a clear answer, so I would expect a lot of nuance.
4
alexherwix
6mo
I think one point of this post is to challenge the community to engage more openly with the question of degrowth and to engage in argument rather than dismiss it outright. I have not followed this debate in detail but I sympathize with the take that issues which are controversial with EAs are often disregarded without due engagement by the community.

I don't know if you intended it this way, but I read this comment as saying "the author of this post is missing some fairly basic things about how IR works, as covered by introductory books about it". If so, I'd be interested in hearing you say more about what you think is being missed.

1
trevor1
6mo
Agreed. I'm not really the right kind of person to be commenting on this here; I thought I might end up being the only person responding (this was the first comment), so I left a comment that seemed significantly better than nothing. The helpfulness of my comment was substantially outclassed by subsequent comments.

In the name of trying to make legible what I think is going on in the average non-expert's head about this, I'm going to say a bunch of things I know are likely to be mistaken, incomplete, or inadequately sophisticated. Please don't take these as endorsements that this thinking is correct, just that it's what I see when I inspect my instincts about this, and suspect other casual spectators might have the same ones.

It feels intuitive that Google and OpenAI and Anthropic etc. are more likely to co-operate with each other than any of them are to co-operate wi... (read more)

1
Oliver Sourbut
6mo
(Prefaced with the understanding that your comment is to some extent devil's advocating and this response may be too) What is 'step in'? I think when people are describing things in aggregated national terms without nuance, they're implicitly imagining govts either already directing, or soon/inevitably appropriating and directing (perhaps to aggressive national interest plays). But govts could just as readily regulate and provide guidance on underprovisioned dimensions (like safety and existential risk mitigation). Or they could in fact be powerless, or remain basically passive until too late, or... (all live possibilities to me). In these alternative cases, the kind of language and thinking I'm highlighting in the post seems like a sort of nonsense to me - like it doesn't really parse unless you tacitly assume some foregone conclusions.
1
Oliver Sourbut
6mo
Thanks Ben! Appreciated. These psychological (and real) factors seem very plausible to me for explaining why mistakes in thinking and communication are made.

Mhm, this seems less lossy as a hypothetical model. Even if they were only 'closer friends', though, I don't think it's at all clearcut enough for it to be appropriate to glom them (and with the govt!) when thinking about strategy. And the more so when tempered by 'closer enemies'. As in, I expect anyone doing that to systematically be (harmfully) wrong in their thinking and writing.

---

I understand what you're gesturing at regarding anticipation that US actors might associate more with other US than with Chinese actors. I don't know what to think here but it seems far from set in stone.

Some personal anecdata. I worked in a growing internet company for some years. One of the big talking points was doing business in China, which involved making deals with Chinese entities. I wasn't directly involved but I want to say it was... somewhat hard but not prohibitive? We ended up with offices in Shanghai, some employees there, and some folks who travelled back and forth sometimes.[1] I tentatively think we did more business with China-based entities than with US-based market-competitors. I confidently know we did more business with non-US-based entities than with US-based market-competitors.

Meanwhile and less anecdotally, the stories about smuggling and rules-lawyering sales under the US govt's limit are literally examples of US-based and China-based actors colluding! It's beyond sloppy to summarise that by drawing boundaries around 'US' and 'China'.

I could of course find examples which reinforce the 'intra-bloc harmony' hypothesis. Point is that it seems far from settled, so resting on implicit assumptions here will predictably lead to errors.

---

1. As a tongue-in-cheek aside, shockingly, Chinese colleagues I've had in industry and a

I'd like to see more of an analysis of where we are now with what people want from CH, what they think it does, and what it actually does, and to what extent and why gaps exist between these things, before we go too deep into what alternative structures could exist. Currently I don't feel like we really understand the problem well enough to solve it.
