This is a special post for quick takes by Ben Millwood. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

People often propose HR departments as antidotes to some of the harm that's done by inappropriate working practices in EA. The usual response is that small organisations often have quite informal HR arrangements even outside of EA, which does seem kinda true.

Another response is that it sometimes seems like people have an overly rosy picture of HR departments. If your corporate culture sucks then your HR department will defend and uphold your sucky corporate culture. Abusive employers will use their HR departments as an instrument of their abuse.

Perhaps the idea is to bring more mainstream HR practices or expertise into EA employers, rather than merely going through the motions of creating the department. But I think mainstream HR comes primarily from the private sector and is primarily about protecting the employer, often against the employee. They often cast themselves in a role of being there to help you, but a common piece of folk wisdom is "HR is not your friend". I think frankly that a lot of mainstream HR culture is at worst dishonest and manipulative, and I'd be really sad to see us uncritically importing more of that.

I feel at least somewhat qualified to speak on this, having read a bunch about human resources, being active in an HR professionals chat group nearly every day, and having worked in HR at a few different organizations (so I have seen some of the variance that exists). I hope you'll forgive me for my rambling on this topic, as there are several different ideas that came to mind when reading your paragraphs.

The first thing is that I agree with you on at least one aspect: rather than merely creating a department and walking away, adopting and adapting best practices and relevant expertise would be more helpful. If the big boss is okay with [insert bad behavior here] and isn't open to the HR Manager's new ideas, then the organization probably isn't going to change. If an HR department is defending and upholding sucky corporate culture, that is usually because senior leadership is instructing them to do so. Culture generally comes from the top. And if the leader isn't willing to have his mind changed by the new HRO he hired, then things probably won't get much better.[1]

"HR is not your friend" is normally used to imply that you can't trust HR, or that HR is out to get you, or something like that. Well, In a sense it is true that "HR is not your friend." If you are planning to do jump ship, don't confide in the HR manager about it trusting that they won't take action. If that person has a responsibility to take action on the information you provide, you should think twice before volunteering that information and consider if the action is beneficial to you or not. The job of the people on an HR team (just like the job of everyone else employed by an organization) is to help the organization achieve it's goals. Sometime that means pay raises for everyone, because the aren't salaries competitive and the company wants to have low attrition. Sometimes that means downsizing, because growth forecast were wrong and the company over-hired. The accountant is also not your friend, nor is the janitor, nor is marketing executive, nor is any other role at the organization. So I guess what I am getting at here is HR is not really more your friend or less your friend than any other department, but HR is the only department that carries out actions that might adversely affect employees. And note that just because HR carries out the actions, doesn't mean HR make the decision or put the company in that situation; this is the shooting the messenger.

While it may be true that in some organizations and for some people HR is "primarily about protecting the employer, often against the employee," I'm skeptical that this is representative of people who do HR work more generally. On the one hand, yes, the job is to help the organization achieve its goals. But when this topic comes up among the individuals who work in HR, the general reaction is along the lines of "I want to do as much as I can for the employees, and the boundaries limiting me come from upper management. I want to give our staff more equitable pay, but leadership doesn't care that we have high turnover rates. I want to provide parental leave, but the head honcho disagrees. I really do not want to fire John Doe, because it seems unreasonable and unfair and unjust, but this is what leadership has decided."[2]

The other thought I have about this parallels how programmers/software engineers/developers talk about project managers. If you look at online discussions among programmers you will find no shortage of complaints about project managers (and about Scrum, and about agile), and many people writing about how useless their project manager is. But you shouldn't draw the conclusion that project management isn't useful. An alternative explanation is that these programmers are working with project managers who aren't very skillful, so their impression is biased: working with a good project manager can be incredibly beneficial. So to leave the parallel and come back to HR: it is easy to find complaints on the internet about bad things attributed to HR. I would ask how representative those anecdotes are.

  1. ^

    Alternatively, if the leader is simply unaware of some bad things and the new HR manager can bring attention to those things, then improvements are probably on the way. But having HR is not sufficient on its own.

  2. ^

    The other common response that tends to come up is to focus on all the things that HR does for the employees, things which are generally framed as limiting the company's power over employees: No, you can't pay the employees that little, because it is illegal. No, you can't fire this person without a documented history of poor performance, and no, scowling at you doesn't count as poor performance. Yes, you really do need to justify hiring your friend, and him being a 'great guy' isn't enough of a business case. No, it isn't reasonable to expect staff to be on call for mandatory unpaid overtime every weekend, because we will hemorrhage employees.

I think mainstream HR comes primarily from the private sector and is primarily about protecting the employer, often against the employee. They often cast themselves in a role of being there to help you, but a common piece of folk wisdom is "HR is not your friend". I think frankly that a lot of mainstream HR culture is at worst dishonest and manipulative, and I'd be really sad to see us uncritically importing more of that.


I see a lot of this online, but it doesn't match my personal experience. The people working in HR that I've been in contact with generally seem kind, aware of tradeoffs, and genuinely concerned about the wellbeing of employees.

I worry that the online reputation of HR departments is shaped by a minority of terrible experiences, and we overgeneralize that to think that HR cannot or will not help, while in my experience they are often really eager to try to help (in part because they don't want you and others to quit, but also because they are nice people).

Maybe it's also related to the difference between minimum-wage unskilled jobs and higher-paying jobs, where employment tends to be less adversarial and less exploitative.

Something I'm trying to do in my comments recently is "hedge only once"; e.g. instead of "I think X seems like it's Y", you pick either one of "I think X is Y" or "X seems like it's Y". There is a difference in meaning, but often one of the latter feels sufficient to convey what I wanted to say anyway.

This is part of a broader sense I have that hedging serves an important purpose but is also obstructive to good writing, especially concision. The fact that it's a particular feature of EA/rat writing can be alienating to other audiences, even though it comes from a self-awareness / self-critical instinct that I think is a positive feature of the community.

I was just thinking about this a few days ago when I was flying for the holidays. Outside the plane was a sign that said something like

Warning: Jet fuel emits chemicals that may increase the risk of cancer.

And I was thinking about whether this was a justified double-hedge. The author of that sign has a subjective belief that exposure to those chemicals increases the probability that you get cancer, so you could say "may give you cancer" or "increases the risk of cancer". On the other hand, perhaps the double-hedge is reasonable in cases like this because there's some uncertainty about whether a dangerous thing will cause harm, and there's also uncertainty about whether a particular thing is dangerous at all, so I suppose it's reasonable to say "may increase the risk of cancer". It means "there is some probability that this increases the probability that you get cancer, but also some probability that it has no effect on cancer rates."

I like this as an example of a case where you wouldn't want to combine these two different forms of uncertainty.
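To make that concrete, here's a toy sketch (all numbers invented) of what collapsing the two layers into one number would lose:

```python
# Two stacked uncertainties, with invented numbers.
p_carcinogenic = 0.3        # chance the chemicals raise cancer risk at all
added_risk_if_so = 0.02     # extra lifetime cancer risk if they do

# Collapsing both layers into a single number:
expected_added_risk = p_carcinogenic * added_risk_if_so
print(f"{expected_added_risk:.3f}")  # 0.006

# But a guaranteed 0.006 risk increase and a 30% chance of a 0.02 increase
# are different situations, even though they have the same expectation;
# "may increase the risk" keeps the two layers separate.
```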

Ideas of posts I could write in comments. Agreevote with things I should write. Don't upvote them unless you think I should have karma just for having the idea; upvote the post when I write it :P

Feel encouraged also to comment with prior art in cases where someone's already written about something. Feel free also to write (your version of) one of these posts, but give me a heads-up to avoid duplication :)

(some comments are upvoted because I wrote this thread before we had agreevotes on every comment; also I'm removing my own upvotes on these)

Edit: This is now The illusion of consensus about EA celebrities

Something to try to dispel the notion that every EA thinker is respected / thought highly of by every EA community member. Like, you tend to hear strong positive feedback, weak positive feedback, and strong negative feedback, but weak negative feedback is kind of awkward and only comes out sometimes.

I would really like this. I've been thinking a bunch about whether it would be better if we had slightly more Bridgewater-ish norms on net (I don't know the actual structure that underlies that and makes it work), where we're just like: yeah, that person has these strengths, these weaknesses, these things people disagree on, they know it too, it's not a deep dark secret.

something about the role of emotions in rationality and why the implicit / perceived Forum norm against emotions is unhelpful, or at least not precisely aimed

(there's a lot of nuance here, I'll put it in, dw)

edit: I feel like the "notice your confusion" meme is arguably an example of emotional responses providing rational value.

thinking about this more, I've started thinking:

  • emotions are useful for rationality
  • the forum should not have a norm against emotional expression

is two separate posts. I'll probably write it as two posts, but feel free to agree/disagree on this comment to signal that you do/don't want two posts. (One good reason to want two posts is if you only want to read one of them.)

Take a list of desirable qualities of a non-profit board (either Holden's or another that was posted recently) and look at some EA org boards and do some comparison / review their composition and recent activity.

edit: I hear Nick Beckstead has written about this too

I have an intuition that the baseline average for institutional dysfunction is quite high, and I think I am significantly less bothered by negative news about orgs than many people because I already expect the average organisation (from my experience both inside and outside EA) to have a few internal secrets that seem "shockingly bad" to a naive outsider. This seems tricky to communicate / write about because my sense of what's bad enough to be worthy of action even relative to this baseline is not very explicit, but maybe something useful could be said.

Disclosure-based regulation (in the SEC style) as a tool either for internal community application or perhaps in AI or biosecurity

Something contra "excited altruism": lots of our altruistic opportunities exist because the world sucks and it's ok to feel sad about that and/or let down by people who have failed to address it.

Encouraging people to take community health interventions into their own hands. Like, ask what you wish someone in community health would do, and then consider just doing it. With some caveats for unilateralist curse risks.

The Optimal Number of Innocent People's Careers Ruined By False Allegations Is Not Zero

(haha just kidding... unless? 🥺)

Seems like a cheap applause light unless you accompany it with equivalent stories about how the optimal number of almost any bad thing is not zero.

I was surprised to hear anyone claim this was an applause light. My prediction was that many people would hate this idea, and, well, at time of writing the rep score stands at -2. Sure doesn't seem like I'm getting that much applause :)

I think the optimal number of most bad things is zero, and it's only not zero when there's a tradeoff at play. I think most people will agree in the abstract that there's a tradeoff between stopping bad actors and sometimes punishing the innocent, but they may not concretely be willing to accept some particular costs in the kind of abusive situations we're faced with at the moment. So, were I to write a post about this, it would be trying to encourage people to more seriously engage with flawed systems of abuse prevention, to judge how their flaws compare to the flaws in doing nothing.

I post about the idea here partly to get a sense of whether this unwillingness to compromise rings true for anyone else as a problem we might have in these discussions. So far, it hasn't got a lot of traction, but maybe I'll come back to it if I see more compelling examples in the wild.

I am confused by the parenthetical.

Assuming both false positives and false negatives exist at meaningful rates, and the former cannot be zeroed while keeping an acceptable false-negative rate, this seems obviously true (at least to me), and only worthy of a full post if you're willing to ponder what the balance should be.

ETA: An edgy but theoretically interesting argument is that we should compensate the probably-guilty for the risk of error. E.g., if you are 70 percent confident the person did it, boot them but compensate them 30 percent of the damages that would be fair if they were innocent. The theory would be that a person may be expected to individually bear a brutal cost (career ruin despite innocence), while the benefit (of not having people who are 70 percent likely to be guilty running around in power) accrues to the community from which the person has been booted. So compensation for the risk that the person is innocent would transfer some of the cost of providing that benefit to the community. I'm not endorsing that as a policy proposal, mind you...
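For concreteness, a toy version of that arithmetic (numbers invented):

```python
# Toy sketch of compensating the probably-guilty for the risk of error.
p_guilty = 0.70
fair_damages_if_innocent = 100_000  # hypothetical full career-ruin compensation

# Boot them, but pay out in proportion to the chance they're innocent:
compensation = (1 - p_guilty) * fair_damages_if_innocent
print(f"{compensation:,.0f}")  # 30,000 -- i.e. 30 percent of the full damages
```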

I think the forum would be better if people didn't get hit so hard by negative feedback, or by people not liking what they have to say. I don't know how to fix this with a post, but at least arguing the case might have some value.

[anonymous]:

I think the forum would be even better if people were much kinder and empathic when giving negative feedback. (I think we used to be better at this?) I find it very difficult to not get hit hard by negative feedback that's delivered in a way that makes it clear they're angry with me as a person; I find it relatively easy to not get upset when I feel like they're not being adversarial. I also find it much easier to learn how to communicate negative feedback in a more considerate way than to learn how to not take things personally. I suspect both of these things are pretty common and so arguing the case for being nicer to each other is more tractable?

very sad that this got downvoted 😭

(jk)

"ask not what you can do for EA, but what EA can do for you"

like, you don't support EA causes or orgs because they want you to and you're acquiescing, you support them because you want to help people and you believe supporting the org will do that – when you work an EA job, instead of thinking "I am helping them have an impact", think "they are helping me have an impact"

of course there is some nuance in this but I think broadly this perspective is the more neglected one

I have a Google Sheet set up that daily records the number of unread emails in my inbox. Might be a cute shortform post.
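In case it's useful to anyone, here's a rough sketch of the kind of script that could do this, in Python with IMAP and a local CSV standing in for the Google Sheet (the address and app password are placeholders, and my actual setup differs):

```python
# Rough sketch: append today's unread-inbox count to a CSV once a day
# (e.g. via cron). Credentials below are placeholders.
import csv
import datetime
import imaplib

with imaplib.IMAP4_SSL("imap.gmail.com") as mail:
    mail.login("you@example.com", "app-password-here")
    mail.select("INBOX", readonly=True)
    _, data = mail.search(None, "UNSEEN")
    unread = len(data[0].split())

with open("unread_log.csv", "a", newline="") as f:
    csv.writer(f).writerow([datetime.date.today().isoformat(), unread])
```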

Some criticism of the desire to be the donor of last resort, skepticism of the standard counterfactual validity concerns.

If everyone has no idea what other people are funding and instead just donates a scaled down version of their ideal community-wide allocation to everything, what you get is a wealth-weighted average of everyone's ideal portfolios. Sometimes this is an okay outcome. There's some interesting dynamics to write about here, but equally I'm not sure it leads to anything actionable.
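A toy example of that dynamic (donors and numbers invented):

```python
# Each donor gives a scaled-down copy of their ideal community-wide
# allocation; the realised split is the wealth-weighted average.
donors = {
    # donor: (budget, ideal community-wide allocation over two causes)
    "alice": (9_000, {"A": 0.9, "B": 0.1}),
    "bob":   (1_000, {"A": 0.1, "B": 0.9}),
}

totals = {"A": 0.0, "B": 0.0}
for budget, ideal in donors.values():
    for cause, share in ideal.items():
        totals[cause] += budget * share

total_budget = sum(budget for budget, _ in donors.values())
realised = {cause: round(amount / total_budget, 2) for cause, amount in totals.items()}
print(realised)  # {'A': 0.82, 'B': 0.18} -- alice's wealth dominates bob's views
```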

I'd like to write something about my skepticism of for-profit models of doing alignment research. I think this is a significant part of why I trust Redwood more than Anthropic or Conjecture.

(This could apply to non-alignment fields as well, but I'm less worried about the downsides of product-focused approaches to (say) animal welfare.)

That said, I would want to search for existing discussion of this before I wade into it.

Things I've learned about good mistake culture, no-blame post-mortems, etc. This is pretty standard stuff without a strong EA tilt so I'm not sure it merits a place on the forum, but it's possible I overestimate how widely known it is, and I think it's important in basically any org culture.

Something about the value of rumours and the whisper network

A related but distinct point is that the disvalue of anonymous rumours is in part a product of how people react to them. Making unfounded accusations is only harmful to the extent that people believe them uncritically. There's always some tension there but we do IMO collectively have some responsibility to react to rumours responsibly, as well as posting them responsibly.

[anonymous]:

I'd love it if it could include something on the disvalue of rumours too? (My inside view is that I'd like to see a lot less gossip, rumours etc in EA. I may be biased by substantial personal costs that I and friends have experienced from false rumours, but I also think that people positively enjoy gossip and exaggerating gossip for a better story and so we generally want to be pushing back on that usually net-harmful incentive.)

I have a doc written on this that I wanted to make a forum post out of but haven't gotten to; happy to share.

I very much enjoy that this document will be shared in private. Great meta comment.

Lead with the punchline when writing to inform

The convention in a lot of public writing is to mirror the style of writing for profit, optimized for attention. In a co-operative environment, you instead want to optimize to convey your point quickly, to only the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.

  • Consider who doesn't benefit from your article, and if you can help them filter themselves out.
  • Consider how people might skim-read your article, and how to help them derive value from it.
  • Lead with the punchline – see if you can make the most important sentence in your article the first one.
  • Some information might be clearer in a non-discursive structure (like… bullet points, I guess).

Writing to persuade might still be best done discursively, but if you anticipate your audience already being sold on the value of your information, just present the information as you would if you were presenting it to a colleague on a project you're both working on.

Agree that there's a different incentive for cooperative writing than for clickbait-y news in particular. And I agree with your recommendations. That said, I think many community writers may undervalue making their content more goddamn readable. Scott Alexander is verbose and often spends paragraphs getting to the start of his point, but I end up with a better understanding of what he's saying by virtue of being fully interested.

All in all though, I'd recommend people try to write like Paul Graham more than either Scott Alexander or an internal memo. He is in general more concise than Scott and more interesting than a memo.

He has several essays about how he writes.

Writing, Briefly — Laundry list of tips

Write like you talk

The Age of the Essay — History of the essays we write in school versus the essays that are useful

Version 1.0 — "The Age of the Essay" in rough draft form, with color coding showing what was kept

Though betting money is a useful way to make epistemics concrete, sometimes it introduces considerations that tease apart the bet from the outcome and probabilities you actually wanted to discuss. Here are some circumstances in which it can be a lot more difficult to get the outcomes you want from a bet:

  • When the value of money changes depending on the different outcomes,
  • When the likelihood of people being able or willing to pay out on bets changes under the different outcomes.

As an example, I saw someone claim that the US was facing civil war. Someone else thought this was extremely unlikely and offered to bet on it. You can't make bets on this! The value of the payout varies wildly depending on the exact scenario (are dollars lifesaving or worthless?), and more to the point, the last thing on anyone's mind will be internet bets with strangers.

In general, you can't make bets about major catastrophes (leaving aside the question of whether you'd want to), and even with non-catastrophic geopolitical events, the bet you're making may not be the one you intended to make, if the value of money depends on the result.
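As a toy illustration (numbers invented): a bet can look profitable in dollars but bad once you weight dollars by what they'd be worth to you in each outcome.

```python
# A 5% event at 20:1 odds: positive expected value in dollars, but
# negative once dollars are weighted by their value in each state.
p_war = 0.05
payout_if_war, payout_if_peace = 100, -5
dollar_value_if_war, dollar_value_if_peace = 0.1, 1.0  # dollars worth less in war

ev_dollars = p_war * payout_if_war + (1 - p_war) * payout_if_peace
ev_value = (p_war * payout_if_war * dollar_value_if_war
            + (1 - p_war) * payout_if_peace * dollar_value_if_peace)
print(round(ev_dollars, 2), round(ev_value, 2))  # 0.25 -4.25
```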

A related idea is that you can't sell (or buy) insurance against scenarios in which insurance contracts don't pay out, including most civilizational catastrophes, which can make it harder to use traditional market methods to capture the potential gains from (say) averting nuclear war. (Not impossible, but harder!)

After reading this I thought that a natural next step for the self-interested rational actor that wants to short nuclear war would be to invest in efforts to reduce its likelihood, no? Then one might simply look at the yearly donation numbers of a pool of such efforts.

Yes, this is a general strategy for a philanthropist who wants to recoup some of their philanthropic investment:

1. Short harmful industry/company X (e.g. tobacco / Philip Morris, meat / Tyson)

2. Then lobby against this industry (e.g. fund a think tank that lobbies for tobacco taxes in a market that the company is very exposed to).

3. Profit from the short to get a discount on your philanthropic investment (rough arithmetic sketched below).

Contrary to what many people intuit, this is perfectly legal in many jurisdictions (this is not legal or investment advice though).
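A stylized version of the arithmetic (all numbers invented), showing why this is a discount rather than a profit:

```python
# Stylized sketch of the short-then-lobby discount.
lobbying_cost = 1_000_000    # cost of the advocacy campaign
short_notional = 2_000_000   # size of the short position
expected_price_drop = 0.10   # drop attributable to the campaign succeeding

short_profit = short_notional * expected_price_drop    # ~200,000
net_philanthropic_cost = lobbying_cost - short_profit  # ~800,000
print(f"discount: {short_profit / lobbying_cost:.0%}")  # discount: 20%
```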

Even if it's legal, some people may think it's unethical to lobby against an industry that you've shorted.

It could provide that industry with an argument to undermine the arguments against them. They might claim that their critics have ulterior motives.

This is an excellent point; I agree. You're absolutely right that they could argue that, and reputational risks should be considered before such a strategy is adopted. And even though it is perfectly legal to lobby for your own positions / stock, lobbying for shorts is usually more morally laden in the eyes of the public (there is in fact evidence that people react very strongly to this).

However, I think if someone were to mount the criticism of having ulterior motives, then there is a counterargument to show that this criticism is ultimately misguided:

If the market is efficient, then the valuation of an industry will already have priced in any risks that could easily be created through lobbying. In other words, if the high valuation of Big Tobacco were dependent on someone not running a relatively cheap lobbying campaign for tobacco taxes, then shorting it would make sense for socially neutral investors with no altruistic motives, and thus it should already have been done.

Thus, this strategy would only work for a truly altruistic agent, who will ultimately lose money in the process and only get a discount on their philanthropic investment. In other words, the investment in the lobbying should likely be higher than the profit from the short. And so it would be invalid to say that someone using this strategy has ulterior motives. But yes, again, I take your point that this subtle point might get lost and it could end up being a PR disaster.

I don't buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it's possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. Trading based on a belief that a particular industry is stronger / weaker than the market perceives it to be is surely fine; that's basically what active investors do, right?)

(Some people believe the market is efficient even with respect to private information. I don't understand those people.)

However, I have my own counterargument, which is that the "conflict of interest" claim seems just kind of confused in the first place. If you hear someone criticizing a company, and you know that they have shorted the company, should that make you believe the criticism more or less? Taking the short position as some kind of fixed background information, it clearly skews incentives. But the short position isn't just a fixed fact of life: it is itself evidence about the critic's true beliefs. The critic chose to short and criticize this company and not another one. I claim the short position is a sign that they do truly believe the company is bad. (Or at least that it can be made to look bad, but it's easiest to make a company look bad if it actually is.) In the case where the critic does not have a short position, it's almost tempting to ask why not, and wonder whether it's evidence they secretly don't believe what they're saying.

All that said, I agree that none of this matters from a PR point of view. The public perception (as I perceive it) is that to short a company is to vandalize it, basically, and probably approximately all short-selling is suspicious / unethical.

it's possible to make money this way

Agreed, but I don't think there's a big market inefficiency here with risk-adjusted above market rate returns. Of course, if you do research to create private information then there should be a return to that research.

Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. [...]

True, but I've heard that in the US, normally, if I lobby for an outcome and short the stock about which I am lobbying, I have not violated any law unless I am a fiduciary or agent of the company in question. Also see https://www.forbes.com/sites/realspin/2014/04/24/its-perfectly-fine-for-herbalife-short-sellers-to-lobby-the-government/#95b274610256

I have my own counterargument

I really like this, but...

it can be made to look bad

This seems to be why people have a knee-jerk reaction against it.

Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P

For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent -- invest in tobacco companies as an anti-smoking campaigner, invest in the coal industry as a climate change campaigner, etc. The idea is that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.
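A toy sketch of the mechanism (numbers invented): the holding pays out most in exactly the worlds where the campaign budget is needed most.

```python
# Mission hedging: returns on the "bad industry" holding are correlated
# with how much advocacy is needed. All numbers invented.
endowment = 1_000_000
scenarios = {
    # scenario: (probability, return on coal holdings, campaigning need)
    "coal booms": (0.3, +0.50, "high"),
    "coal fades": (0.7, -0.20, "low"),
}
for name, (p, ret, need) in scenarios.items():
    budget = endowment * (1 + ret)
    print(f"{name}: p={p}, budget={budget:,.0f}, need={need}")
# coal booms: p=0.3, budget=1,500,000, need=high
# coal fades: p=0.7, budget=800,000, need=low
```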

I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)

I'm sure if I thought about it for a bit I could figure out when these two mutually contradictory strategies look better or worse than each other. But mostly I don't take either of them very seriously most of the time anyway :)

I think these strategies can actually be combined:

A patient philanthropist sets up their endowment according to mission hedging principles.

For instance, someone wanting to hedge against AI risks could invest in a (leveraged) AI FAANG+ ETF (https://c5f7b13c-075d-4d98-a100-59dd831bd417.filesusr.com/ugd/c95fca_c71a831d5c7643a7b28a7ba7367a3ab3.pdf), then, when AI seems more capable and risky and the market is up, they sell, buy shorts, and donate the appreciated assets to fund advocacy to regulate AI.

I think this might work better for bigger donors.

Like this got me thinking: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley

“We can push the odds of victory up significantly—from 23% to 35-55%—by blitzing the airwaves in the final two weeks.”

https://www.predictit.org/markets/detail/6788/Which-party-will-win-the-US-Senate-election-in-Texas-in-2020

People talk about AI resisting correction because successful goal-seekers "should" resist their goals being changed. I wonder if this also acts as an incentive for AI to attempt takeover as soon as it's powerful enough to have a chance of success, instead of (as many people fear) waiting until it's powerful enough to guarantee it.

Hopefully the first AI powerful enough to potentially figure out that it wants to seize power and has a chance of succeeding is not powerful enough to passively resist value change, so acting immediately will be its only chance.