AnonymousTurtle

718 karma · Joined Aug 2022

Comments (76)

METR (Model Evaluation & Threat Research) might also be worth mentioning. I wonder if there's a list of capability evaluation projects somewhere.

I think mainstream HR comes primarily from the private sector and is primarily about protecting the employer, often against the employee. They often cast themselves in a role of being there to help you, but a common piece of folk wisdom is "HR is not your friend". I think frankly that a lot of mainstream HR culture is at worst dishonest and manipulative, and I'd be really sad to see us uncritically importing more of that.


I see a lot of this online, but it doesn't match my personal experience. The people working in HR whom I've been in contact with seem generally kind, aware of tradeoffs, and genuinely concerned about the wellbeing of employees.

I worry that the online reputation of HR departments is shaped by a minority of terrible experiences, and that we overgeneralize from those to conclude that HR cannot or will not help, while in my experience they are often genuinely eager to help (in part because they don't want you and others to quit, but also because they are nice people).

Maybe it's also related to minimum-wage unskilled jobs vs. higher-paying jobs, where employment tends to be less adversarial and less exploitative.

I agree; if anything, data from the for-profit world probably updates me against very small companies being optimal.

That argument just doesn't go all the way to trillion-dollar behemoths as I thought it would.

In the non-profit world, GiveWell's top charities seem to have very different team sizes, so maybe we just can't say much in general.

I think it's bad to repeatedly accuse people of things they didn't do, or of responsibilities they didn't have, and then write "Oops, sorry!", and we should do less of this.

You could have easily checked with them first, as with MacAskill last time, so that RP didn't have to rush in immediately with a correction; otherwise far fewer people see the correction than the original claim/accusation (if a correction appears at all). It lowers this forum's epistemics, wastes people's time, and stains accused people's reputations for no reason.

The tradeoff between writing a claim instantly and spending some time to confirm its correctness usually favours the latter. If I were on the board of RP, having my name on this thread could be damaging, and I would feel lucky that it got corrected immediately. I downvoted because I want to see fewer comments like this.

This is a very good point: when huge companies get split up, the stock usually rises.

When Alibaba was forced to split into six separate groups, the stock went up 10%. Please correct me if I'm wrong, but if I remember correctly, when Standard Oil split into 34 companies the combined stocks also appreciated a lot.

Why wouldn't the same apply to Alphabet?

I'm curious to what extent these issues are linked to worries about the brand's reputation, specifically the negative connotations currently associated with the EA brand.

Thank you for writing this. I share many of these, but I'm very uncertain about them.

Here it is:

Giving a range of probabilities when you should give a probability + giving confidence intervals over probabilities + failing to realize that probabilities of probabilities just reduce to simple probabilities

I think giving a range is rational: I think of probabilities in terms of bets and order books. This is close to my view, and the analogy to financial markets is not irrelevant (a small sketch of the point is below).
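A minimal sketch of the reduction mentioned in the quoted point, and of why a range can still matter when quoting bets (the Beta(2, 8) credence distribution is an illustrative assumption, not anyone's actual view):

```python
import numpy as np

rng = np.random.default_rng(0)

# Credence expressed as a distribution over "the probability of the event":
# here an assumed Beta(2, 8) distribution with mean 0.2.
samples = rng.beta(2, 8, size=1_000_000)

# For a single yes/no bet only the mean matters: by the law of total
# probability, P(event) = E[p], so the whole distribution prices the bet
# exactly like the simple probability E[p].
p_event = samples.mean()
print(round(p_event, 3))        # ~0.2, the fair price of a $1-if-event contract

# The spread of the distribution doesn't move that fair price, but it is a
# natural input to how wide a bid-ask spread you might want to quote.
print(round(samples.std(), 3))  # ~0.12
```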

Unstable beliefs about stuff like AI timelines in the sense of "I'd be pretty likely to say something pretty different if you asked tomorrow"

Changing literally day-to-day seems extreme, but month-to-month seems very reasonable given the speed of everything that's happening, and it matches e.g. the volatility of NVIDIA's stock price.

Axiologies besides ~utilitarianism

To me, "utilitarianism" seems pretty general, as long as you can arbitrarily define utility and arbitrarily choose among Negative/Rule/Act/Two-level/Total/Average/Preference/Classical utilitarianism. I really liked this section of a recent talk by Toby Ord (starting from "It starts by observing that the three main traditions in Western philosophy each emphasize a different focal point:"). (I also don't know if axiology is the right word for what we want to express here; we might be talking past each other.)

Veg(etari)anism for terminal reasons; veg(etari)anism as ethical rather than as a costly indulgence

I mostly agree with you, but second-order effects seem hard to evaluate, and both costs and benefits are so minuscule (and potentially negative) that I find it hard to do a cost-benefit analysis.

Thinking personal flourishing (or something else agent-relative) is a terminal goal worth comparable weight to the impartial-optimization project

I agree with you, but for some it might be an instrumentally useful intentional framing. I think some use phrases like "[Personal flourishing] for its own sake, for the sake of existential risk." (see also this comment for a fun thought experiment for average utilitarians, but I don't think many believe it)

Cause prioritization that doesn't take seriously "the cosmic endowment is astronomical, likely worth >10^60 happy human lives and we can nontrivially reduce x-risk"

Some think the probability of extinction per century is only going up with humanity's increasing capabilities, and are not convinced by arguments that we'll soon reach close-to-speed-of-light travel, which would make extinction risk go down (a quick calculation illustrating why this matters is sketched below). See also e.g. Why I am probably not a longtermist (except point 1). I find this very reasonable.
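A quick calculation behind that disagreement (the risk levels are illustrative assumptions): with a constant extinction risk per century, long-run survival probability decays exponentially, so the astronomical-endowment argument leans heavily on the per-century risk eventually falling toward zero.

```python
# Survival probability after N centuries at a constant per-century
# extinction risk; the specific risk levels below are illustrative.
def survival_probability(risk_per_century: float, centuries: int) -> float:
    return (1.0 - risk_per_century) ** centuries

for risk in (0.01, 0.10):
    for centuries in (10, 100, 1_000):
        p = survival_probability(risk, centuries)
        print(f"risk={risk:.0%}/century, {centuries:>5} centuries: P(survive) = {p:.3g}")
```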

Deciding in advance to boost a certain set of causes [what determines that set??], or a "portfolio approach" without justifying the portfolio-items

I agree. I think this makes a ton of sense for people in community building who need to work with many cause areas (e.g. CEA staff, Peter Singer), but I fear that it makes less sense for private individuals maximizing their impact.

Not noticing big obvious problems with impact certificates/markets

I think many people notice big obvious problems with impact certificates/markets, but think that the current system is even worse, or that they are at least worth trying and improving, to see if at their best they can in some cases be better than the alternatives we have. The current funding systems also have big obvious problems. What big obvious problems do you think they are missing?

Naively using calibration as a proxy for forecasting ability

I agree with this; I just want to mention that it seems better than a common alternative I see: using LessWrong-sounding-ness/reputation as a proxy for forecasting ability.

Thinking you can (good-faith) bet on the end of the world by borrowing money ... I think many people miss that utility is about ∫consumption not ∫bankroll (note the bettor typically isn't liquidity-constrained)

I somewhat agree with you, but I think many people model it a bit like this: "I normally consume 100k/year; you give me 10k now, so I will consume 110k this year, and if I lose the bet I will consume only 80k/year X years in the future" (a toy version of this is sketched below). But I agree that in practice the amounts are small and it doesn't work for many reasons.
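A toy version of that model, for concreteness (the 100k/110k/80k figures are from the comment above; log utility and a single repayment year are assumptions I'm adding):

```python
import math

base = 100_000           # normal yearly consumption
now_with_loan = 110_000  # consume the borrowed 10k this year
later_if_alive = 80_000  # repay with interest in year X, but only if the world survives

def log_u(consumption: float) -> float:
    return math.log(consumption)

def expected_utility_gain(p_doom_by_year_x: float) -> float:
    gain_now = log_u(now_with_loan) - log_u(base)
    loss_later = log_u(later_if_alive) - log_u(base)   # only incurred if alive
    return gain_now + (1 - p_doom_by_year_x) * loss_later

# Break-even P(doom by year X): the bet only raises expected utility above this.
breakeven = 1 - (log_u(now_with_loan) - log_u(base)) / (log_u(base) - log_u(later_if_alive))
print(round(breakeven, 2))             # ~0.57
print(expected_utility_gain(0.2) > 0)  # False: at P(doom)=20% the bet loses utility
```

Under these made-up terms the bet only looks attractive at quite high P(doom), which is one way of restating the quoted point about ∫consumption.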

This is Dutch-bookable only if there is no bid-ask spread. A rational choice in this case would be to have a very wide bid-ask spread. E.g. when Holden Karnofsky writes that his P(doom) is between 10% and 90%, I assume he would bet for doom at 9% or less, bet against doom at 91% or more, and not bet at prices in between. This seems a very rational choice in a high-volatility situation where information changes extremely quickly. (As an example, IIRC the bid-ask spread in financial markets increases right before earnings are released.)
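A small sketch of why the wide spread protects you (the numbers are illustrative assumptions): someone who names a single probability and takes either side of a bet at that price leaks expected value to anyone whose belief differs, while someone quoting a wide bid-ask spread only trades at prices that already favour them.

```python
def counterparty_edge(counterparty_p: float, bid: float, ask: float) -> float:
    """Expected profit per $1 stake for a counterparty with belief counterparty_p,
    trading against a quoter who buys "doom" at `bid` and sells it at `ask`."""
    buy_edge = counterparty_p - ask   # counterparty buys doom from the quoter at the ask
    sell_edge = bid - counterparty_p  # counterparty sells doom to the quoter at the bid
    return max(buy_edge, sell_edge, 0.0)  # 0 if neither side of the quote is attractive

# A single point estimate (bid == ask == 0.50) gives up expected value to any
# counterparty whose belief differs from 0.50:
print(round(counterparty_edge(0.70, bid=0.50, ask=0.50), 2))  # 0.2

# A wide spread (buy at 0.10, sell at 0.90) gives up nothing unless the
# counterparty's belief falls outside the whole 10%-90% range:
print(round(counterparty_edge(0.70, bid=0.10, ask=0.90), 2))  # 0.0
```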

Thank you for all the good you are doing, and good luck! You might be interested in this Slack space: "There is now an EA Managers Slack".

I have a friend in the same position; if you find something else that's good/useful, let me know!

I agree that a one-grant-at-a-time funding model has downsides, but mostly I see many EA-meta projects funded with little oversight and few to no feedback loops.

In for-profit jobs, people usually have managers, and if their work doesn't get the expected results they get negative feedback and improvement plans before being fired or moved to different roles.

In meta-EA, I often see people get funding with no strings attached and no measurement of effectiveness; the only feedback they get is whether their grant is renewed one year later. I think a better solution than multi-year no-strings-attached funding would be to have much more regular feedback from funders, so grantees can get advice, or at least not be surprised if the funder decides not to renew their grant.

I also think this creates very bad selection effects, with people optimizing their grant applications instead of their positive impact, since the application is often the ~only information funders have, and I'm worried that some funded EA meta projects that spent a lot of time on their grant applications are actually having negative counterfactual impact.

I also think that as long as you have clear (ideally measurable) counterfactual results and a strong theory of change, it's relatively easy to get funding for EA meta work (compared to e.g. animal welfare or global health).

For work in global health, you should stop getting funded if your work is less cost-effective than buying more bednets. Similarly, in EA meta you should stop getting funded if your work is less cost-effective than buying more ads for 80k (a random example of a highly effective, infinitely scalable intervention; I don't know what the ideal benchmark should be). If EA Poland could show that their programs are more cost-effective than e.g. more ads for 80k, I think the people currently funding 80k ads would fund them instead.


The net result is that there are a bunch of people with EA-relevant talents that aren't particularly applicable outside the EAsphere

I think this is extremely bad regardless of the funding model and funding situation, and people should try very hard to avoid it. It would lead to terrible incentives and dynamics, and probably make you less effective in your EA role (including community building). See My mistakes on the path to impact; I recommend reading the whole post, but here's one quote:

I could have noticed the conflict between the talent-constrained message as echoed by the community with the actual 80,000 Hours advice to keep your options open and having Plan A, B and Z.
