All of Kevin Lacker's Comments + Replies

It is a disaster for EA. We need the EAs on the board to explain themselves, and if they made a mistake, just admit that they made a mistake and step down.

"Effective altruism" depends on being effective. If EA is just putting people in charge of other peoples' money, they make decisions that seem like bad decisions, they never explain why, refuse to change their mind whatever happens... that's no better than existing charities! This is what EA was supposed to prevent! We are supposed to be effective. Not to fire the best employees and destroy a company tha... (read more)

I don't think that they owe the EA community an explanation (it would be nice, but they don't have to). The only people who have a right to demand that are the people who appointed them there and the OAI staff.

https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money

>I might as well give my money to the San Francisco Symphony. At least they won't spend it ruining things that I care about.

It is your right, but I don't know how this is related? How have they spent EA donors' money? If you are ref... (read more)

This could end up also having really bad consequences for the goals of EA, so it's perhaps similar to FTX in that way (but things are still developing and it might somehow turn out well).

Or maybe you feel like the board displayed inexperience and that they were in over their heads. I can probably get behind that based on how things look right now (but there's a chance we learn more details later that put things into a different light).

Still, I feel like inexperience is only unforgivable if it comes combined with hubris. Many commenters seem to think that t... (read more)

It's embarrassing for the EA movement, too. It's another SBF situation. Some EAs get control over billions of dollars, and act completely irresponsibly with that power.

Probably disagree? Hard to say for sure since we lack details, but it's not obvious to me that the board acted irresponsibly, let alone to the degree that SBF did. I guess one, it seems fairly likely that Ilya Sutskever initiated the whole thing, not the EAs on the board. And two, the board members have fiduciary duties to further the OAI nonprofit's mission, i.e., to ensure that AGI benefits... (read more)

The strategy of "get a lot of press about our cause area, to get a lot of awareness, even if they get the details wrong" seems to be the opposite of what EA is all about. Shouldn't we be using evidence and reason to figure out how to benefit others as much as possible?

When the logic is "I feel very strongly about cause area X, therefore we should do things about X as much as possible. Anything that helps X is good. Any people excited about X are good. Any way of spending money on X is good" - well, then X could equally well be cancer research, or saving the whales, or donating to the Harvard endowment, or the San Francisco Symphony.

2
NickLaing
1y
For me (a global health guy), EA is mostly about doing the most good we can with our lives. If, right now, increasing awareness about AI danger (even if some details get lost) is what will do the most good, then I think it is consistent with EA thinking. The OP is using evidence and reason to argue this point.

"The strategy of "get a lot of press about our cause area, to get a lot of awareness, even if they get the details wrong" seems to be the opposite of what EA is all about" Yes, and I think this is a huge vulnerability for things like this. Winning the narrative actually matters in the real world.

5
Harrison Durland
1y
I have a variety of reservations about the original post, but I don’t think this comment does a good job of expressing my views, nor do I find the criticism very compelling, if only for the obvious distinction that the things you list at the end of the comment don’t involve all of humanity dying and >trillions of people not existing in the future.

Some people think that, with a super-powerful AI running the world, there would be no need for traditional government. The AI can simply make all the important decisions to optimize human welfare.

This is similar to the Marxist idea of the "withering away of the state". Once perfect Communism has been achieved, there will be no more need for government.

https://en.wikipedia.org/wiki/Withering_away_of_the_state

In practice, the state didn't really wither away under Stalinism. It was more like, Stalin gained personal control over this new organization, the Communist Party, to ... (read more)

Against Malaria Foundation don't give a high proportion of money to evil dictatorships, but they do give some. Same goes for Deworm the World.


I was wondering about this, because I was reading a book about the DRC - Dancing in the Glory of Monsters - which was broadly opposed to NGO activity in the country as propping up the regime. And I was trying to figure out how to square this criticism with the messages from the NGOs themselves. I am not really sure, though, because the pro-NGO side of the debate (like EA) and the anti-NGO side of the debate (lik... (read more)

4
Kirsten
1y
The argument I've found most persuasive is "it's easier to fight back against an unjust government if you're healthy/have more money".

I think the Wytham Abbey situation is a success for transparency. Due to transparency, many people became aware of the purchase, and were able to give public feedback that it seemed like a big waste of money, and that it's embarrassing to the EA cause. Now, hopefully, in the future EA decisionmakers will be less likely to waste money in this way.

It's too much to expect EA decisionmakers to never make any mistakes ever. The point of transparency is to force decisionmakers to learn from their mistakes, not to avoid ever making any mistakes.

I'm glad this contest happened, but I was hoping to see some deeper reflection. To me it seems like the most concerning criticisms of the GiveWell approach are criticisms along more fundamental lines. Such as -

  1. It's more effective to focus on economic growth than on one-off improvements to public health. In the long run, no country has solved its public health problems via a charitable model, but many have done it through economic growth.
  2. NGOs in unstable or war-torn countries are enabling bad political actors. By funding public health in a dictatorship you
... (read more)
1
Jack_S
1y
I definitely don't think it's too much to expect from a self-reflection exercise, and I'm sure they've considered these issues.

For no. 1, I wouldn't actually credit growth so much. Most of the rapid increases in life expectancy in poor countries over the last century have come from factors not directly related to economic growth (edit: growth in the countries themselves), including state capacity, access to new technology (vaccines), and support from international orgs/NGOs. China pre- and post-1978 seems like one clear example here - the most significant health improvements came before economic growth. Can you identify the 'growth miracles' vs. countries that barely grew over the last 20 years in the below graph?

I'd also say that reliably improving growth (or state capacity) is considerably more difficult than reliably providing a limited slice of healthcare. Even if GiveWell had a more reliable theory of change for charitably-funded growth interventions, they probably aren't going to attract donations - donating to lobbying African governments to remove tariffs doesn't sound like an easy sell, even for an EA-aligned donor.

For 2, I think you're making two points - supporting dictators and crowding out domestic spending. On the dictator front, there is a trade-off, but there are a few factors:

* I'm very confident that countries with very weak state capacity (Eritrea?) would not be providing noticeably better health care if there were fewer NGOs.
* NGOs probably provide some minor legitimacy to dictators, but I doubt any of these regimes would be threatened by their departure, even if all NGOs simultaneously left (which isn't going to happen). So the marginal negative impact of increased legitimacy from a single NGO must be very small.

On the 'crowding out' front, I don't have a good sense of the data, but I'd suspect that the issue might be worse in non-dictatorships - countries/regions that are easier/more desirable for western NGOs to set up sh
3
NickLaing
1y
Hey Kevin, I do like those points, and I think especially number 2 is worth a lot of consideration - not only in unstable and war-torn countries but also in stable, stagnant countries like Uganda where I work. Jason's answer is also excellent.

Agree that number 2 is a very awkward situation, and working in health in a low income country myself, I ask myself this all the time. The worst case scenario in terms of propping up a dictator though I think is funding them directly - which a LOT of government-to-government aid does. Fortunately Against Malaria Foundation don't give a high proportion of money to evil dictatorships, but they do give some. Same goes for Deworm the World. I think there should be some kind of small negative adjustment (even if token) from GiveWell on this front.

I find the "economic growth" argument a tricky one, as my big issue here is tractability. I'm not sure we know well at all how to actually stimulate economic growth consistently and well. There are a whole lot of theories but the solid empirical research base is very poor. I'd be VERY happy to fund economic growth if I had a moderate degree of certainty that the intervention would work.
8
Jason
1y
#2 would have been largely in scope, I think. GiveWell's analyses usually include an adjustment for estimated crowding out of other public health expenditures.

#1 is more a cause-prioritization argument in my book. The basic contours of that argument are fairly clear and well-known; the added value would be in identifying a specific program that can be implemented within the money available to EA orgs for global health/development and running a convincing cost-benefit analysis on it. That's a much harder thing to model than GiveWell-type interventions and would need a lot more incentive/resources than $20K.
9
Lorenzo Buonanno
1y
You might be interested in the response to a similar comment here

  1. Don't rush to judgment. We don't know the full story yet.
  2. If it's fraud, give back whatever money can be given back.
  3. If it's fraud, make it clear that the EA community does not support a philosophy of "making money on criminal activity is okay if you donate it to an effective charity".

I don't know how the criminal law works. But if it turns out that the money in the FTX Future Fund was obtained fraudulently, would it be ethical to keep spending it, rather than giving it back to the victims of the fraud?

4
Guy Raveh
1y
I received a small grant from the FF. I was conflicted on whether to take it anyway, given that I didn't see FTX (or any crypto company) in a good light even before this story. Now I feel somewhat worse about it.

I think there are a few things that would come into play here:
1. If we give the money back, assuming that's even an option, does it go to someone in need or to someone rich?
2. Can we afford it? I think I could, so there's a big chance I would choose to do this if requested. But I don't expect anyone to burn their savings for this like Gideon said.

I'm currently participating in a program which was also funded by the FF, and which paid a much larger sum than my personal grant for things like my housing during the time of the program. So on the one hand this is something that for me would be harder to pay back. On the other hand:
1. At some point we have to consider ourselves far enough removed from the action - it's not like we chose to get specifically FF money by participating. It's not even like someone receiving a grant from the FF chose to participate in fraud.
[This comment is no longer endorsed by its author]

Banning slaughterhouses is essentially a ban on eating meat, right? I can't imagine that 43% of the US public would support that, when no more than 10% of the US public is vegetarian in the first place. (Estimates vary, you say 1% in this article, and 10% is the most aggressive one I could find.)

It seems much more likely that these surveys are invalid for some reason. Perhaps the word "slaughterhouses" confused people, or perhaps people are just answering surveys based on emotion without bothering to think through what banning slaughterhouses actually means.

0
Pseudonym101
1y
Yes, I implore readers to defer to common sense here. The face validity of these results is poor, and I would suggest further work be done to improve the survey methodology, to understand how people interpret the question and how they'd change their responses during a political campaign where there would be a saturation of information from very powerful commercial agricultural interests. I'm sick of seeing EA make political blunders.

This explanation of events seems to contradict several of SBF's public statements, such as:

"FTX has enough to cover all client holdings."

"We don't invest client assets (even in treasuries)."

source: https://cointelegraph.com/news/ftx-founder-sam-bankman-fried-removes-assets-are-fine-flood-from-twitter

I guess we'll know more for sure in the coming days. One big open question for EA is whether SBF's money was obtained through fraudulent or illegal activities. As far as I can tell, it is too soon to tell.

In the last few hours, Coindesk reported that Binance is "strongly leaning toward" not doing the FTX acquisition.

https://www.coindesk.com/business/2022/11/09/binance-is-strongly-leaning-toward-scrapping-ftx-rescue-takeover-after-first-glance-at-books-source/

2
elifland
1y
See also https://polymarket.com/market/will-binance-pull-out-of-their-ftx-deal and 

I believe the title of this article is misleading - FTX.com was not technically bought out by Binance. Binance signed a non-binding letter of intent to buy FTX.com. Sometimes this is just a minor detail, but in this case it seems quite important. As of the time I am writing this comment (9 a.m. California time on November 9) Polymarket shows an 81% chance that Binance will pull out of this deal.

https://polymarket.com/market/will-binance-pull-out-of-their-ftx-deal

I am not an expert in crypto, but I think people should not assume that this acquisition will g... (read more)

2
Charles He
1y
Yes, you're right. Yesterday morning, the title was less inflammatory and a reasonably factual statement. The post is less relevant and sliding off the FP; I'll probably delete the entire post at some point (the comments will remain).

The point about corruption is a good one, and it worries me that so many EA cause areas seem to ignore corruption. When you send money to well-funded NGOs in corrupt countries, you are also supporting the status quo political leadership there, and the side effects of this seem like they could be more impactful than the stated work of the NGO.

1
Peter Elam
2y
Yeah. Foreign aid is often problematic in corrupt countries, and it can be a major, major problem. A quote from the Haiti / DR article that I linked to in the footnotes: Last weekend I was speaking to a leader of a nonprofit that gives large sums of aid to Haiti, and she was telling me just how difficult it is to translate the aid into tangible impact because of the corruption.

When you say "working with African leaders", I worry that in many countries that means "paying bribes which prop up dictatorships and fund war." How can we measure the extent to which money sent to NGOs in sub-Saharan Africa is redirected toward harmful causes via taxes, bribes, or corruption?

1
brb243
2y
In context, this is:

This suggests either the opposite of, irrelevance to, or support by analogy of your concern. It is various African institutions and local leaders who would be making sure that the balance between increased energy output and a worsening environment is struck. A design where decisionmaking is much more aggregated could possibly lead to suboptimal decisionmaking, because authorities could be less in touch with the local nature and so disproportionately prioritize energy cost reduction/distribution (although this illustrative example may show a bias in this reasoning).

The reasoning that local institutions and leaders (e.g. community elders, land owners) can make more pro-environmental (which can be interpreted as less biased or 'corrupt') decisions than national or regional governance representatives (who could be lobbied by profit-seeking industries) can seem intuitive. However, it may not be as clear. Some community elders may have little concern for the environment (e.g. many extremely poor persons gain income by charcoal making, which is polluting and unsustainable) and some land owners may be happy to destroy their crops and host a coal mine investment - or sell the land. On the contrary, governments may be aware of a variety of investment options, from mining to juice making, and, if given the option, may go for portfolio diversification that also reduces their proneness to war (so, renewable resources, comparative advantage specialization and trading, or energy supply agreements).

To answer your question, the rate of unagreed-upon use of funds by NGOs (as well as other entities, including governments) can be estimated by an external observer of the total value provided by the programs, considering local knowledge of market prices. It can range from negative percentages (employees sacrifice income compared to the profit sector, negotiate below-market bargains with local providers on the basis of social/environmental benefit)

I’d like to push back a bit on that - it’s so common in the EA world to say, if you don’t believe in malaria nets, you must have an emotional problem. But there are many rational critiques of malaria nets. Malaria nets should not be this symbol where believing in them is a core part of the EA faith.

8
Vaidehi Agarwalla
2y
I'm not saying that. The point I was trying to make was actually the opposite - that even for the "cold and calculating" EAs, it can be emotionally difficult to choose the intervention (in this case malaria nets) which doesn't give you the "fuzzies" or feeling of doing good that something else might.

I was trying to say that it's normal to feel like some decisions are emotionally harder than others, and framings which focus on that may be likely to come across as dismissive of other people's actions. (Of course, I didn't elaborate this in the original comment.)

I don't make this claim in my comment - I am just using malaria nets as an example since you used it earlier, and it's an accepted shorthand for "commonly recommended effective intervention" (but maybe we should just say that - maybe we shouldn't use the shorthand).
3
Akhil
2y
I think I sit somewhere between you both - broadly, we think that there shouldn't be "one" road to impact, whether that be bed nets or something else. Our explicit purpose is to use EA frameworks and thinking to help people reach their own conclusions. We think that common EA causes are very promising and very likely to be highly impactful, but we err on the side of caution about being overly prescriptive.

I think we should move away from messaging like “Action X only saves 100 lives. Spending money on malaria nets instead would save 10,000 lives. Therefore action X sucks.” Not everyone trusts the GiveWell numbers, and it really is valuable to save 100 lives, in absolute terms, any way you look at it.

I understand why doctors might come to EA with a bad first impression given the anti-doctor sentiment. But we need doctors! We need doctors to help develop high-impact medical interventions, design new vaccines, work on anti-pandemic plans, and so many other things. We should have an answer for doctors who are asking, what is the most good I can do with my work, that is not merely asking them to donate money.

I absolutely think we should stick to that messaging. Trying to do the most good, rather than just some good, is the core of our movement. I would point out that there are also many doctors who were not discouraged and chose to change their career entirely as a result of EA. I personally know a few who ended up working on the very things you encourage!

That said, we should of course be careful when discouraging interventions if we haven't looked into the details of each cost-effectiveness analysis, as it's easy to arrive at a lower-looking impact simply due to methodological differences between GiveWell's cost-effectiveness analysis and yours.

3
Vaidehi Agarwalla
2y
I really like framings which acknowledge how hard (emotionally) it can be to choose malaria nets.
8
High Impact Medicine
2y
Thanks for your comment and completely agree with you! I think the framing of what is the most I can do with my work is a great one that is underappreciated.

It is really annoying for Flynn to be perceived as “the crypto candidate”. Hopefully future donations encourage candidates to position themselves more explicitly as favoring EA ideas. The core logic that we should invest more money in preventing pandemics seems like it should make political sense, but I am no political expert.

This is also just an example of how growing and diversifying the EA funding base can be useful even if EA is not on the whole funding constrained ... a longtermist superpac that raised $1 million each from 12 different rich guys who got rich in different ways would arguably be more credible than one with a single donor. 

Similar issues come up in poker - if you bet everything you have on one bet, you tend to lose everything too fast, even if that one bet considered alone was positive EV.
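
To make the bankroll point concrete, here is a minimal simulation sketch (my own illustration, not from the thread; it assumes a repeated 60%-win, even-money coin flip, so every individual bet has positive EV):

```python
import random

def simulate(fraction, rounds=1000, trials=1000, start=100.0):
    """Fraction of trials that end above the starting bankroll."""
    wins = 0
    for _ in range(trials):
        bankroll = start
        for _ in range(rounds):
            stake = bankroll * fraction
            # 60% chance to win an even-money bet: +EV every single time.
            if random.random() < 0.6:
                bankroll += stake
            else:
                bankroll -= stake
            if bankroll <= 0:
                break  # busted: no way back once the bankroll is gone
        if bankroll > start:
            wins += 1
    return wins / trials

# Betting everything every round almost surely goes broke despite the edge;
# a 20% stake (the Kelly fraction 2*0.6 - 1 for this game) compounds instead.
print("all-in:", simulate(1.0))
print("20% of bankroll:", simulate(0.2))
```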

I think you have to consider expected value an approximation. There is some real, ideal morality out there, and we imperfect people have not found it yet. But, like Newtonian physics, we have a pretty good approximation. Expected value of utility.

Yeah, in thought experiments with 10^52 things, it sometimes seems to break down. Just like Newtonian physics breaks down when analyzing a black h... (read more)

3
Guy Raveh
2y
Expected value is only one parameter of the (consequentialist) evaluation of an action. There are more, e.g. risk minimisation. It would be a massive understatement to say that not all philosophical or ethical theories so far boil down to "maximise the expected value of your actions".

Another source of epistemic erosion happens whenever a community gets larger. When you’re just a few people, it’s easier to change your mind. You just tell your friends, hey I think I was wrong.

When you have hundreds of people that believe your past analysis, it gets harder to change your mind. When peoples’ jobs depend on you, it gets even harder. What would happen if someone working in a big EA cause area discovered that they no longer thought that cause area was effective? Would it be easy for them to go public with their doubts?

So I wonder how hard it is to retain the core value of being willing to change your mind. What is an important issue that the “EA consensus” has changed its mind on in the past year?

Another issue that makes it hard to evaluate global health interventions is the indirect effects of NGOs in countries far from the funders. For example this book made what I found to be a compelling argument that many NGOs in Africa are essentially funding civil war, via taxes or the replacement of government expenditure:

https://www.amazon.com/Dancing-Glory-Monsters-Collapse-Africa/dp/1610391071

African politics are pretty far outside my field of expertise, but the magnitudes seem quite large. War in the Congo alone has killed millions of people over the pa... (read more)

Is this forum looking to hire more people?

There is also a “startup” aspect to EA activity - it’s possible EA will be much more influential in the future, and in many cases that is the goal, so helping now can make that happen.

I feel like the net value to the world of an incremental Reddit user might be negative, even….

2
Ben_West
2y
We are looking to hire, thanks! I put a link to our open positions at the bottom of the post.

For one, I don’t see any intercom. (I’m on an iPhone).

For two, I wanted to report a bug that whenever writing a comment, the UI zooms in so that the comment box takes up the whole width. Then it never un-zooms.

Another bug, while writing a comment while zoomed in and scrolling left to right, the scroll bar appears in the middle of the text.

A third bug, when I get a notification that somebody has responded to my post, and view it using the drop down at the upper right, then try to re-use that menu, the X button is hidden, off the screen to the right. Seems like a similar mobile over-zoom thing.

If your interpretation of the thought experiment is that suffering cannot be mapped onto a single number, then the logical corollary is that it is meaningless to “minimize suffering”. Because any ordering you can place on the different possible amounts of suffering an organism experiences implies that they can be mapped onto a single number.

6
MichaelStJules
2y
I'm saying the amount of suffering is not just the output of some algorithm or something written in memory. I would define it functionally/behaviourally, if at all, although possibly at the level of internal behaviour, not external behaviour. But it would be more complex than your hypothesis makes it out to be.

Even a brief glance through posts indicates that there is relatively little discussion about global health issues like malaria nets, vitamin A deficiency, and parasitic worms, even though those are among the top EA priorities.

In some sense the idea of a separate self is an invention. Names are an invention - the idea that I can be represented as “Kevin” and I am different from other humans. The invention is so obvious nowadays that we take it for granted.

It isn’t unique to humans, though… at least parrots and dolphins also have sequences of sounds that they use to identify specific individuals. Maybe those species are much more “human-like” than we currently expect.

I wonder a lot where to draw the line for animal welfare. It’s hard to worry about planaria. But animals that have names, animals whose family calls to them by name… maybe that has something to do with where to draw the line.

To me this sort of extrapolation seems like a “reductio ad absurdum” that demonstrates that suffering is not the correct metric to minimize.

Here’s a thought experiment. Let’s say that all sentient beings were converted to algorithms, and suffering was a single number stored in memory. Various actions are chosen to minimize suffering. Now, let’s say you replaced everyone’s algorithm with a new one. In the new algorithm, whenever you would previously get suffering=x, you instead get suffering=x/2.

The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.

Have you done a great thing for the world, or is it a meaningless change of units?
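
To spell out why nothing changes, here is a minimal sketch of the thought experiment (my own toy model; the action names and suffering values are made up): if the algorithm's choices depend only on which option scores lower, a positive rescaling of the stored number is invisible to behavior.

```python
def choose(actions, suffering):
    """Pick whichever action has the smallest stored suffering number."""
    return min(actions, key=lambda a: suffering[a])

suffering = {"work": 5.0, "rest": 2.0, "exercise": 3.0}
halved = {a: s / 2 for a, s in suffering.items()}  # the x -> x/2 rewrite

actions = list(suffering)
# "Total global suffering" is cut in half, but no decision changes,
# because argmin is invariant under any positive monotone rescaling.
assert choose(actions, suffering) == choose(actions, halved)
print(choose(actions, suffering))  # "rest", both before and after
```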

5
Lukas_Gloor
2y
This probably doesn't apply to Pearce's qualia realist view, but it's possible to have a functionalist notion of suffering where eliminating suffering would change people's behavior.  For instance, I think of suffering as an experienced need to change something about one's current experience, something that by definition carries urgency to bring about change. If you get rid of that, it has behavioral consequences. If a person experiences pain asymbolia where they don't consider their "pain" bothersome in any way, I would no longer call it suffering. 
9
MichaelStJules
2y
I think it's extraordinarily unlikely suffering could just be this. Some discussion here.

Monotonic transformations can indeed solve the infinity issue. For example the sum of 1/n doesn’t converge, but the sum of 1/n^2 converges, even though x -> x^2 is monotonic.
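
A quick numerical check of the series claim (my own sketch):

```python
import math

# Partial sums: sum(1/n) grows without bound (~ln N), sum(1/n^2) -> pi^2/6.
for N in (10**2, 10**4, 10**6):
    harmonic = sum(1 / n for n in range(1, N + 1))
    squares = sum(1 / n**2 for n in range(1, N + 1))
    print(f"N={N:>7}: sum 1/n = {harmonic:7.3f}   sum 1/n^2 = {squares:.6f}")
print("pi^2/6 =", math.pi**2 / 6)  # limit of the convergent series
```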

You could discount utilons - say there is a "meta-utilon" which is a function of utilons, like maybe meta-utilons = log(utilons). And then you could maximize expected meta-utilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying "better for any non-decreasing meta-utilon function".

But you could also pick a single meta-utilon function and I believe the outcome would at least be consistent.

Really you might as well call the meta-utilons "utilons" though. They are just not necessarily additive.
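
As a toy illustration of how the choice of meta-utilon function changes decisions (the numbers are my own made-up example, not from the thread): a risky gamble can win on expected utilons while a safe one wins on expected log-meta-utilons.

```python
import math

gamble_a = [(0.5, 100.0), (0.5, 1.0)]  # risky: (probability, utilons) pairs
gamble_b = [(1.0, 40.0)]               # safe: 40 utilons for certain

def expected(gamble, f=lambda x: x):
    """Expected value of f(utilons) under the gamble's probabilities."""
    return sum(p * f(x) for p, x in gamble)

# Plain expected utilons prefer the risky gamble (50.5 vs 40.0)...
print(expected(gamble_a), expected(gamble_b))
# ...but expected log-meta-utilons prefer the safe one (~2.30 vs ~3.69).
print(expected(gamble_a, math.log), expected(gamble_b, math.log))
```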

2
Charles He
2y
A monotonic transformation like log doesn’t solve the infinity issue right? Time discounting (to get you comparisons between finite sums) doesn’t preserve the ordering over sequences. This makes me think you are thinking about something else?

In general, it’s a good idea to not let strangers touch your phone. Someone can easily run off with it, and worse, while it’s unlocked, take advantage of elevated access privileges.

I think you may be underestimating the value of giving blood. It seems that, according to the analysis here:

https://forum.effectivealtruism.org/posts/jqCCM3NvrtCYK3uaB/blood-donation-generally-not-that-effective-on-the-margin

A blood donation is still worth about 1/200 of a QALY. That’s still altruistic; it isn’t just warm fuzzies. If someone does not believe the EA community’s analyses of the top charities, we should still encourage them to do things like give blood.

Most of the value of giving blood is in fuzzies. You can buy a QALY from AMF for around $100, so that's $0.50, less than 0.1x US minimum wage if blood donation takes an hour.
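
Spelling out the arithmetic (a sketch using the figures above; the $7.25 federal minimum wage is my added assumption):

```python
qaly_per_donation = 1 / 200   # from the linked analysis
dollars_per_qaly = 100.0      # assumed AMF cost to buy one QALY
us_min_wage = 7.25            # assumed federal minimum wage, $/hour

value = qaly_per_donation * dollars_per_qaly  # AMF-equivalent dollars per donation
print(value)                   # 0.5
print(value / us_min_wage)     # ~0.07, i.e. well under 0.1x minimum wage
```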

If someone doesn't believe the valuation of a QALY it still feels wrong to encourage them to give blood for non-fuzzies reasons. I would encourage them to maximize their utility function, and I don't know what action does that without more context-- it might be thinking more about EA, donating to wildlife conservation, or doing any number of things with an altruistic theme.

1
Jonny Spicer
2y
Thanks for pointing that out, I didn't realise how effective blood donation was. I think my original point still stands, though, if "donating blood" is substituted with a different proxy for something that is sub-maximally effective but feels good.

I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. Like I disagree with the analysis while agreeing on the axioms. And as a result I am still not sure whether I would consider myself an “effective altruist” or not.

I believe by your definition, lethal autonomous weapon systems already exist and are widely in use by the US military. For example, the CIWS system will fire on targets like rapidly moving nearby ships without any human intervention.

https://en.wikipedia.org/wiki/Phalanx_CIWS

It's tricky because there is no clear line between "autonomous" and "not autonomous". Is a land mine autonomous because it decides to explode without human intervention? Well, land mines could have more and more advanced heuristics slowly built into them. At what point does it become au... (read more)

Hi Kevin,

Thank you for your comment and thanks for reading :)

The key question for us is not “what is autonomy?” — that’s bogged down the UN debates for years — but rather “what are the systemic risks of certain military AI applications, including a spectrum of autonomous capabilities?” I think many systems around today are better thought of as closer to “automated” than truly “autonomous,” as I mention in the report, but again, I think that binary distinctions like that are less salient than many people think. What we care about is the multi-dimensional pr... (read more)

Thank you for a well written post. The fact that there are 10 quintillion insects makes it hard to care about insect welfare. At some point, when deciding whether it is effective to improve insect welfare, we have to compare to the effectiveness of other interventions, like improving human welfare. How many insect lives are worth one human life?

This is just estimating, but if the answer is one billion or less, then I should care more about insect life than human life, which doesn’t seem right. If the answer is a quadrillion or more, it seems like any inter... (read more)
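
For concreteness, a back-of-envelope check of where that billion-to-one threshold comes from (my arithmetic, using the 10 quintillion figure from the post and a rough world population):

```python
insects = 1e19  # ~10 quintillion insects
humans = 8e9    # rough world population

# Ratio (insects per human life) at which total insect moral weight
# would exactly match total human moral weight:
breakeven = insects / humans
print(f"{breakeven:.2e}")  # ~1.25e+09, about a billion to one
# Any ratio under roughly a billion means insects dominate the moral ledger.
```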

This doesn’t seem like the ideal reasoning I would use.

On one hand, the fact that animal life is worth a lot (ratio is “one human life is worth less than a billion”) can’t be a reason to be skeptical by itself—you either determine this is true or it isn’t (which can be very hard admittedly).

If animal life’s “tradeoff ratio” to other life is too small, it is entirely possible it is too small to be an effective intervention. But it’s not based on feelings or a numerical cutoff but instead many factors of impact and effectiveness.

I'd trade at least 5 high-quality introductions like the one above for a single intro from the same distribution. 

Personally, when I'm recruiting for a role, I'm usually so hungry to get more leads that I'm happy to follow up with very weak references. I would take 5 high-quality introductions, I would take one super-high-quality introduction, I would like all of the above. Yeah, it's great to hire from people who have worked with a friend of yours before, but that will never be 100% of the good candidates.

This may very much depend on what sort of rol... (read more)

Excellent, sounds like you're on it. I do in fact use an iPhone. I should have made a more specific note about where I saw overlapping text earlier, I can't seem to find it again now. I'll use the message us link about any future minor UI bugs.

What's up EAers. I noticed that this website has some issues on mobile devices - the left bar links don't work, several places where text overlaps, tapping the search icon causes an inappropriate zoom - is there someone currently working on this where it would help if I filed a ticket or reported an issue?

2
JP Addison
2y
Don't worry about finding the perfect place (here is a fine place for now). You can message us about bugs, or post in the Feature Suggestion Thread for feature requests, so that others can vote on the ideas.

I'm guessing you use an iPhone? This is a longstanding issue that we really should have fixed; it used to be that you had to tap twice, though now it appears to have broken entirely. Thanks for the report.

I see the behavior, thanks.

Yes, this is completely correct and many people do not get the mathematics right.

One example to think of is “the odds the Earth gets hit by a huge asteroid in the date range 2000-3000”. Whatever the odds are, they will probably steadily, predictably update downwards as time passes. Every day that goes by, you learn a huge asteroid did not hit the earth that day.

Of course, it’s possible an asteroid does hit the Earth and you have to drastically update upwards! But the vast majority of the time, the update direction will be downwards.
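
To make the direction of the update concrete, here is a back-of-envelope Bayesian sketch (my own illustration, with a made-up 1% prior and the simplifying assumption that the impact date, if there is one, is uniform over the millennium):

```python
def updated(p, years_passed, horizon=1000):
    """P(hit still coming | no hit in the first years_passed years)."""
    survived = 1 - p * years_passed / horizon     # P(no hit so far)
    remaining = p * (1 - years_passed / horizon)  # P(hit falls in the rest)
    return remaining / survived

p = 0.01  # assumed 1% prior for the whole millennium
for t in (0, 100, 500, 900):
    print(f"after {t:>3} quiet years: {updated(p, t):.6f}")
# Small, steady downward updates, balanced by a rare huge upward update
# if a hit occurs -- so the expected posterior still equals the prior.
```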

Grifters are definitely a problem in large organizations. The tough thing is that many grifters don’t start out as grifters. They start out honest, working hard, doing their best. But over time, their projects don’t all succeed, and they discover they are still able to appear successful by shading the truth a bit. Little by little, the honest citizen can turn into a grifter.

Many times a grifter is not really malicious; they are just not quite good enough at their job.

Eventually there will be some EA groups or areas that are clearly “not working”. The EA movement will have to figure out how to expel these dysfunctional subgroups.