All of Habryka [Deactivated]'s Comments + Replies

So long and thanks for all the fish. 

I am deactivating my account.[1] My unfortunate best guess is that at this point there is little point in, and at least a bit of harm caused by, my commenting more on the EA Forum. I am sad to leave behind so much that I have helped build and create, and even sadder to see my own actions indirectly contribute to much harm.

I think many people on the forum are great, and at many points in time this forum was one of the best places for thinking and talking and learning about many of the world's most important top... (read more)

2
Vasco Grilo🔸
Thanks for all your efforts, Habryka.

It feels appropriate that this post has a lot of hearts and simultaneously disagree reacts. We will miss you, even (perhaps especially) those of us who often disagreed with you. 

I would love to reflect with you on the other side of the singularity. If we make it through alive, I think there's a decent chance that it will be in part thanks to your work.

6
OscarD🔸
fyi for anyone like me who doesn't have lots of the backstory here and doesn't want to read through Habryka's extensive corpus of EAF writings, here is Claude 3.7 Sonnet's summary based on the first page of comments Habryka links to.
5
James Herbert
This is a pity. My impression is that you added a lot of value, and the fact you're leaving is a signal we’ll have fewer people like you involved. It’s probably a trade-off, but I don’t know if it’s the right trade-off. Thanks for your contribution! 

Habryka, just wanted to say thank you for your contributions to the Forum. Overall I've appreciated them a lot! I'm happy that we'll continue to collaborate behind the scenes, at least because I think there's still plenty I can learn from you. I think we agree that running the Forum is a big responsibility, so I hope you feel free to share your honest thoughts with me.

I do think we disagree on some points. For example, you seem significantly more negative about CEA than I am (I'm probably biased because I work there, though I certainly don't think it's per... (read more)

To be clear, many of my links were to archive.is and archive.org and the like, and they still broke. I do agree I could have taken full offline copies, and the basic problem here seems surmountable (though it requires at least a small amount of web-development expertise and understanding).

(I think this level of brazenness is an exception, but the broader pattern has, I think, occurred many dozens of times. My best guess, though I know of no specific example, is that as a result of the FTX collapse many EA organizations changed their websites and requested that references be deleted from archives, in order to downplay their association with FTX.)

5
pseudonym
Which EA organizations do you know have made requests to delete references from archives?

Yes, many of my links over the years broke, and I haven't been able to get any working copy.

6
Jeff Kaufman 🔸
That sort of "it's hard to archive things reliably long-term" seems less relevant in the context of a review, where there's a pretty short time between sharing the doc with the charity and making the review public.

> Risk 1: Charities could alter, conceal, fabricate and/or destroy evidence to cover their tracks.

I do not recall this having happened with organisations aligned with effective altruism.

(FWIW, it happened with Leverage Research at multiple points in time, with active effort to remove various pieces of evidence from all available web archives. My best guess is it also happened with early CEA while I worked there, because many Leverage members worked at CEA at the time and they considered this relatively common practice. My best guess is you can find many other instances.)

2
Vasco Grilo🔸
Thanks for sharing, Habryka! If VettedCauses reviewed a random organisation recommended by Animal Charity Evaluators, and shared their review before publication, I guess there would only be a 20 % chance they would regret having shared the review specifically due to risk 1. What would be your guess?

At one point CEA released a doctored EAG photo with a "Leverage Research" sign edited to be bizarrely blank. (Archive page with doctored photo, original photo.) I assume this was an effort to bury their Leverage association after the fact.

4
Michael St Jules 🔸
Were they successful in getting evidence removed from web archives?
4
[comment deleted]

Now, consider this in the context of AI. Would the extinction of shumanity by AIs be much worse than the natural generational cycle of human replacement?

I think the answer to this is "yes", because your shared genetics and culture create much more robust pointers to your values than we are likely to get with AI. 

Additionally, even if that wasn't true, humans alive at present have obligations inherited from the past and relatedly obligations to the future. We have contracts and inheritance principles and various things that extend our moral circle of c... (read more)

Yeah, this. 

From my perspective "caring about anything but human values" doesn't make any sense. Of course, even more specifically, "caring about anything but my own values" also doesn't make sense, but inasmuch as you are talking to humans, and making arguments about what other humans should do, you have to ground that in their values, and so it makes sense to talk about "human values". 

The AIs will not share the pointer to these values in the way each individual does to their own values, and so we should a priori assume the AI will do worse things after we transfer all the power from the humans to the AIs. 

2
Matthew_Barnett
Let's define "shumanity" as the set of all humans who are currently alive. Under this definition, every living person today is a "shuman," but our future children may not be, since they do not yet exist. Now, let's define "humanity" as the set of all humans who could ever exist, including future generations. Under this broader definition, both we and our future children are part of humanity. If all currently living humans (shumanity) were to die, this would be a catastrophic loss from the perspective of shuman values—the values held by the people who are alive today. However, it would not necessarily be a catastrophic loss from the perspective of human values—the values of humanity as a whole, across time. This distinction is crucial. In the normal course of events, every generation eventually grows old, dies, and is replaced by the next. When this happens, shumanity, as defined, ceases to exist, and as such, shuman values are lost. However, humanity continues, carried forward by the new generation. Thus, human values are preserved, but not shuman values. Now, consider this in the context of AI. Would the extinction of shumanity by AIs be much worse than the natural generational cycle of human replacement? In my view, it is not obvious that being replaced by AIs would be much worse than being replaced by future generations of humans. Both scenarios involve the complete loss of the individual values held by currently living people, which is undeniably a major loss. To be very clear, I am not saying that it would be fine if everyone died. But in both cases, something new takes our place, continuing some form of value, mitigating part of the loss. This is the same perspective I apply to AI: its rise might not necessarily be far worse than the inevitable generational turnover of humans, which equally involves everyone dying (which I see as a bad thing!). Maybe "human values" would die in this scenario, but this would not necessarily entail the end of the broader conce

In the absence of meaningful evidence about the nature of AI civilization, what justification is there for assuming that it will have less moral value than human civilization—other than a speciesist bias?

You know these arguments! You have heard them hundreds of times. Humans care about many things. Sometimes we collapse that into caring about experience for simplicity. 

AIs will probably not care about the same things; as such, the universe will be worse by our lights if controlled by AI civilizations. We don't know exactly what those things are, but the only pointer to our values that we have is ourselves, and AIs will not share those pointers.

9
Matthew_Barnett
I think your response largely assumes a human-species-centered viewpoint, rather than engaging with my critique that is precisely aimed at re-evaluating this very point of view.

You say, "AIs will probably not care about the same things, so the universe will be worse by our lights if controlled by AI." But what are "our lights" and "our values" in this context? Are you referring to the values of me as an individual, the current generation of humans, or humanity as a broad, ongoing species-category? These are distinct—and often conflicting—sets of values, preferences, and priorities. It's possible, indeed probable, that I, personally, have preferences that differ fundamentally from the majority of humans. "My values" are not the same as "our values".

When you talk about whether an AI civilization is "better" or "worse," it's crucial to clarify what perspective we're measuring that from. If, from the outset, we assume that human values, or the survival of humanity-as-a-species, is the critical factor that determines whether an AI civilization is better or worse than our own, that effectively begs the question. It merely assumes what I aim to challenge.

From a more impartial standpoint, the mere fact that AI might not care about the exact same things humans do doesn't necessarily entail a decrease in total impartial moral value—unless we've already decided in advance that human values are inherently more important. (To make this point clearer, perhaps replace all mentions of "human values" with "North American values" in the standard arguments about these issues, and see if it makes these arguments sound like they privilege an arbitrary category of beings.)

While it's valid to personally value the continuation of the human species, or the preservation of human values, as a moral preference above other priorities, my point is simply that that's precisely the species-centric assumption I'm highlighting, rather than a distinct argument that undermines my observation

Your opening line seems to be trying to mimic the tone of mocking someone obnoxiously. Then you follow up with an exaggerated telling of events. Then another exaggerated comparison. 

-23
NobodyInteresting

Weird bug. But it only happens when someone votes and unvotes multiple times, and when you vote again the count resets. So this is unlikely to skew anything by much.

Given that I just got a notification for someone disagree-voting on this: 

This is definitely no longer the case in the current EA funding landscape. It used to be the case, but various changes in the memetic and political landscape have made funding gaps much stickier and much less anti-inductive (mostly because the cost-effectiveness prioritization of the big funders got a lot less comprehensive, so there is low-hanging fruit again).

I’m not making any claims about whether the thresholds above are sensible, or whether it was wise for them to be suggested when they were. I do think it seems clear with hindsight that some of them are unworkably low. But again, advocating that AI development be regulated at a certain level is not the same as predicting with certainty that it would be catastrophic not to. I often feel that taking action to mitigate low probabilities of very severe harm, otherwise known as "erring on the side of caution", somehow becomes a foreign concept in discussions of A

... (read more)

You're welcome, and makes sense. And yeah, I knew there was a period where ARC avoided getting OP funding for COI reasons, so I was extrapolating from that to not having received funding at all, but it does seem like OP had still funded ARC back in 2022. 

Thanks! This does seem helpful.

One random question/possible correction: 

https://x.com/KelseyTuoc/status/1872729223523385587

Is Kelsey an OpenPhil grantee or employee? Future Perfect never listed OpenPhil as one of its funders, so I am a bit surprised. Possibly Kelsey received some other OP grants, but I had a bit of a sense that Kelsey and Future Perfect more generally cared about having financial independence from OP.

Relatedly, is Eric Neyman an Open Phil grantee or employee? I thought ARC was not being funded by OP either. Again, maybe he is a grantee for o... (read more)

4
lukeprog
Oops, my colleague checked again and the Future Perfect inclusions (Kelsey and Sigal) are indeed a mistake; OP hasn't funded Future Perfect. Thanks for the correction. (Though see e.g. this similar critical tweet from OP grantee Matt Reardon.) Re: Eric Neyman: We've funded ARC before and would do so again depending on RFMF/etc.

(I am somewhat sympathetic to this request, but really, I don't think posts on the EA Forum should be that narrow in scope. Clearly modeling important society-wide dynamics is useful to the broader EA mission. To do the most good you need to model societies and how people coordinate and such. Those things seem to me much more useful than the marginal random fact about factory farming or malaria nets.)

2
Larks
I agree that not everything needs to supply random marginal facts about malaria. But at the same time I think concrete examples are useful to keep things grounded, and I think it's reasonable to adopt a policy of 'not relevant to EA until at least some evidence to the contrary is provided'. Apparently the OP does have some relevance in mind: I feel like it would have been good to spend like half the post on this! Maybe I am just being dumb but it is genuinely unclear to me what preference falsification the OP is worried about with animal welfare. Without this the post seems to be written as a long response to a question about sex that as far as I can tell no-one on the forum asked. 

I don't think this is true, or at least I think you are misrepresenting the tradeoffs and diversity here. There is some publication bias here because people are more precise in papers, but honestly, scientists are also not more precise in the discussion sections of their papers than many top LW posts are, especially when covering wider-ranging topics. 

Predictive coding papers use language incredibly imprecisely, analytic philosophy often uses words in really confusing and inconsistent ways, economists (especially macroeconomists) throw out various terms in... (read more)

AI systems modeling their own training process is a pretty big deal for modeling what AIs will end up caring about, and how well you can control them (cf. the latest Anthropic paper)

For most cognitive tasks, there does not seem to be a particularly fundamental threshold at human-level performance (the jury is still out on this one in many ways, but we are seeing more evidence for it on an ongoing basis as we reach superhuman performance on many measures)

Developing "contextual awareness" does not require some special grounding insight (i.e. training systems to be general purpose problem solvers naturally causes them to optimize themselves and their environment and become aware of their context, etc.). This was back in 2020, 2021, 2022 one of the recurring disagreements between me and many ML people.

(In general, the salaries I am willing to work for in EA go up with funding uncertainty, not down, because it means future funding is more likely to dry up and I would have to pay the high costs of a career transition, or self-fund for many years.)

You are right! I had mostly paid attention to the bullet points, which didn't extract the parts of the linked report that addressed my concerns, but you are right that it totally links to the same report that totally does!

Sure, I don't think it makes a difference whether the chicken grows to a bigger size in total or grows to a bigger size more quickly; both would establish a prior that you need fewer years of chicken-suffering for the same amount of meat, and as such that this would be good (barring other considerations).

6
Michael St Jules 🔸
FWIW, Molly's comment you linked to quoted and cited Welfare Footprint Project and basically addressed something like "grows to a bigger size more quickly":

No, those are two totally separate types of considerations? In one you are directly aiming to work against the goals of someone else in a zero-sum fashion; the other is just a normal prediction about what will actually happen?

You really should have very different norms for how you deal with adversarial considerations and how you deal with normal causal/environmental considerations. I don't care about calling them "vanilla" or not; I think we should generally have a high prior against arguments of the form "X is bad, Y is hurting X, therefore Y is good".

Thank you! This is the kind of analysis I was looking for.

Huh, yeah, seems like a loss to me. 

Correspondingly, while the OP does not engage in "literally lying" I think sentences like "In light of this ruling, we believe that farmers are breaking the law if they continue to keep these chickens." and "The judges have ruled in favour on our main argument - that the law says that animals should not be kept in the UK if it means they will suffer because of how they have been bred." strike me as highly misleading, or at least willfully ignorant, based on your explanation here.

Agreed, this post seems like it goes way against standard forum norms if this is correct.

- Plus #1: I assume that anything the animal industry doesn't like would increase costs for raising chickens. I'd correspondingly assume that we should want costs to be high (though it would be much better if it could be the government getting these funds, rather than just decreases in efficiency).

I think this feels like a very aggressive zero-sum mindset. I agree that sometimes you want to have an attitude like this, but I, at least at present, think that acting with the attitude of "let's just make animal industry as costly as possible" would understan... (read more)

6
Ozzie Gooen
I feel like that's pretty unfair.  You asked for a "rough fermi estimate of the trade-offs"; I gave you a list of potential trade-offs.  If we're willing to make decisions with logic like, "while genetically modifying unnaturally fast-growing chickens in factory farms would increase the pain of each one, perhaps the math works out so that there's less pain overall", I feel like adding considerations like, "this intervention will also make meat more expensive, which will reduce use" is a pretty vanilla consideration.

Wow, yeah, I was quite misled by the lead. Can anyone give a more independent assessment of what this actually means legally?

The Humane League (THL) filed a lawsuit against the UK Secretary of State for Environment, Food and Rural Affairs (the Defra Secretary) alleging that the Defra Secretary’s policy of permitting farmers to farm fast-growing chickens unlawfully violated paragraph 29 of Schedule 1 to the Welfare of Farmed Animals (England) Regulations 2007. 

Paragraph 29 of Schedule 1 to the Welfare of Farmed Animals (England) Regulations 2007 states the following: 

  • “Animals may only be kept for farming purposes if it can reasonably be expected, on the basis of their g
... (read more)

Does someone have a rough Fermi estimate on the tradeoffs here? On priors it seems like chickens bred to be bigger would overall cause less suffering because they replace more than one chicken that isn't bred to be as big, but I would expect those chickens to suffer more. I can imagine it going either way, but I guess my prior is that it was broadly good for each individual chicken to weigh more.

I am a bit worried the advocacy here is based more on a purity/environmentalist perspective where genetically modifying animals is bad, but I don't give that perspective m... (read more)

5
jcw
The chickens don't end up weighing more at the point that they go to slaughter - the faster growth rate is so that they get to the same slaughter weight in a shorter space of time, which uses less feed. Chickens with faster growth rates therefore aren't replacing more than one slower-growing chicken. The slower-growing breeds live for longer, which would be bad if it was extending the same pain intensity over a longer period of time, but this seems like it isn't what happens: https://welfarefootprint.org/broilers/ 

Welfare Footprint Project has analysis here, which they summarize:

Adoption of the Better Chicken Commitment, with use of a slower-growing breed reaching a slaughter weight of approximately 2.5 kg at 56 days (ADG = 45-46 g/day), is expected to prevent "at least" 33 [13 to 53] hours of Disabling pain, 79 [-99 to 260] hours of Hurtful and 25 [5 to 45] seconds of Excruciating pain for every bird affected by this intervention (only hours awake are considered). These figures correspond to a reduction of approximately 66%, 24% and 78%, respectively, in the time exp

... (read more)
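A minimal back-of-the-envelope sketch of the Fermi estimate requested above, using only the Welfare Footprint figures quoted by jcw and treating slaughter weight as roughly equal across breeds (so pain per kg of meat reduces to pain per bird). The baseline figures are back-derived from the quoted reductions and are illustrative, not Welfare Footprint's own published numbers.

```python
# Back-of-the-envelope sketch (illustrative only): implied pain per bird for a
# conventional fast-growing broiler vs. a slower-growing (BCC) breed, derived
# from the "prevented" amounts and percentage reductions quoted above. Since
# both breeds reach roughly the same slaughter weight (~2.5 kg), pain per kg
# of meat is proportional to pain per bird.
prevented = {"disabling_h": 33, "hurtful_h": 79, "excruciating_s": 25}
reduction = {"disabling_h": 0.66, "hurtful_h": 0.24, "excruciating_s": 0.78}

for kind, avoided in prevented.items():
    baseline = avoided / reduction[kind]   # implied figure for the conventional breed
    slower = baseline - avoided            # implied figure for the slower-growing breed
    print(f"{kind}: conventional ~ {baseline:.0f}, slower-growing ~ {slower:.0f}")

# Output (approx.): Disabling ~50 -> ~17 hours, Hurtful ~330 -> ~250 hours,
# Excruciating ~32 -> ~7 seconds per bird. Whether this nets out in favour of
# slower-growing breeds still depends on how the pain categories are weighted
# and on the wide error bars quoted above ([-99, 260] hours for Hurtful).
```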
6
Ozzie Gooen
(Obvious flag that I know very little about this specific industry)

Agreed that this seems like an important issue. Some quick takes:

Less immediately-obvious pluses/minuses to this sort of campaign:
- Plus #1: I assume that anything the animal industry doesn't like would increase costs for raising chickens. I'd correspondingly assume that we should want costs to be high (though it would be much better if it could be the government getting these funds, rather than just decreases in efficiency).
- Plus #2: It seems possible that companies have been selecting for growth instead of for well-being. Maybe, if they just can't select for growth, then selecting more for not-feeling-pain would be cheaper.
- Minus #1: Focusing on the term "Frankenchicken" could discourage other selective breeding or similar, which could be otherwise useful for very globally beneficial attributes, like pain mitigation.
- Ambiguous #1: This could help stop further development here. I assume that it's possible to later use selective breeding and similar to continue making larger / faster growing chickens.

I think I naively feel like the pluses outweigh the negatives. Maybe I'd give this an 80% chance, without doing much investigation. That said, I'd also imagine there might well be more effective measures with a much clearer trade-off. The question of "is this a net-positive thing" is arguably not nearly as important as "are there fairly-clearly better things to do."

Lastly, for all of that, I do want to just thank those helping animals like this. It's easy for me to argue things one way or the other, but I generally have serious respect for those working to change things, even if I'm not sure if their methods are optimal. I think it's easy to seem combative on this, but we're all on a similar team here.

In terms of a "rough fermi analysis", as I work in the field, I think the numeric part of this is less important at this stage than just laying out a bunch of the key considerations and st

I don't think anyone uses "valuable" in that way. Saying "the most valuable cars are owned by Jeff Bezos" doesn't mean that in-aggregate all of his cars are more valuable than other people's cars. It means that the individual cars that Jeff Bezos owns are more valuable than other cars.

I agree that this is what the post is about, but the title and this[1] sentence indeed do not mean that, under any straightforward interpretation I can think of. I think bad post titles are quite costly (cf. lots of fallout from "politics is the mindkiller" being misapplied over the years), and good post titles are quite valuable.

  1. ^

    "This points to an important conclusion: The most valuable dollars to aren't owned by us. They're owned by people who currently either don't donate at all, or who donate to charities that are orders of magnitude less effecti

... (read more)

The title and central claim of the post seem wrong, though my guess is you mean it poetically (but poetry that isn't true is I think worse, though IDK, it's fine sometimes, maybe it makes more sense to other people). 

Clearly the dollars you own are the most valuable. If you think someone else could do more with your dollars, you can just give them your dollars! This isn't guaranteed to be true (you might not know who would ex-ante best use dollars, but still think you could learn about that ex-post and regret not giving them your money after the oppo... (read more)

4
PabloAMC 🔸
I think there might be a confusion here. Your claim is that the dollars we own are more valuable per dollar. But the post is referring to the overall amount of dollars. E.g. Jeff Bezos's dollars might be more valuable than mine.

FWIW given the context of previous discussions on the EA Forum, I read the title as meaning something like "influencing other people's donation decisions is often more valuable than improving your own" when I saw it on the frontpage. 

I agree that all-things-considered they say that, but I am objecting to "one of the things to consider", and so IMO it makes sense to bracket that consideration when evaluating my claims here.

But I was first! I demand the moderators transfer all of the karma of Jeff's comment to mine :P 

Accolades for intellectual achievements traditionally go to the person who published them first.

8
Nathan Young
Sure, but surely we give it according to Shapley values? What if you had missed this? We should reward Jeff for that.

Clearly you believe that probabilities can be less than 1%, reliably. Your probability of being struck by lightning today is not "0% or maybe 1%", it's on the order of 0.001%. Your probability of winning the lottery is not "0% or 1%", it's ~0.0000001%. I am confident you deal with probabilities that have much less than 1% error all the time, and feel comfortable using them.

It doesn't make sense to think of humility as something absolute like "don't give highly specific probabilities". You frequently have justified belief of a probability being very highly spec... (read more)

3
Sjlver
This is a great point. Clearly you are right.

That said, the examples that you give are the kind of frequentist probabilities for which one can actually measure rates. This is quite different from the probability given in the survey, which presumably comes from an imperfect Bayesian model with imprecise inputs.

I also don't want to belabor the point... but I'm pretty sure my probability of being struck by lightning today is far from 0.001%. Given where I live and today's weather, it could be a few orders of magnitude lower. If I use your unadjusted probability (10 micromorts) and am willing to spend $25 to avert a micromort, I would conclude that I should invest $250 in lightning protection today... that seems the kind of wrong conclusion that my post warns about.

I think humility is useful in cases like the present survey question, when a specific low probability, derived from an imperfect model, can change the entire conclusion. There are many computations where the outcome is fairly robust to small absolute estimation errors (e.g., intervention (1) in the question). On the other hand, for computations that depend on a low probability with high sensitivity, we should be extra careful about that probability.
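A small illustrative sketch of the sensitivity point above: with a fixed willingness to pay per micromort, the "justified spend" conclusion scales linearly with the low probability fed into it, so an estimate that is off by a couple of orders of magnitude flips the practical conclusion. The $25/micromort figure comes from the comment above; the candidate probabilities are assumptions for illustration.

```python
# Illustrative sketch: how a "how much should I spend to avert this risk"
# conclusion scales with the low probability used. The $25/micromort figure is
# from the comment above; the probabilities below are assumed for illustration.
WTP_PER_MICROMORT = 25  # dollars to avert a one-in-a-million chance of death

def justified_spend(p_death: float) -> float:
    """Spend justified to fully avert a death risk of probability p_death."""
    return (p_death * 1e6) * WTP_PER_MICROMORT

for p in (1e-5, 1e-6, 1e-8):
    print(f"p = {p:.0e} -> justified spend ~ ${justified_spend(p):,.2f}")

# 1e-5 (10 micromorts) -> $250; 1e-8 -> $0.25. The conclusion is driven almost
# entirely by the low-probability estimate, which is the sensitivity concern
# raised above.
```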

You can sort by "oldest" and "newest" in the comment-sort order, and see that mine shows up earlier in the "oldest" order, and later in the "newest" order.

You can also right-click → inspect element on the time indicator:
 

I agree that this is an inference. I currently think the OP thinks that in the absence of frugality concerns this would be among the most cost-effective uses of money by Open Phil's standards, but I might be wrong. 

University group funding was historically considered extremely cost-effective when I talked to OP staff (beating out most other grants by a substantial margin). Possibly there was a big update here on cost-effectiveness excluding frugality-reputation concerns, but I currently think there hasn't been (but like, would update if someone from OP said otherwise, and then I would be interested in talking about that).

2
David T
They do specifically say that they consider other types of university funding to have greater cost-benefit (and I don't think it makes sense to exclude reputation concerns from cost-benefit analysis, particularly when reputation boost is a large part of the benefit being paid for in the first place). Presumably not paying stipends would leave more to go around. I agree that more detail would be welcome.
6
JP Addison🔸
I really appreciate that the comment section has rewarded you both precisely equally.

They both show up as 2:23 pm to me: is there a way to get second-level precision?

Copying over the rationale for publication here, for convenience: 

Rationale for Public Release

Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:

1. To prevent accidents and well-intentioned dev

... (read more)

IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public, is that it seems very useful to have at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail.  Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge.  This situation, although unfortunate, is probably justified by the nature of the risks!  But still, it makes it hard for anyone on the outside to tell how serious ... (read more)

I agree that things tend to get tricky and loopy around these kinds of reputation-considerations, but I think at least the approach I see you arguing for here is proving too much, and has a risk of collapsing into meaninglessness.

I think in the limit, if you treat all speech acts this way, you just end up having no grounding for communication. "Yes, it might be the case that the real principles of EA are X, but if I tell you instead they are X', then you will take better actions, so I am just going to claim they are X', as long as both X and X' include cos... (read more)

6
David T
I think it's fair enough to caution against purely performative frugality. But I'm not sure the OP even justifies the suggestion that the organizers actually are more cost effective (they concluded the difference between paid and unpaid organizers' individual contributions was "substantive, not enormous"; there's a difference between paid people doing more work than volunteers and it being more cost effective to pay...).

That's even more the case if you take into account that the primary role of an effective university organizer is attracting more people (or "low context observers") to become more altruistic, and this instance of the "weirdness" argument is essentially that paying students undercut the group's ability to appeal to people on altruistic grounds, even if individual paid staff put in more effort. And they were unusually well paid by campus standards for tasks almost every other student society uses volunteers for.[1] And that there's no evidence that the other ways CEA proposes spending the money instead are less effective.

  1. ^ One area we might agree is that I'm not sure if OpenPhil considered alternatives like making stipends needs-based or just a bit lower and more focused as a pragmatic alternative to just cancelling them altogether.

In survey work we’ve done of organizers we’ve funded, we’ve found that on average, stipend funding substantively increased organizers’ motivation, self-reported effectiveness, and hours spent on organizing work (and for some, made the difference between being able to organize and not organizing at all). The effect was not enormous, but it was substantive.
[...]
Overall, after weighing all of this evidence, we thought that the right move was to stick to funding group expenses and drop the stipends for individual organizers. One frame I used to think about thi

... (read more)

This is circular. The principle is only compromised if (OP believes) the change decreases EV — but obviously OP doesn't believe that; OP is acting in accordance with the do-what-you-believe-maximizes-EV-after-accounting-for-second-order-effects principle.

Maybe you think people should put zero weight on avoiding looking weird/slimy (beyond what you actually are) to low-context observers (e.g. college students learning about the EA club). You haven't argued that here. (And if that's true then OP made a normal mistake; it's not compromising principles.)

I donate more to Lightcone than my salary, so it doesn't really make any sense for me to receive a salary, since that just means I pay more in taxes. 

I of course donate to Lightcone because Lightcone doesn't have enough money. 

Lightspeed Grants and the S-Process paid $20k honorariums to 5 evaluators. In addition, running the round probably cost around 8-ish months of Lightcone staff time, with a substantial chunk of that being my own time, which is generally at a premium as the CEO (I would value it organizationally at ~$700k/yr on the margin, with increasing marginal costs, though to be clear, my actual salary is currently $0), and then it also had some large diffuse effects on organizational attention.

This makes me think it would be unsustainable for us to pick up running Lightspeed Grants rounds without something like ~$500k/yr of funding for it. We distributed around ~$10MM in the round we ran.
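A rough, assumption-laden re-derivation of that ballpark: only the $20k × 5 honorariums, the ~8 staff-months, the ~$700k/yr valuation of CEO time, and the ~$10MM distributed come from the comment above; the split of staff time and the blended rate for other staff are assumed purely for illustration.

```python
# Illustrative tally of what one Lightspeed Grants round might cost. Values
# marked "assumed" are not from the comment above; they only show how a
# ~$500k/yr operations figure can be composed.
honorariums = 5 * 20_000               # stated: $20k honorariums x 5 evaluators

staff_months = 8                       # stated: ~8 months of staff time
ceo_rate = 700_000 / 12                # stated: marginal value of CEO time, per month
other_rate = 150_000 / 12              # assumed: blended rate for other staff, per month
ceo_share = 0.6                        # assumed: share of staff-months that was CEO time

staff_cost = staff_months * (ceo_share * ceo_rate + (1 - ceo_share) * other_rate)
total = honorariums + staff_cost
distributed = 10_000_000               # stated: ~$10MM distributed in the round

print(f"Estimated cost of the round: ${total:,.0f}")
print(f"As a share of funds distributed: {total / distributed:.1%}")

# With these assumptions the round costs roughly $400-450k, a few percent of
# the ~$10MM distributed, which is in line with the ~$500k/yr figure above.
```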

3
JJ Hepburn
I’m hesitant to ask you about this, so feel free to pass. Can you say more about how it is that your current salary is $0? I think most people would be surprised you are not currently receiving a salary. I also assume that, as a not-for-profit founder, even when you have had a salary it has been lower than that of most or all of your team.

Some of my thoughts on Lightspeed Grants from what I remember: I don’t think it’s ever a good idea to name something after the key feature everyone else in the market is failing at. It leads to particularly high expectations and is really hard to get away from (e.g. OpenAI). The S-process seemed like a strange thing to include for something intended to be fast. As far as I know the S-process has never been done quickly.

You seem to be misunderstanding both Lightspeed Grants and the S-Process. The S-Process and Lightspeed Grants both feature speculation/ventur... (read more)

1
JJ Hepburn
What kind of numbers are we talking about needing here??? How much did it cost for the last Lightspeed round? How were the operations funded? How much distributed? $5mm? How much would you need for operations in order to run it again? If you had the operations funding would the grants funding still be a problem? How much would you need for operations just for the grant decisions but not the distribution of funds?

A non-trivial fraction of our most valuable grants require very short turn-around times, and more broadly, there is a huge amount of variance in how much time it takes to evaluate different kinds of applications. This makes a round model hard, since you both end up getting back much later than necessary to applications that were easy to evaluate, and have to reject applications that could be good but are difficult to evaluate. 

1
JJ Hepburn
I’m interested to know why we haven’t seen Lightspeed Grants again.

Some of my thoughts on Lightspeed Grants from what I remember:

I don’t think it’s ever a good idea to name something after the key feature everyone else in the market is failing at. It leads to particularly high expectations and is really hard to get away from (e.g. OpenAI).

The S-process seemed like a strange thing to include for something intended to be fast. As far as I know the S-process has never been done quickly. It also seems to be dependent on every parameter being in place in order to be run, so it can easily be held up by something small.

At the time of applications it had a clear date when decisions were expected; this is much better than everyone else’s vague expectations. It ended up taking much longer than expected for decisions but was still pretty quick overall. This was managed really well though, certainly much better than LTFF’s handling of a large volume of slow decisions last year. I judge the timing of decisions more on how late they are than on the total time, but this is a proxy for the real problem, which is uncertainty.

Lightspeed got the basics right when it comes to comms. I think it was really good overall, and I’d expect that the major issues not accounted for in the first round can be managed. I would really like to see this run again, or a variation of it.
1
JJ Hepburn
Yeah, I have personally appreciated the short turnaround when needed, and seen plenty of situations where people need funds quickly. I expect there are a lot of these tradeoffs.

I think these should be solved by different services, not by trying to solve all of the different types of funding together. LTFF ended up here for historical reasons, but now seems to be struggling to serve all of these markets while also crowding out any new funders in the space. If I set up a new fund it would have rounds, and I would just accept that this fund would not be able to support those kinds of applicants that need a fast turnaround or longer investigations.

A lot of the problems with the applicant experience with LTFF come down to uncertainty. Having one application form that tries to solve all of these problems means that the expectations can’t be specific enough to be meaningful.

I do actually have trouble finding a good place to link to. I'll try to dig one up in the next few days.

You cannot spend the money you obtain from a loan without losing the means to pay it back. You can do a tiny bit of borrowing against your future labor income, but the normal thing to do then is to declare personal bankruptcy, and so there is little assurance for the lender.

(This has been discussed many dozens of times on both the EA Forum and LessWrong. There exist no loan structures as far as I know that allow you to substantially benefit from predicting doom.)

4
Vasco Grilo🔸
Hello Habryka. Could you link to a good overview of why taking loans does not make sense even if one thinks there is a high risk of human extinction soon? Daniel Kokotajlo said: I should also clarify that I am open to bets about less extreme events. For example, global unemployment rate doubling or population dropping below 7 billion in the next few years.

Most concrete progress on worst-case AI risks — e.g. arguably the AISIs network, the draft GPAI code of practice for the EU AI Act, company RSPs, the chip and SME export controls, or some lines of technical safety work

My best guess (though very much not a confident guess) is that the aggregate of these efforts is net-negative, and I think that is correlated with that work having happened in backrooms, often in contexts where people were unable to talk about their honest motivations. It sure is really hard to tell, but I really want people to consider the hypoth... (read more)

That's their... headline result?  "We do not find, however, any evidence for a systematic link between the scale of refugee immigration (and neither the type of refugee accommodation or refugee sex ratios) and the risk of Germans to become victims of a crime in which refugees are suspects" (pg. 3), "refugee inflows do not exert a statistically significant effect on the crime rate" (pg. 21), "we found no impact on the overall likelihood of Germans to be victimized in a crime" (pg. 31), "our results hence do not support the view that Germans were victim

... (read more)
3
Lauren Gilbert
They say: "We found no impact on the overall likelihood of Germans to be victimized in a crime".  That is, refugees were not any likelier than Germans to commit crimes against Germans. I said: "In Germany, refugees were not particularly likely to commit crimes against Germans".  I have accurately reported their results.   Furthermore, in a post I am working on now, I will discuss why such charts - I look at one simply comparing the % of of a given ethnicity in prison to the % in a population - do not tell you all that much: "We might overestimate the rate of immigrant crime because: * Immigrant and native-born populations differ.  Crime is disproportionately committed by young men (under 30 years old).  If the immigrant population contains a lot of young men, and the native population skews older, one could end up with immigrants overrepresented in the prison system even if natives and immigrants are equally likely to commit crimes over their lifetime. * Racial or ethnic bias in the justice system could lead to more convictions for immigrants than the native-born, even if they are committing crimes at the same rate. * The crimes immigrants may have committed could be immigration offenses.  In the US, 86% of undocumented people charged with a crime are charged not with a violent or property crime, but with being in the country without permission. The native-born cannot commit immigration offenses in their home country, so mechanically, immigrants commit more immigration offenses than the native born. I’m also fairly certain this isn’t the kind of crime most people worry about when they worry about immigrants and crime. On the other hand, this graph might underestimate immigrant crime if: * Criminal immigrants are deported and thus don’t appear in the prison statistics. * Immigrants commit crimes against other immigrants.  There is data suggesting that immigrants are less likely to report crimes to law enforcement; this might allow criminals who target th