All of Henry Stanley 🔸's Comments + Replies

You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.

But I do think you're making an accusation of some epistemic impropriety that seems very different from FTX - getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think it's the same for AI timelines. Am I missing the point?

4
Yarrow Bouchard 🔸
The point of the FTX comparison is that, in the wake of the FTX collapse, many people in EA were eager to reflect on the collapse and try to see if there were any lessons for EA. In the wake of the AI bubble popping, people in EA could either choose to reflect in a similar way, or they could choose not to. The two situations are analogous insofar as they are both financial collapses and both could lead to soul-searching. They are disanalogous insofar as the AI bubble popping won’t affect EA funding and won’t associate EA in the public’s mind with financial crimes or a moral scandal.

It’s possible that, in the wake of the AI bubble popping, nobody in EA will try to learn anything. I fear that possibility. The comparisons I made to Ray Kurzweil and Elon Musk show that it is entirely possible to avoid learning anything, even when you ought to. So, EA could go multiple different ways with this, and I’m just saying that what I hope will happen is the sort of reflection that happened post-FTX.

If the AI bubble popping wouldn’t convince you that EA’s focus on near-term AGI has been a mistake — or at least convince you to start seriously reflecting on whether it has been or not — what evidence would convince you?

I might be missing the point, but I'm not sure I see the parallels with FTX.

With FTX, EA orgs and the movement more generally relied on the huge amount of funding that was coming down the pipe from FTX Foundation and SBF. When all that money suddenly vanished, a lot of orgs and orgs-to-be were left in the lurch, and the whole thing caused a huge amount of reputational damage.

With the AI bubble popping... I guess some money that would have been donated by e.g. Anthropic early employees disappears? But it's not clear that that money has been 'earmarked' in t... (read more)

1
Yarrow Bouchard 🔸
This is directly answered in the post. Edit: Can you explain why you don’t find what is said about this in the post satisfactory?

For effect, I would have pulled in a quote from the Reddit thread on akathisia rather than just linking to it.

Akathisia is an inner restlessness that is as far as I know the most extreme form of mental agitation known to man. This can drive the sufferer to suicide [...] My day today consisted of waking up and feeling like I was exploding from my skin, I had an urge that I needed to die to escape. [...] I screamed, hit myself, threw a few things and sobbed. I can’t get away from it. My family is the only reason why I’m alive. [...] My CNS is literally on fire and food is the last thing I want. My skin burns, my brain on fire. It’s all out survival.

if I had kept at it and pushed harder, maybe the project would have got further... but I don't think I actually wanted to be in that position either!

I think this is a problem with for-profit startups as well. Most of the time they fail. But sometimes they succeed (in the sense of “not failing” rather than breakout success, which is far rarer), and in that case you’re stuck with the thing, having to see it through to an exit.

I enjoyed this, and I miss bumping into you on the stairs at house parties!

Honestly, I kind of hated doing [GTP].

Are you willing to share why you hated it?

8
Michael_PJ
Yes, I guess I didn't go into this so much in the project post-mortem. But in short:

* The actual work that needed doing was not work that I was very good at or enjoyed. There was a lot of synoptic research and networking.
* There was a very high level of uncertainty about what we were doing. I think I deal fairly well with medium-level uncertainty, but much less well with high-level uncertainty.
* Much of this could have been overcome, but I think I fundamentally lacked the non-instrumental desire to become the kind of person who was good at the project; and the instrumental need wasn't motivating enough. In practice this manifested as a lack of grit - if I had kept at it and pushed harder, maybe the project would have got further... but I don't think I actually wanted to be in that position either!

people who have strong conviction in EA start with a radical critique of the status quo (e.g. a lot of things like cancer research or art or politics or volunteering with lonely seniors seem a lot less effective than GiveWell charities or the like, so we should scorn them), then see the rationales for the status quo (e.g. ultimately, society would start to fall apart if it tried to divert too many resources to GiveWell charities and the like by taking them away from everything else), and then come full circle back around to some less radical position

I agre... (read more)

1
Yarrow Bouchard 🔸
Yes, there is an important difference between doing something yourself or recommending it to others (when you don’t expect to persuade the whole world) vs. prescribing it for the whole world to follow universally. So, maybe it’s good to stop donating to anything but GiveWell-recommended charities and suggest the same to others, but maybe it would end up being bad if literally the whole world suddenly did this.

It’s also different to say that society’s priorities or allocation of resources, as a whole, should be shifted somewhat in one direction or another than to say, I don’t know, that developed countries should abolish their welfare systems and give the money to GiveWell.

The real-life example that sticks out in my mind is when someone who was involved in our university EA group talked about volunteering with seniors and someone else told her this was self-interested rather than altruistic. To me, that is just a deeply unwise and overzealous thing to say. (In that group, we also discussed the value of novels and funding for cancer research, and we had people arguing both sides of each issue.) My attitude on those things was that there was no cost to me in at least taking a cautious approach and trying to practice humility with these topics.

I wasn’t trying to tell people to devote every ounce of their lives to effective altruism (not that I could convince people even if I wanted to) but actually proposing something much more modest — switching whatever they donated to a GiveWell charity, maybe pledging to give 10% of their income, things of that nature.

If we were pitching the Against Malaria Foundation to a student group planning a fundraiser, then I would see my goal as persuading them to donate to AMF, and if they decided to donate to AMF, that would be success. If we did a presentation like a Giving Game, I didn’t mind trying to give people a little razzle-dazzle — that was the whole idea. But if someone came to our EA group alone, my attitude was more like:

One thing that occurs to me (as someone considering a career pivot) is the case where someone isn't committed to a specific cause area. Here you talk about someone who is essentially choosing between EtG for AI safety and doing AI safety work directly.

But in my case, I'm considering a pivot to AI safety from EtG - and currently I exclusively support animal welfare causes when I donate. Perhaps this is just irrational on my part. My thinking is that I'm unlikely, given my skillset, to be any good at doing direct work in the animal welfare space, but conside... (read more)

1
Dan MacKinlay
Yes, I sidestepped the details of relative valuation entirely here by collapsing the calculation of “impact” into “donation-equivalent dollars.” That move smuggles in multiple subjective factors — specifically, it incorporates a complex impact model and a private valuation of impacts. We’ll all have different “expected impacts,” insofar as anyone thinks in those terms, because we each have different models of what will happen in the counterfactual paths, not to mention differing valuations of those outcomes.

One major thing I took away from researching this is that I don’t think enough about substitutability when planning my career (“who else would do this?”), and I suppose part of that involves modelling comparative advantage. This holds even relative to my private risk/reward model. But thinking in these terms isn’t natural: my estimated impact in a cause area depends on how much difference I can make relative to others who might do it — which itself requires modelling the availability and willingness of others to do each thing.

Another, broader philosophical question worth unpacking is whether these impact areas are actually fungible. I lean toward the view that expected value reasoning makes sense at the margins (ultimately, I have a constrained budget of labour and capital, and I must make a concrete decision about how to spend it — so if Bayes didn’t exist, I’d be forced to invent him). But I don’t think it is a given that we can take these values seriously globally, even within an individual. Perhaps animal welfare and AI safety involve fundamentally different moral systems and valuations?

Still, substitutability matters at the margins. If you move into AI safety instead of animal welfare, ideally that would enable someone else — with a better match to animal welfare concerns — to move into AI safety despite their own preferences. That isn’t EtG per se, but it could still represent a globally welfare-improving trade in the “impact labour market.” If we tak
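One way to make the counterfactual-impact point concrete (a minimal sketch in my own notation, not Dan's): if $V_c(\cdot)$ is your own subjective valuation of outcomes in cause $c$, expressed in donation-equivalent dollars, then the impact of you taking role $r$ is roughly

$$\Delta_{\text{you}}(r, c) \;=\; V_c(\text{you in } r) \;-\; V_c(\text{next-best candidate in } r),$$

where the second term is exactly what makes substitutability hard: estimating it requires modelling who else is available and willing to do the job.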

I wonder why this hasn't attracted more upvotes - seems like a very interesting and high-effort post!

Spitballing - I guess there's such a lot of math here that many people (including me) won't be able to fully engage with the key claims of the post, which limits the surface area of people who are likely to find it interesting.

I note that when I play with the app, the headline numbers don't change for me when I change the parameters of the model. May be a bug?

7
Dan MacKinlay
Ah, that's why it's for draft amnesty week ;-) Somewhere inside this dense post there is a simpler one waiting to get out, but I figured this was worth posting. Right now it is in the form of ”my own calculations for myself” and it’s not that comprehensible, nor the model of good transdisciplinary communication to which I aspire. I'm trying to collaborate with a colleague of mine to write that shorter version. (And to improve the app. Thanks for the bug report @Henry Stanley 🔸!)

Not an answer to your question, but I also think most futures will be net negative for similar reasons, so it’s not just you!

I can start by giving my own answer to this (things I might do with my time):

  • travel widely and without real goals/timelines (e.g. Interrailing without too many pre-defined stops, just go where your heart takes you and if you like a place then stay longer)
    • perhaps directed a little towards where I have friends, where there are EA hubs that are likely to provide fruitful social interactions
  • do Pieter Levels' 12 startups in 12 months (maybe with Claude Code this could be 12 startups in 12 weeks, who knows) - spend some time building side projects for the sake o
... (read more)

Are you two talking about different Sams?

IIRC this was basically the thesis behind the EA Hotel (now CEEALAR) - a low-cost space for nascent EAs to do a bunch of thinking without having to worry too much about the basics.

More broadly this is also a benefit of academic tenure - being able to do your research without having to worry about finding a job (although of course funding is still the bottleneck and a big force in determining where research effort is directed).

Surely both things can be true at once - that it’s been historically very useful and also a shame that it’s available to so few?

4
Larks
It could be the case - but I'm not aware of much evidence to support this. Over the last hundred years we have seen a dramatic increase in the number of people on sinecures and a collapse in the selectiveness - e.g. the rise of state pensions, unemployment insurance, disability insurance. There are a few successes (JK Rowling credits state benefits with allowing her to write Harry Potter, for example) but clearly a much lower rate than under the prior model.

It’s not the ideal movement (i.e. not what we’d design from scratch), but it’s the closest we’ve got

Interested to hear what such a movement would look like if you were building it from scratch.

Probably (going just by Amazon price differences - I haven't looked elsewhere). The 6200 is £18, plus £11 for one set of filters; the 4251 is £20. Maybe it's a false economy - I was just thinking about cost savings if you wanted to buy a handful of masks for family.

What do we think about maintenance-free masks (they're like the half-face respirators but have single-use filters)? Seems better than using N95s/having nothing, but worse than having swappable filters?

(The cost of mask + swappable filters seems much higher than the maintenance-free mask, maybe 2x judging by the cost on Amazon UK)

6
Jason
One downside is that you may end up with a suboptimal filter. The one you linked, for instance, is P2 (~N95 in US) rather than P3 (~N100 or P100) -- so less protective against particulates. It does add some protection against organic vapors ("A1") -- but I don't think that would add anything in a pandemic scenario and likely reduces breathability. 3M does make some P3-rated models in that series, but I bet the cost is higher and breathability worse (because they also incorporate ratings against other gases and/or a higher rating against organic vapors).
4
Jeff Kaufman 🔸
This is probably a regional thing: I don't see the 3M 4251 or other disposable respirators for sale in the US. My guess is the cost difference you're seeing is due to comparing a US-market mask (6200) with a UK-market one (4251) on UK Amazon?

Looks like the amendment passed, sadly:

Lawmakers at the European Parliament voted by 355 to 247 in favour of an amendment to a regulation designed to give farmers a stronger negotiating position so that powerful companies in the food supply chain do not impose unfavourable conditions.

The text of the final regulation will follow negotiations between representatives of the Parliament, EU governments and the Commission, with the Parliament backing a ban of terms such as "veggie-burger" or "vegan sausage".

You can also do both to some extent - when people query it you can say that you're vegan but that the impact of doing so is far less than e.g. one's own personal giving to animal orgs.

I guess it's an interesting position you're in - you might personally want to be strictly vegan, but also in some ways the whole point of FarmKind is that you don't need to do that/doing that doesn't have all that large an impact.

Which also puts you in a bit of a bind, because as you say there are animal advocates who will see not being vegan as a mark of unseriousness.

Getting FarmKind featured by Sam Harris would be a real coup.

Tempted to write a response post to this (which at the very least collates the responses in the comments re the various weak evidence it cites), especially given how much positive traction it's got on LessWrong. A worthwhile use of time?

Ferrous sulphate is also common but a bit nauseating and poorly absorbed in any case. Ferrous bisglycinate is also found branded as “gentle iron”.

For those very deficient in iron, an iron infusion will give you ~two years’ worth of iron in one go - skipping all the issues with oral bioavailability. You will need to test your iron levels first to avoid iron overload.

I write a bit about iron supplementation in my guide to treating restless leg syndrome (RLS) for which iron deficiency is a common cause: https://henryaj.substack.com/p/how-to-treat-restless-legs-syndrome

4
Jason
 One's view of infusion may depend on what formulations are available and at what cost. IIRC, the older, cheaper ones are more likely to cause serious reactions although the risk is relatively low. E.g., https://www.rutgers.edu/news/risk-severe-allergic-reaction-higher-two-intravenous-iron-boosting-products

Seems like a reasonable distinction - but also not sure how many people move to an EA hub expressly because it binds their future self to do EA work/be in said hub long-term?

I agree that removing the 10% of animal products from your diet that causes the least suffering is not that important, and otherwise clear-headed EAs treat it like a very big deal in a way they don’t treat giving 10% less to charity as a very big deal. ... it’s not clear to me whether the signaling value of being 100% vegan and strict about it ... is positive

Given this, why are you vegan?

(I'm also ~vegan but wrestle with the relative importance of it given how difficult it can be; the signalling value to others is one of the reasons I think it's good.)

Good question! I think (a) having to think about which is the 10% and “should I eat this” every meal uses too much bandwidth. I find a simple rule easier overall. It’s kind of like how I don’t calculate the consequences of my actions at every decision even though I’m consequentialist. I rely on heuristics instead. (b) I found it really hard to get to my current diet. It took me many years. And I think that personally I’ll find it hard to re-introduce 10% of the animal products without being tempted and it becoming 50%. (c) I think the things I say about veganism to other vegans / animal people are more credible when I’m vegan [as I’m clearly committed to the cause and not making excuses for myself].

your future self is a separate person. If we imagine the argument targeting any other person, it's horrible - it states that you should lock them into some state that ensures they're forced to serve your (current) interests

Surely this proves too much? Any decision with long-term consequences is going to bind your future self. Having kids forces your future self onto a specific path (parenthood) just as much as relocating to an EA hub does.

1
Rudstead
Well, between relocating and having kids, one of those decisions is far more irreversible so should be more carefully made. It's one of those rare one-way doors, and you won't pass through many of those over your life.
4
Arepo
I guess in general any decision binds all future people in your lightcone to some counterfactual set of consequences. But it still seems practically useful in interpersonal interactions to distinguish a) between those that deliberately restrict their action set/those that just provide them in expectation with a different action set of ~the same size, and b) between those motivated by indifference/those motivated specifically by an authoritarian desire to make their values more consistent with ours.

EU welfare level 0 (organic)

Presumably if the eggs aren't sexed in ovo then the male chicks are getting ground up/gassed?

3
Jan Wehner🔸
Thanks for the pointer, Henry! It motivated me to look into culling more and I just wanted to share some EU-specific facts I found: A hen produces ~350 eggs, so consuming one egg is ~1/350th of culling a male chicken. 28% of chickens in Europe have in-ovo sexing, with Germany at ~80%. The numbers are lower for organic eggs because for some reason in-ovo sexing was forbidden for organic eggs until this year (stupid much???). Overall, I find it difficult to weigh male-chicken culling morally. Do they have strong conscious experience at that time? How much suffering is involved in their deaths?
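A rough worked version of that figure, under the assumption (implicit above, not stated) that each laying hen corresponds to roughly one culled male chick, and that in-ovo sexing eliminates those culls:

$$\frac{1 \text{ cull}}{350 \text{ eggs}} \approx 0.0029 \text{ culls per egg}, \qquad \frac{1 - 0.28}{350} \approx 0.0021 \text{ culls per egg (EU average)}$$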

(A potential lesson from Wytham Abbey: You don't need to buy a fancy church building, you just need to befriend the people who currently own it!)

On that note - seems like an enormous waste that Wytham Abbey sits empty when it could be used (presumably at very little marginal cost?) while EVF works on selling it.

2
Kestrel🔸
While I cannot hope to comment on the specifics of EVF's situation, I assume there are aspects of the selling process complicating that. Being a good landlord of a fancy building does require quite a lot of outreach to potential tenants, not all of whom will be your favourite tenants, and it is quite easy to fall into the trap of thinking that people who aren't your favourite people shouldn't come into your fancy building because they might mess up your fancy floor. This is of course a horrible trap, because fancy buildings fall into disrepair from lack of use all the time and from too much use almost never. I spend a nontrivial amount of time in the upkeep-related decision-making circles for fancy buildings and not once have I ever heard the sentence "we have too many bookings and it's hurting our finances". (Despite hearing lots of worries about the ways this could, hypothetically, somehow be the case.)

I’m going to bow out - wasn’t my intention to try to “silence” anybody and I’m not quite sure how we got there!

I'm still skeptical of using 'obviousness'/'plausibility' as evidence of a theory being correct - as a mental move it risks proving too much. Multiple theories might have equally obvious implications. Plenty of views we now accept would once have seemed deeply un-obvious.

You have your intuitions and I have mine - we can each say they're obvious to us and it gets us no further, surely? Perhaps I'm being dense.

In Don't Valorize The Void you say:

Omelas is a very good place, and it's deeply irrational to condemn it. We can demonstrate this by noti

... (read more)
3
Richard Y Chappell🔸
This is bad reasoning. People vary radically in their ability to recognize irrationality (of various sorts). In the same way that we shouldn't be surprised if a popular story involves mathematical assumptions that are obviously incoherent to a mathematician, we shouldn't be surprised if a popular story involves normative assumptions that others can recognize as obviously wrong. (Consider how Gone with the Wind glorifies Confederate slavery, etc.) It's a basic and undeniable fact of life that people are swayed by bad reasoning all the time (e.g. when it is emotionally compelling, some interests are initially more salient to us than others, etc.).

Correct; you are not my target audience. I'm responding here because you seemed to think that there was something wrong with my post because it took for granted something that you happen not to accept. I'm trying to explain why that's an absurd standard. Plenty of others could find what I wrote both accurate and illuminating. It doesn't have to convince you (or any other particular individual) in order to be epistemically valuable to the broader community.

If you find that a post starts from philosophical assumptions that you reject, I think the reasonable options available to you are: (1) Engage in a first-order dispute, explaining why you think different assumptions are more likely to be true; or (2) Ignore it and move on. I do not think it is reasonable to engage in silencing procedural criticism, claiming that nobody should post things (including claims about what they take to be obvious) that you happen to disagree with.

[Update: struck-through a word that was somewhat too strong. But "not the sort of thing I usually expect to find on the forum" implicates more than just "I happen to disagree with this," and something closer to "you should not have written this."]

This got added as a comment on the original Substack article; think it's worth reading in this context:

https://expandingcircle.substack.com/p/the-dark-side-of-pet-ownership

Not what I was saying. More like, it’s a weak argument to merely say “my position generates a sensible-sounding conclusion and thus is more likely to be true”, and it would surprise me if e.g. a highly-upvoted EA Forum post used this kind of circular reasoning. Or is that what you’re defending?

I suppose I agree that we’re not obliged to give every crackpot view equal airtime - I just disagree that “pets have net negative lives” is such a view.

9
Richard Y Chappell🔸
To be clear: the view I argued against was not "pets have net negative lives," but rather, "pets ought not to exist even if they have net positive lives, because we violate their rights by owning/controlling them." (Beneficentrism makes no empirical claims about whether pets have positive or negative lives on net, so it would make no sense to interpret me as suggesting that it supports any such empirical claim.)

It's not "circular reasoning" to note that plausible implications are a count in favor of a theory. That's normal philosophical reasoning - reflective equilibrium. (Though we can distinguish "sensible-sounding" from actually sensible. Not everything that sounds sensible at first glance will prove to be so on further reflection. But you'd need to provide some argument to undermine the claim; it isn't inherently objectionable to pass judgment on what is or isn't sensible, so objecting to that argumentative structure is really odd.)

I think the “pathology” comment is probably a norm violation. The “sensible” comment feels more like circular reasoning I guess? (Or maybe it doesn’t feel obvious to me, and perhaps therefore it irks me more than it does others.)

3
Richard Y Chappell🔸
I think it's very strange to say that a premise that doesn't feel obvious to you "is not the sort of thing [you] usually expect to find on the forum." (Especially when the premise in question would seem obvious common sense to, like, 99% of people.)

If an analogy helps, imagine a post where someone points out that commonsense requires us to reject SBF-style "double or nothing" existence gambles, and that this is a good reason to like some particular anti-fanatical decision theory. One may of course disagree with the reasoning, but I think it would be very strange for a bullet-biting Benthamite to object that this invocation of common sense was "not the sort of thing I usually expect to find on the forum." (If true, that would suggest that their views were not being challenged enough!)

(I also don't think it would be a norm violation to, say, argue that naive instrumentalism is a kind of "philosophical pathology" that people should try to build up some memetic resistance against. Or if it is, I'd want to question that norm. It's important to be able to honestly discuss when we think philosophical views are deeply harmful, and while one generally wants to encourage "generous" engagement with alternative views, an indiscriminate demand for universal generosity would make it impossible to frankly discuss the exceptions. We should be respectful to individual interlocutors, but it's just not true that every view warrants respect. An important part of the open exchange of ideas is openness to the question of which views are, and which are not, respectable.)
7
JoA🔸
To be precise, I didn't say the post committed any norm violation (and Henry Stanley didn't either); I made the vaguer claim that it doesn't fit the standards of discussion that are often seen on the EA Forum (a "generous" approach, scout mindset).

Regarding the latter: calling a philosophical position "a pathology" with no further justification is not the sort of thing I usually expect to find on the forum

Agreed; same for the reference to the position here being strong because "it straightforwardly verifies sensible views on the topic".

5
Richard Y Chappell🔸
You think it's a norm violation for me to say that it's "sensible" to allow happy pets to exist? Or, more abstractly, that it's good for a theory to have sensible implications?

Although they're presented in adjacent sentences, this:

I’d say that domesticated life seems both (i) clearly good overall, and (ii) the best form of life that’s realistically available for many non-human animals.

seems distinct from:

(I know I’d much rather be reincarnated as a well-cared-for companion animal than as a starving, parasite-ridden stray. Yeah, even at the cost of a minute spent tied to a lamppost!)

I would also prefer to be a companion animal over being a stray – but I would probably prefer not to exist than exist as a companion animal.

N... (read more)

I tend to agree; better to be explicit especially as the information is public knowledge anyway.

It refers to this: https://forum.effectivealtruism.org/posts/HqKnreqC3EFF9YcEs/

Interesting that you chose not to name the org in question - I guess you wanted to focus on the meta-level principle rather than this specific case.

4
Chris Leong
Maybe I should have. I honestly don't know. I didn't think deeply about it.

Interesting stuff but I think this is a bit too technical/in the weeds for the average EA forum reader - and it's not super clear what the EA angle is here.

most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here

Exactly. Daniela and the senior leadership at one of the frontier AI labs are not the same as someone's random office colleague. There's a clear public interest angle here in terms of understanding the political and social affiliations of powerful and influential people - which is simply absent in the case you describe.

From an animal welfarist perspective you could even have the recipe contain a message about how making chicken soup is unethical and should not be attempted.

Oddly I used to work at Pivotal[1] - have very fond memories of the London office with its full breakfast every morning...

Has since been acquired by VMware and gradually killed, and then VMware was acquired again by Broadcom who have really really killed it

  1. ^

    There's almost no mention of it online now as the brand has been killed off

To what extent do you think this fits the typical EA brief of 'important, neglected, tractable'? Even if we think that supporting sturgeon populations is intrinsically valuable - given that the fall in sturgeon populations is caused by overfishing for caviar, isn't it more obvious to just... not consume it?

You've mentioned your experience with burnout in a previous post - I wondered if you were willing to share more about that, and how it influenced your approach to EtG if at all.

I sometimes think about whether we have, or should have, language for a mental-health equivalent of second-impact syndrome. At the time I burned out I would say I was dealing with four ~independent situations or circumstances that most people would recognise as challenging, but my attitude to each one was 'this is fine, I can handle this'. Taken one at a time that was probably true; all at once it was demonstrably false.

Somehow I needed to notice that I was already dealing with one or two challenging situations and strongly pivot to a defensive posture to... (read more)

Very impressive - I don't think I have the stomach (so to speak) to put myself through this kind of suffering. Thanks for doing something so selfless and unpleasant for the benefit of anonymous others ❤️

Hooray - this is awesome work. Fight the good fight.

I donated to THL last year because of this case; I was advised by Founders Pledge that without funding the appeal might fall through and was keen for that not to happen. I wonder what other time-sensitive efforts in this space have room for more funding?

6
Molly Archer-Zeff
We are extremely grateful for your donation, Henry - our supporters make this work possible!

In response to your question about room for funding, we’re currently facing a funding gap for this year, and donations are vital for allowing us to continue our work on our priorities over the next 3 years. Our key priorities are:

Priority 1) We want to see tangible changes in chicken farming practices, with companies making new commitments, increased transparency, and a shift from standard factory farming of chickens towards the Better Chicken Commitment. We aim to shift even more of the UK market share towards the Better Chicken Commitment standards - from approximately 28% in 2023 to 40% in 2027.

Priority 2) We will make sure that companies who pledged to go cage-free by 2025 are following through on this promise. We aim to ensure that by 2027, 90% of hens in the UK are free from cages, ideally in free-range and organic systems.

Priority 3) Our goal is to ensure the UK and Scottish governments have incorporated stunning for farmed fishes into legislation by 2027, and to build towards an impactful corporate meat-reduction campaign by 2030.

Thanks for all the hard work that went into building/rebuilding/maintaining EA Hub!

It's always sad to see old projects get shuttered, especially ones that were a labour of love, so kudos on recognising that it's the right time to do this.

Not a wholly-unserious suggestion. SWP could do a tie-in with the artist creating these fun knock-offs, capitalise on Swift madness, rehabilitate shrimp as cute in the process.
