Thanks for sharing and great work, I'm inspired! I'm starting a new role at a large company in a few weeks after working at smaller organisations/academia for a while, and I'm excited to explore what's possible once I settle in.
I did a limited version of this 10 years ago at my first full-time job at a large Australian company. A few colleagues came to a giving game I co-organised with a local EA chapter. I spoke to the company's philanthropic giving lead - I didn't make any headway, and found out that the company's corporate giving was based predominantly on supporting the communities it operated in (I was a bit naive).
I'm really excited about this! I'll be watching it closely, because starting something similar here in Australia could be interesting.
My experience working in policy has been that it can either be surprisingly tractable or surprisingly intractable. Achieving change in energy policy in Australia has been surprisingly easy, and achieving change in farmed animal policy in Australia has been surprisingly hard.
I'm not sure yet which of the two would be most analogous to wild animal welfare. Farmed animal policy has strong entrenched interests, but perhaps wild animal welfare doesn't, because many people don't care much about the issue one way or the other. It could be easy to get some quick wins.
A lot of people have been talking about data centres in space in the last few weeks. Andrew McCalip built a model to see what it would take for space compute to become cheaper than terrestrial compute.
This quote stood out:
...we should be actively goading more billionaires into spending on irrational, high-variance projects that might actually advance civilization. I feel genuine secondhand embarrassment watching people torch their fortunes on yachts and status cosplay. No one cares about your Loro Piana. If you've built an empire, the best possible use of it is to bur
Thanks for writing about this! I've thought about this as well, but there are a couple of reasons I haven't done it yet. Primarily, I've been thinking more lately about making sure my time is appropriately valued. I'm still fairly early-to-mid career, and as much as it shouldn't matter, taking a salary reduction now probably means reduced earnings potential in the future. This obviously matters less if you plan on staying with highly impactful non-profits for the foreseeable future, or if you're later in your career, but I think this is worth thinking abou...
As someone who is not an AI safety researcher, I've always had trouble knowing where to donate if I wanted to reduce x-risk specifically from AI. I think I would have donated a much larger share of my donations to AI safety over the past 10 years if something like an AI Safety Metacharity had existed. The Nuclear Threat Initiative tends to be my go-to for x-risk donations, but I'm more worried about AI specifically lately. I'm open to being pitched on where to give for AI safety.
Regarding the model, I think it's good to flesh things out like this, so thank you fo...
Applying remote sensing to fish welfare is a neat idea! I've got a few thoughts.
I'm surprised that temperature had no/low correlation with the remote sensing data. My understanding is that using infrared radiation to measure water surface temperature is quite robust. The skin depth of these techniques is quite small, e.g., measuring the temperature in the top 10 μm. Do you have a sense of the temperature profile with respect to depth for these ponds? Perhaps you were measuring the temperature below the surface, and the surface temperature as predicted by...
Point 4, Be cautious and intentional about mission creep, makes me think of environmental- and animal-focused political parties such as the Greens and the Animal Justice Party in Australia, and the Party for the Animals in the Netherlands. The first formed as an environmental party, and the latter two formed as animal protection parties.
All three of these have experienced a lot of mission creep since then (Animal Justice Party to a lesser extent than the other two). The prevailing wisdom from many is that this is a good thing. A serious political part...
Thanks for writing this! I had one thought regarding how relevant saying no to some of the technologies you listed is to AGI.
In the case of nuclear weapons programs, the use of fossil fuels, CFCs, and GMOs, we actively used these technologies before we said no (fossil fuels and GMOs we still use despite the 'no', and nuclear weapons we have and could use at a moment's notice). With AGI, once we start using it, it might be too late. Geo-engineering experiments are the most applicable of these, as we actually did say no before any (much?) testing was undertaken.
I supplement iron and vitamin C, as my iron is currently on the lower end of normal (after a few years of being vegan it was too high, go figure).
I tried creatine for a few months but didn't notice much difference in the gym or while rock climbing.
I drink a lot of B12 fortified soy milk which seems to cover that.
I have about 30g of protein powder a day with a good range of different amino acids to help hit 140g a day.
I have a multivitamin every few days.
I have iodine fortified salt that I cook with sometimes.
I've thought about supplementing omega 3 or eating more omega 3 rich foods but never got around to it.
8 years vegan for reference.
I strongly agree that current LLMs don't seem to pose a risk of a global catastrophe, but I'm worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs other than generated text. Even if such an assistant can only make bookings, send emails, etc., I feel like things could get concerning very fast.
Is there an argument for having AI fail spectacularly in a small way which raises enough global concern to slow progress/increase safety work? I'm envisioning something like an LLM virtual assistant which leads to a lot...
This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself.
Thanks for the great question. I'd like to see more attempts to get legislation passed to lock in small victories. The Sioux Falls slaughterhouse ban almost passing gives me optimism for this. Although it seemed to be more for NIMBY reasons than for animal rights reasons, in some ways that doesn't matter.
I'm also interested in efforts to maintain the lower levels of speciesism we see in children into their adult lives, and to understand what exactly drives that so we can incorporate it into outreach attempts targeted at adults. Our recent interview w...
People more involved with x-risk modelling (and better at math) than I am could say better whether this improves on existing tools for x-risk modelling, but I like it! I hadn't heard the absorbing state terminology before; that was interesting. Reading that, my mind goes to option value, or the lack thereof, but that might not be a perfect analogy.
Regarding x-risks requiring a memory component, can you design Markov chains to have the memory incorporated?
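On that question, one standard trick is state augmentation: a second-order chain over a set of states becomes a first-order (memoryless) chain over pairs of states, so the Markov machinery still applies. A minimal sketch in Python, where the state names and transition probabilities are purely illustrative assumptions of mine, not figures from the post:

```python
import itertools
import random

# Memory via state augmentation: a second-order process over
# {"safe", "catastrophe"} becomes a first-order chain over pairs
# (previous state, current state). All probabilities are made up.

states = ["safe", "catastrophe"]

def second_order_transition(prev, curr):
    """P(next = 'catastrophe' | prev, curr) - illustrative numbers only."""
    if curr == "catastrophe":
        # A recent catastrophe raises the chance of another (the memory effect).
        return 0.30 if prev == "catastrophe" else 0.10
    return 0.02 if prev == "catastrophe" else 0.01

# Build the first-order transition matrix over augmented states (prev, curr).
augmented = list(itertools.product(states, states))
P = {}
for (prev, curr) in augmented:
    p_cat = second_order_transition(prev, curr)
    P[(prev, curr)] = {(curr, "catastrophe"): p_cat, (curr, "safe"): 1 - p_cat}

# Simulate: the augmented chain is memoryless even though the original isn't.
random.seed(0)
state = ("safe", "safe")
for _ in range(10):
    state = random.choices(list(P[state]), weights=list(P[state].values()))[0]
print(state)
```

The same idea extends to longer memories by augmenting with longer histories, at the cost of an exponentially larger state space.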
Some possible cases where memory might be useful (without thinking about it too much) might be:
Fair enough! I probably wasn't clear - what I had in mind was one country detecting an asteroid first, then deflecting it into Earth before any other country/'the global community' detects it. Just recently we detected a 1.5 km near-Earth object with an orbit that intersects Earth's. The scenario I had in mind was that one country detects such an object (though probably a smaller one, ~50 m) first, then deflects it.
We detect ~50 m asteroids as they make their final approach to Earth all the time, so detecting one first by chance could be a strategic advantage.
I take your other points, though.
"(b) Secondly, while the great powers may see military use for smaller scale orbital bombardment weapons (i.e. ones capable of causing sub-global or Tunguska-like asteroid events), these are only as destructive as nuclear weapons and similarly cannot be used without risking nuclear retaliation."
I don't think this is necessarily right. First, an asteroid impact can more easily be made to seem like a natural event, making it less likely to result in mutually assured destruction. Also, just because we can't think of a reason for a nation to use an asteroid strike, do...
Cost is one factor, but nuclear also has other advantages, such as land use, the amount of raw material required (to make the renewables and the lithium etc. for battery storage), and benefits for the power grid.
It's nice that renewables are getting cheaper, and I'd definitely like to see more renewables in the mix, but my ideal long-term scenario is a mix of nuclear, renewables, and battery storage. I'm weakly open to a small amount of gas being used for power generation in the long term in some cases.
Hm, good to know and fair point! I wonder if we could test the effect of extra funding, over what's needed to run a passable campaign, by investing, say, $5,000 in online ads etc. in a particular electorate - but even that is hard to compare across electorates given the number of factors involved. If anyone else has ideas for measuring the impact of extra funding, I'd love to hear them!
Seeking grants from EA grantmakers is something I hadn't considered at all. I wonder if there are any legal restrictions on this for a political party as recipient (I haven't looked into this but could foresee some potential issues with foreign sources of funding). On the one hand, AJP can generate its own funds, but I feel like we are still funding-constrained in the sense that an extra $10,000 per state branch per election (at least) could almost always be put to good use. Do you think we should look into this, particularly with the federal election coming up?
"This being said, the format of legislative elections in France makes it very unlikely that a deputy from the animalist party will ever be elected, and perhaps limits our ability to negotiate with the other parties."
This makes some sense, as unfortunate as it is. Part of what motivates other parties to negotiate with you or adopt their own incrementally pro-animal policies is how worried they are that they might lose a seat to your party. If they're not at all worried, this limits your influence.
But I wouldn't say it entirely voi...
I just want to add that I personally became actively involved with the AJP because I felt that political advocacy from within political parties had been overly neglected by the movement. My intuition is that this is because some of the earlier writings about political advocacy/running for election by 80,000 Hours and others focused mostly on the US/UK political systems, in which I understand it is harder for small parties to have any influence (especially in the US).
One advantage of being in a small party is that it's relatively easy to become quite senior q...
Thank you so much for the feedback!
I did think about working for a government department (non-partisan), but decided against it. From my understanding, you can't work for 'the Crown' while running for office; you'd have to take time off or quit.
The space agency was my thinking along those lines, as I don't think that counts as working for the crown.
I hadn't thought about the UK Civil Service; I've never looked into it. I don't think that would affect me too much, as long as I'm not a dual citizen.
I haven...
Am I reading the 0.1% probability for nuclear war right as the probability that nuclear war breaks out at all, or the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear warfare was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls (https://en.wikipedia.org/wiki/List_of_nuclear_close_calls).
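To make the interpretation question concrete, here's a quick back-of-envelope; the 100-year horizon and the alternative annual rates are illustrative assumptions of mine, not figures from the post:

```python
# If 0.1% is the total probability of nuclear war over the next century,
# the implied annual probability is tiny (assuming independence per year):
total_prob = 0.001
years = 100
annual = 1 - (1 - total_prob) ** (1 / years)
print(f"implied annual probability: {annual:.1e}")

# Conversely, even modest annual probabilities compound quickly over a century:
for p in (0.001, 0.01):
    century = 1 - (1 - p) ** years
    print(f"annual {p:.1%} -> century {century:.1%}")
```

An implied annual rate of roughly 1 in 100,000 seems hard to square with multiple documented close calls in under 80 years of the nuclear era, which is why the former reading looks too low to me.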
When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields - either as workers, researchers or enthusiasts. This is anecdotal based on my experience as a PhD candidate in space science. In the broader public, I think you'd be right that people would think about it much less, however the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.
My impression is that people do over-estimate the cost of 'not-eating-meat' or veganism by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. I might need to flesh it out a bit more but here it is.
So suppose you are trying to quantify what you think the sacrifice of being vegan is, either relative to being vegetarian or to an average diet. If I were asked what was the minimum amount of money I would have to have received to be vegan vs non-vegan for the last 5 years if there were ZERO ethical im...
Self-plugging as I've written about animal suffering and longtermism in this essay:
http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/
To summarise some key points, a lot of why I think promoting veganism in the short term will be worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long term implications.
People are already talking about introducing plants, insects and animals to Mars as a means of terr...
Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co. don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to x-risk, assuming they are all (or mostly) positive.
I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.
First, that this is a good thing to do assumes that you have a good certainty about which candidate/party is going to make the world a better place, which is pretty hard to do.
But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads us to the zero sum game whe...
Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.
I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.
I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as I do), but non-EAs see it as a valid criticism, and that matters.
Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like, 'you can solve all the world's prob...
Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.
A note regarding other social movements targeting high schools (more a point for Tee, whom I will tell I've mentioned): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of members: facilitators (post-high school) and delegates (high school students). The facilitators run workshops about social justice and UN-related issues and model UN debates.
The model is largely se...
This is a good point Dony; perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think the Foundational Research Institute has written something to this effect from a suffering/wellbeing in the far future perspective, but the same might hold for promoting/discouraging ethical theories.
Any thoughts on the worst possible ethical theory?
Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success using cold contact with various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?
Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally....
People have made some good points and they have shifted my views slightly. The focus shouldn't be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm strawmanning myself here slightly).
However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.
How can we get everyone to agree on the best ethical theory?
Thanks for sharing the moral parliament set-up Rick. It looks good, but it seems incredibly similar to MacAskill's Expected Moral Value methodology!
I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital, etc.). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which actions max...
Thanks for the detailed response! I've included a few reflections on the work in the conclusion section. Fair point on the internal costs - I was thinking about this as a cost but not as an impact multiplier from funding. With some more work it could be used as justification for the existence of ECA and why consumers pay their salary. ~$200k seems right for staff time plus overhead.
Yeah, "over half" was quite surprising to me too. I wonder how much of this is because organisations may only lodge a rule change request if they have a decent sense that it is ...