I supplement iron and vitamin C, as my iron is currently on the lower end of normal (after a few years of being vegan it was too high, go figure).
I tried creatine for a few months but didn't notice much difference in the gym or while rock climbing.
I drink a lot of B12 fortified soy milk which seems to cover that.
I have about 30g of protein powder a day with a good range of different amino acids to help hit 140g a day.
I have a multivitamin every few days.
I have iodine fortified salt that I cook with sometimes.
I've thought about supplementing omega 3 or eating more omega 3 rich foods but never got around to it.
8 years vegan for reference.
I strongly agree that current LLMs don't seem to pose a risk of global catastrophe, but I'm worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs beyond generating text. Even if such an assistant can only make bookings, send emails, etc., I feel like things could get concerning very fast.
Is there an argument for having AI fail spectacularly in a small way that raises enough global concern to slow progress/increase safety work? I'm envisioning something like an LLM virtual assistant which leads to a lot...
This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself.
Thanks for the great question. I'd like to see more attempts to get legislation passed to lock in small victories. The Sioux Falls slaughterhouse ban almost passing gives me optimism for this. Although it seemed to be more for NIMBY reasons than for animal rights reasons, in some ways that doesn't matter.
I'm also interested in efforts to maintain the lower levels of speciesism we see in children into their adult lives, and to understand what exactly drives that so we can incorporate it into outreach attempts targeted at adults. Our recent interview w...
Thank you for the feedback! I just wanted to let you know that while I haven't had time to write a proper response, I've read your feedback and will try to take it on board in my future work.
People more involved with X-risk modelling (and better at math) than I am could say whether this improves on existing tools, but I like it! I hadn't heard of the absorbing-state terminology; that was interesting. Reading that, my mind goes to option value, or the lack thereof, though that might not be a perfect analogy.
Regarding x-risks requiring a memory component, can you design Markov chains to have the memory incorporated?
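I'd guess yes - one standard trick (a sketch of my own, with made-up states and probabilities, not anything from the post) is to fold the memory into the state definition, turning a higher-order chain into an ordinary first-order one over an expanded state space:

```python
import itertools

# Sketch: folding one step of "memory" into a Markov chain by expanding the
# state space. A second-order chain over {safe, catastrophe} becomes a
# first-order chain over pairs (previous, current). All probabilities below
# are hypothetical illustrations, not drawn from any real x-risk model.

base_states = ["safe", "catastrophe"]

# Hypothetical second-order transitions: P(next | previous, current).
# E.g. two catastrophic periods in a row make recovery less likely
# (a "nearly absorbing" region that depends on history).
second_order = {
    ("safe", "safe"): {"safe": 0.95, "catastrophe": 0.05},
    ("safe", "catastrophe"): {"safe": 0.50, "catastrophe": 0.50},
    ("catastrophe", "safe"): {"safe": 0.90, "catastrophe": 0.10},
    ("catastrophe", "catastrophe"): {"safe": 0.10, "catastrophe": 0.90},
}

# Equivalent memoryless (first-order) chain over pair-states:
# (prev, cur) -> (cur, next) with the same probabilities.
first_order = {
    (prev, cur): {(cur, nxt): p for nxt, p in dist.items()}
    for (prev, cur), dist in second_order.items()
}

# Each row still sums to 1, so this is a valid Markov chain with the
# memory baked into the state definition.
for state, row in first_order.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9
```

The same construction works for any fixed amount of memory (track the last k states as the state), at the cost of an exponentially larger state space.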
Some possible cases where memory might be useful (without thinking about it too much) might be:
Thanks for sharing, I'm looking forward to this! I'm particularly excited about the sections on measuring suffering and artificial suffering.
Thanks for sharing! I love seeing concrete roadmaps/plans for things like this, and think we should do it more.
Fair enough! I probably wasn't clear - what I had in mind was one country detecting an asteroid first, then deflecting it into Earth before any other country or the global community detects it. Just recently we detected a 1.5 km near-Earth object with an orbit that intersects Earth's. The scenario I had in mind was that one country detects such an object (though probably a smaller one, ~50 m) first, then deflects it.
We detect ~50 m asteroids as they make their final approach to Earth all the time, so detecting one first by chance could be a strategic advantage.
I take your other points, though.
"(b) Secondly, while the great powers may see military use for smaller scale orbital bombardment weapons (i.e. ones capable of causing sub-global or Tunguska-like asteroid events), these are only as destructive as nuclear weapons and similarly cannot be used without risking nuclear retaliation."
I don't think this is necessarily right. First, an asteroid impact can more easily be made to look like a natural event, making it less likely to trigger mutually assured destruction. Also, just because we can't think of a reason for a nation to use an asteroid strike, do...
If anyone is still reading this today and is curious where I ended up, I just took a job with Sentience Institute as a Strategy Lead & Researcher.
Cost is one factor, but nuclear also has other advantages, such as lower land use, less raw material required (to make the renewables and the lithium etc. for battery storage), and benefits for the power grid.
It's nice that renewables are getting cheaper, and I'd definitely like to see more renewables in the mix, but my ideal long-term scenario is a mix of nuclear, renewables and batteries. I'm weakly open to a small amount of gas being used for power generation in the long term in some cases.
Hm, good to know and fair point! I wonder if we can test the effect of extra funding beyond what's needed to run a passable campaign by investing, say, $5,000 in online ads etc. in a particular electorate, though even that is hard to compare across electorates given the number of other factors. If anyone else has ideas for measuring the impact of extra funding, I'd love to hear them!
Seeking grants from EA grant makers is something I hadn't considered at all. I wonder if there are any legal restrictions on this for a political party as recipient (I haven't looked into this, but could foresee some potential issues with foreign sources of funding). On the one hand, AJP can generate its own funds, but I feel like we are still funding constrained in the sense that an extra $10,000 per state branch per election (at least) could almost always be put to good use. Do you think we should look into this, particularly with the federal election coming up?
"This being said, the format of legislative elections in France makes it very unlikely that a deputy from the animalist party will ever be elected, and perhaps limits our ability to negotiate with the other parties."
This makes some sense, as unfortunate as it is. Part of what motivates other parties to negotiate with you or adopt incrementally pro-animal policies of their own is the worry that they might lose a seat to your party. If they're not at all worried, this limits your influence.
But I wouldn't say it entirely voi...
I just want to add that I personally became actively involved with the AJP because I felt that political advocacy from within political parties had been overly neglected by the movement. My intuition was that this is because some of the earlier writing about political advocacy/running-for-election work by 80,000 Hours and others focused mostly on the US/UK political systems, in which I understand it is harder for small parties to have any influence (especially in the US).
One advantage of being in a small party is that it's relatively easy to become quite senior q...
Thank you so much for the feedback!
I did think about working for a government department (non-partisan), but I decided against it. From my understanding, you can't work for 'the crown' while running for office; you'd have to take time off or quit.
The space agency was my thinking along those lines, as I don't think that counts as working for the crown.
I hadn't thought about the UK Civil Service; I've never looked into it. I don't think that would affect me too much, as long as I'm not a dual citizen.
I haven...
Am I reading the 0.1% probability for nuclear war right as the probability that nuclear war breaks out at all, or the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear warfare was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ).
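To make that intuition concrete with a toy calculation (my own illustrative numbers, not anyone's actual estimates): if each close call carried even a modest independent chance of escalating, the implied probability of at least one nuclear war already dwarfs 0.1%.

```python
# Toy back-of-the-envelope: assume n independent close calls, each with a
# hypothetical probability p of escalating to nuclear war. Both numbers
# are illustrative assumptions, not historical estimates.
n_close_calls = 20
p_escalation = 0.05

# P(at least one war) = 1 - P(no escalation in any close call)
p_at_least_one_war = 1 - (1 - p_escalation) ** n_close_calls
print(round(p_at_least_one_war, 2))  # ≈ 0.64, far above 0.1%
```

Even shrinking the per-incident probability by an order of magnitude leaves the total well above 0.1%, which is why the former reading seems too low to me.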
When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields - whether as workers, researchers or enthusiasts. This is anecdotal, based on my experience as a PhD candidate in space science. In the broader public, I think you'd be right that people would think about it much less; however, the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.
We were pretty close to carrying out an asteroid redirect mission too (ARM); it was only cancelled in the last few years. It was for a small asteroid (~a few metres across), but this could certainly happen sooner than most people suspect.
I guess that would indeed make them long-term problems, but my reading suggests they are catastrophic risks rather than existential risks, in that they don't seem to have much likelihood (relative to other X-risks) of eliminating all of humanity.
My impression is that people do over-estimate the cost of 'not-eating-meat' or veganism by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. I might need to flesh it out a bit more but here it is.
So suppose you are trying to quantify what you think the sacrifice of being vegan is, either relative to being vegetarian or to an average diet. If I were asked what was the minimum amount of money I would have to have received to be vegan vs non-vegan for the last 5 years if there were ZERO ethical im...
Self-plugging as I've written about animal suffering and longtermism in this essay:
http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/
To summarise some key points, a lot of why I think promoting veganism in the short term will be worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long term implications.
People are already talking about introducing plants, insects and animals to Mars as a means of terr...
Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?
For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?
Thanks for this John. I agree that even if you use some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming they are all (or mostly) positive.
Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.
I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.
First, that this is a good thing to do assumes that you have a good certainty about which candidate/party is going to make the world a better place, which is pretty hard to do.
But if we grant that we did indeed pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads us to the zero sum game whe...
Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.
I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.
I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.
Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought there was a movement which said something like, 'you can solve all the world's prob...
Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.
A note regarding other social movements targeting high schools (more a point for Tee, who I will tell I've mentioned): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of members: facilitators (post high school) and delegates (high school students). The facilitators run workshops about social justice and UN-related issues and model UN debates.
The model is largely se...
This is a good point Dony, perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think Foundational Research Institute has written something to this effect from a suffering/wellbeing in the far future perspective, but the same might hold for promoting/discouraging ethical theories.
Any thoughts on the worst possible ethical theory?
Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success using cold contact of various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?
Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally....
People have made some good points and they have shifted my views slightly. The focus shouldn't be so much on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm strawmanning myself here slightly).
However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.
How can we get everyone to agree on the best ethical theory?
Thanks for sharing the moral parliament set-up Rick. It looks good, but looks incredibly similar to MacAskill's Expected Moral Value methodology!
I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital etc). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which actions max...
Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!
Your third point is well taken - I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.
I wrote an essay partially looking at this for the Sentient Politics essay competition. If it doesn't win (and probably even if it does) I'll share it here.
I think it's a very real and troubling concern. Bostrom seems to assume that, if we populated the galaxy with minds (digital or biological), that would be a good thing, but even if we only consider humans I'm not sure that's totally obvious. When you throw wild animals and digital systems into the mix, things get scary.
Thanks, there are some good points here.
I still have this feeling, though, that some people support some causes over others simply for the reason that 'my personal impact probably won't make a difference', which seems hard to justify to me.
Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is.
Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.
Thanks for writing this. One small critique:
"For example, Brian Tomasik has suggested paying farmers to use humane insecticides. Calculations suggest that this could prevent 250,000 painful deaths per dollar."
I'm cautious about the sign of this. Given that insects are expected to have net negative lives anyway, perhaps speeding up their deaths is actually the preferable choice - unless we think an insect dying from pesticide suffers more than it would by dying naturally plus the pain throughout the rest of its life.
But overall, I would support the recommendation that OPP supports WAS research.
It looks like you're subscribing to a person-affecting philosophy, whereby you say potential future humans aren't worthy of moral consideration because they're not being deprived, but bringing them into existence would be bad because they would (could) suffer.
I think this is arbitrarily asymmetrical, and not really compatible with a total utilitarian framework. I would suggest reading the relevant chapter in Nick Beckstead's thesis 'On the overwhelming importance of shaping the far future', where I think he does a pretty good job at showing just this.
I did earning to give for 18 months in a job that I thought I would really enjoy but after 12 months realised I didn't. I'm now doing a PhD.
I think personal fit is pretty important, but at the end of the day it's still just another thing to consider, and not the be-all and end-all. I think it's a pretty valid point that you will perform better in a role that you enjoy and thus advance further and have more impact, but if you're really trying to maximise impact there are limits to that (e.g. Hurford's example about surfing, unless surfing to give can be a thing)...
I noticed there doesn't seem to be an option to nominate less than 5 people. Not sure if this is a feature but I wanted to just nominate a few people and was unable to.
I think the value of higher quality and more information in terms of wild animal suffering will still be a net positive, meaning that funding research in WAS could be highly valuable. I say 'could' only because something else might still be more valuable. But if, on expected value, it seems like the best thing to do, the uncertainties shouldn't put us off too much, if at all.
Happy to hear what they are Alex.
The final article had a title change and it was made clear numerous times that it was a personal analysis, not necessarily representing the views of Effective Altruism. In fact, we worked off the premise of voting to maximise wellbeing, not to further EA.
I posted it here and shared it with EAs because they are used to thinking about ways to maximise wellbeing, and I've never seen an analysis that looks at multiple parties and policies to try and select the 'best' party (many have agreed that this doesn't seem to have been d...
Regardless of whether or not moral realism is true, I feel like we should act as though it is (and I would argue many Effective Altruists already do to some extent). Consider the doctor who proclaims that they just don't value people being healthy, and doesn't see why they should. All the other doctors would rightly call them crazy and ignore them, because the medical system assumes that we value health. In the same way, the field of ethics came about to (I would argue) try and find the most right thing to do. If an ethicist comes out and says that the mos...
Thanks for everyone's feedback. The article has now been published and is a living document (we will edit daily based on feedback) until the election.
Thanks for writing this! I had one thought on how relevant 'saying no' to the technologies you listed is to AGI.
In the case of nuclear weapons programs, fossil fuels, CFCs, and GMOs, we actively used these technologies before we said no (fossil fuels and GMOs we still use despite the 'no', and nuclear weapons we still have and could use at a moment's notice). With AGI, once we start using it, it might be too late. Geo-engineering experiments are the most applicable of these, as we actually did say no before any (much?) testing was undertaken.