All of Davis_Kingsley's Comments + Replies

Congratulations Niel! Best of luck with the future of 80k!

Whatever happened to AppliedDivinityStudies, anyway? It seemed like a promising blog adjacent to the community, but I just checked back on the more recent posts and it looks to have stopped posting about a year ago?

I am around!
https://twitter.com/alexeyguzey/status/1668834171945635840

In general I think "TESCREAL" is a bad term that conflates a bunch of different things in order to attack them all as a group and I'd prefer not to see it used.

I consider this sort of "oh, I have a take but you guys aren't good enough for it" type perspective deeply inappropriate for the Forum -- and I say that as someone who is considerably less "anti-Anthropic" than some of the comments here.

0
trevor1
7mo
That's plausibly good for community-building, but from an infosec perspective, you don't really know what kinds of people are reading the comments, or what kind of person they will be in a year or so. In an extreme scenario, people could start getting turned. But the more likely outcome is that people hired by various big corporations (and possibly intelligence agencies) are using the EA Forum for open-source intelligence; this is far more prevalent than most people think.

I interpret it as broadly the latter based on the further statements in the Twitter thread, though I could well be wrong.

Congrats Ben, and count me in as another voice in favor of this type of humor on the Forum!

Yes, to be clear I don't think Oli was necessarily claiming that -- I was replying to Jonas here, who listed Tara as one of "the Leverage people" in his own comment.

Wait, was Tara a Leverage person? Kerry and Larissa work for Leverage now and Tyler was affiliated in the past, but I wasn't under the impression Tara was particularly involved with Leverage -- though I could of course be wrong!

1
DanielFilan
1y
I do not read Oli as saying that Tara was at Leverage, and I've never heard that she was.

A while ago I remember seeing some discussion of EA analysis of Ukraine relief following the Russian invasion -- perhaps some EAs from Poland were involved? Did this ever get comprehensively written up anywhere?

2
Andy_Schultz
1y
Here was one project: https://efektywnyaltruizm.org/blog/help-for-ukrainians/. I found that link on https://forum.effectivealtruism.org/posts/gacpE79RKke2foG9K/rough-attempt-to-profile-charities-which-support-ukrainian.

I suspect people at Anthropic are already thinking about considerations like this when deciding what to do, and I'm not sure an anonymous post is needed here.

While I don't like this post, I think someone should be writing a more detailed post along these lines to provide more context for people outside of Anthropic. It feels like many newer people in AI safety have positive feelings about Anthropic by default because of its association with EA and a post that causes people to think some more about it could be good. 

Thanks for posting this! I appreciate the legibility and insight into the process here, especially during a stressful time in EA/on the Forum. 

Thanks for posting this. I think giving detailed reflections and "lessons learned" like this can be really helpful in these sorts of situations, but I also recognize it can be tough to do in public. Positive reinforcement for this openness and frank discussion!

Historical note: If EA had emerged in the 1970s era of the gay rights movement rather than the 2010s, I can imagine an alternative history in which some EAs were utterly outraged and offended that gay or lesbian EAs had dared to invite them to a gay or lesbian event. The EA community could have leveraged the latent homophobia of the time to portray such an invitation as bizarrely unprofessional, and a big problem that needs addressing. Why are we treating polyamory and kink in 2023 with the same reactive outrage that people would have treated gay/lesbian s

... (read more)

Side-note: the OP says "Wildly unusual social practices like polyamory", but I think poly is fairly common in the Bay Area outside of EA/rat circles.

I suspect it's fairly common in other young, blue-tribe, urban contexts in the US too? (Especially if we treat "polyamorous", "non-monogamous", and many "monogamish" relationship styles as more-or-less the same phenomenon.)


I've heard this argument before but I think it's quite overstated. I grew up in the SF Bay Area and still am in touch with many friends from childhood. They are generally young, blue-tribe, ... (read more)

-4
Anthony Repetto
1y
Third generation Bay Area, here -- and, if you aren't going to college at Berkeley or swirling in the small cliques of SF among the 800,000 people living there, yeah, not a lot of polycules. I remember when Occupy oozed its way through here and left a residue of 'say-anything polyamorists' who were excited to share their 'pick-up artist' techniques when only other men were present. "Gurus abuse naive hopefuls for sex" has been a recurring theme of the Bay, every few decades, but the locals don't buy it.

It's also worth noting that I am an adult convert to Catholicism and was involved with the Bay Area rationalist and EA community (and uncomfortable with the "polyamory pressure" in that community) for years before joining the Church, including some time when I didn't take religion seriously much at all. Claiming or implying that I hold my views (or faced backlash against them) just because I'm Catholic does me a disservice.

I note also that others in the community who are not (as far as I know) Catholic have faced backlash for their views against polyamory ... (read more)



Most people don't know that and I wasn't asserting it here -- that would be much more controversial and much more of a debate than I wanted to have, and further one that I don't think is very appropriate for the EA Forum! My hope is (was?) that even people who quite disagree with me -- including many polyamorous people -- would have common cause in opposing the pressure to be polyamorous that has been prevalent.

Imagine I wrote:

I think veganism has been a problem in the EA community for a long time and has led to some bad dynamics where people have been pressured to go without food that meets their nutritional needs, including residential multi-day events where only vegan food was served.

If someone, knowing my views on animals that are probably about as well known as your views on sexual morality, responded as if I was saying animal welfare doesn't matter, I think that would be pretty reasonable. And if I didn't want that interpretation I'd need to drop the "veganism has been a problem" bit and just talk about the particular bad dynamics I was opposed to.

I am a Catholic -- though I would not call myself a traditionalist -- and I believe what the Church teaches, including on matters of sexuality. Bringing my religion up in this way feels like a character attack that ought to be below the standards of the EA Forum though, and I'm grieved to see it.

My posts here are not saying "Polyamory is a sin, convert to Catholicism." They are not saying "you should be pressured into monogamy." Those things seem much more contentious than what I'm going for here. Instead, I am saying that there has long been in fact the e... (read more)

I also think it’s quite reasonable for a religious person to give secular arguments for worldviews which also happen to be held in their religion.

For example, if Davis was making a humanistic argument for why people should take Giving What We Can’s 10% pledge, then accusing him of disingenuously trying to sneak in the “Catholic agenda” of giving a tithe to the poor doesn’t seem fair.

Or imagine if a Jain was giving a humanistic argument for why people should be vegetarian, and they were accused of disingenuously trying to sneak in the “Jain agenda” of animal welfare.


My posts here are not saying "Polyamory is a sin, convert to Catholicism."

No, but if you say "polyamory has been a problem in the EA (and rationalist) communities for a long time" and people know that you do in fact believe polyamory to be immoral, it's completely reasonable for them to respond as Kelsey did?

If you want people only to respond to the more limited "people should not be pressured into polyamory" perhaps you should say that explicitly?

Yes, I'm not sure this needs to be said but just to be clear -- I also don't think CEA or whatever should have a "talking people out of polyamorous relationships" department, and this would seem like a bizarre overreach to me.

I'm thinking of things much more along the lines of "discourage the idea of polyamory as 'more rational' and especially polyamory pressure in particular", not "make EA institutions formally try to deconvert people from polyamory" or whatever.

To be clear, the thing I was wishing we had resolved internally was much more the widespread pressure to be polyamorous in (at least some parts of?) EA rather than individual people's relationships; as you say, it would not be appropriate for the EA community to have a discussion about how to "resolve" your personal relationships.  What would that even mean?

However, I think that this is far from the first time that major cultural issues with polyamory and unwelcome pressure to be polyamorous have been brought up, and it does seem to me that that's the... (read more)

In the article, Gopalakrishnan mentions having raised her concerns earlier only to be dismissed and attacked, told that she was "bigoted" against polyamorous people

The article has "One commenter wrote that her post was 'bigoted' against polyamorous people."

While Gopalakrishnan has deleted the post and the comments are no longer visible, my memory is that the comment describing her as saying something bigoted was reasonable?

I think polyamory has been a problem in the EA (and rationalist) communities for a long time and led to both some really uncomfortable and concerning community dynamics and also just a lot of drama and problems. Multiple high-profile women have told me that they felt pressured to be polyamorous by men in the community and/or felt that polyamory was bad but they didn't feel comfortable speaking up against it, and I've faced some degree of community social backlash myself for speaking out (even informally!) against polyamory. 

In general I think this has been kind of an ongoing issue for quite some time, and I wish we had resolved it "internally" rather than it being something exposed by outside investigators.

I think that relevant context for backlash against Davis Kingsley's anti-polyamory views is that he is an orthodox Catholic. His anti-polyamory views are part of a set of fairly extreme views about sexuality, including being opposed to homosexuality, masturbation, contraception, premarital sex, and any sexual intercourse other than PIV. He has also expressed the viewpoint that polyamory should be socially stigmatized and people should be pressured into monogamy. I believe that much, perhaps most, of the backlash he has faced is due to the overall set of hi... (read more)

Multiple high-profile women have told me that they felt pressured to be polyamorous by men in the community

I too have (consistently) seen this, so I am grateful to hear it being brought up publicly.

I am very bothered specifically by the frame "I wish we had resolved [polyamory] "internally" rather than it being something exposed by outside investigators."

I am polyamorous; I am in committed long-term relationships (6 years and 9 years) with two women, and occasionally date other people. I do not think there is anything in my relationships for "the community" to "resolve internally". It would not be appropriate for anyone to tell me to break up with one of my partners. It would not be appropriate for anyone to hold a community discussion about how to '... (read more)

You say :

Whenever someone in your life half-jokingly asks "how can I become smart like you?", you no longer need to answer "Have you ever read Harry Potter?" because Projectlawful.com does not have Harry Potter in it.

On the contrary, this is a work I strongly wouldn't recommend, and especially not to newcomers. It's highly sexualized, contains descriptions of awful torture and various other forms of extreme misconduct, has a bunch of weird fetish material that more or less immediately disqualifies it as an intro rec in my opinion (far more so than... (read more)

I recognize this comment may not be received well here, but I think things like this are quite bad for EA to support -- the movement already has very substantial political skew issues, and running political candidates as an EA intervention seems like another step down a road I think the movement needs to quickly turn away from.

The "Organizations vs. Getting Stuff Done" post is about anarchist political activism. This is a rather unusual area -- under normal circumstances organizations are a relevant tool to aid in getting things done, not an obstacle to it.

1
quinn
2y
To partially rehash what was on Discord and partially add more:

* I don't think saying that institutions have benefits and are effective is at all an argument against specific drawbacks and failure modes. Things that have pros can also have cons; pros and cons can coexist.
* I agree that a portion of the criticism is moot if you don't on priors think hierarchy and power are intrinsically risky or disvaluable, but I think having those priors directs one's attention to problems or failure modes that people without those priors would be wise to learn from. Moreover, if you look at the four points in the article, I don't think those priors are critical for any of them.
* Specifically, I think a variety of organizations are interested in trading off the inefficiency problems of bottom-up against the information-bottleneck problems of top-down. People who are motivated by values to reject the top-down side would intuitively have learned lessons about how to make the bottom-up side function.
* If I find the name of the individual, I'll return to this thread to make my point about the German scientist who may have prevented the Nazis from getting nukes by going around and talking to people (not by going through institutional channels).

To me this seems like essentially a "cheap shot" -- you could write basically this story in support of very many positions. Imagine a story that's like "wow, this guy was a utilitarian, even back then people knew utilitarianism could lead to unacceptable conclusions, we're getting rid of his statue" or whatever. In fact, you could probably write a story like this against certain ideas in EA animal thought.

Yeah, IIRC both G.K. Chesterton and C.S. Lewis wrote about how anyone can just say "the future will agree with me" as a way of getting support for your ideas, but nobody really knows the future, and probably everyone is wrong because the future will be more complicated than anyone thinks -- so arguments from the future are bad logic and invalid. (I think Lewis's is a bit of the Screwtape Letters and that Chesterton's essay is in "What's Wrong With The World.") So I endorse this complaint.

But I didn't include that in my description because I do in fact think veganism will take over the world once the technology gets far enough, so that wasn't my true objection to the story.

One relevant concept might be that of the feedback loop, where the output of a process affects the input. For instance, if you survey only people who are already attending your events as to how to improve them, you might wind up missing ways to improve it for those who didn't attend. After several cycles of this you might wind up with an event that is very appealing for the "in crowd" but which doesn't much appeal to newcomers.
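To make the dynamic concrete, here's a toy simulation (all numbers hypothetical -- just a sketch of the selection effect, not a claim about any real event): the event keeps retuning itself to whatever its current attendees report in surveys, and settles into a stable state that most of the wider population never attends -- so those people never show up in the survey data at all.

```python
# Toy model of the survey feedback loop (all parameters hypothetical).
# 100 people with event-style preferences spread evenly from 0.0 to 1.0.
population = [i / 99 for i in range(100)]

style = 0.75         # the event happens to start tuned toward one end
ATTEND_RADIUS = 0.2  # people attend only if the style is near their taste

for _ in range(10):
    attendees = [p for p in population if abs(p - style) < ATTEND_RADIUS]
    # The feedback loop: survey only attendees, retune the event to their mean.
    style = sum(attendees) / len(attendees)

attendees = [p for p in population if abs(p - style) < ATTEND_RADIUS]
print(f"event style settled at {style:.2f}; "
      f"{len(attendees)} of {len(population)} would attend")
```

The attendee surveys at the fixed point look great -- everyone surveyed is close to the event's style -- while a majority of the population is invisible to the process, which is the trap described above.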

Note that Torres was banned from the forum for a year following a previous discussion here where he repeatedly called another member a liar and implied that member should be fired from his job.

Good point re: Charity Entrepreneurship.

I'm somewhat more skeptical of the grantmaking thing though because there are few enough positions that it is not very legible who is good at it, whether others currently outside the field could do better, etc.

I could be wrong -- I can point to specific things from some grantmakers that I thought were particularly good, for instance -- but it doesn't feel to me that it's the most amenable field for such a program. 

(Note that this is low-confidence and I could be wrong -- if there are more objective grantmaking skill metrics somewhere I'd be very interested to see more!)

5
Kirsten
3y
Some trainable things I think would help with grantmaking:

- knowledge of the field you're making grants in
- making a simple model to predict the expected value of a grant (looking for a theory of change, forecasting the probability of different steps, identifying the range of possible outcomes)
- best practices for identifying early signs a grant won't be worth funding, to save time, without being super biased against people you don't know or who come from a different background to you but could eventually do good work
- giving quality feedback to successful and unsuccessful applicants
- engaging with donors (writing up summaries of why you gave different grants, talking to people who are considering donating through your fund)
- evaluating your grants to learn how closely what really happened matched your model

It doesn't seem to me obviously less trainable than being a Navy SEAL.

My impression is that the people who end up working in EA organizations are not on the same tier of discipline, work ethic, commitment, etc. as elite military forces and are not really even very close?

I don't say that to disparage EA direct workers, I'm involved in direct work myself  -- but my sense is that much more is possible. That said, as you mention the amount of discipline needed may simply not be as high.

4
AppliedDivinityStudies
3y
Yeah again, for highly creative intellectual labor on multi-decade timescale, I'm not really convinced that working super hard or having no personal life or whatever is actually helpful. But I might be fooling myself since this view is very self-serving.

For some reason I can't see the draft; when I click on the notification I received for it, it says "Error: app.operation_not_allowed" and kind of glitches out the interface until I refresh. Apologies!
 

(edit: fixed now, thanks!)

Thanks, I'm impressed by this reply and your willingness to go out there and do a survey. I will have more substantive feedback later as I want to consult with someone else before making a further statement -- ping me if I haven't replied by Friday.

1
QubitSwarm99
3y
Thank you for your kind words. I will ping you midday-evening Eastern time on Friday if I see no reply. I am going to make a full post (probably by this evening), so please reply to that instead of in this comment thread, if possible. Hope you have a nice day.

I (very anecdotally) think there are lots of people who are interested in donating to quite specific cause areas, e.g. "my father died of cancer so I donate to cancer charities" or "I want to donate to help homelessness in my area" -- haven't studied that in depth though.

Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:

"The term "longtermism" is misleading because in practice "longtermism" means "concern over short AI timelines", and in fact many "longtermists" are concerned with events on a much shorter time scale than the rest of EA."

I thought that was a surprising and interesting argument, though I don't recall who initially made it. Does anyone remember?

This sounds like a misunderstanding to me. Longtermists concerned with short AI timelines are concerned with them because of AI's long-lasting influence on the far future.

The most important thing in life is to be free to do things. There are only two ways to insure that freedom -- you can be rich or you can reduce your needs to zero. I will never be rich, so I have chosen to crank down my desires. The bureaucracy cannot take anything from me, because there is nothing to take.

Colonel John Boyd

6
Ben_West
5y
This is great. Much more eloquent than my post.

I think this comment, while quite rude, does get at something valuable. There's an argument that goes "hmm, the outside view says this is absurd, we should be really sure of our inside view before proceeding" and I think that's sometimes a bit of a neglected perspective in rationalist/EA spaces.

I happen to know that the inside view on HPMoR bringing people into the community is very strong, and that the inside view on Eli Tyre doing good and important work is also very strong. I'm less familiar with the details behind the other gra... (read more)

I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think weirdness is overall a pretty bad predictor of impact, both in the positive and negative direction. I think it's a good emotion to pay attention to, because often you can learn valuable things from it, but I think it only sometimes tends to give rise to real arguments in favor or against an idea.

It is also v... (read more)

I don't agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.

Will there be an update to this post with respect to what projects actually fund following these recommendations? One aspect that I'm not clear on is to what extent CEA will "automatically" follow these recommendations and to what extent there will be significant further review.

I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka

Hear, hear.

I feel proud of the commitment to epistemic integrity that I see here.

I will make sure to update this post with any new information about whether CEA can actually make these grants. My current guess is that maybe 1-2 grants will not be logistically feasible, but the vast majority should have no problem.

Just posting to acknowledge that I've seen this - my full reply will be long enough that I'm probably going to make it a separate post.

Neither is poverty alleviation or veganism or anything else in practice.

Again, strong disagree - many things are not politicized and can be answered more directly. One of the main strengths of EA, in my view, is that it isn't just another culture war position (yet?) - consider Robin Hanson's points on "pulling the rope sideways".

2
kbog
5y
I think I'm losing track of the point. What does it mean to answer something "more directly"? I'm not sure how that's relevant here since I'm clearly saying that we're not taking a position on abortion.
You said the problem was stating it authoritatively rather than the actual conclusions; I made it sound less authoritative, but now you're saying that the actual conclusions matter.

Sorry, I perhaps wasn't specific enough in my original reply. The "less authoritative" thing was meant to apply to the entire document, not just this one section - that's why I also said I wasn't sure documents like this are good for EA as a movement.

I think there's something unhealthy and self-reinforcing about tiptoeing around like that. The
... (read more)
3
kbog
5y
In the preface I state that hedging language is minimized for the sake of readability. Neither is poverty alleviation or veganism or anything else in practice.

Like I said, that's not really the point - it also doesn't meaningfully resolve that particular issue, because of course the whole dispute is whose well-being counts, with anti-abortion advocates claiming that human fetuses count and pro-abortion people claiming that human fetuses don't.

I dunno, maybe I'm overly cautious, but I'm not fond of someone publishing a well-made and official-looking "based on EA principles, here's who to vote for" document, since "EA principles" vary quite a bit -- I think if EA becomes seen as politically aligned (with either major US party), that constitutes a huge constraint on our movement's potential.

2
kbog
5y
You said the problem was stating it authoritatively rather than the actual conclusions; I made it sound less authoritative, but now you're saying that the actual conclusions matter.

The document has sufficient disclaimers as it is -- the preface clearly says EAs could disagree. You don't see GiveWell writing "assuming that poverty is the #1 cause area, which EAs may disagree on" multiple times, and I don't treat politics with special reverence as if different rules should apply. I think there's something unhealthy and self-reinforcing about tiptoeing around like that. The point here is to advertise a better set of implicit norms, so that maybe people (inside and outside EA) can finally treat political policy as just another question to answer rather than playing meta-games.

If I care about total well-being, then of course people who say that some people's well-being doesn't count are going to be wrong. This includes the pro-lifers, who care about the future well-being of a particular fetus but not the future well-being of any potential child (or not as much, at least).

I don't think there's much practical difference between "intrinsic moral interests" and "intrinsic moral rights", but that's not really the point - it's more that I think given such differences in perspective between EAs, I'm not sure that documents like this are great for EA as a movement. I would at least prefer to see them presented less... authoritatively?

2
kbog
5y
OK fine, in CSS3 it now simply says "Absolutist arguments for or against abortion disappear once we focus on well-being."

I like that you've put the effort into creating this, but I'm not fond of the background assumptions here - there seem to be some elements that not all EAs might necessarily share. For instance, one section begins "Intrinsic moral rights do not exist" - that's certainly not what I believe and it seems inconsistent with other sections that talk about the "intrinsic moral weight" of animal populations, etc.

While the fact that you've "shown your work" with the Excel spreadsheet helps people evaluate the same i... (read more)

6
kbog
5y
It's definitely consistent -- animals can have interests without having rights, just like humans. Rights can point in a bunch of different ways depending on the moral inclinations of the reader, and integrating and applying them to policy is a very murky issue. So even if I wanted to investigate that side of things, I would have little ability to provide useful judgments to EAs.

At some point, it would be nice to include full arguments about morality. But that's pretty low on my priorities, and I don't expect to add it in the foreseeable future. Those arguments already exist elsewhere.

You can add a column beside the other topics, then insert a new row into the weight table (select three adjacent cells and press insert...). True, it's a little complicated -- but I have to make the spreadsheet this way in order to make the sensitivity analysis work well.

I actually quite disagree - I believe history indicates national militaries very frequently miss effective ways to conduct war. There's a famous phrase, "fighting the last war", that describes how military planners almost always miss innovations and changes in conditions during peacetime and only adapt when forced to by direct conflict.

For example, between World War One and World War Two, the world's militaries converged on several dangerously false theories with respect to what the next war would look like, and many weapons and strateg... (read more)

1
aogara
5y
Good point, I hadn't considered that. If I were to try to fit this to my model, I would say that there's nobody really looking to produce the best military technology/tactics in between wars. But if you look at a period of sustained effort in staying on the military cutting edge, i.e. the Cold War, you won't see as many of these mistakes and you'll instead find fairly continuous progress with both sides continuously using the best available military technology. I'm not sure if this is actually a good interpretation, but it seems possible. (I'd be interested in where you think we're failing today!)

But even if this is true, your original claim remains true: if it takes a Cold War-level of vigilance to stay on the cutting edge, then terrorists probably aren't deploying the best available weaponry, just because they don't know about it. So maybe an exceptional effort can keep you on the cutting edge, but terrorist groups aren't at that cutting edge?

Good points!

One note I'll add is that similar attacks with vehicles or bladed weapons were used against Israel prior to their adoption by ISIS, though these attacks are not as widely reported by Western media since they don't happen in Europe or the US; that said, it's quite possible that ISIS themselves got the idea from Palestinian attackers, especially if the "copycat hypothesis" is true.

One thing I've noticed is that direct work tends to put you much more in contact with reality (for lack of a better term) than community-building; it's much easier to see what you're accomplishing and what is and isn't working. This can be especially important for people trying to build and/or demonstrate skills.

3
Aaron Gertler
6y
I strongly second this. This doesn't even have to mean direct EA work -- I think you learn a lot even by volunteering for non-EA causes (a few hours knocking on doors for a political candidate, an evening at a soup kitchen, etc.). It's good to see how nonprofits of all stripes organize their events and volunteers, and also good to be able to discuss the different nonprofit experiences you've had. (It's easy to come across as "do-nothing philosopher idly speculating" when you talk about EA with someone who spends every weekend volunteering, and that's not a good look.)

It seems clear to me that in some cases positive systemic change is possible, even with relatively limited teams working on them.

However, systemic changes can also lead to substantial problems. Even some of the examples you gave here are far from objectively good - I note with some worry that historical attempts to place the means of production into the hands of the people have led to some of the greatest disasters of human history.

The "downside risks" of these sorts of approaches seem very high. That isn't to say that nobody should do them, but ... (read more)

0
kbog
7y
Just because some of the cases where means of production were publicly owned happened in a country where lots of people were killed doesn't imply that placing the means of production in public hands is always likely to get lots of people killed. For one thing, charity has been responsible for disasters, as well as capitalism, but we don't think that those things are always bad just because of that. Western democracies have a far more advanced political culture and civil society than early 20th century Asia and eastern Europe did, and we also have democratic governments rather than dictatorships, so worries over mass murder can be pretty easily tossed out the window. Calculation problems are more interesting, and I'm not well read on the literature behind them, but the one thing I can say pretty confidently is that 21st century America/Europe doesn't have to worry about food in the way that the 1920s USSR or 1970s China did. I don't think anything will satisfy all four categories well, but some could fit pretty well. Many cases of public ownership of the means of production are so ideologically safe that people don't even realize that they are cases of public ownership of the means of production: national parks, NASA, highways, etc. Others might be more controversial (OPEC, some airlines) but still have good reasons behind them.