Ariel Pontes

Group organizer @ EA Romania
293 karma · Seeking work · Bucharest, Romania
about.me/arielpontes

Bio

Developer, community organizer, philosophy content creator @ ghostlessmachine.com. Originally from Rio, based in Bucharest.

How others can help me

  1. Help me find funds to do either community building or content creation full-time.
  2. Give me feedback and ideas for my podcast.
  3. Suggest guests for my podcast.
  4. Help me find a high impact job in tech.

How I can help others

  1. I can share my experience promoting secular humanism and effective altruism in Brazil. I'm the president of Humanistas Brasil and, although it's challenging to run a group from abroad, we're at least managing to keep the movement alive.
  2. I can share my experience promoting secular humanism and effective altruism in Romania, where I started the first EA group from scratch.
  3. I can host you on my couch in Bucharest.

Comments (15)

The "illicit financial flows" link is broken. Does anybody know where I can get more info on this?

"these are very difficult and emotionally costly conversations to have"

I don't think this has to be the case. The emotional cost can usually be avoided with sufficiently impersonal procedures, such as rating the application and having a public guide somewhere with tips on how to interpret the rating and a suggested path for improvement (e.g. talking to a successful grant recipient in a similar domain). A "one star" or "F" or "0" rating would probably still be painful, but that's inevitable. Trying to protect people from that strikes me as paternalistic and "ruinously empathetic".

As you imagined, the blog post does respond to your argument. If you don't think the response is satisfactory, I'd be curious to hear your thoughts :)

I agree that donating to an ACE top charity doesn't mean offsetting. I didn't mean to suggest that; I'm sorry if it sounded like that. What I mean is that it should in principle be possible to offset meat consumption. For the sake of brevity I didn't get into the practicalities of how this would actually work, but I can do that here:

Imagine a food delivery app that works like this:

  • When people buy vegan/vegetarian food, they have an option during checkout to donate to a meat offset fund. This option can be checked by default with a suggested donation amount.
  • When people order food with meat, they have the option during checkout to offset their meal, which basically means donating an amount equivalent to their order to the meat offset fund.
  • Sometimes, randomly, when somebody with meat in their order clicks the "proceed with order" button, they are prompted with a pop-up telling them: "You were randomly selected for a free vegan meal! If you accept the offer, your $X order will be cancelled and you will get a $X voucher that expires in an hour and can be used to order vegan food."

I think this app would come quite close to actually implementing a legitimate meat offsetting feature. Every time a meat eater takes the offer, they give up a meat meal and eat vegan instead.
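
To make the mechanics a bit more concrete, here is a minimal sketch of the random voucher step in Python. The function name, the order structure, and the 5% offer probability are all hypothetical choices of mine for illustration, not a spec; a real app would presumably tune the probability to the balance of the offset fund.

```python
import random

# Hypothetical parameters (illustrative choices, not a spec):
OFFER_PROBABILITY = 0.05        # chance that a meat order triggers the offer
VOUCHER_VALIDITY_MINUTES = 60   # voucher expires in an hour

def maybe_offer_vegan_voucher(order, offset_fund_balance):
    """If the order contains meat, randomly offer a vegan-only voucher equal
    to the order total, paid for out of the meat offset fund."""
    if not order["contains_meat"]:
        return None
    if offset_fund_balance < order["total"]:
        return None  # the fund can't cover this order right now
    if random.random() < OFFER_PROBABILITY:
        return {
            "voucher_amount": order["total"],
            "expires_in_minutes": VOUCHER_VALIDITY_MINUTES,
            "restriction": "vegan items only",
        }
    return None

# Example: a $12 meat order, with $500 currently in the offset fund.
print(maybe_offer_vegan_voucher({"contains_meat": True, "total": 12.0}, 500.0))
```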

Hi Richard, thanks for the input :) That's a fair point, and I have considered it. The problem is that, after meditating on this, talking to friends, etc., I just cannot bring myself to believe that I am more confrontational than average, or even than the average community organizer. I just don't see evidence of this beyond this incident. This is a difficult thing for me to say because I feel that I will be perceived as stubborn, as somebody who is engaged in motivated reasoning, etc. But if I said anything else I would be lying. I don't know if I'm the only one who feels this, but sometimes I fear that we are creating an environment in EA where people don't have the space to respond sincerely to criticism. I sometimes feel a bit like I'm forced to accept any criticism immediately, without questioning it, because that's what it means to have a scout mindset. This cannot be a good thing. There must be a balance between resisting criticism too much and resisting it too little.

Besides, even if I am a bit more confrontational than the average organizer, I'm not convinced that I should give up and choose something else that's a better fit for me. I'm not perfect at anything; no matter what path I choose, I will have to work on myself in some ways to become better at that job. I would only give up on a path if I were a sufficiently bad fit, and I don't think I'm bad enough at community building that I should just give up on it. Moreover, at this point I'm in pretty deep with community organizing. If I stop now the community will die. I considered this possibility with people from the community and they encouraged me to continue. The community is growing, there are often new people who show up excited to the meetups, and it feels unreasonable to disappoint everyone and stop everything just because two people who don't know me concluded that I am too confrontational after one isolated incident.

Also, I am a rather insecure and risk-averse person in general, so usually when I'm excited and confident about something it actually means I can do a good job at it. I guess I can accept that it's a bit useless to spend too much time trying to figure out whether or not I really am more confrontational than is ideal for a community organizer, but I think the implication is that I should look for ways to be friendlier and more agreeable; after all, it can't hurt to improve in those dimensions, no matter how unconfrontational I may already be.

Hi Harrison, thanks for the detailed feedback. I take your point and I will try to edit the article to make it less "shocking", since there seems to be consensus that I went a bit too far. There are a couple of considerations though that I think might be relevant for me to raise. I'm not trying to excuse myself, but I do think they provide more context and might help people understand why I wrote the article in this style.

  1. The only reason why I brought up rape in the article at all was that the vegans who opposed my argument for meat offsets explicitly used the "rape argument": If meat offsets are permissible then rape offsets are permissible. Rape offsets can't possibly be permissible because rape is too shocking and disgusting. Therefore, meat offsets are not permissible. I could have used another example rather than rape, but the whole point of using the rape analogy is that rape is shocking and evokes moral disgust in most of us. If I had used another example, I felt it would be dishonest. I was afraid that it would look like I was evading their argument. Don't you think this is a reasonable concern? How could I avoid the "rape" example without looking like I'm evading their argument?
  2. Sometimes, when discussing moral philosophy, I find it important to evoke some degree of shock or disgust. Again, I write to the general public, and there are many people in the lay audience who casually espouse relativistic views, but if I press them hard enough with sufficiently gory examples, they agree that certain things are "objectively bad". But I guess I have now learned to avoid gender-based violence in my examples. I do think "murder" is not good enough though. Would "torture" or "child decapitation" be OK? Or still too much?
  3. I don't have an official diagnosis, but I've been called autistic many times in my life, and after reading about the topic I concluded that people might be on to something. I'm a typical nerdy IT guy who struggled with social skills for most of my youth, and I've never been particularly good at reading how people feel, predicting how they're going to react to certain things, etc. With time, however, I've learned how to mask my weirdness by following certain simple algorithms; I now have a very active social life and I would say I get into conflict less often than the average person. I'm just saying this because I've noticed that people often assume that I use shocking language because I am callous and insensitive and don't care about how people feel, but the truth is that I do care about how they feel, I just fail to predict how they will feel. Sure, at the end of the day the harm caused might be the same, but I do hope people will see this as a mitigating circumstance, because a person whose heart is in the right place is more likely to improve their behavior in the future in response to feedback.
  4. Another factor that I think is tricky is cultural differences. So far my experience of EA is that the cultural norms are very much dictated by the US/UK, because this is where most people are and it's only natural that these cultural norms come to dominate. In progressive circles in the US/UK it seems that it has become mainstream to believe that people should be protected from any potential discomfort, that words can be violence, etc. Jonathan Haidt calls this the coddling of the American mind. I don't want to argue here that people should be more resilient. I haven't read enough about this so I prefer to refrain from judgement and remain agnostic. But my point is that in other cultures this phenomenon is not so mainstream. In Romania, for example, people are comparatively callous and there is some tolerance for this in the culture, even in progressive circles. My girlfriend, for example, read the article and didn't say anything about it being too graphic or callous. Sure, American culture influences both Brazil (where I'm originally from) and Romania (where I've been living for 8 years), but I think it's a bit unfair to expect people from everywhere to flawlessly respect the sensibilities of the Anglosphere without ever making any mistake.
  5. Besides, there is the even more complicated issue of subcultures. I'm into extreme metal and gory horror movies, and people in these communities have a different relationship to violence. We tend to talk about it callously and the less triggerable you are the more metal you are. I've also been active in the secular humanist movement, where many people identify as "free-speech fundamentalists", and there is more tolerance for "offensive content" than in other more mainstream progressive movements. Because of the strong rationalist component of EA, I've always assumed it overlapped a lot with secular humanism, but lately I am realizing that this overlap is a bit smaller than I assumed it to be.

Again, I'm not saying these things to excuse myself; I appreciate the feedback and I will adjust my behavior in response to it. At the end of the day I will always have to adopt one imperfect set of cultural norms or another, so if I want to get more involved in EA I might as well adopt EA norms. I guess I just felt the need to explain where I'm coming from so that people don't leave with the impression that I'm a callous person who doesn't care about how others feel. I made a mistake, I failed to predict that my article would be seen by EAs as too callous, and hopefully with this new data point I can recalibrate my algorithms and minimize the chances that I will make the same mistake in the future. I cannot promise I will never make a mistake again, but I still hope my reputation won't be forever damaged by one honest mistake.

PS: What is the infamous Robin Hanson post? I'm curious :)

My hypothesis is that focusing on donations is a good strategy in Romania because of the tax incentives I mentioned. Because this is a new project, I am approaching everything experimentally. The plan is to test my hypothesis by estimating how many hours per week I spend on activities related to fundraising, and then measuring how much money I raised at the end of those six months. If the amount of money per unit of effort seems too small, then I will conclude that my hypothesis was wrong. If the amount is decent, then I will conclude that it was right. Of course, it's hard to put a number on what "decent" is in advance. It's also worth noting that I expect this effort to be cumulative: if during year 1 I make effort E1 and money M1, I would expect to have M1 again in year 2 without all that effort, because once we make it onto the list of NGOs a company contributes to, it's easy to stay there. Therefore, even a modest return at the end of six months can be enough encouragement to continue the experiment.
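
To illustrate the kind of back-of-the-envelope evaluation I have in mind, here is a tiny sketch; all the numbers (hours per week, amount raised) are made up for illustration and are not estimates I'm committing to.

```python
# Back-of-the-envelope evaluation after six months; all numbers are made up.
hours_per_week_on_fundraising = 5
weeks = 26
money_raised = 6000  # USD raised over the six months (hypothetical)

hours_spent = hours_per_week_on_fundraising * weeks
print(f"${money_raised / hours_spent:.2f} raised per hour of fundraising effort")

# If recurring corporate donations mostly renew on their own, year 2 should
# yield roughly the same amount again at a much lower cost in hours.
```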

The only case I can make in favor of this hypothesis a priori is to say that the people in my company have experience both raising funds and offering them and that they estimated that I could easily raise $5-10k in my first year. And I think their estimate is plausible because I think I could easily find 10 members in our community who can convince their companies to donate $1k per year to us. Then as the community grows we should develop relationships with people in more companies, perhaps bigger ones that can donate larger sums, especially if we focus on tech outreach.

This is another reason why I think these two goals are actually interconnected: the activities involved in achieving one goal are also helpful in achieving the other. In a way, if I pursue only the tech goal, I feel I will be wasting opportunities to raise funds. Every developer I attract to EA is somebody I can both add to a database of EA-aligned developers and ask to convince their employer/company to donate to us. The actual hard part of the work is attracting these developers to EA; once they're part of the community, asking them to talk to their employers is the easy part.

I mean, sure, I could be wrong about all that, but this is something we can only find out if we try it. Do you think my hypothesis is so implausible that it's not worth testing?
