If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


Open threads are also a place to share good news, big or small. See this post for ideas.

Caro

I've found the EA Forum really lively and thriving these last few months. It's a real pleasure hanging out here! I also feel more at ease commenting and posting thanks to how alive and welcoming the community is. Congrats to the CEA team for doing an awesome job of developing a great space for EA discussions!

Hello! For as long as I can remember, I have been interested in the long-term future and have asked myself whether there is any possibility of steering the future of humankind in a positive direction. Every once in a while I searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.

Since then, I have often taken a look at this forum, and I have read 'The Precipice' by Toby Ord. I am not quite sure whether I agree with every belief that is common among EAs. Nevertheless, I think we can agree on many things.

My highest priorities are avoiding existential risks and improving decision making. I also think about the consequences of technological stagnation, and about whether there are events far in the future that can only be influenced positively if we start working on them soon. At the moment my time is very constrained, but I hope I will be able to participate in the discussion.

Hello! I'm one of the folks who only recently found out about EA but have been hacking my own independent version of it for years... It's incredible to find what I had hoped existed after so long. (My portal key was this podcast episode: https://samharris.org/podcasts/228-doing-good/)

It feels like walking into a room of strangers and realizing, 'Now these are my people'. The most parallel experience I've had was being invited to an Ecoversities gathering a few years back, arriving in Costa Rica and within a few hours feeling a sense of soul belonging. (If you believe in rooted, whole-self development... here they are: http://ecoversities.org/)

My passion is supporting young people in finding their path and their autonomy - and I'm in the process of interviewing folks who feel they've found a way to live an inspired, impactful life about how they got there. Anyone here want to participate?*

*Disclaimer: You don't have to have everything figured out; I'd love to talk to most anyone who's found their way here.

While in my last year of high school, I independently came up with the idea that we should try to maximize aggregate utility over time, and wrote it down as an equation. A few weeks later, I heard about EA from a teacher.
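(The commenter's own equation doesn't appear in the thread. Purely as an illustration of what such an objective conventionally looks like - the discount factor $\beta$ and per-person utilities $u_i$ are my assumptions, not the commenter's notation - "maximize aggregate utility over time" is often written as:)

```latex
\max \; \sum_{t=0}^{\infty} \beta^{t} \sum_{i \in P_t} u_i(t)
```

where $P_t$ is the set of people alive at time $t$ and $0 < \beta \le 1$ is a discount factor (with $\beta = 1$ corresponding to no time discounting).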

I would love to see how you'd solve that equation now, compared to when you first wrote it. Glad your teacher knew where to point you!

I felt the same thing when I discovered (and met) EAs :-). Welcome!

Hi everyone, I'm Marta (she/her), a PhD student at the University of Groningen in the Netherlands. I study the psychology of creativity and the mechanisms through which people generate creative ideas. I write about how effective altruists, advocates, and activists can use creativity to make the world a better place, and I share findings from creativity and animal advocacy research on my website, www.bullshitfreecreativity.com.

I discovered EA almost four years ago, when I started my PhD. I wanted to find out how my research could contribute to decreasing animal suffering and ending factory farming, so I joined a local EA group here in Groningen, started reading books, and began listening to podcasts. I also gave a talk about creativity and factory farming at the Conference on Animal Rights in Europe 2019: https://www.youtube.com/watch?v=KK7113XPkLg&t=1405s

My WANBAM mentor suggested that creativity-related knowledge might be valuable in the EA community, so I decided to join the Forum! I'm also curious to read more about topics related to factory farming, social change, and behavioural change, and perhaps get some inspiration for my future research :)

Welcome Marta! :)

Bill Gates has been under fire for inappropriate behavior toward women. While I admire Bill Gates as an entrepreneur and philanthropist, I don't condone those actions and I hope this community doesn't either.

Personal views, not speaking for/about CEA.

Epistemic status: I haven't read much about this story and I don't have a considered opinion about the allegations. My prior is that these things usually turn out to be true after more investigation, and the below was written from the perspective of "I assume that Gates did in fact behave inappropriately in some way."

The link is paywalled for me, but I'm disappointed to see the news. (Though I'm happy to see that Bill and Melinda say they plan to continue the Gates Foundation's work.)

This kind of incident often makes me think of this quote from Holden Karnofsky:

In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities; I wouldn’t skip an important family event (even one that had little meaning for me) in order to save time for GiveWell work.

I think this is a very common position within EA — that we should behave ethically in "standard" ways and avoid using altruistic work to cover or excuse unethical behavior. (See this great comment from Julia Wise or "Everyday Longtermism" for more on that view.)

I don't remember seeing anyone in the community condone someone's unethical behavior on the basis of their impact (vs. contesting whether the behavior itself was unethical, as in debates over Peter Singer's most controversial views). Are there any examples I'm missing? 

*****

The story also makes me think of Thomas Pogge, who was involved in EA early on but doesn't seem to have been involved after being accused of sexual harassment. I'd guess that wasn't a coincidence, though I only know my own story: the Yale EA group, which I led at the time, dropped him as an advisor after this happened. (It never occurred to us to defend his behavior.) 

This isn't to say that EA should avoid future contact with Gates. But I don't expect to see anyone say "it's fine he did that stuff, because he saved so many lives".

First, I'm not condoning Bill's behavior. My intuition is that it is good to be trustworthy, not to sexually harass anyone, etc. That said, I didn't find any of the arguments linked particularly convincing.

"In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it." 

Sure, generally you shouldn't be a jerk, but generally being kind isn't mutually exclusive with achieving goals. Beyond that, what does 'overlay' mean? The statement is quite vague, and I'm actually sure there is some bar past which he would skip a family event. I'm sure 99%+ of his work with GiveWell is not time-sensitive in the way a family event is, so this statement somewhat amounts to a perversion of opportunity cost. In fact, Holden even says in the blog post that nothing is absolute. It's also potentially presentist: I would love for people to treat me with respect and kindness, but I would probably prefer that past people had just built infrastructure.

And again with Julia's statement: she's just saying "Because we believe that trust, cooperation, and accurate information are essential to doing good." OK, that could be true, but isn't that the core of the question we're asking? When we talk about these types of situations, we are to some extent asking: is it possible that some person or group did more good by not being trustworthy, cooperative, etc.? Maybe this feels less relevant for EA research, but what about EAs running businesses? Microsoft got to the top with extremely scummy tactics, and now we think Bill Gates may be one of the greatest EAs ever. That isn't supposed to be a steel counterargument; I'm just pointing out that it's not that hard to spin a sentence that contradicts that point.

And to swing back to the original topic: it seems extremely unlikely that sexually harassing people is ever essential, or even helpful, to having more impact. So it seems fair to say "don't sexually harass people," but not on the grounds that "you should always default to standard generosity, only overlaying your biased agenda on top of the first level of generosity." However, what about having an affair? What if he was miserable and looking for love? If the affair made him 0.5% more productive, there is at least some surface-level utilitarian argument in its favor. The same goes for his money manager: if he thought Larson was going to make 0.5% higher returns than the next best person, most of which goes to high-impact charity, you can once again spin a (potentially nuance-lacking) argument in favor. And what is the nuance here? The nuance is about how not being standardly good affects your reputation, culture, and institutions, and hurts people's feelings.

*I also want to point out that Julia is making a utilitarian-backed claim (that trust, etc. are instrumentally important), while Holden is backing some sort of moral pluralism (though maybe also endorsing the kindness/standard-goodness-as-instrumental hypothesis).

So while I agree with Holden and Julia generally on an intuitive level, I think it would be nice if someone actually presented some sort of steelmanned argument (maybe someone has) for what types of unethical behavior could be condoned, or where the edges of these decisions lie. The EA brand may not want to be associated with that essay, though.

It feels a bit to me like EAs are often naturally not 'standardly kind', or at least are not utility-maximizing, because they are so awkward or bad at socializing (in part due to the standard complaints about dark-web, rationalist types), which has bad effects on our connections and careers as well as on EA's general reputation. So Central EA is saying: let's push people in the direction of having a reputation for being nice, rather than thinking critically about the edge cases, because it will move our group closer to the correct value of not being weirdos and not getting cancelled (plus there are potentially more important topics to explore, given that being kind is a fairly safe bet).

This is a good comment! Upvoted for making a reasonable challenge to a point that often goes unchallenged.

There are trade-offs to honesty and cooperation, and sometimes  those virtues won't be worth the loss of impact or potential risk. I suspect that Holden!2013 would endorse this; he may come off as fairly absolutist here, but I think you could imagine scenarios where he would, in fact, miss a family event to accomplish some work-related objective (e.g. if a billion-dollar grant were at stake).

I don't know how relevant this fact is to the Gates case, though.

While I don't have the time to respond point-by-point, I'll share some related thoughts:

  • My initial comment was meant to be descriptive rather than prescriptive: in my experience, most people in EA seem to be aligned with Holden's view. Whether they should be is a different question. 
    • I include myself in the list of those aligned, but like anyone, I have my own sense of what constitutes "standard", and my own rules for when a trade-off is worthwhile or when I've hit the limit of "trying". 
    • Still, I think I ascribe a higher value than most people to "EA being an unusually kind and honest community, even outside its direct impact".
  • I don't understand what would result from an analysis of "what types of unethical behavior could be condoned":
    • Whatever result someone comes up with, their view is unlikely to be widely adopted, even within EA (given differences in people's ethical standards)
    • In cases where someone behaves unethically within the EA community, there are so many small details we'll know about that trying to argue for any kind of general rule seems foolhardy. (Especially since "not condoning" can mean so many different things -- whether someone is fired, whether they speak at a given event, whether a given org decides to fund them...)
    • In cases outside EA (e.g. that of Gates), the opinion of some random people in EA has effectively no impact.

All in all, I'd rather replace questions like "should we condone person/behavior X?" with "should this person X be invited to speak at a conference?" or "should an organization still take grant money from a person who did X?" Or, in a broader sense, "is it acceptable to lie in a situation like X if the likely impact is Y?"

As a very little boy I learned of my patron saint's story: the child saint Dominic Savio intervened between two warring families (think of "Romeo and Juliet") and brought them to sensible dialogue.
That touched me. Our soon-to-be Prime Minister had been awarded the Nobel Peace Prize for having organized UN military forces to intervene in the Suez Crisis. I was very young, but that made sense. What I couldn't explain to myself: why was France inserting itself to violently re-impose colonialism in Indochina after WWII? (As a French Canadian, that came home to me. Also, I was born 5 May 1954; Dien Bien Phu surrendered 2 days after my birth.) It foreshadowed the ghastly war to come.

Hungary ... Soviet tanks rolling in to crush democracy.
Chile ... no assistance in overthrowing the mafia regime, but an invasion to crush the new administration, which effectively put that people's history into the Soviet sphere.

1960s ... murderous in every way.

I turned to the Canadian military as a way of turning away from bourgeois society and culture. (My thinking was simply this: perhaps our society would be less bloody-minded if we effectively interdicted Soviet assets and drove them back.)

I didn't tangle with consumerism and the abuse of Freudian psychiatry. Above my pay grade!
I did tangle with "culture wars" ... as far back as 1970s.
All I could think of was how everyone around me, schoolyard and later, always indulged or at least ignored bullies and other villains.

"Malicious" might be rare. (Trump, however charismatic, is just a gifted psychopath. Nothing mystical here.) But "malignance" is not. Sick cultures produce sick individuals.

Bodhisattva aspiration is never other than simply sensible!

mangalam
--KC