Ways of improving one's empathy and emotional intelligence?

As someone who is not very empathetic by nature, I found Authentic Relating practice (check out, for example, www.authrev.org) very helpful for cultivating empathy, as it literally focuses on and trains "getting someone else's world." It also trains awareness of, and the ability to share, your own emotional and somatic experience, which is central to emotional intelligence more broadly. I liked it because it was fun: it felt very connecting (I would leave events with a feeling similar to having cuddled with people, even when no cuddling had taken place - oxytocin something something), and I found exploring my own and others' experience rich and interesting.

Nonviolent Communication (NVC) (check out the book by Marshall Rosenberg, and also the online pay-what-you-want Communication Dojo classes by Newt Bailey) is also very helpful for both empathy and emotional intelligence, as it systematically cultivates empathy for your own and others' feelings and the needs behind those feelings. It was less rewarding as a recreational activity than Authentic Relating, but Newt is hilarious and lovely, so his classes are really fun (and you can attend them on a one-off basis without committing to a series).

Both those practices have had a major impact on my ability to navigate relationship challenges (romantic and familial) with less anger and irritation and more success, as well as my own well-being outside of a relational context.

Lastly, I've read in passing (https://docs.google.com/document/d/1nrHi6vTRJI_MELW_gtTiEaaYwK8I82ytMIpHbmM0KNY/edit?usp=drivesdk) that metta (loving kindness) meditation improves empathy. In my experience, it does cultivate a feeling of warm friendliness and care towards other people, but it is less helpful in providing insight than the other practices I mentioned. My experience with it is more limited, though. A neat bonus is that once you get some experience with it, it's like a "muscle" you can use in situations where you might otherwise feel anxious or irritable - e.g., I've silently wished people loving kindness while going through an airport, while listening to a crying baby on a plane, in an annoying or frustrating meeting at work, and before parties where I didn't know many people, and it really improved the way I felt. It can also feel more immediately rewarding than some other types of meditation.

All these sound super hippie, I grant you, and you may have to hold your nose initially if you're allergic to that sort of thing (I did), but they're well worth it.

College Public Service Pipeline

You may want to look at Teach for America and Venture for America as potential models.

Bayesian Mindset

This was really great. As someone who has been lurking around LW/EA Forum for a few years but has never found reading the Sequences the highest-return investment compared to other things I could be doing, I very much appreciate your writing it.

A thought on something which is probably not core to your post but worth considering:

You said:

The dream behind the Bayesian mindset is that I could choose some set of values that I can really stand behind (e.g., putting a lot of value on helping people, and none on things like “feeling good about myself”), and focus only on that. Then the parts of myself driven by “bad” values would have to either quiet down, or start giving non-sincere probabilities. But over time, I could watch how accurate my probabilities are, and learn to listen to the parts of myself that make better predictions.

I think it's perhaps... not feasible, or has long-term side effects, to assume that if you currently care about feeling good about yourself, you can simply decide you don't and jump straight to ignoring that need of yours. I would predict that taking that approach is likely to result in resistance to making accurate predictions or doing the things you endorse valuing, and/or mysterious unhappy emotions because your need to feel good about yourself is not being met.

It seems to me that it would be better to use some method like Internal Double Crux to dialogue between the part of you that wants to generate good feelings by producing skewed predictions and the part of you that wants to help people, and to find a way to meet the former part's needs that doesn't require making skewed predictions. One example of such an approach could be feeling good about yourself for cultivating more accurate predictions. I imagine that's implicit in the approach you describe, but it may be more effective to hold explicit space for the part that wants to feel good about itself, rather than making it wrong for generating skewed predictions.

SoGive is hiring! Analysts wanted to lead evaluation of charities

Makes sense, thanks! It may be worth highlighting that more proactively when you do outreach within EA (and there may be nuanced ways to communicate that even generally).

SoGive is hiring! Analysts wanted to lead evaluation of charities

"Our approach has similarities with that followed by charity analysis organisations like GiveWell and Founders Pledge."

To put it bluntly, why should someone go to (work for, consult the recommendations of, support) SoGive vs other leading organizations you mention? Does your org fill a neglected niche, or take a better approach somehow, or do you think it's just valuable having multiple independent perspectives on the same issue?

An 80k for organisations?

There are non-profit consultancies like FSG, Bridgespan, Dalberg, and the tiny Redstone Strategy Group which do this sort of work. I believe they themselves are for-profit and so charge significant fees. I'm not familiar with anything within EA, but then I'm somewhat on the periphery of EA, so there could well be something that exists. I agree that this seems like an intriguing place for organizations with EA expertise to add value!

Time-sensitive, potentially high-impact opportunity to help get passport holders out of Afghanistan

Here's a direct link to the form, for people who don't want to hunt through the Twitter thread: https://docs.google.com/forms/d/e/1FAIpQLSfitym3vRQKDjEMNaK3j5D7SCYVbIhBruIMClUaK0DkP9uO-g/viewform

Thank you for this post and the context on the credibility and impact of this effort!

How can I help first-year university students to settle on their academic interest?

I like Designing Your Life by Bill Burnett and Dave Evans - one of the best books I've read on the topic.

Charities I Would Like to See

Given the recent post on the marketability of EA (which went so far as to suggest excluding MIRI from the EA tent to make EA more marketable - or maybe that was a comment in response to the post; I don't remember), here's a brief reaction from someone who is excited about Effective Altruism but has various reservations. (My main reservation, so you have a feel for where I'm coming from, is that my goal in life is not to maximize the world's utility but, roughly speaking, to maximize my own utility and end-of-life satisfaction, and therefore I find it hard to get excited about theoretically utility-maximizing causes rather than donating to things I viscerally care about. I know this will strike most people here as incredibly squishy, but I'd bet that much of the public outside the EA community has a similar reaction, though few would actually come out and say it.)

  • I like your idea about high-leverage values spreading.
  • The ideas about Happy Animal Farm / Promoting Universal Eudaimonia seem nuts to me - so much so that I actually reread the post to see if it was a parody. If they gain widespread popularity in the EA movement, I will move from being intrigued by EA and excited to bring it up in conversation to never raising it with all but the most open-minded / rationalist people, or raising it in the tone of "yeah, these guys are crazy, but this one idea they have about applying data analysis to charity has some merit... Earning to Give is intriguing too...." I could be wrong, but I'd strongly suspect that most people who are not thoroughgoing utilitarians find it incredibly silly to argue that creating more beings who experience utility is a good cause, and this would quickly push EA away from being taken seriously in the mainstream.
  • The humane insecticides idea doesn't seem AS silly as those two, but it places EA in the same mental category as my most extreme caricature of PETA (and I'm someone who eats mostly vegan, and some Certified Humane, because I care about animal welfare). I don't think insects are a very popular cause.

Just my 2 cents.