My best tool is to become a connoisseur of what it's like to be shifting into reactive / fire-fighting mode, and make a craft of switching back to prioritizing.
(responding to this post has the sort of dizzy pulling-away feeling that reactivity has, so I'm going to yolo submit and try to shift back to proactive mode)
Thanks for writing this up! It also strikes me as a good overview and will probably be my default link if people ask me what ops is like. I like the specific examples across different orgs. It varies a lot!
My ops buckets:
Ah - maybe your post is making the point "if they would make a good senior hire, it seems fine to hire them in a junior position". Maybe I was getting confused by the term; I've seen people labelled 'overqualified' when they're above average on a few dimensions but not all of them.
I'd have a harder time steel-manning a counterpoint to that. Maybe something about the role not being stimulating enough, thus risking turnover... but that doesn't hold much water in my mind.
Would you agree that, if Bob was more politically skilled, he would be a better fit for this position?
Yes... and no?
Yes: it would be better in terms of 'overhead required'. If Bob foresees Carol's objections and takes her out to lunch and convinces her, this could save a bunch of management/board time.
... and no: maybe Carol's concerns were legitimate and Bob was just very convincing, but not actually right. Fade to: Bob becomes CEO and the org is thriving but it's not really following the original mission anymore.
I'm guessing Steve Jobs wanted p...
Thanks Andrés, this helped me get oriented around the phenomenological foundations of what y'all are exploring.
Apparently my comment won a comment prize, which nudges me to carry on this conversation.
In general, I'm skeptical about "putting" people in leadership positions, especially when their colleagues don't want to be led by them.
What if Bob has an ambitious project he's excited to run, and 4 out of 7 of his colleagues are excited by this project and want to be led by Bob on this, and Alice thinks it couldn't hurt to try, but Alice's cofounder Carol really doesn't like the idea and 2 of the 3 board members also don't like it? Carol et al. surface objections like...
One angle on how this could go poorly is something I call 'failure cascades' (a la information cascade). I'm excited that this has been incorporated as a concept in the EA Ops channel, and I think it would be valuable for EA consultants to keep it in mind.
Roughly, a failure cascade could be:
> An EA consultancy conducts a search for a really good immigration law firm that they can use when helping EA orgs with immigration. They find a good law firm and proceed to help a dozen EA orgs with visas. Unfortunately it turns out this law firm misunderstoo...
Just got off the phone with Fidelity Charitable, they accept ETH with no minimums. (Also the two agents I spoke to were smart and efficient, average wait time of 8 min)
FYI @MichaelDickens I just heard from Vanguard Charitable:
> At this time, Vanguard Charitable only accepts contributions of Bitcoin and Bitcoin Cash that are valued over $100,000.00.
Might be worth mentioning in the post.
Another point I've heard made a few times (and at-least-a-little agree with):
Let's say Bob transitions from COO at a mid-sized org to finance manager at a small org. Bob has done finances before, and within a few months has set up some excellent systems. He now only needs to spend 10 hours a week on finances, and tells his manager (Alice) that he's interested in taking on other projects.
Alice doesn't currently have projects for Bob, but Alice and Bob saw this coming and set clear expectations that Bob would sometimes run out of things to do. Bob was fine w...
Ah whoops, thanks for the clarification. I'm glad that delineation was made during the session!
Hmm so maybe some weaker point: perhaps banners like 'atheism' and 'feminism' have the property 'blend me with your identity or consequences', whereas EA doesn't as much, and maybe that's better. ¯\_(ツ)_/¯
Anyway, thanks for the post Jonas, I agree with many points and have had similar experiences.
> at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists"
Small note that this could also be counter-evidence: these are folks who are doing a good job of 'keeping their identity small', yet are also interested in gathering under the 'effective altruism' banner. (edit: never mind, it seems like they identified with other -isms.)
Somehow the EA brand is threading the needle of being a banner and also not mind-killing people ... I think.
Would EA be much worse if we removed ...
> We’re particularly keen to reconnect with people who have been active EAs in the past but have drifted away from the community.
I have a number of friends that fall into this bucket, but when I think of inviting them I hesitate because I'm not sure what value they would get from it. Does anyone have a sense why attending this event would be good for someone who has 'drifted away from the community'?
+1 to stretching and mobilization, which helped me. Rock climbing helped my partner.
(The best theory I've found so far, though it's hard to tell if it's true.) Oftentimes muscle injury prevention is helped by teaching your brain/body how to activate the muscle in healthy ways (in addition to rest/stretching/etc.). Sometimes much of the problem is that your brain/muscles are trying to protect other muscles that are being used poorly, and this compounds (the 'helping' muscles get overworked, then other muscles try to save those ones, etc.).
One thing that helped a lot for me was using a keyboard with thumb keys (especially replacing keys where I typically used my pinkies, like backspace, enter, cmd/ctrl, and shift). Faster to learn than Dvorak, and imo a more effective intervention.
I used an Ergodox EZ, there's also Keyboardio, Kinesis, and others.
This feels like an area where our society is insane: our strongest, most dextrous fingers share A SINGLE KEY on the keyboard.
Thanks for putting this together Michelle, and congrats! Would love to see something like annual updates :)
Hey all,
The EA Book Club will be meeting before this assembly (2:30pm Eastern time) to discuss Will MacAskill's new book, Doing Good Better. RSVP Here.
p.s. I'm excited to hear about projects during the Assembly, and possibly share my own (just applied!)
Does anyone know of good investigations of the impact of technological unemployment? Any EA people/orgs that have looked at it?
One aspect of my moral uncertainty has to do with my impact on other people.
If other people have different moral systems/priorities, then isn't 'helping' them a projection of your own moral preferences?
On the one hand, I'm pretty sure nobody wants malaria - so it seems simple to label malaria prevention as a good thing. On the other hand, the people you are helping probably have very different moral tastes, which means they could think that your altruism is useless or even negative. Does that matter?
I think this is a pretty noob-level question, so maybe you can point me to where I can read more about this.
I was unaware there was a desktop app. There is one, and they just released global shortcuts, which has helped a lot!
I'll probably write a post on this, but I think it would disappoint people if it's too weird ("I'll just stop reading!"). I like my familiarity points system, but when you publish everything online it might be prudent to consider weirdness. Perhaps a shareable document for those interested?
Thank you for posting this Peter, I find these useful!
A couple questions:
1) How do you use Toggl? Desktop app? Mobile? Do you also use RescueTime? I've found it difficult to use Toggl when I'm away from the computer, during social/break/food/cleaning/interruptions/errands/other time. Maybe I just need to get in the habit of pulling out my phone and starting/stopping the timer?

2) How do you do food? I spend a lot of time on food (buying, cooking, eating) each week, and I enjoy it, but I'd be open to time-saving techniques (especially when it comes to veggies).
Thanks again!
Commenting here to express my willingness to participate, however this pans out. I expect my contribution won't be more than a couple hundred dollars.
Thanks Robert, this is great!
Do you listen to any media while you're exercising? Music? Books on tape?
I prefer Bitton's, because otherwise it seems like "good" is modifying "figuring out".
Cool topic, but I'm still trying to figure out what EA is about! I have a feeling that I'll be able to articulate a blind spot eventually.
Neat!
If you're looking for cool socially-oriented for-profits in the developing world, maybe we could open up the question to the FB group.
This sounds like a really neat project. A few questions:
Perhaps Ben meant social entrepreneurship, which is often geared towards the developing world? Forbes 30 under 30 has some ideas for what those projects can be. If you don't like those, I recommend filtering through the Ashoka fellows. The most recent Ashoka fellow started a foundation/for-profit combo called Soronko Solutions, which teaches kids programming and sells tech solutions to startups.
--
In general, if you're going to run a normal for-profit business in the developing world that sells products or provides a service, you're probably not going to ma...
Nice summary Ryan.
Indeed, I think the biggest challenge in terms of spreading EA is what I call "extended responsibility." Many people have difficulty taking responsibility for their own lives, let alone their family or community. EA asks you to take responsibility for the whole world, and then carry that responsibility with you for your whole life. Holy crap.
After that, the next big ask is for rational methodologies. Even if people take responsibility for their kids, they probably will rely on intuition and availability heuristics.
So discussion ...
Hey Peter,
I think it's important to highlight that this article is about weirdness in the context of advocacy.
While I enjoyed the message, I'm concerned about the negative approach. People (especially weird ones) tend to be afraid of social rejection, and setting up a framework for failure (spending too many weirdness points) instead of a framework for success (winning familiarity points) can create a culture of fear/guilt around one's identity. I believe this is why some EAs had a cautious reaction to this article.
I love connecting with people, and I've ...
Thanks for flagging this Neil, good find! We've put this on the medium/low priority list for ARC & METR, since the EV is pretty low given our donors (but I think positive!).
One little puzzle: given that METR is trying to stay independent from major AI labs, do we reject donation matches from Google?