All of Kevin Ulug's Comments + Replies

The Double Up Drive is now live, with donation matching and the possibility of a tax receipt when donating from a variety of countries. https://doubleupdrive.org/2025-match-drive-preview/

Since we're on this topic, I recently saw that the Happier Lives Institute estimated that the best charities (based on WELLBYs per dollar) are about 100x more cost-effective than the average charity: https://www.happierlivesinstitute.org/world-happiness-report/

I have given a shallow recap of their medium-depth report, so take it with a grain of salt.

For what it's worth, https://forum.effectivealtruism.org/users/aaronhamlin has been engaging with the EA community on the topic of electoral reform.

The "free mailing list for new events" aspect of following a city group (depending on your notification settings) could be pretty useful.

I wonder if we could make posts in a city group and have them emailed to group followers (depending on their settings), basically as a mailing list? I don't currently have anything like a mailing list, and our group already has an increasing number of platforms - a mailing list would be one more. Signing up for the forum and following the group is a bit more work than signing up for a mailing list, but it would save me one additional platform and potentially a monthly fee, etc.

As a group organizer, I want to know how many people are following our city group on the forum, and to find out when a new person starts following it. E.g., how many people are following our group now compared to before a recent EAGx event?

As a group organizer, it might be nice to be able to DM people who follow our local group, though this may have privacy implications I have not thought through.

2
Sarah Cheng 🔸
I appreciate this suggestion, and the really helpful context! I'll add it to our backlog. The Groups features of the site haven't gotten any love in a while and I hope we can circle back to them soon.

It's also the case that the 10% Pledge is not the best course of action for everyone in the EA movement.

Putting an emoji by your name is a really blunt tool, and I'm not sure it's the right way to encourage people already interested in (or part of) EA to donate more.

Especially in the absence of other badges, my gut worry is that this leads to unhelpful social pressure (though I'm not sure what percentage of users have the emoji, etc.).

This also makes the EA Forum and online social spaces slightly more cult-like via increased social pressure.

A very half-baked thought: I wonder if we should encourage orgs to depend less on networking instead of encouraging applicants to network more? Networking seems to depend, at least partially, on a bias toward people you already know and therefore like more. I suppose it may also increase trust in an applicant if mutual contacts can vouch for them; I don't know where the balance of benefits and drawbacks lands.

It may be half-baked, but it strikes me as valid.

This is tricky, and for exactly the reason you would expect. The less networking is involved, the more fair/neutral/unbiased hiring tends to be, and the fairer the hiring, the higher-quality the employees you will hire (in expectation, of course). However, personal recommendations tend to result in higher-quality employees than open applications do.

I hate the idea that John Doe doesn't get a chance for a job simply because he couldn't attend a conference, or because he wasn't allowed into a g... (read more)

I appreciate the initiative and the helpful presentation of results! A lot of people want to work for an EA org, I think because this action seems extremely EA-approved while charting your own impactful career path seems very nebulous and daunting. I fairly frequently repeat something like "okay, but I want you to pay attention to the mountain of rejected EA resumes over here", so I appreciate this resource and its novel reporting on how people actually felt about the process.

Her alternative pathways to impact are volunteering for the EA DC group, donating to effective charities, and parenting two children who may someday have impactful careers.


I'm not sure whether 'alternative' was meant to be diminishing but, just in case, I want to say that donating effectively and organizing (and, possibly, other approaches) are fine and good and not merely a fallback. Not everyone in the EA movement is going to end up working for the EA movement.

Unfortunately I'm sick and bowing out, but the meetup is still on!

To help you find the group, at least one person should be wearing a shirt with the heart-in-lightbulb logo of effective altruism, and there should be a decent turnout (~8-10 people?) based on RSVPs from the various platforms we advertise the event. The group may be in the upstairs portion of the venue.

I agree that if you're already bought into moral consideration for 10^umpteen future people, that's longtermism.

3
Yarrow Bouchard 🔸
Sorry for replying to this ancient post now. (I was looking at my old EA Forum posts after not being active on the forum for about a year.) Here's why this answer feels unsatisfying to me.

An incredibly mainstream view is to care about everyone alive today and everyone who will be born in the next 100 years. I have to imagine over 90% of people in the world would agree to that view, or a view very close to it, if you asked them. That's already a reason to care about existential risks, and a reason people do care about what they perceive as existential risks or global catastrophic risks. It's the reason most people who care about climate change care about climate change.

I don't really know what the best way to express the most mainstream view(s) would be. I don't think most people have tried to form a rigorous view on the ethics of far future people. (I have a hard enough time translating my own intuitions into a rigorous view, even with exposure to academic philosophy and to these sorts of ideas.) But maybe we could conjecture that most people mentally apply a "discount rate" to future lives, so that they care less and less about future lives as the centuries stretch into the future, and at some point it reaches zero.

Future lives in the distant future (i.e. people born significantly later than 100 years from now) only make an actionable difference to existential risk when the estimated risk is so low that it changes the expected value math to account for 10^16 or 10^52 or whatever it is hypothetical future lives. That feels like an important insight to me, but its applicability feels limited.

So: people who don't take a longtermist view of existential risk already have a good reason to care about existential risk. Also: people who take a longtermist view of ethics don't seem to have a good reason to think differently about any subject other than existential risk. At least, that's the impression I get from trying to engage open-mindedly and charitably with thi... (read more)
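A rough way to formalize that "discount rate" conjecture (a sketch only - it assumes an exponential discount, which the comment above doesn't specify): let n_t be the number of lives in century t and let r ≥ 0 be the discount rate. The discounted value of the future is

V(r) = \sum_{t=0}^{\infty} n_t \, e^{-rt}

and the expected value of reducing extinction risk by \Delta p is roughly \Delta p \cdot V(r). For any r > 0 the tail of this sum is negligible, so only the next few centuries matter and ordinary risk estimates carry the argument; only at r = 0 do the 10^16 (or 10^52) hypothetical future lives dominate V, which is exactly the regime where very small estimated risks can still swing the expected-value math.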

Yes. I think your list of commonsense priorities is even more beneficial on the longtermist view. Factors like "would this have happened anyway, just a bit later" may still apply and reduce the impact of any given intervention. Then again, notions like "we can reach more of the universe the sooner we start expanding" could be an argument that sooner is better for economic growth.

The topics of working for an EA org and altruist careers are discussed occasionally in our local group. 

I wanted to share my rough thoughts and some relevant forum posts that I've compiled in this Google Doc. The main thesis is that it's really difficult to get a job at an EA org, as far as I know, and most people will have messier career paths.

Some of the posts I link in the doc, specifically around alternate career paths:

The career and the community

Consider a wider range of jobs, paths and problems if you want to improve the long-term future

My curre... (read more)

One takeaway, I think, is that these things, which already seem good under common sense, are much more important on the longtermist view. For example, I think a longtermist would want extinction risk to be much lower than what you'd want from a commonsense view.

1
Yarrow Bouchard 🔸
Does this apply to things other than existential risk?

I believe that was discussed in the episode with Spencer. Search for 'threatened' in the transcript linked here.
 

00:22:30 Spencer Greenberg

And then the other thing that some people have claimed is that when Alameda had that original split up early on, where some people in the effective altruism community fled, that you had somehow threatened one of the people that had left. What? What was that all about?

00:22:47 Will MacAskill

Yeah. I mean, so yeah, it felt pretty.

00:22:50 Will MacAskill

This last when I read that because, yeah, certainly didn't have a me... (read more)

This doesn't feel like a great response to me.

Keeping Absolutes in Mind - I think donating money is still somewhat underrated in discussions like this, though I was happy to see it brought up in several comments.

Consider taking the GWWC pledge or the TLYCS pledge (easier / more flexible) or some other pledge, if you feel like that would help with keeping motivation up.


You could also organize or contribute to a local group. Regular attendance at a local group could also keep motivation up (and would be a lot less costly for your budget).

 

Even a small donor can make a real impact for individuals direct... (read more)

The Nonlinear Library podcast reads upvoted posts on the EA Forum, LessWrong, and the Alignment Forum with an AI voice (that's not bad): Listen to more EA content with The Nonlinear Library

Some other career orgs:
 

And for what it's worth, 80,000 Hours has a bunch of global health & animal-related postings on their job board.

We could still use more short, casual videos to win tens of thousands of dollars for effective charities! See Project for Awesome 2023: Make a short video for an EA charity!

There's a subreddit: https://www.reddit.com/r/EffectiveAltruism/

I think the comments aren't exactly what you'd get on this forum but some of them are helpful and accurate.

2
JakubK
Thanks! 1st link is my doc, 2nd + 3rd + 4th are on Stafforini's list of EA syllabi, and the 5th link is in my doc.
1
Kyle Smith
Perfect!! Thank you.
4
High Impact Professionals
Yes! This is the right answer. We have a question about this on the sign-up form and plan on forwarding those interested to the EA Good Governance Project.

The EA subreddit is getting more participation from critics and/or people understandably upset about FTX. This is resulting in some low-quality posts hitting the front page of the subreddit, since it's so small and not very active.

I used to donate monthly instead of at the end of the year. I eventually decided there were advantages to donating at the end of the year*, though there may be ways to get both benefits, like donating a small portion monthly to get the good feelings more often.

* Orgs have a more complete picture of their funding needs; there are donation-matching opportunities; maybe you'd benefit from something like donating stock, which may have some overhead you don't want to repeat; you have the most information available; evaluators have put out their new recommendations; ...

1
Max Pietsch
Cool, thanks for your thoughts, KevinO. Those are good points.

The Effective Institutions Project might count as this. There may be more relevant projects, depending on what counts - like the Simon Institute for Longterm Governance or the Center for Election Science.

The kinds of things filed under "Broad Longtermism", perhaps.

Maybe work on impact markets and prediction markets.
(For some reason I didn't fully read acylhalide's answer and I see that I listed some of the same things.)

Bonus, from the EA Newsletter: If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

I don't want to imply that this must be a barrier to action, but how much time have you spent digging into questions relevant to cause prioritization? Your priorities might change as you investigate more.

Here are a couple of flowcharts - if you haven't engaged with a particular question before, like really grappled with whether animals have moral status, you might find your priorities change as you think through these considerations.

https://forum.effectivealtruism.org/posts/TCtbuGC3yBisToXxZ/a-guided-cause-prioritisation-flowchart

http://globalprioritiesproje... (read more)

1
Fabien Le Guillarm
Thanks, I have started to dig into the causes, mainly through listening to podcasts, and it really shifted my perspective on many causes; it actually led me to that post. But these flowcharts are new to me - I'll dive in, thanks!

Local or online groups may have career workshops or 1-1s available with people who could offer advice.

I'd say start with this opportunities board https://ea-internships.pory.app/ - you can filter for volunteer opportunities. Heck, maybe some part-time work would be relevant as well.


There's also this Facebook group for EA volunteering https://www.facebook.com/groups/1392613437498240

You could also try things like applying to EAGx Virtual and trying to find out about projects by just asking people.

1
Fabien Le Guillarm
Thanks a lot! I will check this out.

I'd guess that a lot of non-longtermist, non-EA-meta charities are more likely to be funding-constrained and less likely to be topped up by FTX. I also suspect FTX isn't taking up all the opportunities for organizations to spend money, even for the ones it supports.

I suspect organizations with a research focus, such as Sentience Institute, ALLFED, and other answers on this post, are often happy to hire more researcher time with marginal donations.

Organizations that do marketing probably have room to spend more there, such as 80,000 Hours and Giving Wh... (read more)

What's the minimum sized audience that you'd be happy to present to?

2
Giving What We Can🔸
Hi Kevin, we'd probably be able to arrange for someone to speak if there was an audience of 10 or more! Obviously the bigger the better :) Or we could try and combine several smaller groups for a virtual event!

Will is promoting longtermism as a key moral priority - merely one of our priorities, not the sole priority. He'll say things like (heavily paraphrased from my memory) "we spend so little on existential risk reduction - I don't know how much we should spend, but maybe once we're spending 1% of GDP we can come back and revisit the question".

It's therefore disappointing to me when people write responses like this, responding to the not-widely-promoted idea that longtermism should be the only priority.

A bit of a sidestep, but there is also the new Longtermism Fund, for more legible longtermist donations that are probably easier to justify.

I think that is discussed in https://forum.effectivealtruism.org/posts/dsCTSCbfHWxmAr2ZT/open-ea-global (perhaps more directly in the comments than in the main post; I don't quite recall).

I think it's because the conferences are networking-focused and the organizers want attendees to be likely to have productive meetings (like, if you physically bump into someone, CEA wants high odds that they can help you or you can help them).

(Please correct me if I am wrong.)

I assume the broad categories for rejection from EAG are that CEA doesn'... (read more)

8
Fermi–Dirac Distribution
I think this part is wrong. Eli Nathan has said the following: So it seems that they do not explicitly compare applicants with each other when making admissions decisions. [1]

1. ^ Which, unrelatedly, is very confusing. My EAG SF 2020 rejection email said  The email also linked to this EA Forum post from December 2019, which says  and  I'm not sure if Eli Nathan's comment is implying that these statements I quoted were false at the time they were made, or if CEA has changed its mind since EAG SF 2020 about whether to limit the number of attendees.

... okay, so I just read a few more of Eli Nathan's comments and I am now really confused. For instance, he's said the following (emphasis mine) This appears to directly contradict the December 2019 EA Forum post I linked to.