All of Jeroen Willems🔸's Comments + Replies

For me, it doesn't need to be harder-working or smarter people. Anyone you can cowork with who is supportive will do. But my challenge is actually creating such an environment! Online doesn't work that well for me; it needs to be in person. It's so much more impactful than any other productivity hack.

Jeroen Willems🔸
3
0
0
30% disagree

It's OK to eat honey

I try to avoid it, but it's hard for me to believe it's as bad as or worse than most animal products, especially in the quantities in which it's usually consumed. Who eats a kilogram of honey per year? I do think the treatment of bees is very unclear. But I've also heard that some non-animal products, like avocados, involve a lot of insects, so I'm curious how honey compares.

I checked parts of the study, and the 0.12% figure is for P(AI-caused existential catastrophe by 2100) according to the "AI skeptics". This is what is written about the definition of existential catastrophe just before it: 

Participants made an initial forecast on the core question they disagreed about (we’ll call this U, for “ultimate question”): by 2100, will AI cause an existential catastrophe? We defined “existential catastrophe” as an event in which at least one of the following occurs:

  1. Humanity goes extinct
  2. Humanity experiences “unrecoverable collapse”
... (read more)
2
Lukas Finnveden
Bostrom defines existential risk as "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." There's tons of events that could permanently and drastically curtail potential without reducing population or GDP that much. For example, AI could very plausibly seize total power, and still choose to keep >1 million humans alive. Keeping humans alive seems very cheap on a cosmic scale, so it could be justified by caring about humans a tiny bit, or maybe justified by thinking that aliens might care about humans and the AI wanting to preserve the option of trading with aliens, or something else. It seems very plausible that this could still have curtailed our potential, in the relevant sense. (E.g. if our potential required us to have control over a non-trivial fraction of resources.) I think this is more likely than extinction, conditional on (what I would call) doom from misaligned AI. You can also compare with Paul Christiano's more detailed views.

Interesting, I thought p(doom) was about literal extinction? If it also refers to unrecoverable collapse, then I'm really surprised that takes up 15-30% of your potential scenarios! I always saw that part of the existential risk definition as negligible.

6
Peter Wildeford
p(doom) is about doom. For AI, I think this can mean a few things:
  * Literal human extinction
  * Humans lose power over their future but are still alive (and potentially even have nice lives), either via stable totalitarianism or gradual disempowerment or other means
The second bucket is pretty big.

You're right that this is an important distinction to make.

You make a fair point, but what other tool do we have than our voice? I've read Matthew's last post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. What often puzzles me with some AI folks, though, is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?

Perhaps instead of trying to change someone's moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism... (read more)

Good point, I guess my lasting impression wasn't entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn't feel discouraged from actively participating in EA.

On top of mentioning a specific opportunity, I think this post makes a great case in general for considering work like this (great wage & benefits, little experience necessary, somewhat mundane, shift work). I do feel a bit uncomfortable about the part where you mention using personal sway to influence the hiring process, though, as this could undermine fair hiring practices. But I could be overreacting.

Yeah, it’s definitely something I thought about how to explain. I wasn’t sure how to do so succinctly, so I kinda just cut the section.

I’m not willing to recommend people who are unqualified, but I am trying to help people study and prepare for the job, which makes them more qualified candidates generally!

I can also pass along a resume and help people prepare for the interview. I’m pretty respected (I hope!) so my testimony as to your capability has some good weight. 
 

I think those things are normal; I’m distinctly aware of not violati... (read more)

Thanks for sharing this. While I personally believe the shift in focus towards AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with many of the other concerns you shared and agree with several of them (especially LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand if, in your situation, you didn't want to interact with the community anymore, I just want to share that I believe y... (read more)

My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then there are like 90-something to 30-something agree votes and 200 karma on Yarrow's comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb

I don't think people dropped the ball he... (read more)

I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.

Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company.

I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company... (read more)

1
Yarrow Bouchard 🔸
Two of the Mechanize co-founders were on Dwarkesh Patel’s podcast recently to discuss AGI timelines, among other things: https://youtu.be/WLBsUarvWTw (Note: Dwarkesh Patel is listed on Mechanize’s website as an investor. I don’t know if this is disclosed in the podcast.) I’ve only watched the first 45 minutes, but it seems like these two co-founders think AGI is decades away (e.g. one of them says 30-40 years). Dwarkesh seems to believe AGI will come much sooner and argues with them about this.

The situation doesn't seem very similar to Anthropic. Regardless of whether you think Anthropic is good or bad (I think Anthropic is very good, but I work at Anthropic, so take that as you will), Anthropic was founded with the explicitly altruistic intention of making AI go well. Mechanize, by contrast, seems to mostly not be making any claims about altruistic motivations at all.

joko
26
10
1

What concerns are there that you think the Mechanize founders haven't considered? I haven't engaged with their work that much, but it seems like they have been part of the AI safety debate for years now, with plenty of discussion on this Forum and elsewhere (e.g. I can't think of many AIS people who have been as active on this Forum as @Matthew_Barnett has been for the last few years). I feel like they have communicated their models and disagreements a (more than) fair amount already, so I don't know what you would expect to change in further discussions?

No guest bedrooms. We encouraged tents and sleeping bags. Some people just went home for the night, while others came only for one day. This meant that for both editions only 5-8 people ended up staying overnight, with most of them sleeping indoors in the living room.

What are some reasons to remain optimistic about the world from an EA perspective? Or how can we keep up with the most important news (ex. USAID / PEPFAR) without drowning in it? 

The news is just incredibly depressing. The optimism I once had before the pandemic is just gone. Yeah, global health and development may still continue to improve. And that's not insignificant. But moral circle expansion? Animal welfare? AI risks? 

3
Joseph
I'm going to draw an analogy to finance/investments. If I check the level of the stock market every day or multiple times a day, I become acutely aware of increases and decreases. I might feel a rush of adrenaline when the stock market goes up by 2%, and an overwhelming feeling of despair if it drops by 2%. But if I stop checking it frequently, I can "zoom out" and see that the broader trend is upward. It is true that there is a lot of variation on a short timeline, but over decades the trend is quite clearly upward.

Like all analogies, this falls somewhat short in a variety of ways, but the idea I want to drive home is that "the news is just incredibly depressing" because we look at the short-term news. We allow ourselves to be emotionally buffeted and battered by what is happening this day or this week rather than paying attention to larger trends. If it really is vital for a job to stay up to date on the latest news, then at least try to keep some perspective: what is and isn't within your control, and this too shall pass.

One useful framing can be asking yourself if there is anything you can do to affect this, asking why you care about this particular issue, and asking if there is any purpose/outcome in focusing on it. I think that people dying in a civil war in Yemen is horrible because I detest suffering in general, but I have no influence to affect that at all, and my worrying about it doesn't serve any purpose. I think that the world will be a worse place if USAID funding is reduced, but there isn't any benefit to me stressing out about that. There are a million things that I would like to see different in the world, but most of them are very much outside my scope of influence.

Same, I love it as well. Though my Facebook connection is broken and will likely never be fully repaired. I can remain logged in until I send a picture, then the connection breaks. And I keep forgetting. I've talked with the support team about it and it seems quite hopeless.

Yeah, and even when finding a classic EA "high impact job" doesn't work, finding a good E2G job may not work either. And you may not find the time to volunteer. It sucks, but you just try with what you have and what you can. This will be different for everybody. It may require a lot of self-forgiveness. I sure struggle(d) with it. But this is different from completely giving up on having an impact! 

My guess is, but I could be wrong, that EA Forum content is often just difficult to share with a broader audience because they're usually not the target audience? And even when the ideas are worth sharing with a broader audience, the posts may still be filled with EA jargon / ways of speaking that are difficult to follow for a lot of people. I am saying this assuming most people's followers aren't EAs but friends, colleagues and family. Even within EA, people are focused on different cause areas, and many may not prioritize reading stuff outside their cause area. I am not saying all o... (read more)

5
Sarah Cheng 🔸
Nice — I like the way you described this, and I broadly agree. It's possible that it's more effective for the Forum Team to be doing a lot of the work to share on external sites (like via our Twitter account) so that others can contribute by lower-effort actions like retweeting, liking, and commenting (as opposed to us trying to get individual readers to share content more).

Thanks for the write-up! This is a very useful post. 

I have been wondering, though: since the shift in strategy, is EA outreach still a priority? For example, YouTube channels, podcasts, and other online media targeting a broader audience. And if not, why not?

Even though I have a personal vested interest in this topic (running a YouTube channel previously funded by EAIF), I do believe that projects like these could be highly effective and worth funding, regardless of my own involvement.

6
Jamie_Harris
Apologies, missed this comment! EA outreach is still in-scope, it just wasn't an area we highlighted in this post. That's partly because we tend to get quite a few applications of this sort anyway. (I'm not sure but my vague impression is that the average quality of such applications is lower, too.)

My main point of criticism, which I didn't see anyone else mention in the top-level comments, is that the pledge just seems too vague and broad. A 10 percent pledge is very concrete and measurable. Of course there is a difference in opinion in terms of which charities count as impactful, just like with careers. But with careers the difference in opinion is too broad for this pledge to be useful. Some could just interpret this pledge as "I'll become a doctor or work for an NGO" without giving much extra thought. While with the 10% pledge there is a clear sign... (read more)

I'm finishing up a video covering NAO and wastewater monitoring, though not based on this specific talk.

4
Jeff Kaufman 🔸
Wow! If you'd like me to review it for accuracy before you publish it I'd be happy to!

Thanks for pointing this out! I wasn't really sure where my question fell on the axis from "general EA animal welfare knowledge" (ex. prioritizing chickens > cows) to "specific detail about how ACE evaluates charities". By posting a quick take on the forum, I was hoping it was closer to the former, that I was just missing something obvious and that ACE wouldn't even have to be bothered. I shouldn't have overlooked the possibility that it might be more complicated!

Thank you so much for this elaborate and insightful response, Max! I understand the argument much better now.

I was going through Animal Charity Evaluators' reasoning behind which countries to prioritize (https://animalcharityevaluators.org/charity-review/the-humane-league/#prioritizing-countries) and I notice they judge countries with a higher GNI per capita as more tractable. This goes against my intuition, because my guess is your money goes further in countries that are poorer. And also because I've heard animal rights work in Latin America and Asia is more cost-effective nowadays. Does anyone have any hypotheses/arguments? This quick take isn't meant as criti... (read more)

9
Julia_Wise🔸
Glad this question-and-answer happened! A meta note that sometimes people post questions aimed at an organization but don't flag it to the actual org. I think it's a good practice to flag questions to the org, otherwise you risk:
  - someone not at the org answers the question, often with information that's incorrect or out of date
  - the org never sees the question and looks out-of-touch for not answering
  - comms staff at the org feel they need to comb public spaces for questions and comments about them, lest they look like they're ignoring people
(This doesn't mean you can't ask questions in public places, but email the org sending them the link!)

Hey Jeroen! I'm a researcher at ACE and have been doing some work on our country prioritization model. This is a helpful question and one that we've been thinking about ourselves.

The general argument is that strong economic performance tends to correlate with liberalism, democracy, and progressive values, which themselves seem to correlate with progressive attitudes towards, and legislation for, animals. This is why it’s included in Mercy For Animals’ Farmed Animal Opportunity Index (FAOI), which we previously used for our evaluati... (read more)

Giving What We Can has grown tremendously over the past couple of years under your leadership. It’s been inspiring to witness how the organization has flourished! The redesign, the video content, the doubling in pledges, the fundraising feature, the donation platform, all the new research,... these are real milestones to be proud of. Thank you so much for the important work you've done! I am confident that Sjir and the rest of the team will continue building on the strong foundation you’ve created. I’m excited to see what you’ll do next, but make sure to take the well-earned rest you deserve!

I can't figure out how to change it on the EA Forum. Perhaps because I've already changed my name once before and there's a limit?

But I understand that there are many people who take the pledge but don't feel comfortable sharing it publicly. I think different circles and different cultures view "bragging" about donating differently. I know I don't feel comfortable doing it on LinkedIn or Instagram, mostly out of fear of judgement I guess, so my mind could easily change.

2
Joel Tan🔸
They're working on creating an option to make it easy for posters to add the diamond, but in the meantime you can DM the forum team (I did!) 
4
NickLaing
I'm the same, have no idea how to put it on the forum.

Unfortunately, it looks like it's not possible even in the bio! It says:

Account update failed: Description can't include "🔹".

It's not the biggest deal; the orange one is cooler anyway, so it's an extra reason to take the 10% pledge! 😉 In the meantime I'll use the blue one on Swapcard.

Unfortunately the 🔹 emoji doesn't work on Twitter; I assume that's so it won't be confused with the verification badge.

4
GraceAdams🔸
This is a bummer - we didn't realise! But it can still be added to the bio if you like - hopefully that's a reasonable alternative!

Agreed. There is a major difference between thinking someone should be deplatformed just because they have opposing views (e.g., pause AI vs. accelerationist, libertarian vs. communist) and thinking someone should be deplatformed because they promote discriminatory views.

There's nothing inherently wrong with being controversial or outside of the Overton window. Many important ideas were once controversial, and many still are. But it is wrong to actively promote views that are racist, transphobic or sexist and to platform those who do. Not because these vie... (read more)

I've had Admonymous for ages, my guess would be 2018-2020, but I've only had two submissions so far. One came in recently, because I mentioned it in my recent fundraising post. This was actually useful feedback. I felt slightly frustrated because I wanted to refute some points, but I believe it's best to just let it be. This post does inspire me to promote/highlight it more.

0[anonymous]
Thanks for the feedback Jeroen!

In case you're interested in supporting my EA-aligned YouTube channel A Happier World:

I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn't reached, you won't get your money back directly; instead, it will go back into your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the minimum funding goal.

Manifund fundraising page
EA Forum post announcement

8
Jason
At this point, I'd be willing to buy out credit from anyone who obtains credit on Manifund, applies said credit to this project, and the project doesn't fund. Hopefully Manifund will find a more elegant solution for this kind of issue (there was a discussion on Discord last week) but this should work as a stopgap. (Offer limited to $240, which is the current funding gap between current offers and the $2500 minimum.)

As additional sources of funding, I agree they're good ideas!

Hi Jamie!

You're right, output is definitely the biggest bottleneck. Right now, I'm focusing on making shorter videos that cover narrower, more specific topics. I'm also trying to incorporate more real-world footage to keep things visually interesting without requiring so much editing time. Unfortunately, my lead poisoning video and the video I'm currently working on turned out to be a lot more ambitious than I expected.

I'm already working on your first four suggestions. I'm hesitant about the fifth point. I've tried the last point many times, but it never ... (read more)

5
Jamie_Harris
Oh, my suggestion wasn't necessarily that they're alternatives to receiving any donations; they could be supplements. They could be things you experiment with that could help make the channel more sustainable and secure.

Thanks for the write-up! I agree with Chris that the natural functions can vary substantially, and Ulrik's comment shows how (another example could be a policy focus in EA DC and EA Brussels). But there are for sure many universal things like the ones you mentioned.

My main nitpick is with the term. I don't see why "local EA groups" isn't good enough. There are already so many abbreviations within the EA movement that it gets overwhelming. If you read "MEAROs" you have no idea what it refers to without prior context, while "local EA groups" is very ... (read more)

2
Arthur Malone🔸
I'm ambivalent about jargon; strongly pro when it seems sufficiently useful, but opposed to superfluous usage. One benefit I can see for MEARO is that it isn't nominatively restricted to community building like most "local EA groups." I recently attended a talk at EAGxLatAm by Doebem, a Brazil-based and locally focused equivalent of GiveWell, that made a decent case for applying EA principles to "think global, act local." Their work is very distinct from EA Brazil, but it falls solidly into regional and meta EA, and I think there is strong potential for other similar orgs that would work closely with local CB groups but have a different focus.

Don't forget to go to http://www.projectforawesome.com today and vote for videos promoting effective charities like Against Malaria Foundation, The Humane League, GiveDirectly, Good Food Institute, ProVeg, GiveWell and Fish Welfare Initiative!

3
ramekin
How does one vote? (Sorry if this is super obvious and I'm just missing it!)

It's not very active, but asking for feedback on any type of EA-related writing is also welcome in the #role-writers channel in the EA Anywhere Slack: https://www.effectivealtruismanywhere.org/get-involved

Ruining the person's life, their job prospects, their relationships (family, friends, partners),... while having little to no impact on the business of drug cartels. I'm not saying that's definitely what would happen, but I think the odds are too uncomfortably high to risk it.

If I can stop a coke addiction, I can effectively save a life (without donating like 5000 dollars to a charity).

 

  1. It's unclear whether reporting would stop a coke addiction at all
  2. It's unclear whether stopping a coke addiction saves a life, since I assume most coke users don't die from overdose
  3. You could easily do more harm than good
1
dstudioscode
Elaborate on 3)

I believe you're getting downvoted because this question isn't very relevant to the EA Forum, which I think is understandable. Perhaps it would be better suited for Reddit or Twitter, or maybe as a quick take here. But to answer your question: I would not get involved, mostly because you don't know this person. There are so, so many people doing cocaine and other drugs that reporting one of them doesn't really solve anything, and you might ruin someone's life. I would only get involved if I knew the person really well, and then I would start by personally talking to them. Even then I'm not sure I'd report it. They're often a victim of their addiction, so they need help and support rather than punishment.

1
dstudioscode
Yeah, I know it is not really related to EA, but I need to talk with consequentialist-like members. It's interesting, because I thought it would be my moral obligation to report it, less to save some person's life and more to reduce funding for drug cartels. But it seems the majority of the comments are telling me not to get involved, which is just fine with me because I would feel awkward getting involved.

The location has changed! A lovely EA couple will host the event at their large apartment, located near Abbaye de la Cambre (between Flagey & Bois de la Cambre). Contact me (WhatsApp/Signal/text +32499401427) for the exact address.

Like Joseph says, conventional meditation doesn't work for everyone. Don't force yourself to try to do it. It doesn't work well for me either. Maybe less conventional forms of meditation would work for you: walking meditations, meditations targeted towards neurodiverse people (which are more literal), or just your own interpretation/take. I personally prefer focusing on mindfulness more broadly than on meditation. And in my experience, cardio exercise and isolating myself to enjoy music are clearly more effective mindfulness exercises than meditation ever was for me.

This is great stuff, thank you for writing this up and sharing! Your extrapolations align with what I've learned/discovered over the past few years (though I don't have any scientific evidence to prove them either). It's sad to see so few upvotes on this post, but I hope you know that this write-up is greatly appreciated and valuable for us writers in EA!

1
Timon Renzelmann
Wow, nice to hear this, thank you Jeroen :)

I was going to make the same comment as Dominic. This is a great tip! Overpromise with friends, underpromise with stakeholders.

I haven't read the other comments yet but I just want to share my deep appreciation for writing this post! I've always wondered why animal welfare gets so little funding compared to global health in EA. I'm thankful you're highlighting it and starting a discussion, whether or not OP's reasons might be justified.

Thanks for sharing, Grace. I think it's interesting you mention "that I could always resign if needed to". I'm also still on the fence about pledging, but I wonder if I should look at it similarly to going vegan. Like, right now my goal is to be vegan for the rest of my life, so in a way I've pledged to that. But something could always happen later in life, perhaps health reasons, that would result in me 'resigning' from veganism.

4
GraceAdams🔸
I think of my veganism in the same way! 

Today we celebrate Petrov Day: the day Stanislav Petrov potentially saved the world from a nuclear war, 40 years ago now.

I made a quick YouTube Short / TikTok about it: https://www.youtube.com/shorts/Y8bnqxAbMNg https://www.tiktok.com/@ahappierworldyt/video/7283112331121347873

I'd love to do more weekly coworkings with people! If you're interested in coworking with me, you can book a session here: https://app.reclaim.ai/m/jwillems/coworking

We can try it out and then decide if we want to do it weekly or not.

More about me: I run the YouTube channel A Happier World (youtube.com/ahappierworldyt) so I'll most likely be working on that during our sessions.

Some people might not be fans of AR or circling, so other methods of mediation should be considered too.

There are still a lot of young EAs that aren't into AuthRev and circling, so I think as a mediator it's important to take this into account.

2
Severin
I don't understand how this is relevant to what I'm writing, as I don't intend to do mediation only for people who know AR or circling. But the number of upvotes indicates that others do understand, so I'd like to understand it, too. Jeroen, would you mind elaborating?

I think having paid (part-time or full-time) fund managers with less expertise makes sense. Having such a high turnover of fund managers isn't great for grantees either. I'm not really sure what the cons of paid fund managers are, but I can imagine that there's a good argument against it that would change my mind. Having less expertise could be a great thing, as your mind isn't set on a particular view and you can still gather insights from people who do have expertise. And while they perhaps won't be experts in AI safety or biosecurity, they could be(come... (read more)

8
Linch
LTFF (and I think EAIF as well) already offers pay to fund managers. Some fund managers take them up on it; I personally didn't until recently (when I started investing more time into LTFF than RP work, mostly on the communications front). 

Great post! I've been applying the same metaphor to my life. But I like to think of it more as a phone than a computer, since it has a battery that often needs recharging (my laptop is basically always plugged in, so I like it less as a metaphor). Also, just like not every phone has the same specs and battery, not every person does either. So just because one person is able to do a crazy amount of things, don't feel bad that you can't.

3
Deena Englander
I like that phone metaphor better.... I think I'll switch to that! Thanks for the idea.

I would like to add that it might be important to communicate this in an email to all projects currently funded by EAIF/LTFF ;)
