All of Jeroen Willems's Comments + Replies

I'd love to do more weekly coworkings with people! If you're interested in coworking with me, you can book a session here: https://app.reclaim.ai/m/jwillems/coworking

We can try it out and then decide if we want to do it weekly or not.

More about me: I run the YouTube channel A Happier World (youtube.com/ahappierworldyt) so I'll most likely be working on that during our sessions.

Some people might not be fans of AR or circling, so other methods of mediation should be considered too.

There are still a lot of young EAs who aren't into AuthRev and circling, so I think as a mediator it's important to take this into account.

2
Severin
1mo
I don't understand how this is relevant to what I'm writing, as I don't intend to do mediation only for people who know AR or circling. But the number of upvotes indicates that others do understand, so I'd like to understand it, too. Jeroen, would you mind elaborating?

I think having paid (part time or full time) fund managers with less expertise makes sense. Having such a high turnover of fund managers isn't great for grantees either. I'm not really sure what the cons of paid fund managers are, but I can imagine that there's a good argument against it that would change my mind. Having less expertise could be a great thing, as your mind isn't set on a particular view and you can still gather insights from people who do have expertise. And while they perhaps won't be experts in AI safety or biosecurity, they could be(come... (read more)

8
Linch
1mo
LTFF (and I think EAIF as well) already offers pay to fund managers. Some fund managers take them up on it; I personally didn't until recently (when I started investing more time into LTFF than RP work, mostly on the communications front). 

Great post! I've been applying the same metaphor to my life. But I like to think of it more as a phone than a computer, since it has a battery that often needs recharging (my laptop is basically always plugged in, so it works less well as a metaphor for me). Also, just as not every phone has the same specs and battery, neither does every person. So just because one person is able to do a crazy amount of things, don't feel bad that you can't.

3
Deena Englander
1mo
I like that phone metaphor better.... I think I'll switch to that! Thanks for the idea.

I would like to add that it might be important to communicate this in an email to all currently funded projects by EAIF/LTFF ;)

I like seeing this. It's a great thing to try out and seems like a good idea overall, mostly to decrease the unipolarity of the funding ecosystem. Having no overlap in people who work for Open Phil and help out with EAIF/LTFF seems really important. Though I don't think there's any harm in Open Phil continuing to match donations after the six months; it definitely doesn't have to be 2:1. 0.5:1 would already be great. I have low confidence in these opinions; they're just quick thoughts.

Having no overlap in people who work for Open Phil and help out with EAIF/LTFF seems really important.

FWIW, this is not the current plan: the post mentions a desire to avoid having the same person chair a fund at EA Funds while at OPP, but says nothing about someone being a fund manager at EA Funds while at OPP (though presumably this would only be a minority of fund managers).

7
Jeroen Willems
2mo
I would like to add that it might be important to communicate this in an email to all currently funded projects by EAIF/LTFF ;)

I added the transcription of my newest video on sentientism and moral circle expansion to the EA Forum post :) https://forum.effectivealtruism.org/posts/2kNeKoCcHAHQRjRRH/new-a-happier-world-video-on-sentientism-and-moral-circle

Also, for passers-by, having large QR codes that lead to Elwood's website might be handy.

7
Benny Smith
2mo
Thanks, I like the QR code idea! I'm not sure about exact numbers, but I'd estimate ~30% seemed genuinely interested in trying dog meat.

I'm curious: How many people actually tried or wanted to try out the dog meat?

5
Jeroen Willems
2mo
Also, for passers-by, having large QR codes that lead to Elwood's website might be handy.

We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment.

This may sound like a lot, but I think it's likely that four years from now, 20% of currently secured compute will be just a tiny fraction of what they'll have secured by then.
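
To illustrate with a made-up growth factor (the 10x below is purely my assumption, not a number from the announcement): if currently secured compute is $C$ and total secured compute grows to $10C$ over the four years, the pledged share of the eventual total is

$$\frac{0.2\,C}{10\,C} = 2\%.$$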

What's the likelihood that even in 'incredible' places there would be electricity? For some reason I always assumed there would basically be no electricity during a major global catastrophe, which is possibly incorrect. But does it make sense to have paper copies too? What's the trade-off here?

4
Aaron Bergman
2mo
Even given no electricity, copies stored physically on e.g. a flash drive or hard drive would persist until electricity could be supplied, I'm almost certain.

Reason why I call it a "stream of consciousness": Streams change over time. Conscious beings do too. They can also split, multiply or grow bigger.

One thing I worry about though: Does your consciousness end when sleeping? Does it end when under anesthesia? These thoughts frighten me.

An unpolished attempt at moral philosophy

Summary: I propose a view combining classic utilitarianism with a rule that says not to end streams of consciousness. 

Under classic utilitarianism, the only things that matter are hedonic experiences.
People with a person-affecting view object to this, but that view comes with issues of its own.

To resolve the tension between these two philosophies, I propose a view that adds a rule to classical utilitarianism disallowing directly ending streams of consciousness (SOC).

This is a way to bridge the gap betwe... (read more)

Reason why I call it a "stream of consciousness": Streams change over time. Conscious beings do too. They can also split, multiply or grow bigger.

One thing I worry about though: Does your consciousness end when sleeping? Does it end when under anesthesia? These thoughts frighten me.

I'd personally much rather have agree/disagree on posts than these reactions, if we had to pick. I'm not sure these reactions add any useful information. But I'm happy to see you're trying to make it simpler than on LessWrong. I'm curious to hear what the other reasons are for not having agree/disagree on posts. I frequently upvote posts I disagree with and would like to express that disagreement without writing a comment. Maybe agree/disagree could be optionally enabled by the post's authors?

Very much agree. I use upvotes/downvotes to indicate what I would like to see more/less of on the forum. Making that clearer in the pop-up would be great!

This is exactly why I mostly give to animal charities. I do think there's higher uncertainty of impact with animal charities compared to global health charities so I still give a bit to AMF. So roughly 80% animal charities, 20% global health.

3
Christian Pearson
3mo
Thank you Jeroen! Your work inspires us!

Thank you so much for sharing this! I'd personally love to read the appendix. I struggle hard with understanding the feelings and sensations in my body, but I feel quite confident that my main cause of fatigue is stress or anxiety. Mostly because relaxation techniques (breathing exercises, chamomile tea,...) seem to increase my energy levels.

Also, with big tasks, breaking them down really helps :)

3
Luise
4mo
The causes of people's energy problems are so many and varied! It would be great to have many different experiences written up, including stress- and anxiety-induced problems. Thanks for the feedback re: appendix, will see if others say the same :)

I would suggest keeping the recommended posts optional. I like them a lot, but I worry they might keep me on the forum too long. They can definitely be on by default.

3
Will Howard
4mo
Thanks for the suggestion! We'll add a user setting for this 👍

This isn't about the main question you raise in this post, but I'm curious why you think biorisk reduction would increase expected future suffering.

9
Linch
4mo
Not speaking for Brian, but biorisk reduction increases the probability humanity reaches the stars, which is object-level bad from a negative utilitarian perspective unless you think we're counterfactually likely to encounter worse-than-humans aliens. 

Yeah I think this is a really important part of the discussion. We won't get a world filled with vegan cats unless owners don't have to worry about constantly checking pH levels.

6
Karthik Sekar
5mo
I don't dispute that. We want to make it as convenient for folks as possible to feed their cats vegan. I'll reach out to my vegan pet food contacts and see if they know.

My estimates aren't low; I think there's very roughly a 40% chance we'll die because of AI this century. But here are some reasons why it isn't higher:

  1. Creating a species vastly more intelligent than yourself seems highly unusual; nothing like it has happened before, so there need to be very good arguments for why it's possible.
  2. Having one species completely kill off all other species is also very unusual, so there need to be very good arguments for why that would happen.
  3. Perhaps AGI won't be utility-maximizing; LLMs don't seem to be very maximizing. If it
... (read more)
1
Bary Levy
5mo
In response to point 2: if you see human civilization continuing to develop indefinitely without regard for other species, wouldn't all other species go extinct, except maybe a select few?

Same, been active since 2016 and these seem odd to me. I would say anyone who's really interested in the question of how to help others effectively using reason and evidence is an EA.

Yeah I've asked the same question (why invest in AI companies when we think they're harmful?) twice before but didn't get any good answers.

I think you're right (I don't mind the serif titles within the blog posts, nor do I mind the sans serif use on Substack and Medium). I am likely just too attached to the previous look; the most important opinion is that of new users :) Thank you for the work you've done!

Yeah, I like most of the UI changes, but I'm not a big fan of the sans serif font. It's indeed weird that the use isn't consistent either. (ETA: I don't agree with this sentence anymore.) If people are divided on this, perhaps add a setting to bring the old font back so people can choose?

8
agnestenlund
6mo
Thanks for letting us know! Choice of typeface is no doubt a subjective thing, and some will prefer the old font. In terms of inconsistency: one of the most popular principles for typeface combinations is the one I've gone with here, pairing a sans serif header with a serif body. This combination can be found online in places like Medium and Substack, and was already the case inside Forum posts before this change. Typefaces are often even created in serif/sans serif pairs that are meant to be combined this way. This is obviously not a hard rule and you may still prefer other combinations (it's not uncommon to use all sans serif on the web, or all serif in a magazine), and I'm definitely open to trying different things to improve legibility and tweak the "personality" of the Forum through typefaces (but it's not something I expect to prioritize changing right now).

Hi Joe, I read your posts twice and I liked many of the things raised but have a bit of a difficult time figuring out your exact positions on these topics. Would it be possible to just write down your views in a few lines? You can leave out the arguments.

I’m not Joe, but I thought I’d offer my attempt. It's a little more than a few lines (~350 words), though hopefully it's of some use.

Moral anti-realists often think about moral philosophy, even though they believe there are no moral facts to discover. If there are no facts to be discovered, we might ask “why bother? What’s the point of doing ethics?”

Joe provides three possible reasons:

  1. Through moral theorizing, we can better understand which sets of principles it’s possible to consistently endorse.
    1. Sometimes, ethical theorizing can help you discover a tensio
... (read more)

I would change point 2 under "against a boycott" to cover not just donations, but having an impact in general, just as an airplane flight could be offset by a talk on veganism.

MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions.

How long do investigations like these typically take?

Feelings:

My patience is running out... (read more)

I'm struggling to see how releasing information already provided to the investigation would obstruct it. A self-initiated investigation is not a criminal, or even a civil, legal process; I am much less inclined to accept it as an adequate justification for a significant delay, especially where potentially implicated people have not been put on full leaves of absence.

Is it a pure coincidence that three prominent LLMs were announced on the same day?

1
ojorgensen
6mo
Naively, maybe they each thought Pi day (March 14th) would get them more attention? I'd guess it's most likely a coincidence given how many big releases there have been recently, but would be amusing if it was Pi day related.

I personally like Will's writing and I think he's a good speaker. But I do find it weird that millions were spent on promoting WWOTF.[1] I find that weird on its own (how can you be so confident it's impactful?), but even more so when comparing WWOTF to The Precipice, which in my opinion (and, from my impression, in many others' opinion as well) is a much better and more impactful book. I don't know if Ben shares these thoughts or if he has any others.

Edit to add: I vaguely remember seeing a source other than Torres. But as long as I can't find it you can d... (read more)

Just to be clear, I think marketing spending for a book is pretty reasonable. I think WWOTF was not a very good book: it was really quite confused about AI risk and described a methodology that I think basically no one adheres to, and as such it gave a lot of people a mistaken impression of how the longtermist part of the EA community actually thinks. But if I were in Will's shoes and thought it was a really important book and contribution, spending a substantial amount of money on marketing would seem pretty reasonable to me.

The only source for this claim I've ever found was Emile P. Torres's article What “longtermism” gets wrong about climate change

It's not clear where they got the information about an "enormous promotional budget of roughly $10 million" from. Not saying that it is untrue, but it's also unclear why Torres would have this information.

The implication is also that the promotional spending came out of EA pockets. But part of it might have been promotional spending by the book publisher.

ETA: I found another article by Torres that discusses the claim in a bit mor... (read more)

3
NickLaing
6mo
Thanks Jeroen, that's a fair point; I think it was weird too. Even if the wrong book was plugged, it doesn't feel like a net-harm activity, and surely it doesn't negate his good writing and speaking? I'm sure we'll hear more!

I feel quite worried that the alignment plan of Anthropic currently basically boils down to "we are the good guys, and by doing a lot of capabilities research we will have a seat at the table when AI gets really dangerous, and then we will just be better/more-careful/more-reasonable than the existing people, and that will somehow make the difference between AI going well and going badly". That plan isn't inherently doomed, but man does it rely on trusting Anthropic's leadership, and I genuinely only have marginally better ability to distinguish the moral c

... (read more)

To me this seems like either a scam or an example of the unilateralist's curse. I would urge people not to invest in this. For something like this to have any potential, it has to be started by a team of people 1) with lots of relevant experience / a good public track record and 2) who have been actively involved in the EA community for at least a while (a year or more). Even then I would be skeptical, as this seems like something way too complex for a broad audience, and as a strong prior I would not touch anything blockchain/crypto/NFT related with a 10-foot pole.

One reason I think the subforums didn't work well is that there isn't a big difference between having that feature and just customizing your front page to see more of the topics you like.

This test is a great idea and I hope something like this gets implemented. I'm not a big fan of the tab idea, since community posts would then still be very prominent/accessible. But I do think it's better than what we have today. And in the case of the section, it would still be great if we could remove it. Maybe neither a tab nor a section is necessary; just leave Community hidden under 'customize feed'. But that might make community posts too hidden.

5
Lizka
7mo
Thanks for this feedback; we hadn't considered adding an option to remove the section (if we go with that version), and are now considering it. (Yeah, we also considered hiding Community by default, but I think that would hide "Community" posts too much, and some people just want to separate the experience of reading object-level posts from the experience of engaging with "Community" posts, and avoid having them compete with each other for attention.)

Added a transcript to this post! Will do so for my other videos as well.

Loved this post, thanks for writing it! I like the reframing to inside/outside games. I guess my main worry is whether outside games are effective. I can imagine them being effective once veganism becomes more popular/mainstream, but at the moment I'm worried they're more aversive than helpful. I remember Tobias Leenaert, in his book "How to Create a Vegan World", talking about the need to adopt different strategies at different stages of a movement.

5
Aidan Kankyoku
8mo
I have often heard this worry that confrontational/attention-grabbing tactics might be counterproductive at an early stage of a movement. Interestingly, in the wake of Just Stop Oil's soup-throwing, @James Ozden shared with me a Twitter thread from a leading academic of social movement strategy arguing basically the opposite: that controversy is most productive in a movement's early stage, when it needs to raise awareness, compared to a later stage when it needs to win over skeptical late adopters. I don't think this is necessarily a question of inside vs. outside, but rather that outside-game strategies look different at different points in the movement. And indeed the most controversy-oriented tactics might fit best at the beginning, though I'm not necessarily arguing that.

Wasn't it just called GiveWell Labs before 2017?

Love seeing this, thanks for sharing!

Cool spreadsheet! Yeah a tool similar to the square one but in a horizontal line instead seems more useful.

I like these ideas, I would especially love to see a global AGI safety conference!

2
Severin
8mo
For anyone interested in doing this: Ollie Base and Linda Linsefors have already expressed interest in supporting it on LessWrong.
1
Wil Perkins
8mo
Thanks for the link! Edit: it looks like the total is around $20 million for land use reform. Still a good amount, but I'd think it would be a higher priority.

If I understand correctly, the grant would've been for Nya Dagbladet's foundation, not the publication itself.

Still, unlike others I'm not completely reassured yet. I would also like to know why the grant was considered in the first place and I don't think the FAQ clearly answers that.

I really don't think the libertarian "if you don't like it, go somewhere else" argument works here, as the EA Forum is pretty much the place where EA discussions are held. Sure, they happen on Twitter and Reddit too, but you have to admit it's not the same. Most discussions start here and are then picked up there.

I agree with your other arguments; I don't want the culture of the site to drift too quickly because of a large influx of new folks. But why wouldn't a cut-off be sufficient for that? I don't see why the power has to keep on increasing after, say, a 200 karm... (read more)
