All of lukasb's Comments + Replies

Regarding AI lab coordination, it seems like the governance teams of major labs are much better placed to help with this, since they will have an easier time getting buy-in from their own lab as well as being listened to by other labs. Also, the Frontier Model Forum seems to be aiming at exactly this.

Thanks for writing this! If I don't want to sign up for the Finders Course, are there any resources you would recommend for doing the one-hour lovingkindness sessions?

3
Kat Woods
9mo
Oddly enough, I haven't found any really good resources on this, except for one Google Doc I found ages ago that I can't seem to find again. I think the explanation I give here might actually be the best I've seen. But the explanation is pretty simple, so it's less about understanding (which is relatively easy) and more about practicing (which is harder, but I find still way easier than concentration practice). The only piece of instruction I'd add is that if you're finding it hard to transfer the feelings of lovingkindness to a new object, that means the object is too hard, and you should find an easier one. It's the equivalent of lifting weights and jumping up to a weight that's too heavy. Gradually increasing the weight is key.

If taking your lawyer's advice, in this case, means being silent for 5-7 years, it seems like some people should speak openly and bear the costs.

7
Jason
1y
It is really difficult to assess from the outside, because we don't know the facts that the involved individuals (and their lawyers) do.
1
GoodEAGoneBad
1y
Appreciate you ❤️

I'd be interested to hear what you think is going wrong with Paul's writing style, if you want to share.

3
GoodEAGoneBad
1y
Guys, please have a go at people on another person's post. God knows there are enough of them... This is exactly what I'm talking about and I will literally have a coronary. Lol.

Hm, yeah, I guess my intuition is the opposite. To me, one of the central parts of effective altruism is that it's impartial, meaning we shouldn't put some people's welfare over others'.

I think in this case it's particularly important to be impartial, because EA is a group of people that benefitted a lot from FTX, so it seems wrong for us to try to transfer the harms it is now causing onto other people.  

3
Agrippa
1y
(as an aside it also seems quite unusual to apply this impartiality to the finances of EAs. If EAs were going to be financially impartial it seems like we would not really encourage trying to earn money in competitive financially zero sum ways such as a quant finance career or crypto trading)
1
Agrippa
1y
Aspiring to be impartially altruistic doesn't mean we should shank each other. The so-impartial-we-will-harvest-your-organs-and-steal-your-money version of EA has no future as a grassroots movement, or even room to grow, as far as I can tell. This community norm strategy works if you determine that retaining socioeconomically normal people doesn't actually matter and you just want to incubate billionaires, but I guess we have to hope the next billionaire is not so (allegedly) impartial towards their users' welfare.

Maybe I'm misunderstanding bank runs, but as I understand it, they happen because 

  • the institution that is holding other people's money doesn't have all that money in liquid form
  • so it is unable to give it all back if everybody tries to withdraw at once
  • when this happens, the institution runs out of money, and the people who didn't withdraw their cash in time lose their deposits
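The mechanism in the bullets above can be sketched in a few lines of code. This is a toy illustration only (hypothetical depositors and numbers, not a model of any real institution): a custodian holds 100 units of customer deposits but only 20 units in liquid reserves, and withdrawal requests are honored first-come, first-served until reserves run out.

```python
def run_on_bank(deposits, liquid_reserves):
    """Return (paid, losers) when every depositor tries to withdraw at once."""
    paid, losers = [], []
    remaining = liquid_reserves
    for name, amount in deposits:
        if amount <= remaining:
            remaining -= amount
            paid.append(name)
        else:
            losers.append(name)  # too late: reserves already exhausted
    return paid, losers

# 100 units of deposits backed by only 20 units of liquid reserves
deposits = [("A", 10), ("B", 10), ("C", 40), ("D", 40)]
paid, losers = run_on_bank(deposits, liquid_reserves=20)
print(paid)    # A and B withdraw in time
print(losers)  # C and D lose their deposits
```

The point of the sketch is just that whoever withdraws early is made whole, which is exactly what makes running on the bank individually rational once a run starts.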

I think the reason Richard listed #2 as a preference is that there might still be hope that FTX doesn't run out of money in the first place and no one loses their deposi... (read more)

8
Agrippa
1y
I would like to be involved in the version of EA where we look after each other's basic wellness even if it's bad for FTX or other FTX depositors. I think people will find this version of EA more emotionally safe and inspiring. To me there is just no normative difference between trying to suppress information and actively telling people they should go deposit on FTX when the distress occurred (without communicating any risks involved), knowing that there was a good chance they'd get totally boned if they did so. Under your model this would be no net detriment, but it would also just be sociopathic.

Yes, the version of EA where people suppress this information, rather than actively promote deposits, is safer. But both are quite cruel and not something I could earnestly suggest to a friend that they devote their lives to.

For Level 3: Machine Learning, this document might be useful. It provides a quick summary/recap of a lot of the math required for ML.

This looks exciting! I plan to apply. 

One reaction I have looking at the syllabus is that it's too theoretical for me in the beginning. I feel like it would be better to have an applied component from the start. Maybe the first two weeks could be theory paired with writing a short distillation in the first week, getting feedback, and then refining it. The feedback loop of actually writing is probably by far the best way to improve distillation skills.  This is just an impression I have though, so I could be wrong.

1
michel
2y
thanks for the flag! we’re working on fixing this now

Personally, I don't have a problem with the title. It clearly states the central point of the post. 

Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change.

How much of the impact generated by the career change are you attributing to CEA spending here? I'm just wondering because counterfactuals run into the issue of double-counting (as discussed here). 

Unsure, but probably more than 20% if the person wouldn't be found through other means. I think it's reasonable to say there are 3 parties: CEA, the group organizers, and the person; none is replaceable, so they get 33% Shapley each. At a 2% chance of causing a career change, this works out to a cost of $750k per career change, which is still clearly good at top unis. The bigger issue is whether the career change is actually counterfactual, because often it's just a speedup.
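The arithmetic above can be checked back-of-the-envelope. All numbers here are the hypothetical ones from the thread ($5k of dinner spending, a 2% chance of one career change, credit split equally among 3 necessary parties):

```python
# Back-of-the-envelope check of the thread's figures (hypothetical numbers).
spend = 5_000          # dollars CEA spends on group dinners
p_change = 0.02        # chance the spending causes one career change
shapley_share = 1 / 3  # CEA's equal Shapley share among 3 necessary parties

# Expected career changes credited to CEA per $5k spent
expected_changes_credited = p_change * shapley_share

# Cost per career change, from CEA's perspective
cost_per_career = spend / expected_changes_credited
print(round(cost_per_career))  # 750000, i.e. ~$750k per credited career change
```

Splitting the credit three ways triples the effective cost per career change ($250k without the Shapley adjustment, $750k with it), which is why double-counting matters for this kind of estimate.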

I agree that there is an analogy to animal suffering here, but there's a difference in degree I think. To longtermists, the importance of future generations is many orders of magnitude higher than the importance of animal suffering is to animal welfare advocates. Therefore, I would claim, longtermists are more likely to ignore other non-longtermist considerations than animal welfare advocates would be.

Thanks for writing this! It seems like you've gone through a lot in publishing this. I am glad you had the courage and grit to go through with it despite the backlash you faced. 

9
Patrick
2y
I would've found it helpful if the post included a definition of TUA (as well as saying what it stands for). Here's a relevant excerpt from the paper:

Techno-utopian approach (via paper abstract)

Answer by lukasb, Dec 18, 2021

Not sure if this fits, but it seems like 80,000 Hours started as somewhat of a side project. This 2015 article says 80K started with Will MacAskill and Ben Todd “forming a discussion group and giving lectures on the topic, then eventually creating 80,000 Hours to spread their ideas.” (They link to this vintage lecture they gave.)

I’m not sure how much of a “side project” this was to Ben and Will. Maybe others know more about that era.

I agree with Aaron! Given the little time you have, I would make the pitch as simple as possible.

Fair enough. I would guess you can usually have a higher impact through your career since you are doing something you've specialized in. But the first two examples you bring up seem valid.

This seems like a good idea.

I submitted the following comment:

I urge the FDA to schedule its review of Paxlovid and to make the timeline 3 weeks or less, as it did with the COVID vaccine.

1000 people are dying of COVID in the US every day. With an efficacy of 89%, Paxlovid could prevent many of these deaths. The earlier Paxlovid is approved, the more lives will be saved.

Thank you for your consideration.

I wasn't sure what topic to put it under so I chose "Drug Industry - C0022." 

2
DirectedEvolution
2y
Thank you for taking action!

I like this framing a lot. I particularly like the idea of replacing the phrase "doing good" with "helping others" and "maximization" with "prioritization."

I understand the impulse to mention volunteering before donations and careers because people naturally connect it with doing good. But I think it would be misleading for the following reasons:

  • As you said, there is currently very little emphasis on volunteering in EA
  • In most cases, individuals can do much more good by changing their career path or donating

I think we should be as accurate as we can w... (read more)

8
GidiKadosh
2y
Thank you for this feedback, lukasberglund and Mauricio. I think I underestimated the misrepresentation argument, so I highly appreciate this.

About your second argument on the impact of volunteer guidance, and the discussion with Mauricio: I entirely agree with your opinion on the impact of volunteering, but I think the main case for including volunteering in the pitch (and, in general, investing in guidance for effective volunteering) is that, for specific individuals who are interested in volunteering, it can be the entry point that attracts them to learn more about EA - whether we eventually help them with prioritizing volunteer opportunities or with career/donation decisions. For this reason (and because specific volunteering opportunities can be highly impactful, as you both discussed), I still think it's beneficial to include volunteering in EA pitches. I believe the argument about misrepresentation makes a good case for not mentioning volunteering first in the list, but I don't think the order is of high significance.

I'll soon make some updates to the post about that. Thank you both again for your feedback!
9
Mau
2y
Yup, this also lines up with how (American) undergrads empirically seem to get most enthusiastic about career-centered content (maybe because they're starved for good career guidance/direction).

And a nitpick: I initially nodded along as I read this, but then I realized that intuition came partly from comparing effective donations with ineffective volunteering, which might not be comparing apples to apples. Do effective donations actually beat effective volunteering? I suspect many people can have more impact through highly effective volunteering, e.g.:

  • Volunteering in movement-building/fundraising/recruitment
  • High-skill volunteering for orgs focused on having positive long-term impacts, or potentially for animal advocacy orgs (since these seem especially skill-constrained)
  • Volunteering with a mainstream policy org to later land an impactful job there (although this one's iffy as an example since it's kind of about careers)

(Still agree that emphasizing volunteering wouldn't be very representative of what the movement focuses on.)

Is there evidence/theoretical reason to believe that not experimenting in governance leads a movement to become slow over time?

thank machine doggo

[Comment pointing out a minor error]  Also, great post!

3
Davidmanheim
3y
Whoops! My apologies to both individuals - this is now fixed. (I don't know what I was looking at when I wrote this, but I vaguely recall that there was a second link which I was thinking of linking to which I can no longer find where Peter made a similar point. If not, additional apologies!)

I'm impressed with the success you guys had! I'm excited to see your organization develop.

2
aaronhamlin
3y
Thanks! We look forward to continuing our impact. I'm always impressed with our team and what we're able to do with our resources.

Good point. I'll bring this up with other group leaders.

This approach is compelling and you make a good case for it, but I think what Lynch said about how not supporting a movement can feel like opposing it is significant here. On our university campus, supporting a movement like Black Lives Matter seems obvious, so when you refuse to, it makes it look like you have an ideological reason not to.

What is the best leadership structure for (college) EA clubs?


A few people in the EA group organizers Slack (6 to be exact) expressed interest in discussing this.

Here are some ideas for topics to cover:

  • The best overall structure (what positions should there be, etc.)
  • Should there be regular meetings among all general members/ club leaders?
  • What are some mistakes to avoid?
  • What are some things that generally work well?
  • How to select leaders

I envision this as an open discussion for people to share their experiences. At the end, we could compile the result of our discussion into a forum post.

At the beginning of the Christiano part, it says:

There can't be too many things that reduce the expected value of the future by 10%; if there were, there would be no expected value left.

Why is it unlikely that there is little to no expected value left? Wouldn't it be conceivable that there are a lot of risks in the future and that therefore there is little expected value left? What am I missing?

2
Rohin Shah
4y
See this comment thread.
2
Liam_Donovan
4y
I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.