All of slg's Comments + Replies

Without saying much about the merits of various commenters' arguments, I wanted to check if this is a rhetorical question:

Is anyone on this forum in a better position than the Secretary-General of the UN to analyze, for example, the impact of Israel's actions on future, unrelated conflicts?

If so, this is an appeal to authority that isn't very helpful in advancing this discussion. If it's an actual question, never mind. 

What’s the lore behind that update? This was before I followed EA community stuff.

My understanding, though I'm not sure the board ever publicly confirmed this, was they decided that Larissa was acting on behalf of Leverage Research, and hence contrary to the best interests of CEA, and they wanted to stop the entryism.

Thanks for writing this up; I was skeptical about Scott’s strong take but didn’t take the time to check the links he provided as proof.

That's a good pointer, thanks! I'll drop the reference to Diggans and Leproust for now.

8
jtm
7mo
To be clear, I definitely think there's a spectrum of attitudes towards security, centralisation, and other features of hazard databases, so I think you're pointing to an important area of meaningful substantive disagreement!
slg · 7mo

Thanks for the write-up. Just adding a note on how this distinction has practical implications for the design of the hazardous-sequence databases that gene synthesis screening systems rely on.

With gene synthesis screening, companies want to stop bad actors from getting access to the physical DNA or RNA of potential pandemic pathogens. Now, let's say researchers find the sequence of a novel pathogen that would likely spark a pandemic if released. Most would want this sequence to be added to synthesis screening databases. But some also want ... (read more)

6
jtm
7mo
Hi slg — great point about synthesis screening being a very concrete example where approaches to security can make a big difference.

One quibble I have: your hyperlink seems to suggest that Diggans and Leproust advocate for a fully “public” database of annotated hazard sequences. But I think it’s worth noting that although they do use the phrase “publicly available” a couple of times, they also pretty explicitly discuss the idea of having such a database be accessible to synthesis providers only, which is a much smaller set and seems to carry significantly lower risks for misuse than truly public access.

Relevant quote: “Sustained funding and commitment will be required to build and maintain a database of risk-associated sequences, their known mechanisms of pathogenicity and the biological contexts in which these mechanisms can cause harm. This database (or at a minimum a screening capability making use of this database), to have maximum impact on global DNA synthesis screening, must be available to both domestic and international providers.”

Also worth noting the parenthetical about having providers use a screening mechanism with access to the database without having such direct access themselves, which seems like a nod to some of the features in, e.g., SecureDNA’s approach.
8
Jeff Kaufman
7mo
Yup! (I expected to see your comment link to SecureDNA, which has a cryptographic solution to screening synthesis without either (a) sending hazards to synthesizers or (b) sending reconstructible synthesis orders to others.)

I skimmed it, and it looks good to me. Thanks for the work! A separate post on this would be cool.

I set a reminder! Also, let me know if you do end up updating it.

3
Pablo
10mo
I have now uploaded a new deck with the relevant figures updated. Would you mind checking it out and telling me if it's working correctly? I might create a separate post to announce this new version, once I add a bunch of new cards people suggested, but feedback from early testers would be valuable.

Is there an updated version of this? E.g., GDP numbers have changed.

4
Pablo
10mo
It's on my TODO list. Feel free to leave another comment in a month if I don't update it (and keep leaving comments until I do). EDIT (3 June 2023): Done.

Flagging that I approve this post; I do believe that the relevant biosecurity actors within EA are thinking about this (though I'd love a more public write-up of this topic). Get in touch if you are thinking about this!

I'm excited that more people are looking into this area!

Flagging that I only read the intro and the conclusion, which might mean I missed something. 

High-skilled immigration

From my current understanding, high-skilled immigration reform seems promising not so much because of the effects on the migrants (though they are positive) but mostly due to the effect on the destination country's GDP and technological progress. The latter has sizeable positive spillover effects (that also accrue to poorer countries).

Advocacy for high-skilled immigration is less c... (read more)

6
JoelMcGuire
1y
I agree that advocacy for high-skilled immigration is more likely to succeed, and that the benefits would probably come more from technological and material progress. The problem is we currently aren't prepared to try and estimate the benefits of these society- and world-wide spillover effects. Maybe we will return to this if (big if) we explore policies that may cost-effectively increase GDP growth (which some argue is = tech progress in the long run?), and through that subjective wellbeing [1].

Regarding Malengo, I asked Johannes a few questions about it and I'm referencing that post whenever I cite Malengo numbers. I didn't add it here because most of our work was already done when they wrote a post about it, and I was too lazy, and I didn't think it looked particularly promising in my initial estimates. However, I now notice that in my previous calculations I didn't consider remittances, which seems like an omission. As we discussed in the report, it's unclear how remittances balance the negative effects of separation from the immigrant, but I think that separation pains are less of a concern if it's a young adult leaving -- as that's pretty normal in many cultures.

So here's a BOTEC with remittances considered. As Johannes said, they expect that 64% of students will settle permanently in Germany (or a similar country) after graduating. I interpret this to imply an expected stay of 38.4 years, which, if the life-satisfaction difference between the countries closes slightly to 2 life-satisfaction points, will mean 2 * 38.4 = 76.8 WELLBYs per student sent. It costs $15,408 [2] to fund a student to matriculate in Germany. If we're only concerned with the student, this implies a cost-effectiveness of 76.8 / $15,408 = ~5 WELLBYs per $1000, which is a bit less than the 8 WELLBYs per $1k that I estimate come from GiveDirectly cash transfers. But this excludes remittances.

They expect Malengo participants to remit ~$2k a year. If we assume a 1 to 1 equivalence between $
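The arithmetic in that BOTEC can be sketched in a few lines of Python. The figures are the ones quoted in the comment above and are illustrative only; the remittance term is omitted because the comment is cut off before completing it:

```python
# Back-of-the-envelope WELLBY estimate for Malengo, using the figures
# quoted in the comment above (illustrative only; remittances excluded).

settle_share = 0.64            # share of students expected to settle permanently
expected_stay_years = 38.4     # the comment's interpretation of that 64% figure
ls_gain_points = 2.0           # assumed life-satisfaction gap closed, in points
cost_per_student_usd = 15_408  # cost to fund one student to matriculate in Germany

wellbys_per_student = ls_gain_points * expected_stay_years        # 2 * 38.4 = 76.8
wellbys_per_1k_usd = wellbys_per_student / cost_per_student_usd * 1_000

print(f"{wellbys_per_student:.1f} WELLBYs per student")
print(f"{wellbys_per_1k_usd:.1f} WELLBYs per $1k (vs ~8 for GiveDirectly)")
```

On these inputs the estimate lands at roughly 5 WELLBYs per $1,000, which is the comparison the comment draws against GiveDirectly's ~8 WELLBYs per $1,000.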

Comment by Paul Christiano on Lesswrong:

 

""RLHF and Fine-Tuning have not worked well so far. Models are often unhelpful, untruthful, inconsistent, in many ways that had been theorized in the past. We also witness goal misspecification, misalignment, etc. Worse than this, as models become more powerful, we expect more egregious instances of misalignment, as more optimization will push for more and more extreme edge cases and pseudo-adversarial examples.""

These three links are:

... (read more)
slg · 1y

This post reads like it wants to convince its readers that AGI is near/will spell doom, picking and spelling out arguments in a biased way. 

Even though many people on the Forum and LW (including myself) believe that AI Safety is very important and isn't given enough attention by important actors, I don't want to lower our standards for good arguments in favor of more AI Safety.

Some parts of the post that I find lacking:

 "We don’t have any obstacle left in mind that we don’t expect to get overcome in more than 6 months after efforts are invested to

... (read more)

Thanks for writing this up. I just wanted to note that the OWID graph that appears while hovering over a hyperlink is neat! @JP Addison, or whoever created that, cool work.

slg · 1y

Flagging that I'm only about 1/3 in.

Regarding this paragraph:

" An epistemically healthy community seems to be created by acquiring maximally-rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]"

When saying that the science doesn't bea... (read more)

Appreciated this post! Have you considered crossposting this to Lesswrong?  Seems like an important audience for this. 

3
NunoSempere
1y
Hey, I considered it and decided not to, but you are welcome to cross post it (or the original blog post <https://nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/>)

I just wanted to note that I appreciated this post and the subsequent discussion, as it quickly allowed me to get a better model of the value of antivirals. Publicly visible discussions around biosecurity interventions are rare, making it hard to understand other people's models. 

I appreciate that there are infohazards considerations here, but I feel it's too hard for people to scrutinize the views of others because of this.

Appreciated the 5-minute summary; I think more reports of this length should have two summaries: one TL;DR, the other similar to your 5-min summary.

slg · 1y

Let's phrase it even more explicitly: You trust EVF to always make the right calls, even in 10 years from now.

 

The quote above (emphasis mine) reads like a strawman; I don't think Michael would say that they always make the right call. My personal view is that individuals steering GWWC will mostly make the right decisions and downside risks are small enough not to warrant costly governance interventions.

slg · 1y

This point is boring, but I don't think Twitter gives an accurate picture of what the world thinks about EA. I still think there is a point in sometimes reacting to bad-faith arguments and continuing to i) put out good explanations of EA-ish ideas and ii) writing up thoughts on what went wrong. But communicating too fast, before, e.g., we have an improved understanding of the FTX situation, seems bad.

Also, as a semi-good analogy for the Wytham question, the World Economic Forum draws massive protests every year but is still widely respected among important circles.

Probably fits max 50-100 people, though I have low certainty and this might change in the future. I think it's designed to host smaller events than the above, e.g., cause-area-specific conferences/retreats.

On my end, the FLI link is broken: https://futureoflife.org/category/laws/open-letters-laws/

Agreed that their research is decent, but they are post-graduate institutes and have no undergraduate students.

Thanks, I saw a similar graph on Twitter! Wondering what kind of measurements would most clearly indicate more in-depth engagement with EA—though traffic to the Forum likely comes close to that.

2
EcologyInterventions
2y
Total donations, total donors, number of positive articles written, number of EA-adjacent orgs, number of organizations mentioning their DALYs/QALYs? I'm not sure, but those are some ideas.
slg · 2y

I liked it a lot. Given that he probably wasn't involved beforehand, the author got a detailed picture of EA's current state.

That makes sense; thanks for expanding on your comment.

slg · 2y

I appreciate that many EAs' focus on high IQ and general mental ability can be hard to deal with. For instance, I found this quite aversive when I first got into EA.

But I'm unsure why your comment has 10 upvotes, given that you do not give many arguments for your statements.

Please let me know if anything below is uncharitable or if I misread something!

Focusing on elite universities

[...] why EA's obsession with elite universities is sickening.

The share of highly talented students at elite universities is higher. Thus, given the limited number of indiv... (read more)

1
Fasc
2y
I was under the impression that the Max Planck Society and Helmholtz Association are fairly comparable to elite universities in the US or GB in most respects, apart from not being called universities.
8
Karthik Tadepalli
2y
Since two comments have understood me as claiming that intelligence doesn't matter in general, I think I just communicated my point very badly. I accept the general arguments that intelligence matters for people's achievements and such. My claim is that EA as a movement requires all kinds of skills, not just analytical intelligence.

This is not my sense of the bottlenecks in EA. I have the impression that EA has a lot of analytically intelligent people already and is bottlenecked by communicators, organizers, etc. - people who have strong social and emotional skills and can grow EA as a movement. But if you are right that this is the most important bottleneck, then I would agree that selecting for high-IQ individuals is a pretty good step.

I was very happy to read this, great to hear that your switch to direct work was successful!

Noting my excitement that you picked up on the idea and will actually make this happen!

The structure you lay out sounds good.

Regarding the winning team, will there be financial rewards? I’d give it >70% that someone would fund at least a ~$1000 award for the best team.

3
Cillian_
2y
Thanks Simon! Currently, we don't plan to provide a financial reward to the winning team (though I must admit, we haven't given this much thought). It's a good point though & we'll consider it further in the coming weeks.  If anyone reading this is interested in funding an award for the winning team, please do get in touch.

Do you know which funder is supporting the EA Hotel type thing?

3
Chris Leong
2y
Apparently they have support from a private donor.
slg · 2y

Maybe you’re already considering this but here it goes anyway:

I'd advise against the name 'longtermist hub'. I wouldn't want longtermism to also become an identity, just as EA is one.

It also has reputational risks—which is why new EA-oriented orgs do not have EA in their name.

8
Jonathan_Michel
2y
Very strong upvote. Thanks for commenting this, Simon.
4
Severin
2y
Yes, we are currently working on a better name. Thanks for the input, and feel free to send me a message if you have a great idea.

As far as I understand, sessions will be fully subsidised by TfG. If you can't afford them, you can choose to pay $0—unsure if this is standard among EA coaches.

I also think centralisation of psychological services might be valuable as it makes it easier to match fitting coaches/coachees and assess coaching performance.

Practical advice for how to run EA organisations is really valuable, thanks for writing this up.

Hey, I just wanted to leave a note of thanks for this excellent write-up!

I and some other EAs are planning an event with a similar format—your advice is super helpful to structure our planning and avoid obvious mistakes. 

In general, these kinds of project management retrospectives provide a lot of value (e.g., EAF's hiring retrospective).

This is cool, I had no idea you were also working on this.

This could be easier, yes. I know of one person who models the defensive potential of different metagenomic sequencing approaches, but I think there is space for at least 3-5 additional people doing this. 

I think he was explicitly addressing your question of sexually-transmitted diseases being capable of triggering pandemics, not if they can end civilization. 

Discussing the latter in detail would quickly get into infohazards—but I think we should spend some of our efforts (10%) on defending against non-respiratory viruses. But I haven't thought about this in detail.

I do mean EAs with a longtermist focus. While writing about highly-engaged EAs, I had Benjamin Todd's EAG talk in mind, in which he pointed out that only around 4% of highly-engaged EAs are working in bio.

And thanks for pointing out I should be more precise. To qualify my statement, I'm 75% confident that this should happen.

Despite how promising and scalable we think some biosecurity interventions are, we don’t necessarily think that biosecurity should grow to be a substantially larger fraction of longtermist effort than it is currently.

 

Agreed that it shouldn't grow substantially, but ~doubling the share of highly-engaged EAs working on biosecurity feels reasonable to me. 

6
MichaelA
2y
FWIW, I don't actually know what you mean/believe here and whether it's different to what the post already said, because:

* The post said "fraction of longtermist effort" but you're saying "share of highly-engaged EAs". Maybe you're thinking the increased share should mostly come from highly engaged EAs who aren't currently focused on longtermist efforts? That could then be consistent with the post.
* You said "feels reasonable", which doesn't make it clear whether you think this actually should happen, it probably should happen, it's 10% likely it should happen, it shouldn't happen but it wouldn't be unreasonable for it to happen, etc.
slg · 2y

I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It's a lot less risky to openly talk about those than about technologies like antivirals or vaccines.

I'm happy to see this shift, as concrete lists like this will likely motivate more people to enter the space. 

slg · 2y

@CarlaZoeC or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussions, separating debate on community norms vs discussing arguments within your piece.

I also wanted to express that I'm sorry this experience has been so stressful. It's crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.

I particularly agree with the last point on focussing on purely defensive (not net-defensive) pathogen-agnostic technologies, such as metagenomic sequencing and resilience measures like PPE, air filters and shelters. 

If others share this biodefense model in the longtermist biosecurity community, I think it'd be important to point towards these countermeasures in introductory materials (80k website, reading lists, future podcast episodes).

I do wonder what the downside is here. It's a fleeting, low-fidelity impression of EA that will probably not stick in most minds. However, if 10-20 people donate money after hearing about it through Patrick, it might already be positive in sum.

2
DavidNash
2y
I'd be slightly surprised if it led to a single donation; I'm not even sure how many searches it would lead to.

Do you specifically object to the term megaproject, or rather to the idea of launching larger organizations and projects that could potentially absorb a lot of money?

If it's the latter, the case for megaprojects is that they are bigger bets, with which funders could have an impact using larger sums of money, i.e., ~1-2 orders of magnitude bigger than current large longtermist grants. It is generally understood that EA has a funding overhang, which is even more true if you buy into longtermism, given that there are few obvious investment opportunities... (read more)

Hey Ludwig, happy to collaborate on this. A bunch of other EAs and I analyzed the initial party programs under EA considerations; this should be easily adapted to the final agreement and turned into a forum post.

Caveat: I work in Biosecurity.

I agree with the last point. Based on Ben Todd's presentation at EAG,

  • 18% of engaged EAs work on AI alignment, while
  • 4% work on Biosecurity.

Based on Toby Ord's estimates in the Precipice,  the risk of extinction in the next 100 years from

  • Unaligned artificial intelligence is ∼ 1 in 10, while
  • the risk from engineered pandemics is ∼ 1 in 30.

So, the stock of people in AI is 4.5x that of Biosecurity, while AI is only 3x as important.

There is a lot of nuance missing here, but I'm moderately confident that this imbalance... (read more)
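The ratio comparison in that comment is easy to reproduce; the figures below are the ones quoted there (Ben Todd's EAG talk for the workforce shares, The Precipice for the risk estimates):

```python
# People-vs-importance ratio from the comment above, using its quoted figures.

share_ai, share_bio = 0.18, 0.04    # shares of engaged EAs (Ben Todd's EAG talk)
risk_ai, risk_bio = 1 / 10, 1 / 30  # 100-year extinction risks (The Precipice)

people_ratio = share_ai / share_bio    # how much larger the AI workforce is
importance_ratio = risk_ai / risk_bio  # how much larger the AI risk estimate is

print(f"AI has {people_ratio:.1f}x the people but {importance_ratio:.1f}x the risk")
```

This reproduces the comment's "4.5x the people, 3x the importance" comparison, with all the caveats the comment itself flags about the missing nuance.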

Is there a historical precedent for social movements buying media? If so, it'd be interesting to know how that influenced the outlet's public perception/readership.

As of now, it seems like movements "merely" influence media, such as the NYTimes turning more leftward in the last few years or Vox employing more EA-oriented journalists.

Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile. I'd consider him a generator.
