All of MathiasKB's Comments + Replies

I'm awestruck, that is an incredible track record. Thanks for taking the time to write this out.

These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI, your work certainly had an influence on me.

I think I'm sympathetic to Oxford's decision.

By the end, the line between genuine scientific inquiry and activist 'research' got quite blurry at FHI. I don't think papers such as 'Proposal for a New UK National Institute for Biological Security' belong in an academic institution, even if I agree with the conclusion.

Jason, 3d
For the disagree voters (I didn't agreevote either way) -- perhaps a more neutral way to phrase this might be: Oxford and/or its philosophy department apparently decided that continuing to be affiliated with FHI wasn't in its best interests. It seems this may have developed well before the Bostrom situation. Given that, and assuming EA may want to have orgs affiliated with other top universities, what lessons might be learned from this story? To the extent that keeping the university happy might limit the org's activities, when is accepting that compromise worth it?

One thing that stood out to me reading the comments on Reddit was how much of the poor reception could have been avoided with a little clearer communication.

For people such as MacAskill, who are deeply familiar with effective altruism, the question: "Why would SBF pretend to be an Effective Altruist if he was just looking to do fraud?"  is quite the conundrum. Of all the types of altruism, why specifically pick EA as the vehicle to smuggle your reputation? EA was already unlikeable and elitist before the scandal. Why not donate to puppies and Ha... (read more)

I think I am misunderstanding the original question then?

I mean if you ask: "what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students"

then the reach is not the 10 million people watching the show, it's the people you get a chance to speak to.

Wasn't the Future Fund quite explicitly about longtermist projects?

I mean, if you worked for an animal foundation and were in a call about GiveDirectly, I can understand that somebody might say: "Look, we are an animal fund; global poverty is outside our scope."

Obviously, saying "I don't care about poverty", or something close enough that your counterpart remembers it as that, is not ideal, especially not when you're speaking to an ex-minister of the United Kingdom.

But before we get mad at those who ran the Future Fund, please consider there's much cont... (read more)

NickLaing, 16d
There's some truth here, but I think it's part of your job as the head of any EA org to present the best side of all aspects of effective altruism. Even if you disagree with near-term causes, speaking with grace and understanding of those who work to alleviate poverty will help the PR of your longtermist org. I think we should get the context from the Future Fund people, but really they should probably have already commented here to explain if they were misrepresented, and called Rory Stewart to apologise and clear things up.

I'm working on an article about gene drives to eradicate malaria, and am looking for biology experts who can help me understand certain areas I'm finding confusing and fact check claims I feel unsure about.

If you are a masters or grad student in biology and would be interested in helping, I would be incredibly grateful.

 

An example of a question I've been trying to answer today:

How likely is successful crossbreeding between subspecies of Anopheles gambiae (such as Anopheles gambiae s.s. and Anopheles arabiensis), and how likely is successful crossbreed... (read more)

A devastating argument, years of work wasted. Why oh why did I insist that the book's front cover had to be a snowman?

Answer by MathiasKB, Mar 31, 2024

I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.

If Open Phil actually were to start publishing their internal analyses behind each grant, I would bet you at good odds that the following scenario is going to play out on the EA Forum:

  1. Somebody digs deep into a specific analysis carried out. It turns out Open Phil’s analysis has several factual errors that any domain expert cou
... (read more)
DPiepgrass, 6d
What if, instead of releasing very long reports about decisions that were already made, there were a steady stream of small analyses on specific proposals, or even parts of proposals, to enlist others to aid error detection before each decision?
NickLaing, 19d
What you say is true. One thing to keep in mind is that academic data, analyses, and papers are usually all made public these days. Yes, with Open Phil, funding rather than just academic rigor is involved, but I feel like we should aim for at least the same level of transparency as academia...
cata, 19d

I think you are placing far too little faith in the power of the truth. None of the events you list above are bad. It's implied that they are bad because they will cause someone to unfairly judge Open Phil poorly. But why presume that more information will lead to worse judgment? It may lead to better judgment.

As an example, GiveWell publishes detailed cost-effectiveness spreadsheets and analyses, which definitely make me take their judgment way more seriously than I would otherwise. They also provide fertile ground for criticism (a popular recent magazine... (read more)

As a critic of many institutions and organizations in EA, I agree with the above dynamic and would like people to be less nitpicky about this kind of thing (and I tried to live up to that virtue by publishing my own quite rough grant evaluations in my old Long Term Future Fund writeups).

There's a lot of room between publishing more than ~1 paragraph and "publishing their internal analyses." I didn't read Vasco as suggesting publication of the full analyses.

Assertion 4 -- "The costs for Open Phil to reduce the error rate of analyses, would not be worth the benefits" -- seems to be doing a lot of work in your model here. But it seems to be based on assumptions about the nature and magnitude of errors that would be detected. If a number of errors were material (in the sense that correcting them would have changed the grant/no grant decision,... (read more)

Thanks for the thoughtful reply, Mathias!

I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.

I think this applies to organisations with uncertain funding, but not Open Philanthropy, which is essentially funded by a billionaire quite aligned with their strategy?

The internal analyses from Open Phil I've been privileged to see were pretty good. They were also made by humans, who

... (read more)

Agree, I suspect most people downvoted it because they inferred it was a leading question.

Lukas_Gloor, 19d
I downvoted the question. I'd have found it okay if the question had explicitly asked for just good summaries of the trial coverage or the sentencing report. (E.g., there's the Twitter handle Inner City Press, which was tweeting transcript summaries of every day on trial, or the Carl Reilly YouTube channel for daily summaries of the trial. And there's the more recent sentencing report that someone here linked to.)

Instead, the question came across as though there's maybe a mystery here for which we need the collective smarts and wisdom of the EA Forum. There are people who do trial coverage for a living who've focused on this case. EAs are no longer best-positioned to opine on this, so it's a bit weird to imply that this is an issue that EAs should discuss (as though it's the early days of Covid or the immediate aftermath of FTX, when the EA Forum arguably had some interesting alpha). It's also distracting.

I think part of what made me not like this question is that OP admits on the one hand that they struggled with finding good info on Google, but then they still give their own summary about what they've found so far. Why give these half-baked takes if you're just a not-yet-well-informed person who's struggled to find good summaries? It feels like "discussion baiting."

Now, if someone did think that SBF/FTX didn't do anything illegal, I think that could be worth discussing, but it should start with a high-quality post where someone demonstrates that they've done their homework and have good reasons for disagreeing with those who have followed the trial coverage and concluded, like the jury, that SBF/FTX engaged in fraud.

I haven't seen the series, but am currently halfway through the second book.

I think it really depends on the person. The person I imagine watching Three-Body Problem, getting hooked, and subsequently pondering how it relates to the real world seems like someone who would also get hooked by being sent a good LessWrong post?

But sure, if someone mentioned to me they watched and liked the series and they don't know about EA already, I think it could be a great way to start a conversation about EA and Longtermism.

I think there's a huge difference in potential reach between a major TV series and a LessWrong post.

According to this summary from Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top 10 Netflix series in over 90 countries. 

Whereas a good LessWrong post might get 100 likes. 

We should be more scope-sensitive about public impact!

Relevant to the discussion is a recently released book by Dirk-Jan Koch, who was Chief Science Officer in the Dutch Foreign Ministry (which houses their development efforts). The book explores the second-order effects of aid and their implications for effective development assistance: Foreign Aid And Its Unintended Consequences.

In some ways, the arguments for needing to focus more on second-order effects are similar to the famous 'Growth and the case against randomista development' forum post.

The West didn't become wealthy through marginal health interven... (read more)

Just FYI, Dean Karlan doesn't run USAID; he's its Chief Economist. Samantha Power is the Administrator of USAID.

I think Bryan Caplan is directionally correct, but his argumentation in this post is incredibly sloppy.

A Marxist communist could make the exact same complaint as Bryan Caplan, but with the signs flipped. Why do all these economists focus on RCTs for educational interventions, and never once consider that the best educational intervention is to rise up in violent revolution and overthrow our capitalist oppressors?

I don't recall any of the RCT papers I've read being particularly heavy on normative claims. Usually they'll just say:

"this intervention had a measurab... (read more)

Consider joining hackathons such as the ones organized by Apart Research. Anyone can join and get to work on problems directly related to AI Safety.

If you do a good project, you can put that on your resume and have something to speak about at your next interview.

Answer by MathiasKB, Mar 07, 2024

I think there are at least two categories:

  1. The beginner, who is scared of ridicule.
  2. The senior person, who doesn't have time to write to the forum standard without risking their reputation.

I'm more interested in what we can do to encourage the latter group. My impression is that many senior people are reluctant to post, as they don't have time to write something sufficiently well-argued and respond to the comments.

Instead many good discussions take place in signal groups, google docs and email threads. In a perfect world, these discussions would be in the forum. The issue rig... (read more)

Does Claude-3 push capabilities?

I think it can be a fun exercise to just interpret CEOs' statements literally and see what they imply.

If Dario Amodei claims they don't want to push capabilities, I think an interesting question to ask is in what sense releasing the world's best LLM isn't pushing capabilities.

One option that seems possible to me is that they no longer consider releasing improved LLMs to meaningfully push the frontier. If Claude-3 spurs OpenAI to push a quicker release of GPT-4.5, this would not be an issue, as releasing ever more ref... (read more)

JWS, 1mo
I think the answer is 'yes' for a general layperson's understanding of 'pushing capabilities', but the emerging EA discourse on this seems to be at risk of conflating several questions:

  1. Has Claude-3 shown better capability than other models? Yes, under certain specific conditions and benchmarks.
  2. Do those benchmarks matter/actually capture performance of interest? No, in my opinion. I'd recommend reading Melanie Mitchell's takes on this.
  3. Do Claude-3's extra capabilities make it more likely to cause an x-risk event? No, or at least the probability that the current frontier AI model will cause an x-risk event has gone from ~epsilon to ~epsilon.
  4. Will Claude-3's release increase or decrease x-risk? Very difficult to say; I don't know how people get over cluelessness objections to these questions.

So I guess in your post 'frontier' is covering 2 separate concepts: the 'frontier' in terms of published benchmarks and the 'frontier' in terms of marginal x-risk increase. In my opinion, Claude-3 may be an interesting case where these come apart.
Nick K., 2mo
It certainly does seem to push capabilities, although one could argue about whether the extent of it is very significant or not. Being confused and skeptical about their adherence to their stated philosophy seems justified here, and it is up to them to explain their reasoning behind this decision. On the margin, this should probably update us towards believing they don't take their stated policy of not advancing the SOTA too seriously.

I thought the video was excellent, and the highlights of your article were the concrete ideas and examples of good communication.

More concrete ideas please! I don't think anyone will disagree that EA hasn't been the best at branding itself, but in my experience it's easier said than done!

blehrer, 2mo
If people want more concrete ideas, they can hire me to do communications work. I don't know how to be more concrete than I was in the article without working for free.

Really cool experiment!

Was it possible to track to what extent the more engaging ads drove conversions? (donations made, pledges taken, etc.)

My hypothesis would be that the more engaging ads get more people onto the website, but those people will be much less likely to follow through (especially with significant amounts) than, for example, people reached by a very targeted and nerdy ad aimed at wealthy tech workers.

James Odene [User-Friendly], 2mo
Hey, thanks for reading. The objective for the campaign was to increase brand awareness (taking people from 'never heard of GWWC' to 'remember that they exist') and not conversion (taking people from 'remember that they exist' to 'doing something'). We would never expect people who had never heard of GWWC to hear about them for the first time and then pledge. It's going to take time to warm them up. The site's ability to convert traffic is also not part of our campaign test, nor within our control.

That said, it's important to remember that long-term branding work can produce conversion results, and in this case we delivered 3x more pledge page views (for 4 mins and more) than organic traffic, and ~80% of all traffic hitting the pledge page. So we were targeting engaged and interested people (as compared to organic traffic). It's too early to know the pledge levels of this new audience, as it'll take time and continued engagement to bring them along (we'd expect ~7 interactions before they act), but it's a good story that we're bringing a much larger audience to the table.

What's the basis of your hypothesis?

I think this leaves out what is perhaps the most important step in making a quality forecast, which is to consider the base rates!

Vasco Grilo, 2mo
Nice point, Mathias! I agree reference class forecasting is super important. I think it is supposed to be included in the 3rd commandment about the inside and outside view:

Signal boosting my twitter poll, which I am very curious to have answered:

https://twitter.com/BondeKirk/status/1758884801954582990

Basically the question I'm trying to get at is whether having hands-on experience training LLMs (proxy for technical expertise) makes you more or less likely to take existential risks from AI seriously.

and even if they were solvent at the time, that does not mean they were not fraudulent.

If I took all my customers' money, which I had promised to safekeep, and went to the nearest casino and put it all on red, even if I won it would still be fraud.

Joel Becker, 2mo
Strong agree -- I enjoyed Brad DeLong on this point.

In conclusion, I think that rather than being overly focused on finding the most effective means of doing good, we should also be concerned with becoming more altruistic, caring and compassionate.

 

I strongly agree with the last half of this sentence. A rocket engine is only valuable insofar as it is pointed in the right direction. Similarly to how it makes sense to practice using spreadsheets to systematize one's decision-making, I think it makes sense to think about ways to become more compassionate and kind.

Ulrik Horn, 4mo
I disagree for another reason too: I think we should be a movement that is welcoming and feels like home to both the more dispassionate and the more caring among us. I think we might even become stronger by having such a range of emotional drives behind our ambition to do the most good.

We do not know how to make a PAI which does not kill literally everyone.

 

We don't know how to make a PAI that does kill literally everyone either. What would the world have to look like for you to be pro more AI research and development?

Greg_Colbourn, 4mo
It's pretty much just a matter of throwing more money (compute and data) at it now. Current systems are only not killing everyone because they are weak.

Just did it, still works. You can donate to what looks like any registered US charity, so plenty of highly effective options whether you care about poverty or animal welfare.

There are a few I know of:

  • For the new R21 vaccine, WHO is currently conducting prequalification of the production facilities. As far as I understand, African governments have to wait for prequalification to finish before they can apply for subsidized procurement and rollout through UNICEF and GAVI.
  • For both RTS,S and R21, there are some logistical difficulties due to the vaccines' 4-dose schedule (the first three doses are given 1 month apart, which doesn't fit all too well into existing vaccination schedules), cold-chain requirements, and timing peak immunity with the seasonality
... (read more)

Ah, today I learned! Thanks for correcting that. For what it's worth, I was vegan for two years and have been vegetarian for 6.

Do you happen to know about the bioavailability claims of animal versus plant protein?

Benny Smith, 5mo
Bioavailability stuff is pretty technical and I'm not an expert, but here's the upshot according to me: Bioavailability is sometimes slightly lower in plants, but not enough to matter. For example, a recent review stated:

Additionally, combining multiple plant sources in one meal (e.g. soy and potato) often achieves bioavailability competitive with meat (I think this is one reason why many vegan protein powders combine multiple ingredients, e.g. rice & pea protein). So the generic vegan advice of "eat a variety of foods and supplement B12" has this covered.

In the rich world, we get way more protein than we need, so vegans are very unlikely to end up protein deficient due to bioavailability issues. And if you're an athlete or trying to bulk up, I think it's generally advisable to err on the side of overshooting your protein intake targets, even if you're eating meat. Slightly overshooting your protein target should more than compensate for any bioavailability gap.

We can also measure protein synthesis and muscle strength and mass directly, instead of using bioavailability as a proxy, and such studies don't find downsides to plant protein. Germany's strongest man can confirm.

They literally don't. Animal proteins contain every essential amino acid, whereas any plant protein will only have a subset.

This is a common misconception!

  • Several plants, including soy and quinoa, are complete proteins.
  • Vegan protein powders contain all the amino acids in appropriate ratios – just check the label of any pea protein powder next time you’re at the store. Pea protein powder is nutritionally identical to whey for all intents and purposes.
  • If you eat enough calories and a variety of legumes and grains as a vegan, it’s basically impossible to be deficient in any amino acid. It’s true that plant foods have amino acids in varying amounts, but they complement each other s
... (read more)

I'm quite excited about cricket protein! Nutritionally it's superior to vegan protein supplements, especially for people who are otherwise vegan and won't get animal protein.

My intuition is that it very much comes down to whether one views an undisturbed cricket life as net-positive or negative. A cricket farm breeds millions of crickets in a 6-week cycle where the crickets are frozen to death not long before they would naturally die of old age.

Rethink Priorities recently incubated the Insect Institute, which I think is exploring insect sentience. They're mo... (read more)

There’s nothing magical about “animal protein.” Plants and plant-based protein powders provide the same nutrients, minus the moral atrocity.

Insect sentience is debated, but I’m not sure why we’d take the risk when we can just go vegan.

I’m highly skeptical that farmed crickets would live “undisturbed” lives, given the historical track record of how animals are treated when we optimize their lives for meat production rather than their own welfare. Generally, we should treat sentient beings as an end in themselves, not as a means to an end.

Bravo! This really sets a bar for the quality of inquiry we should strive for in this community.

Forgive me for having the IQ of a shrimp, but could you spell out a concrete problem that the Odyssean Process could be used to solve?

I.e.:

problem: "People disagree over what colors the new metro line should be"

hypothetical process: "12 people sit in a room and propose color palettes. Those color palettes are handed out to a panel of 100 randomly picked citizens, who deliberate and then finally vote on them"

I skimmed through the report and am pretty confused as to what concretely the process is.

Odyssean Institute, 5mo
Hi Mathias,

To see the Process itself, page 16 has a diagram following the tables outlining each component of it, and subsequent pages have the commentary. You're broadly accurate in your proposed case study of the metro in the form of the process, in that our Process for this problem would entail horizon scanning key uncertainties or trends in metro line design, such as comparative analysis of successful metro redesigns with measurable successes. This is then presented to the 100 citizens through an iterative process of identifying their values, possible solutions, and uncertainties, and then using decision making under deep uncertainty (DMDU) to coproduce actionable pathways that fulfil their multiple criteria.

However, due to the nature and involved aspect of the process, it is geared explicitly towards challenging, or wicked, problems and existential risks or GCR, rather than simpler or more trivial policy issues. We also have an abstract version of this process in the 'Combining the Pictures' section. In short: horizon scan a complex issue or trends, enable deliberation by a wider sample using this, and iterate using DMDU to facilitate finding the win-wins within the solution space that may have been neglected, increasing the tractability of the eventual recommendations.

We don't want to pick too specific a use case, as we see great value in the generalisability of this across cause areas. Furthermore, in our commentary on the process, we cite a few concrete examples of deliberation, DMDU, and EEJ and where they have been used, with citations to read further on their applications. Some examples include the Dutch Delta Commissioner's work for DMDU; Irish, Taiwanese, and American uses of deliberation; and the WHO's uses of expert elicitation and horizon scanning, as well as biorisk and ecological cases. We had to lean a little on brevity due to the range of components involved, so ideally the citations can furnish further detail where we couldn't due to leng

That's a really cool point, do share those sources!

Are there any studies on which calories get cut when people go on semaglutide? I imagine it's the empty carbs that would go before the beef, but maybe that's already factored into the estimate?

The latest reports of CEARCH might be of interest to the new team:

Hypertension reduction through salt taxation:

https://drive.google.com/file/d/1R2ul47NtD-dJ7D7rcHFZ0z7h0JqcFxK_/view

 

Diabetes through sugar-soda tax:

https://drive.google.com/file/d/1UrYZUGbLn5LeTRVRZYdiY2EorsmXxQwR/view

GiveDirectly goes into detail in this blog post: https://www.givedirectly.org/drc-case-2023/

The founder of GiveDirectly also discusses the fraud case in this 80k podcast: https://open.spotify.com/episode/4yKwimUbdzPeg9MWTuJOoI?si=0eb1f2d942914963

Perhaps some of his motivation was to keep OpenAI from imploding?

Lukas_Gloor, 5mo
Hm, very good point! I now think that could be his most immediate motivation. Would feel sad to build something and then see it implode (and also the team to be left in limbo). On reflection, that makes me think maybe Sam doesn't necessarily look that bad here. I'm sure Microsoft tried to use their leverage to push for changes, and the OpenAI board stood its ground, so it couldn't have been easy to find a solution that isn't the company falling apart over disagreements and stuff.
MathiasKB, 5mo

For those who agree with this post (I at least agree with the author's claim if you replace 'most' with 'more'), I encourage you to think about what you personally can do about it.

I think EAs are far too willing to donate to traditional global health charities, not due to them being the most impactful, but because they feel the best to donate to. When I give to AMF I know I'm a good person who had an impact! But this logic is exactly what EA was founded to avoid.

I can't speak for animal welfare organizations outside of EA, but at least for the ones that have come ou... (read more)

I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started if not for this.

<3 This is super awesome / inspirational, and I admire you for doing this!

Given it is the Giving Season, I'd be remiss not to point out that ACE currently has donation matching for their Recommended Charity Fund.

I am personally waiting to hear back from RC Forward on whether Canadian donations are also eligible for said donation matching, but for American EAs at least, this seems like a great no-brainer opportunity to dip your toes into effective animal welfare giving.

For what it's worth, I think saving up runway is a no-brainer.

During my one year as a tech consultant, I put aside half each month and donated another 10%. The runway I built made the decision for me to quit my job and pursue direct work much easier.

In the downtime between two career moves, it allowed me to spend my time pursuing whatever I wanted without worrying about how to pay the bills. This gave me time to research and write about snakebites, ultimately leading to Open Phil recommending a $500k investment into a company working on snakebite diagnosti... (read more)

43 1:1s, holy moly, surely that must be the record. Well done!

Harry Luk, 5mo
Thank you :) I met someone else at EAG Boston who also did 40+ at the last conference. Definitely something achievable, just have to do the right pre-conference preparations and stay hydrated/fed during the event. If I can do it (I'm mid-career, aka older with less energy), you can too with a BIG ENOUGH WHY!

Is this from Y Combinator's podcast or something? I feel like I've read this before.

John Salter, 6mo
I used YC endorsement as a filter to decide what to include, as that way I know it's a common enough mistake to justify talking about. Do you watch their YouTube channel?

Was about to write this! Deeply unserious that something of this poor quality can make it through peer review.

I've noticed a decrease in the quality and accuracy of communication among people and organizations advocating for pro-safety views in the AI policy space. More often than not, I'm seeing people go with the least charitable interpretations of various claims made by AI leaders.

Arguments are increasingly looking like soldiers to me.

Take the following Twitter thread from Dr. Peter S. Clark describing his new paper co-authored with Max Tegmark.

The authors use game theory to justify a slew of normative claims that don't follow. The choice of language makes refu... (read more)

Chris Leong, 6mo
Any suggestions for improving this?

I'd be interested in hearing about why he believes in retributivism!

(he mentions being a retributivist in this blogpost)

Bugged out for me too; it showed up when I tried editing the post, so I just republished without any changes. That seems to have fixed it.

I did my BSc in computer science, so it's possible!

I joined a political party in my country and started applying for jobs and internships. What got me my first one was cold-emailing the Members of the European Parliament from my party; they put in a good word for me among the dozens of other people who applied through the official forms.

joehindley, 6mo
Thanks!  Are there any skills that you gained from your CS degree that you think have put you at an advantage in the policy sphere?
NickLaing, 7mo
Thanks man, will give this a read and get back to you.

The minute suffering I experience from the cold is not the real cost!

I'm probably an outlier, given that a lot of my work is networking, but I have had to cancel attending an event where I was invited to speak (and where I likely would have met at least a few people relevant to my work), cancel an in-person meeting (though I will likely get a chance to meet them later), and reschedule a third.

The cold probably hit at the best possible time (right after two meetings in parliament); had it come sooner, it would have really sucked.

Additional... (read more)

Why is it that I must return from 100% of EAGs with either covid or a cold?

Perhaps my immune system just sucks or it's impossible to avoid due to asymptomatic cases, but in case it's not: If you get a cold before an EAG(x), stay home!

For those who do this already, thank you! 

AI Rights Activist, 7mo
I would strongly urge people to err on the side of attendance. The value of the connections made at EAGs and EAGxs far exceeds the risks posed by most communicable diseases, especially if precautions are taken, such as wearing a mask.  If you take seriously the value of connections, many of them could very well exceed the cost to save a life. Would you say that your avoiding a cold is worth the death of someone in the developing world, for instance? I think your request fails to take seriously the value of making connections within the EA community.
MichaelStJules, 7mo
If it's just a cold, or you're testing negative for COVID but still have mild symptoms, I think it should be okay to attend wearing a mask indoors and distanced outside, and eating outside or alone. I did this once for an EAG, under the advice of the team, with multiple negative tests and only mild symptoms. I also checked with each of my 1-on-1s whether they were still okay meeting and how, and (maybe excessively, and probably not what the team expected) skipped almost all group events and talks I had originally planned to attend. Part of the reason I skipped group events and talks was that I wouldn't be able to check with everyone whether they were comfortable with me attending. That being said, I felt pretty self-conscious attending, which was unpleasant, but I also had good 1-on-1s, as well as good interactions outside of the formal events.
trevor1, 7mo
Do a large proportion of people come back from EAGs infected with a variant of COVID, relative to other large gatherings?

Thanks, you just bought me days of productivity
