Quick takes

(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK? 

I want to try to gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place, or in working to run them? Examples of hosted events could be: workshops, conferences, unconferences, retreats, summer schools, coding/data science bootcamps, EtG accelerators, EA charity accelerators, intro to EA bootcamps, AI Safety bootcamps, etc.

This would be next door to CEEALAR (the building is potentially coming on the market), but mos... (read more)

Points against for me: 
- The hassle of the purchase and getting it up and running (on top of lots of other things I've got going on already).
- Short timelines could make it all irrelevant (unless we get a Pause on AGI).
- If it doesn't work out and I end up selling the building again, it could end up quite a bad investment relative to the counterfactual (of holding crypto). [This goes both ways though.]


Not sure how to post these two thoughts so I might as well combine them.

In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially because of the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons: a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.

However, I also think many lessons from SBF's personal statements e.g. his interview on 80k are still as valid as ever. Just off the top of my head:

  • Startup-to-give as a high EV career
... (read more)

Watch team backup: I think we should be incredibly careful about saying things like, "it is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating". I'm sure you mean something reasonable when you say this, similar to what's expressed here, but I still wanted to flag it.

I made a quick (and relatively uncontroversial) poll on how people are feeling about EA. I'll share if we get 10+ respondents.

Currently 27-ish[1] people have responded:

Full results: https://viewpoints.xyz/polls/ea-sense-check/results 

Statements people agree with:

Statements where there is significant conflict:

Statements where people aren't sure or dislike the statement:

  1. ^

    The applet makes it harder to track numbers than the full site. 

huw
5h
Without reading too much into it, there's a similar amount of negativity about the state of EA as there is a lack of confidence in its future. That suggests to me that there are a lot of people who think EA should be reformed to survive (rather than 'it'll dwindle and that's fine' or 'I'm unhappy with it but it'll be okay')?

I've said that people voting anonymously is good, and I still think so, but when I have people downvoting me for appreciating little jokes that other people post on my shortform, I think we've become grumpy.

titotal
1h
In my experience, this forum seems kinda hostile to attempts at humour (outside of April Fools' Day). This might be a contributing factor to the relatively low population here!

I get that, though it feels like shortforms should be a bit looser. 

I intend to strong downvote any article about EA that someone posts on here that they themselves have no positive takes on. 

If I post an article, I have some reason I liked it. Even a single line. Being critical isn't enough on its own. If someone posts an article, without a single quote they like, with the implication it's a bad article, I am minded to strong downvote so that no one else has to waste their time on it.

What do you make of this post? I've been trying to understand the downvotes. I find it valuable in the same way that I would have found it valuable if a friend had sent me it in a DM without context, or if someone had quote tweeted it with a line like 'Prominent YouTuber shares her take on FHI closing down'. 

I find posts like this useful because it's valuable to see what external critics are saying about EA. This helps me either a) learn from their critiques or b) rebut their critiques. Even if they are bad critiques and/or I don't think it's worth my... (read more)

Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to? 

(Context for after voting: I'm trying to figure out if more explainers of this would be helpful. I still feel confused about some of its implications, despite having spent significant time trying to understand it)

You might want to use viewpoints.xyz to run a poll here. 

Stan Pinsent
7h
I have a post that takes readers through a basic example of how to calculate Shapley values.
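For readers who want the gist before clicking through: the Shapley value pays each player their marginal contribution, averaged over every order in which the coalition could have been assembled. A minimal sketch in Python (the two-charity example and its numbers are hypothetical, not taken from Stan's post):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Average each player's marginal contribution over every
    ordering in which the full coalition could be assembled."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# Hypothetical example: two charities fundraise jointly. Alone they'd
# raise 4 and 6 (in $k); together, synergies push the total to 14.
v = {
    frozenset(): 0,
    frozenset({"A"}): 4,
    frozenset({"B"}): 6,
    frozenset({"A", "B"}): 14,
}
print(shapley_values(["A", "B"], v))  # {'A': 6.0, 'B': 8.0}
```

Note that the values sum to the value of the full coalition (6 + 8 = 14), which is one of the properties that makes Shapley values attractive for impact attribution.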

I might start doing some policy BOTEC (back-of-the-envelope calculation) posts, i.e. posts where I suggest an idea and try to figure out how valuable it is. I think I could do this faster with a group to bounce ideas off.

If you'd like to be added to a message chat (on WhatsApp probably) to share policy BOTECs then reply here or DM me.
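To illustrate the format I have in mind, a minimal sketch of a policy BOTEC (every number below is a made-up placeholder, not a real estimate):

```python
# Hypothetical policy BOTEC:
# expected benefit = P(passes) * people affected * benefit/person/year * years
p_success = 0.05          # chance the advocacy effort gets the policy adopted
people_affected = 2e6     # people covered if it passes
benefit_per_person = 10   # value per person per year, in $ (placeholder)
years = 5                 # how long the effect plausibly lasts
campaign_cost = 1e6       # total advocacy spend, in $

expected_benefit = p_success * people_affected * benefit_per_person * years
roi = expected_benefit / campaign_cost
print(f"Expected benefit: ${expected_benefit:,.0f}; ROI: {roi:.1f}x")
# Expected benefit: $5,000,000; ROI: 5.0x
```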

Is "EA is a bait and switch" a compelling argument for it being bad?

I don't really think so:

  1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities - is it a bait and switch when churches don't discuss their most controversial beliefs at a "bring your friends" service? What about wearing nice clothes to a first date? [1]
  2. EA is a big movement composed of different groups[2]. Many describe it differently.
  3. EA has done so much global health stuff I am not sure it can be described as a bait and switch. eg h
... (read more)

I think that there might be something meaningfully different between wearing nice clothes to a first date (or a job interview), as opposed to intentionally not mentioning more controversial/divisive topics to newcomers. I think there is a difference between putting your best foot forward (dressing nice, grooming, explaining introductory EA principles articulately with a 'pitch' you have practiced) and intentionally avoiding/occluding information.

For a date, I wouldn't feel deceived/tricked if someone dressed nice. But I would feel deceived if the person in... (read more)

Richard Y Chappell
2d
re: fn 1, maybe my tweet?
Nathan Young
2d
Yes, I thought it was you but I couldn't find it. Good analogy.

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in.

I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.

Larks
2d
The full quote suggests this is because he classifies Operation Warp Speed (reactive, targeted) as very different from the Office (wasteful, impossible to predict what you'll need, didn't work last time). I would classify this as a disagreement about means rather than ends.

I'd class those comments as mostly a disagreement around ends. The emphasis on not getting the credit from his own support base and Republicans not wanting to talk about it are the most revealing. A sizeable fraction of his most committed support base are radically anti-vax, to the point there was audible booing at his own rally when he recommended they get the vaccine, even after he'd very carefully worded it in terms of their "freedoms". It's less a narrow disagreement about a specific layer of Biden bureaucracy and more a recognition that his base sees l... (read more)

RedStateBlueState
2d
Trump is anti-tackling pandemics except insofar as it implies he did anything wrong

An alternate stance on moderation (from @Habryka).

This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that (I guess) the team responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons.

I found it thought provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to parti... (read more)

"will allow?"

very good.

Nathan Young
1d
Yeah seems fair.
Nathan Young
1d
Apart from choosing who can attend their conferences (which are the de facto place that many community members meet), writing their intro to EA, managing the effective altruism website, and offering criticism of specific members' behaviour. It seems like they are the de facto people who decide what is or isn't a valid way to practice effective altruism; if anything, more so than the LessWrong team (or maybe rationalists are just inherently unmanageable). I agree on the ironic point though. I think you might assume that the EA Forum would moderate more than LW, but that doesn't seem to be the case.

There have been multiple occasions where I've copied and pasted email threads into an LLM and asked it things like:

  1. What is X person saying?
  2. What are the cruxes in this conversation?
  3. Summarise this conversation.
  4. What are the key takeaways?
  5. What views are being missed from this conversation?

I really want an email plugin that basically brute forces rationality INTO email conversations.

Tangentially - I wonder if LLMs can reliably convert people's claims into a % through sentiment analysis? This would be useful for forecasters, I believe (and for rationality in general).
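For what it's worth, a minimal sketch of what the core of such a plugin might look like, assuming the OpenAI Python client (the model name, prompt wording, and function are placeholders I made up, not an existing product):

```python
# Hypothetical sketch: paste in an email thread, get back positions,
# cruxes, and a rough implied-confidence percentage per claim.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """Given the email thread below:
1. Summarise what each person is saying, in one sentence each.
2. List the cruxes: claims that, if resolved, would settle the disagreement.
3. For each explicit claim, estimate the speaker's implied confidence as a %.
4. Note any views missing from the conversation.

Thread:
{thread}"""

def analyse_thread(thread_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[{"role": "user", "content": PROMPT.format(thread=thread_text)}],
    )
    return response.choices[0].message.content

print(analyse_thread("Alice: I'm fairly sure X matters. Bob: X seems overrated..."))
```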

It knows the concept of cruxes? I suppose that isn’t that surprising in retrospect.

Do you believe that altruism actually makes people happy? Peter Singer's book argues that people become happier by behaving altruistically, and psychoanalysis also classifies altruism as a mature defense mechanism. However, there are also concerns about pathological altruism and people pleasers. In-depth research data on this is desperately needed.

Good question, and one I also think about!

After being deeply into EA for only a few months, I already realise that discussing it with non-EA people makes me emotional, since I "cannot understand" why they are not easily convinced of it as well. How can something so logical not be followed by everyone, at least by donating? I think there is a danger of becoming pathetic if you don't reflect on it and stay aware that you cannot convince everybody.

On the other hand, EA is already having a big impact on how I donate and how I act in my job - so in this ... (read more)

tlevin
3d

I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.

I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves ... (read more)

Do you have specific examples of proposals you think have been too far outside the window?

Tyler Johnston
2d
I broadly want to +1 this. A lot of the evidence you are asking for probably just doesn't exist, and in light of that, most people should have a lot of uncertainty about the true effects of any overton-window-pushing behavior. That being said, I think there's some non-anecdotal social science research that might make us more likely to support it. In the case of policy work:

  • Anchoring effects, one of the classic Kahneman/Tversky biases, have been studied quite a bit, and at least one article calls it "the best-replicated finding in social psychology." To the extent there's controversy about it, it's often related to "incidental" or "subliminal" anchoring, which isn't relevant here. The market also seems to favor a lot of anchoring strategies (like how basically everything on Amazon is "on sale" from an inflated MSRP), which should be a point of evidence that this genuinely just works.
  • In cases where there is widespread "preference falsification," overton-shifting behavior might increase people's willingness to publicly adopt views that were previously outside of it. Cass Sunstein has a good argument that being a "norm entrepreneur", that is, proposing something that is controversial, might create chain-reaction social cascades. A lot of the evidence for this is historical, but there are also polling techniques that can reveal preference falsification, and a lot of experimental research that shows a (sometimes comically strong) bias toward social conformity, so I suspect something like this is true. Could there be preference falsification among lawmakers surrounding AI issues? Seems possible.

Also, in the case of public advocacy, there's some empirical research (summarized here) that suggests a "radical flank effect" whereby overton-window-shifting activism increases popular support for moderate demands. There's also some evidence pointing the other direction. Still, I think the evidence supporting it is stronger right now.

P.S. Matt Yglesias (as usual) has a go...
tlevin
2d
Yeah, this is all pretty compelling, thanks!

Is there any research on the gap between AI safety research and reality? I wanted to read Eric Drexler's report on R&D automation in AI development, but it was too long so I put it on hold.
It is very doubtful whether such things are within the controllable range. Two examples:
(1) The OpenAI incident.
(2) Open-source projects such as Stockfish make their development process public. However, it is still very unclear and opaque (despite their best efforts).
Overall, I feel strongly that research on AI safety is disconnected from reality.

While we're taking a short break from writing criticisms, I (the non-technical author) was wondering if people would find it valuable for us to share (brief) thoughts on what we've learnt so far from writing these first two critiques - such as how to get feedback, balance considerations, anonymity concerns, and things we wish were different in the ecosystem to make it easier for people to provide criticisms, etc.

  1. Especially keen to write for the audience of those who want to write critiques
  2. Keen to hear what specific things (if any) people would be curious
... (read more)

I love this series and I'm sorry to see that you haven't continued it. The rapid growth of AI Safety organizations and the amount of insider information and conflicts of interest is kind of mind boggling. There should be more of this type of informed reporting, not less. 

Austin
9mo
Hi Omega, I'd be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you've critiqued, Apollo is very new and hasn't received the requisite >$10m, but it's easy to imagine them becoming a major TAIS lab over the next years!
Joseph Lemien
9mo
I'd be interested to read about what you've learnt so far from writing these critiques.

I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from.

I really like Zvi's work, and he has been right about a lot of things I ... (read more)

MichaelStJules
6d
Yudkowsky's views are discussed here:
1. https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/
2. https://www.lesswrong.com/posts/KFbGbTEtHiJnXw5sk/i-really-don-t-understand-eliezer-yudkowsky-s-position-on

This was very helpful, thank you! 

NickLaing
6d
Perhaps the large uncertainty around it makes it less likely that people will argue against it publicly as well. I would imagine many people might think with very low confidence that some interventions for non-human animals might not be the most cost-effective, but stay relatively quiet due to that uncertainty.

This is an interesting #OpenPhil grant. $230K for a cyber threat intelligence researcher to create a database that tracks instances of users attempting to misuse large language models.

https://www.openphilanthropy.org/grants/lee-foster-llm-misuse-database/

Will user data be shared with the user's permission? How will an LLM determine the intent of the user when it comes to differentiating purposeful harmful entries from user error, safety testing, independent red-teaming, playful entries, etc.? If a user is placed on the database, is she notif... (read more)
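To make those questions concrete: a hypothetical sketch of what a single record in such a database might need to capture (the schema and field names are my invention, not from the grant):

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class SuspectedIntent(Enum):
    """The categories the questions above distinguish between."""
    HARMFUL = "purposeful harmful entry"
    USER_ERROR = "user error"
    SAFETY_TESTING = "safety testing"
    RED_TEAMING = "independent red-teaming"
    PLAYFUL = "playful entry"

@dataclass
class MisuseRecord:
    user_id: str                       # pseudonymised? shared with consent?
    prompt_excerpt: str                # the flagged input
    model: str                         # which LLM was targeted
    suspected_intent: SuspectedIntent  # classifier's best guess
    intent_confidence: float           # how sure is the classifier?
    user_notified: bool                # is she told she's on the database?
    recorded_at: datetime = field(default_factory=datetime.utcnow)
```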

Not a wholly-unserious suggestion. SWP could do a tie-in with the artist creating these fun knock-offs, capitalise on Swift madness, rehabilitate shrimp as cute in the process.

Excerpt from the most recent update from the ALERT team:

 

Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious.

Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantiall... (read more)

SiebeRozendal
3d
I'm personally pretty concerned given the many rumors of farm workers falling ill and looking to join a discussion group. In case it doesn't exist, I've started a WhatsApp community that you can join via this link: https://chat.whatsapp.com/LD6OAM32PgF7WdJ51ABVsl
MathiasKB
3d
Forecasting Newsletter by Nuño Sempere

Given how bird flu is progressing (spread in many cows, virologists believing rumors that humans are getting infected but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.


Btw, I don't think the virus has a high mortality rate in its current form, based on these reported rumors

SiebeRozendal
5d
In This Week in Virology (TWiV 1108: Clinical update with Dr. Daniel Griffin), Vincent Racaniello says that he had visited Ohio farmers, and that farm workers were getting specifically conjunctivitis rather than respiratory infections. He mentioned this really casually. Also this, from an opinion piece by Zeynep Tüfekçi in the NY Times: "It's not like there's any at-scale human testing." However, I don't think these cases are likely to lead to sustained human-to-human transmission, if it's true that most have only conjunctivitis. It's in line with the one confirmed case, which only had conjunctivitis and no other symptoms: https://www.cdc.gov/media/releases/2024/p0401-avian-flu.html It's also in line with Fouchier et al., 2004. It spreading to pig farms seems the biggest risk at the moment, and not unlikely.
SiebeRozendal
3d
More links:
- April 22, Science:
- April 29, Daily Mail: https://www.dailymail.co.uk/health/article-13363325/bird-flu-outbreak-humans-texas-farm-worker-sick.html