This is a special post for quick takes by JWS. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
JWS · 4mo

Many people find the Forum anxiety-inducing because of the high amount of criticism. So, in the spirit of Giving Season, I'm going to give some positive feedback and shout-outs for the Forum in 2023 (from my PoV). So, without further ado, I present the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-2023-Forum-Awards: 🏆✨🎄[1]
 

Best Forum Post I read this year:

10 years of Earning to Give by @AGB: A clear, grounded, and moving look at what it actually means to 'Earn to Give'. In particular, the 'Why engage?' section really resonated with me.

Honourable Mentions:

Best ... (read more)

David Mathers · 4mo
Thanks for saying nice things about me! For the record, I also think David Thorstad's contributions are very valuable (whether or not his views are ultimately correct). 
Joseph Lemien · 4mo
This is a lovely idea. Bravo!
JWS · 5mo

This is an off-the-cuff quick take that captures my current mood. It may not have a long half-life, and I hope I am wrong

Right now I am scared

Reading the tea-leaves, Altman and Brockman may be back at OpenAI, the company charter changed, and the board - including Toner and McCauley - removed from the company

The mood in the Valley, and in general intellectual circles, seems to have snapped against EA[1]

This could be as bad for EA's reputation as FTX

At a time when important political decisions about the future of AI are being made, and potential coalitions are being formed

And this time it'd be second-impact syndrome

I am scared EA in its current form may not handle the backlash that may come

I am scared that we have not done enough reform in the last year from the first disaster to prepare ourselves

I am scared because I think EA is a force for making the world better. It has allowed me to do a small bit to improve the world. Through it, I've met amazing and inspiring people who work tirelessly and honestly to actually make the world a better place. Through them, I've heard of countless more actually doing what they think is right and giving what they can to make the world we find ourse... (read more)

As with Nonlinear and FTX, I think that for the vast majority of people, there's little upside to following this in real-time.

It's very distracting, we have very little information, things are changing fast, and it's not very action-relevant for most of us.

I'm also very optimistic that the people "who work tirelessly and honestly to actually make the world a better place" will keep working on it after this, whatever happens to "EA", and there will still be ways to meet them and collaborate.

Sending a hug

JWS · 5mo
Thanks for this Lorenzo, I appreciate it <3

It's hard to see how the backlash could actually destroy GiveWell or stop Moskowitz and Tuna giving away their money through Open Phil/something that resembles Open Phil. That's a lot of EA right there.

JWS · 5mo

It's hard yes, but I think the risk vectors are (note - these are different scenarios, not things that follow in chronological order, though they could):

  • Open Philanthropy gets under increasing scrutiny due to its political influence
  • OP gets viewed as a fully politicised propaganda operation from EA, and people stop associating with it or accepting its money, or call for legal or political investigations into it, etc.
  • GiveWell etc. disassociate themselves from EA due to a strongly negative social reaction to EA among potential collaborators or donors
  • OP/GiveWell dissociate from and stop funding the EA community for similar reasons as the above, and the EA community does not survive

Basically I think that ideas are more important than funding. And if society/those in positions of power put the ideas of EA in the bin, money isn't going to fix that

This is all speculative, but I can't help the feeling that regardless of how the OpenAI crisis resolves a lot of people now consider EA to be their enemy :(

Sharmake · 5mo
My general thoughts on this: I'm mostly of the opinion that EA will survive this, barring something massively wrong like the board members willfully lying or massive fraud from EAs, primarily because most of the criticism is directed at the AI safety wing, and EA is more than AI safety, after all. Nevertheless, I do think that this could be true for the AI safety wing, which may have just hit a key limit to its power. In particular, depending on how this goes, I could foresee a reduction in AI safety's power and influence, and IMO this was completely avoidable.
JWS · 5mo
I think a lot will depend on the board's justification. If Ilya can say "we're pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn't be trusted to manage this safely", with proof, that might work - but then why not say that?[1] If it's just "we decided to go in a different direction", then firing him and demoting Brockman with little to no notice, and without informing their largest business partner and funder, makes it bizarre that they took such a drastic step in the way they did.

I was actually writing up my AI-risk-sceptical thoughts and what EA might want to take from that, but I think I might leave that to one side for now until I can approach it with a more even mindset.

1. ^

    Putting aside that I feel both you and I are sceptical that a new capability jump has emerged, or that scaling LLMs is actually a route to existential doom
Sharmake · 5mo
I suspect this is because, quite frankly, the concerns they had about Sam Altman being unsafe on AI had basically no evidence behind them except speculation from the EA/LW forums, which is not nearly enough evidence in the corporate/legal world. To be quite frank, the EA/LW standard of evidence for treating AI risk as a big enough deal to investigate is very low, sometimes non-existent, and that simply does not work once you have to deal with companies and the legal system. More generally, EA/LW is shockingly loose, sometimes non-existent, in its standards of evidence for AI risk, which doesn't play well with the corporate/legal system. This is admittedly a less charitable take than, say, Lukas Gloor's.
Lukas_Gloor · 5mo
Haha, I was just going to say that I'd be very surprised if the people on the OpenAI board didn't have access to a lot more info than the people on the EA Forum or LessWrong, who are speculating about the culture and leadership at AI labs from the sidelines.

TBH, if you put a randomly selected EA from a movement of 1,000s of people in charge of the OpenAI board, I would be very concerned that a non-trivial fraction of them probably would make decisions the way you describe. That's something that EA opinion leaders could maybe think about and address. But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things.

(In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually good at – or maybe I should say "some/many parts of EA are unusually good at" – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn't any groupthink among EAs. Also, "unusually good" isn't necessarily that high of a bar.])

I don't know for sure what they did or didn't consider, so this is just me going off of my general sense for people similar to Helen or Tasha. (I don't know much about Tasha. I've briefly met Helen but either didn't speak to her or only did small talk. I read some texts by her and probably listened to a talk or two.)

While I generally agree that they almost certainly have more information on what happened (which is why I'm not really certain of this theory), my main reason is that, for the most part, AI safety as a cause got away with incredibly weak standards of evidence for a long time, until the deep learning era (2019 onwards), especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it's slowly improving). This probably influenced a lot of EA safetyists like Ilya, who almost certainly imbibed the norms of the AI safety field, one of which is that a very low standard of evidence is needed to claim big things, and that's going to conflict with corporate/legal standards of evidence.

But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually g

... (read more)
David Mathers · 5mo
Why did you unendorse?
Sharmake · 5mo
I unendorsed primarily because, apparently, the board didn't fire Altman over safety concerns, though I'm not sure this is accurate.
akash · 5mo
I am unsure how I feel about takes like this. On one hand, I want EAs and the EA community to be a supportive bunch, so expressing how you are feeling and receiving productive/helpful/etc. comments is great. The SBF fiasco was mentally strenuous for many, so it is understandable why anything seemingly negative for EA elicits some of the same emotions, especially if you deeply care about this band of people genuinely aiming to do the most good they can.

On the other hand, I think such takes could also contribute to something I would call a "negative memetic spiral." In this particular case, several speculative projections are expressed together, and despite the qualifying statement at the beginning, I can't help but feel that several or all of these things will manifest IRL. And when you start believing in such forecasts, you might start saying similar things or expressing similar sentiments. In the worst case, the negative sentiment chain grows rapidly.

It is possible that nothing consequential happens. People's moods during moments of panic are highly volatile, so five years in, maybe no one even cares about this episode. But in the present, it becomes a thing against the movement/community. (I think a particular individual may have picked up one such comment from the Forum and posted it online to appeal to their audience and elevate negative sentiments around EA?)

Taking a step back, gathering more information, and thinking independently, I was able to reason myself out of many of your projections. We are two days in and there is still an acute lack of clarity about what happened. Emmett Shear, the interim CEO of OpenAI, stated that the board's decision wasn't over some safety vs. product disagreement. Several safety-aligned people at OpenAI signed the letter demanding that the board resign, and they seem to be equally disappointed over recent events; this is more evidence that the safety vs. product disagreement likely didn't lead to Altman's
JWS · 5mo

Thanks for your response Akash. I appreciate your thoughts, and I don't mind that they're off-the-cuff :)

I agree with some of what you say, and part of what I think is your underlying point, but on some others I'm a bit less clear. I've tried to think about two points where I'm not clear, but please do point out if I've got something egregiously wrong!

1) You seem to be saying that sharing negative thoughts and projections can lead others to do so, and this can then impact other people's actions in a negative way. It could also be used by anti-EA people against us.[1]

I guess I can kind of see some of this, but I'd view the cure as being worse than the disease sometimes. I think sharing how we're thinking and feeling is overall a good thing that could help us understand each other more, and I don't think self-censorship is the right call here. Writing this out, I think maybe I disagree with you about whether negative memetic spirals are actually a thing causally instead of descriptively. I think people may be just as likely a priori to have 'positive memetic spirals' or 'regressions to the vibe mean' or whatever

2) I'm not sure what 'I was able to reason myself out of many of your ... (read more)

akash · 5mo
'Hold fire on making projections' is the correct read, and I agree with everything else you mention in point 2.  About point 1 — I think sharing negative thoughts is absolutely a-ok and important. I take issue with airing bold projections when basic facts of the matter aren't even clear. I thought you were stating something akin to "xyz are going to happen," but re-reading your initial post, I believe I misjudged. 
JWS · 6mo

I want to register that my perspective on medium-term[1] AI existential risk (shortened to AIXR from now on) has changed quite a lot this year. Currently, I'd describe it as moving from 'Deep Uncertainty' to 'risk is low in absolute terms, but high enough to be concerned about'. I guess atm my estimates are moving closer toward the Superforecasters' in the recent XPT report (though I'd say I'm still Deeply Uncertain on this issue, to the extent that I don't think the probability calculus is that meaningful to apply)

Some points around this change:

  • I'm not sure it's meaningful to cleanly distinguish AIXR from other anthropogenic x-risks, especially since negative consequences of AI may plausibly increase other x-risks (e.g. from Nuclear War, biosecurity, Climate Change etc.)
  • I think in practice, the most likely risks from AI would come from deployment of powerful systems that have catastrophic consequences and are then rolled back. I'm thinking of Bing 'Sydney' here as the canonical empirical case.[2] I just don't believe we're going to get no warning shots.
  • Similarly, most negative projections of AI don't take into account negative social reaction and systema
... (read more)
Guy Raveh · 6mo
This seems like a very sensible and down-to-earth analysis to me, and I'm a bit sad I can't seem to bookmark it.
JWS · 6mo
Thanks :) I might do an actual post at the end of the year? In the meantime I just wanted to get my ideas out there as I find it incredibly difficult to actually finish any of the many Forum drafts I have 😭
David Mathers · 6mo
Do the post :) 
NickLaing · 6mo
I agree this feels plenty enough to be a post for me, but we all have different thresholds I guess!
Chris Leong · 6mo
“AI may plausibly increase other x-risks (e.g. from Nuclear War, biosecurity, Climate Change etc.)” I’m extremely surprised to see climate change listed here. Could you explain?
JWS · 6mo
Honestly, I just wrote a list of potential x-risks to make a similar reference class. It wasn't meant to be a specific claim, just examples for the quick take! I guess climate change might be less of an existential risk in and of itself (per Halstead), but there might be interplays between them that increase their combined risk (I think Ord talks about this in The Precipice). I'm also sympathetic to Luke Kemp's view that we should really just care about overall x-risk, regardless of cause area, as extinction by any means would be as bad for humanity's potential.[1]

I think it's plausible to consider x-risk from AI higher than from climate change over the rest of this century, but my position at the moment is that this would be more like 5% vs 1%, or 1% vs 0.01%, than 90% vs 0.001% - though as I said, I'm not sure trying to put precise probability estimates on this is that useful. Definitely accept the general point that it'd be good to be more specific with this language in a front-page post though.

1. ^

    Though not necessarily for the present; some extinctions may well be a lot worse than others there
Chris Leong · 6mo
My point is that even though AI emits some amount of carbon gases, I'm struggling to find a scenario where it's a major issue for global warming, as AI can help provide solutions here as well. (Oh, my point wasn't that climate change couldn't be an x-risk, though that has been disputed; more that I don't see the pathway for AI to exacerbate climate change.)
David Johnston · 6mo
I would take the proposal to be AI->growth->climate change or other negative growth side effects
Mo Putera · 6mo
I was wondering why he said that, since I've read his report before and that didn't come up at all. I suppose a few scattered recollections I have are:

  • Tom would probably suggest you play around with the takeoffspeeds playground to gain a better intuition (I couldn't find anything 1,000x-in-a-year-related at all though)
  • Capabilities takeoff speed ≠ impact takeoff speed (Tom: "overall I expect impact takeoff speed to be slower than capabilities takeoff, with the important exception that AI's impact might mostly happen pretty suddenly after we have superhuman AI")

[edit: a day after posting, I think this perhaps reads more combative than I intended? It was meant to be more 'crisis of faith, looking for reassurance if it exists' than 'dunk on those crazy longtermists'. I'll leave the quick take as-is, but maybe clarification of my intentions might be useful to others]

Warning! Hot Take! 🔥🔥🔥 (Also v rambly and not rigorous)

A creeping thought has entered my head recently that I haven't been able to get rid of... 

Is most/all longtermist spending unjustified?

The EA move toward AI Safety and Longtermism is often based on EV calculations that show the long term future is overwhelmingly valuable, and thus is the intervention that is most cost-effective.

However, more in-depth looks at the EV of x-risk prevention (1, 2) cast significant doubt on those EV calculations, which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.

But my doubts get worse...

GiveWell estimates around $5k to save a life. So I went looking for some longtermist calculations, and I really couldn't find any robust ones![1] Can anyone point me to some robust calculations for longtermist funds/organisations where they ... (read more)

which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.

Why do you think this?

For some very rough maths (apologies in advance for any errors), even Thorstad's paper (with a 2-century-long time of perils, a 0.1% post-peril risk rate, no economic/population growth, no moral progress, people live for 100 years) suggests that reducing p(doom) by 2% is worth as much as saving 16 × 8 billion lives - i.e. each microdoom is worth 6.4 million lives. I think we can buy microdooms more cheaply than $5,000 × 6.4 million = $32 billion each.
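The arithmetic above can be sanity-checked in a few lines (a sketch using only the figures stated in the comment: $5k per life, 16 × 8 billion lives per 2% absolute reduction in p(doom); the unit conversion to microdooms is the only thing added):

```python
# A "microdoom" is a one-in-a-million (1e-6) absolute reduction in p(doom),
# so a 2% absolute reduction is 20,000 microdooms.
lives_at_stake = 16 * 8_000_000_000   # lives the model values a 2% reduction at
microdooms = 20_000                   # 2% / 1e-6

lives_per_microdoom = lives_at_stake // microdooms

cost_per_life = 5_000                 # GiveWell's rough cost to save a life
breakeven_cost_per_microdoom = lives_per_microdoom * cost_per_life

print(lives_per_microdoom)            # 6,400,000 lives per microdoom
print(breakeven_cost_per_microdoom)   # $32,000,000,000 breakeven per microdoom
```

This reproduces the comment's figures: 6.4 million lives per microdoom, and a breakeven price of $32 billion per microdoom against GiveWell's benchmark.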

JWS · 6mo

(I can't actually find those calculations in Thorstad's paper, could you point them out to me? afaik he mostly looks at the value of fractional reductions in x-risk, while microdooms are an absolute reduction, if I understand correctly? happy to be corrected or pointed in the right direction!)

My concerns here are twofold:

1 - epistemological: Let's say those numbers from the Thorstad paper are correct, and a microdoom has to cost <= $32bn to be GiveWell cost-effective. The question is: how would we know this? In his recent post, Paul Christiano thinks that RSPs could lead to a '10x reduction' in AI risk. How does he know this? Is this just a risk reduction this century? This decade? Is it a permanent reduction?

It's one thing to argue that under a set of conditions X, work on x-risk reduction is cost-effective, as you've done here. But I'm more interested in the question of whether conditions X hold, because that's where the rubber hits the road. If those conditions don't hold, then that's why longtermism might not ground x-risk work.[1]

There's also the question of persistence. I think the Thorstad model either assumes the persistence of x-risk reduction, or the persistence of a low-risk p... (read more)

Larks · 6mo
He assumes 20% risk and a 10% relative risk reduction, which I translate into a 2% absolute reduction in the risk of doom; then see the table on p12.
nevakanezzar · 6mo
Isn't the move here something like, "If doom soon, then all pre-doom value nets to zero"? Which tbh I'm not sure is wrong. If I expect doom tomorrow, all efforts today should be to reduce it; one night's sleep not being bitten by mosquitoes doesn't matter. Stretching this outward in time doesn't change the calculus much for a while, maybe about a lifetime or a few lifetimes or so. And a huge chunk of x-risk is concentrated in this century.
JWS · 6mo
The x-risk models actually support the opposite conclusion though. They are generally focused on the balance of two values, v and r, where v is the value of a time period and r is the risk of extinction in that period. If r is sufficiently high, then it operates as a de facto discount rate on the future, which means that the most effective way to increase good is to increase the present v rather than reduce r.

For an analogy, if a patient has an incredibly high risk of succumbing to terminal cancer, the way to increase wellbeing may be to give them morphine and palliative care rather than prescribe risky treatments that may or may not work (and might only be temporary).

Now one could argue against this by saying 'do not go gentle into that good night; in the face of destruction we should still do our best'. I have sympathy with that view, but it's not grounded in the general 'follow the EV' framework of EA, and it would have consequences beyond supporting longtermism
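The "r acts as a de facto discount rate" point can be illustrated with a toy calculation (illustrative numbers of my own, not drawn from Thorstad's or any other cited model):

```python
# Toy version of the v/r model sketched above: each period yields value v if
# humanity survives to it, and extinction occurs with probability r per period.
# Expected future value is the sum of v * (1 - r)**t over t, which converges
# to v/r - so high r truncates the future just like a steep discount rate.

def expected_future_value(v: float, r: float, periods: int = 10_000) -> float:
    return sum(v * (1 - r) ** t for t in range(periods))

low_risk = expected_future_value(v=1.0, r=0.001)  # ~1,000 periods' worth of value
high_risk = expected_future_value(v=1.0, r=0.2)   # only ~5 periods' worth
```

With r = 0.2 the whole future is worth only about five periods of present value, which is why raising the present v can look more attractive than small or temporary reductions in r.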
JWS · 9mo

The HLI discussion on the Forum recently felt off to me, bad vibes all around. It seems very heated, with not a lot of scout mindset, and reading the various back-and-forth chains I felt like I was 'getting Eulered', as Scott once described.

I'm not an expert on evaluating charities, but I followed a lot of links to previous discussions and found this discussion involving one of the people running an RCT on StrongMinds (which a lot of people are waiting for the final results of), who was highly sceptical of SM's efficacy. But the person offering counterarguments in the thread seems just as valid to me? My current position, for what it's worth,[1] is:

  • the initial Strongminds results of 10x cash transfer should raise a sceptical response. most things aren't that effective
  • it's worth there being exploration of what the SWB approach would recommend as the top charities (think of this as trying other arms in a multi-armed bandit charity evaluation problem)
  • it's very difficult to do good social science, and the RCT won't give us dispositive evidence about the effectiveness of Strongminds (especially at scale), but it may help us update. In general we should be mindful of how
... (read more)

Some people with high-karma accounts seem to be making some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who are in either direction).

I think this is a significant datum in favor of being able to see the strong up/up/down/strong down spread for each post/comment. If it appeared that much of the karma activity was the result of a handful of people strongvoting each comment in a directional manner, that would influence how I read the karma count as evidence in trying to discern the community's viewpoint. More importantly, it would probably inform HLI's takeaways -- in its shoes, I would treat evidence of a broad consensus of support for certain negative statements much, much more seriously than evidence of carpet-bomb voting by a small group on those statements.

JP Addison · 9mo
Indeed our new reacts system separates them. But our new reacts system also doesn't have strong votes. A problem with displaying the number of types of votes when strong votes are involved is that it much more easily allows for deanonymization if there are only a few people in the thread.
Jason · 9mo
That makes sense. On the karma side, I think some of my discomfort comes from the underlying operationalization of post/comment karma as merely additive of individual karma weights.

True opinion of the value of the bulk of posts/comments probably lies on a bell curve, so I would expect most posts/comments to have significantly more upvotes than strong upvotes if voters are "honestly" conveying preferences and those preferences are fairly representative of the user base. Where the karma is coming predominately from strongvotes, the odds that the displayed total reflects the opinion of a smallish minority that feels passionately is much higher. That can be problematic if it gives the impression of community consensus where no such consensus exists.

If it were up to me, I would probably favor a rule along the lines of: a post/comment can't get more than X% of its net positive karma from strongvotes, to ensure that a high karma count reflects some degree of breadth of community support rather than voting by a small handful of people with powerful strongvotes.

Downvotes are a bit trickier, because the strong downvote hammer is an effective way of quickly pushing down norm-breaking and otherwise problematic content, and I think putting posts into deep negative territory is generally used for that purpose.
David M · 9mo
Looks like this feature is being rolled out on new posts. Or at least one post: https://forum.effectivealtruism.org/posts/gEmkxFuMck8SHC55w/introducing-the-effective-altruism-addiction-recovery-group
Sol3:2 · 9mo
EA is just a few months out from a massive scandal caused in part by socially enforced artificial consensus (FTX), but judging by this post nothing has been learned and the "shut up and just be nice to everyone else on the team" culture is back again, even when truth gets sacrificed in the process. No one thinks HLI is stealing billions of dollars, of course, but the charge that they keep quasi-deliberately stacking the deck in StrongMinds' favour is far from outrageous and should be discussed honestly and straightforwardly.

JWS' quick take has often been in negative agreevote territory and is +3 at this writing. Meanwhile, the comments of the lead HLI critic suggesting potential bad faith have seen consistent patterns of high upvote / agreevote. I don't see much evidence of "shut up and just be nice to everyone else on the team" culture here.

JWS · 9mo
Hey Sol, some thoughts on this comment:

  • I don't think the Forum's reaction to the HLI post has been "shut up and just be nice to everyone else on the team", as Jason's response suggested.
  • I don't think mine suggests that either! In fact, my first bullet point has a similar sceptical prior to what you express in this comment.[1] I also literally say "holding charity evaluators to account is important to both the EA mission and EAs identity", and point out that I don't want to sacrifice epistemic rigour. In fact, one of my main points is that people - even those disagreeing with HLI - are shutting up too much! I think disagreement without explanation is bad, and I salute the thorough critics on that post who have made their reasoning for putting HLI in 'epistemic probation' clear.
  • I don't suggest 'sacrificing the truth'. My position is that the truth on StrongMinds' efficacy is hard to get a strong signal on, and therefore HLI should have been more modest early on in their history, instead of framing it as the most effective way to donate.
  • As for the question of whether HLI were "quasi-deliberately stacking the deck", well, I was quite open that I am confused about where the truth is, and find it difficult to adjudicate what the correct takeaway should be.

I don't think we really disagree that much, and I definitely agree that the HLI discussion should proceed transparently and that EA has a lot to learn from the last year, including FTX. I think if you re-read my Quick Take, you'll see I'm not taking the position you think I am.

1. ^

    That's my interpretation of course, please correct me if I've misunderstood
JWS · 5mo

I think (at least) somebody at Open Philanthropy needs to start thinking about reacting to an increasing move towards portraying it, either sincerely or strategically, as a shadowy cabal-like entity influencing the world in an 'evil/sinister' way, similar to how many right-wingers across the world believe that George Soros is contributing to the decline of Western Civilization through his political philanthropy.

Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles. In many think pieces responding to WWOTF or FTX or SBF, they get extensively cited as a primary EA-critic, for example.

I think the 'ignore it' strategy was a mistake and I'm afraid the same mistake might happen again, with potentially worse consequences.

  1. ^

    Do people realise that they're going to release a documentary sometime soon?

Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles.

You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think "ignore it" is a really really solid heuristic in an era where conflict drives clicks.

I think responding in a way that is calm, boring, and factual will help. It's not going to get Émile to publicly recant anything. The goal is just for people who find Émile's stuff to see that there's another side to the story. They aren't going to publicly say "yo Émile I think there might be another side to the story". But fewer of them will signal boost their writings on the theory that "EAs have nothing to say in their own defense, therefore they are guilty". Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.

Ebenezer Dukakis · 5mo
Maybe it would be useful to discuss concrete examples of engagement and think about what's been helpful/harmful. Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive. I think this post was probably positive (I read it around a year ago, my recollections are a bit vague). But I guess that post itself could be an argument that even attempting to engage with Émile in good faith is potentially dangerous. Perhaps the right strategy is something like: assume good faith, except with specific critics who have a known history of bad faith. And consider that your comparative advantage may lie elsewhere, unless others would describe you as unusually good at being charitable.

Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive.

What makes you think this? I would guess it was pretty negative, by legitimizing Torres, and most of the donations funging heavily against other EA causes.

Ebenezer Dukakis · 5mo
I would guess any legitimization of Émile by Nathan was symmetrical with a legitimization of Nathan by Émile. However I didn't get the sense that either was legitimizing the other, so much as both were legitimizing GiveDirectly. It seems valuable to legitimize GiveDirectly, especially among the "left-ish academic circles" reading Émile who might otherwise believe that Émile is against all EA causes/organizations. (And among "left-ish academics" who might otherwise believe that Nathan scorns "near-termist" causes.)

There's a lot of cause prioritization disagreement within EA, but it doesn't usually get vicious, in part because EAs have "skin in the game" with regard to using their time & money in order to make the world a better place. One hypothesis is that if we can get Émile's audience to feel some genuine curiosity about how to make their holiday giving effective, they'll wonder why some people are longtermists. I think it's absolutely fine to disagree with longtermism, but I also think that longtermists are generally thoughtful and well-intentioned, and it's worth understanding why they give to the causes they do.

Do you have specific reasons to believe this? It's a possibility, but I could just as easily see most donations coming from non-EAs, or EAs who consider GiveDirectly a top pick anyways. Even if EA donors didn't consider GiveDirectly a top pick on its own, they might have considered "GiveDirectly plus better relations with Émile with no extra cost" to be a top pick, and I feel hesitant to judge this more harshly than I would judge any other EA cause prioritization.

BTW, a mental model here is: https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty If Émile is motivated to attack EA because they feel rejected by it, it's conceivable to me that their motivation for aggression would decrease if a super kind and understanding therapist-type person listened to them really well privately and helped them feel heard & understood. The fun

FYI, Émile’s pronouns are they/them.

[Edit: I really don't like that this comment got downvoted and disagree voted...]

7
JWS
5mo
I agree it's a solid heuristic, but heuristics aren't foolproof and it's important to be able to realise where they're not working. I remembered your tweet about choosing intellectual opponents wisely because I think it'd be useful to show where we disagree on this:

1 - Choosing opponents is sometimes not up to you. As an analogy, being in a physical fight only takes one party to throw punches. When debates start to have significant consequences socially and politically, it's worth considering that letting hostile ideas spread unchallenged may work out badly in the future.

2 - I'm not sure it's clear that "the silent majority can often already see their mistakes" in this case. I don't think this is a minor view on EA. I think a lot of people are sympathetic to Torres' point of view, and a significant part of that is (in my opinion) because there wasn't a lot of pushback when they started making these claims in major outlets.

On my first comment, I agree that I don't think much could have been done to stop Émile turning against EA,[1] but I absolutely don't think it was inevitable that they would have had such a wide impact. They made the Bulletin of Atomic Scientists! They're partnered with Timnit, who has large influence and sympathy in AI space! People who could have been potential allies in a coalition basically think our movement is evil.[2] They get sympathetically cited in academic criticisms of EA.

Was some pushback going to happen? Yes, but I don't think inevitably at this scale. I do think more could have been done to actually push back on their claims that went over the line in terms of hostility and accuracy, and I think that could have led to a better climate at this critical juncture for AI discussions and policy, where we need to build coalitions with communities who don't fully agree with us. My concern is that this new wave of criticism and attack on OpenPhil might not simply fade away but could instead cement an anti-EA narrative that could
8
titotal
5mo
I don't think the hostility between the near-term harm and AI x-risk camps would have been prevented by more attacks rebutting Émile Torres. The real problem is that the near-term AI harm people perceive AI x-riskers as ignoring their concerns and actively making the near-term harms worse. Unfortunately, I think this sentiment is at least partly accurate. When Timnit got pushed out of Google for pointing out near-term harms of AI, there was almost no support from the x-risk crowd (I can't find any big-name EAs on this list, for example). This probably contributed to her current anti-EA stance. As for real-world harms, well, we can just say that OpenAI was started by an x-risker, and has kickstarted an AI race, causing a myriad of real-world harms such as scams, art plagiarism, data theft, etc. The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.
8
JWS
5mo
I don't want to fully re-litigate this history, as I'm more concerned about the future of Open Philanthropy being blindsided by a political attack (it might be low probability, but you'd think OpenPhil would be open to being concerned about low-chance high-impact threats to it). Agreed. It predated Émile's public anti-EA turn for sure. But it was never inevitable. Indeed, supporting Timnit during her firing from Google may have been a super low-cost way to show solidarity. It might have meant that Émile and Timnit wouldn't have become allies who have strong ideological influence over a large part of AI research space. I'd like to think so too, but this is a bridge that needs to be built from both ends imo, as I wouldn't recommend a unilateral action unless I really trusted the other parties involved. There seems to have been some momentum towards more collaboration after the AI Safety Summit though. I hope the Bletchley Declaration can be an inflection point for more of this.
2
[anonymous]
5mo
What do you see as the risk of building a bridge if it's not reciprocated? 
8
David Mathers
5mo
'The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.' What would this look like? I feel like, if all you do is say nice things,  that is a good idea usually, but it won't move the dial that much (and also is potentially lying, depending on context and your own opinions; we can't just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you're advocating spending actual EA money and labour on this, surely you'd first need to make a case that stuff "dealing with the short term harms of AI" is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don't believe in AI X-risk*, so you think it's an improvement if AI-related money is spent on short term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI X-risk is comparable/better as a cause area than standard near-term EA stuff or biorisk, can't take that line.  *I am also fairly skeptical it is a good use of EA money and effort for what it's worth, though I've ended up working on it anyway. 
5
titotal
5mo
This seems a little zero-sum, which is not how successful social movements tend to operate. I'll freely confess that I am on the "near-term risk" team, but that doesn't mean the two groups can't work together.

A simplified example: say 30% of a council are concerned about near-term harms, and 30% are concerned about x-risk, and each wants policies passed that address their own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither of them will get what they want. But if they work together, they have a majority, and can pass a combined bill that addresses both near-term harm and AI x-risk, benefiting both.

Unfortunately, the best time to do this bridge building and alliance making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
5
harfe
5mo
You haven't actually addressed the main question of the previous comment: What would this bridge building look like? Your council example does not match the current reality very well. It feels like you also sidestep other stuff in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on "short-term harms"? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed-net and the marginal AI x-risk research? Or would the main reason for spending money on "short-term harms" be that we buy sympathy with the group of people concerned about "short-term harms", so we can later pass regulations together with them to reduce both "short-term harm" and AI x-risk?
5
MvK
5mo
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It's been a while since I read this so I'm not sure it is what you are looking for, but Gideon Futerman had some ideas for what "bridge building" might look like.)
3
harfe
5mo
I just read most of the article. It was not that satisfying in this context. Most of it is arguments that we should work together (which I don't disagree with). And I imagine it will be quite hard to convince most AI x-risk people of "whether AI is closer to a stupid ‘stochastic parrot’ or on the ‘verge-of-superintelligence’ doesn’t really matter". If we were to adopt Gideon's desired framing, it looks like we would need to make sacrifices in epistemics. Related: some of Gideon's suggestions, such as protest or compute governance, are already being pursued. Not sure if that counts as bridge building though, because these might be good ideas anyway.
6
AnonymousTurtle
5mo
.
2
JWS
5mo
For the record, I'm very willing to be corrected/amend my Quick Take (and my beliefs on this in general) if "ignore it" isn't an accurate summary of what was done. Perhaps there was internal action taken with academic spaces/EA organisations that I'm not aware of? I still think the net effect of EA actions in any case was closer to "ignore it", but the literal strong claim may be incorrect.
9
JWS
5mo
Edit: I actually think these considerations should go for many of the comments in this sub-thread, not just my own. There's a lot to disagree about, but I don't think any comment in this chain is worthy of downvotes? (Especially strong ones)

A meta-thought on this take given the discussion it's generated. Currently this is at net 10 upvotes from 20 total votes at time of writing - but is ahead 8 to 6 on agree/disagree votes. Based on Forum voting norms, I don't think this is particularly deserving of downvotes given the suggested criteria? Especially strong ones?

Disagreevotes - go ahead, be my guest! Comments pointing out where I've gone wrong - I actively encourage you to do so! I put this in a Quick Take, not a top-level post, so it's not as visible on the Forum front page (and the point of a QT is for 'exploratory, draft-stage, rough thoughts' like this). I led off with saying "I think" - I'm just voicing my concerns about the atmosphere surrounding OpenPhil and its perception. It's written in good faith, albeit with a concerned tone. I don't think it violates what the EA Forum should be about.[1]

I know these kinds of comments are annoying but still, I wanted to point out that this vote distribution feels a bit unfair, or at least unexplained, to me. Sure, silent downvoting is a signal, but it's a crude and noisy signal and I don't really have much to update off here. If you downvoted but don't want to get involved in a public discussion about it, feel free to send me a DM with feedback instead. We don't have to get into a discussion about the merits (if you don't want to!), I'm just confused at the vote distribution.

1. ^ Again, especially in Quick Takes
5
Radical Empath Ismam
5mo
The harsh criticism of EA has only been a good thing, forcing us to have higher standards and rigour. We don't want an echo chamber. I would see it as a thoroughly good thing if Open Philanthropy were to combat the portrayal of itself as a shadowy cabal (like in the recent Politico piece), for example by:

* Having more democratic buy-in with the public
  * e.g. Having a bigger public presence in media, relying on a more diverse pool of funding (i.e. less billionaire funding)
* Engaging in less political lobbying
* Being more transparent about the network of organisations around them
  * e.g. from the Politico article: "... said Open Philanthropy’s use of Horizon ... suggests an attempt to mask the program’s ties to Open Philanthropy, the effective altruism movement or leading AI firms"
8
MvK
5mo
1. I am not convinced that "having a bigger public presence in media" is a reliable way to get democratic buy-in. (There is also some "damned if you do, damned if you don't" dynamic going on here - if OP was constantly engaging in media interactions, they'd probably be accused of "unduly influencing the discourse/the media landscape".) Could you describe what a more democratic OP would look like?

2. You mention "less billionaire funding" - OP was built on the idea of giving away Dustin's and Cari's money in the most effective way. OP is not fundraising, it is grantmaking! So how could it, as you put it, "rely on a more diverse pool of funding"? (also: https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money) I also suspect we would see the same dynamic as above: if OP did actively try to secure additional money in the form of government grants, they'd be maligned for absorbing public resources in spite of their own wealth.

3. I think a blanket condemnation of political lobbying, or the suggestion to "do less" of it, is not helpful. Advocating for better policies (in animal welfare, GHD, pandemic preparedness etc.) is in my view one of the most impactful things you can do. I fear we are throwing the baby out with the bathwater here.

Does anyone work at, or know somebody who works at Cohere?

Last November their CEO Aidan Gomez published an anti-effective-altruism internal letter/memo (Bloomberg reporting here, apparently confirmed as true though no further comment)

I got the vibe from Twitter/X that Aidan didn't like EA, but making an internal statement about it to your company seems really odd to me? Like why do your engineers and project managers need to know about your anti-EA opinions to build their products? Maybe it came after the AI Safety Summit?

Does anyone in the AI Safety Space... (read more)

9
gwern
1mo
I don't think it's odd at all. As the Bloomberg article notes, this was in response to the martyrdom of Saint Altman, when everyone thought the evil Effective Altruists were stealing/smashing OA for no reason and destroying humanity's future (as well as the fortune of many of the individuals involved, to a degree few of them bothered to disclose) and/or turning it over to the Chinese commies. An internal memo decrying 'Effective Altruism' was far from the most extreme response at the time; but I doubt Gomez would write it today, if only because so much news has come out since then and it no longer looks like such a simple morality play. (For example, a lot of SV people were shocked to learn he had been fired from YC by Paul Graham et al for the same reason. That was a very closely held secret.)
2
JWS
1mo
Ah, good point that it was in the aftermath of the OpenAI board weekend, but it still seems like a very extreme/odd reaction to me (though I have to note the benefit of hindsight, as well as my own personal biases). I still think it'd be interesting to see what Aidan actually said, and/or why he's formed such a negative opinion of EA, but I think you're right that the simplest explanation here is:
0
Linch
1mo
I'd be a bit surprised if you could find people on this forum who (still) work at Cohere. Hard to see a stronger signal to interview elsewhere than your CEO explaining in a public memo why they hate you. I agree it's odd in the sense that most companies don't do it. I see it as an attempt to enforce a certain kind of culture (promoting conformity, discouragement of dissent, "just build now" at the expense of ethics, etc.) that I don't much care for. But the CEO also made it abundantly clear he doesn't like people who think like me either, so ¯\_(ツ)_/¯.
JWS
11mo37
11
0

Some personal reflections on EAG London:[1]

  • Congrats to the CEA Events Team for their hard work and for organising such a good event! 👏
  • The vibe was really positive! Anecdotally I had heard that the last EAG SF was gloom central, but this event felt much more cheery. I'm not entirely sure why, but it might have had something to do with the open venue, the good weather, or there being more places to touch grass in London compared to the Bay. 
  • I left the conference intellectually energised (though physically exhausted). I'm ready to start drafting some more Forum Post ideas that I will vastly overestimate my ability to finish and publish 😌
  • AI was (unsurprisingly) the talk of the town. But I found that quite a few people,[2] myself included, were actually more optimistic on AI because of the speed of the social response to AI progress and how pro-safety it seems to be, along with low polarisation along partisan lines.
  • Related to the above, I came away with the impression that AI Governance may be as important as, if not more important than, Technical Alignment in the next 6-12 months. The window for significant political opportunity is open now but may not stay open forever, so the AI Governa
... (read more)
7
Robi Rahman
11mo
I assume any event in SF gets a higher proportion of AI doomers than one in London.

Suing people nearly always makes you look like the assholes I think. 

As for Torres, it is fine for people to push back against specific false things they say. But fundamentally, even once you get past the misrepresentations, there is a bunch of stuff that they highlight that various prominent EAs really do believe and say that genuinely does seem outrageous or scary to most people, and no amount of pushback is likely to persuade most of those people otherwise. 

In some cases, I think that outrage fairly clearly isn't really justified once you think things through very carefully: i.e. for example the quote from Nick Beckstead about saving lives being all-things-equal higher value in rich countries, because of flow-through effects which Torres always says makes Beckstead a white supremacist.  But in other cases well, it's hardly news that utilitarianism has a bunch of implications that strongly contradict moral commonsense, or that EAs are sympathetic to utilitarianism. And 'oh, but I don't endorse [outrageous sounding view], I merely think there is like a 60% chance it is true, and you should be careful about moral uncertainty' does not sound very reassuring to a norma... (read more)

First, I want to thank you for engaging David. I get the sense we've disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing them, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad-faith - it's not my intention, but I do admit I've somewhat lost my cool on this topic of late. But in my defence, sometimes that's the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.

As for your comment/reply though, I'm not entirely sure what to make of it. To try to clarify, I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAccT community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former.  Instead, I feel like you've steered the conversation away to a discussion about the implications of naïve utilitarianism. I also fee... (read more)

9
quinn
10mo
I mean in a sense a venue that hosts torres is definitionally trashy due to https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty except insofar as they haven't seen or don't believe this Fuentes person. 
4
David Mathers
10mo
I guess I thought my points about total utilitarianism were relevant, because 'we can make people like us more by pushing back more against misrepresentation' is only true insofar as the real views we have will not offend people. I'm also just generically anxious about people in EA believing things that feel scary to me. (As I say, I'm not actually against people correcting misrepresentations, obviously.)

I don't really have much sense of how reasonable critics are or aren't being, beyond the claim that sometimes they touch on genuinely scary things about total utilitarianism, and that it's a bit of a problem that the main group arguing for AI safety contains a lot of prominent people with views that (theoretically) imply that we should be prepared to take big chances of AI catastrophe rather than pass up small chances of lots of very happy digital people.

On Torres specifically: I don't really follow them in detail (these topics make me anxious), but I didn't intend to be claiming that they are a fair or measured critic, just that they have decent technical understanding of the philosophical issues involved and sometimes put their finger on real weaknesses. That is compatible with them also saying a lot of stuff that's just false. I think motivated reasoning is a more likely explanation for why they say false things than conscious lying, but that's just because that's my prior about most people. (Edit: Actually, I'm a little less sure of that, after being reminded of the sockpuppetry allegations by quinn below. If those are true, that is deliberate dishonesty.)

Regarding Gebru calling Will a eugenicist: well, I really doubt you could "sue" over that, or demonstrate to the people most concerned about this that he doesn't count as one by any reasonable definition. Some people use "eugenicist" for any preference that a non-disabled person comes into existence rather than a different disabled person. And Will does have that preference. In What We Owe the Future,

I've generally been quite optimistic that the increased awareness AI xRisk has got recently can lead to some actual progress in reducing the risks and harms from AI. However, I've become increasingly sad at the ongoing rivalry between the AI 'Safety' and 'Ethics' camps[1] 😔 Since the CAIS Letter was released, there seems to have been an increasing level of hostility on Twitter between the two camps, though my impression is that the hostility is mainly one-directional.[2]

I dearly hope that a coalition of some form can be built here, even if it is an... (read more)

5
David Mathers
10mo
What does not "remaining passive" involve? 
2
JWS
10mo
I can't say I have a strategy, David. I've just been quite upset and riled up by the discourse over the last week, just as I had gained some optimism :( I'm afraid that by trying to turn the other cheek to hostility, those working to mitigate AI xRisk end up ceding the court of public opinion to those hostile to it. I think some suggestions would be:

* Standing up to, and calling out, bullying in these discussions can cause a preference cascade of pushback to it - see here - but someone needs to stand up for people to realise that dominant voices are not representative of a field, and silence may obscure areas for collaboration and mutual coalitions to form.
* Being aware of what critiques of EA/AI xRisk get traction in adjacent communities. Some of it might be malicious, but a lot of it seems to be a default attitude of scepticism merged with misunderstandings. While not everyone would change their mind, I think people reaching 'across the aisle' might correct the record in many people's minds. Even if not for the person making the claims, perhaps for those watching and reading online.
* Publicly pushing back on Torres. I don't know what went down when they were more involved in the EA movement that caused their opinion to flip 180 degrees, but I think the main 'strategy' has been to ignore their work and not respond to their criticism. The result: their ideas gaining prominence in the AI Ethics field, and publications in notable outlets, despite acting consistently in bad faith. To their credit, they are voraciously productive in their output and I don't expect it to slow down. Continuing with a failed strategy doesn't sound like the right call here.
* In cases of the most severe hostility, potentially considering legal or institutional action? In this example, can you really just get away with calling someone a eugenicist when it's so obviously false? But there have been cases where people have successfully sued for defamation for statements made on Twitter. That
2
quinn
10mo
I would recommend trying to figure out how much loud people matter. Like it's unclear if anyone is both susceptible to sneer/dunk culture and potentially useful someday. Kindness and rigor come with pretty massive selection effects, i.e., people who want the world to be better and are willing to apply scrutiny to their goals will pretty naturally discredit hostile pundits and just as naturally get funneled toward more sophisticated framings or literatures.  I don't claim this attitude would work for all the scicomm and public opinion strategy sectors of the movement or classes of levers, but it works well to help me stay busy and focused and epistemically virtuous.  I wrote some notes about a way forward last february, I just CC'd them to shortform so I could share with you https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=nskr6XbPghTfTQoag  related comment I made: https://forum.effectivealtruism.org/posts/nsLTKCd3Bvdwzj9x8/ingroup-deference?commentId=zZNNTk5YNYZRykbTu 

status: very rambly. This is an idea I want to explore in an upcoming post about longtermism, would be grateful to anyone's thoughts. For more detailed context, see https://plato.stanford.edu/entries/time/ for debate on the nature of time in philosophy

Does rejecting longtermism require rejecting the B-Theory of Time (i.e. eternalism, the view that the past, present, and future have the same ontological status)? Saying that future people don't exist (and therefore can't be harmed, can't lose out by not existing, don't have the same moral rights as present '... (read more)

5
MichaelStJules
8mo
FWIW, someone could reject longtermism for reasons other than specific person-affecting views or even pure time preferences. Even without a universal present, there's still your present (and past), and you can do ethics relative to that. Maybe this doesn't seem impartial enough, and could lead to agents with the same otherwise impartial ethical views and same descriptive views disagreeing about what to do, and those are undesirable? OTOH, causality is still directed and we can still partially order events that way or via agreement across reference frames. The descendants of humanity, say, 1000 years from your present (well, humanity here in our part of the multiverse, say) are still all after your actions now, probably no matter what (physically valid) reference frame you consider, maybe barring time travel. This is because humans' reference frames are all very similar to one another, as differences in velocity, acceleration, force and gravity are generally very small. So, one approach could be to rank or estimate the value of your available options on each reference frame and weigh across them, or look for agreement or look for Pareto improvements. Right now for you, the different reference frames should agree, but they could come apart for you or other agents in the future if/when we or our descendants start colonizing space, traveling at substantial fractions of the speed of light. Also, people who won't come to exist don't exist under the B-theory, so they can’t experience harm. Maybe they're harmed by not existing, but they won't be around to experience that. Future people could have interests, but if we only recognize interests for people that actually exist under the B-theory, then extinction wouldn't be bad for those who never come to exist as a result, because they don't exist under the B-theory.
2
JWS
8mo
Great response Michael! Made me realise I'm conflating a lot of things (so bear with me).

By longtermism, I really mean MacAskill's:
* future people matter
* there could be a lot of them
* our actions now could affect their future (in some morally significant way)

And in that sense, I really think only bullet 1 is the moral claim. The other two are empirical, about what our forecasts are, and what our morally relevant actions are. I get the sense that those who reject longtermism want to reject it on moral grounds, not empirical ones, so they must reject bullet-point 1. The main ways of doing so are, as you mention, person-affecting views or a pure rate of time preference, both of which I am sceptical are without their difficult bullets to bite.[1]

The argument I want to propose is this:
1. The moral theories we regard as correct ought to cohere with our current best understandings of physics[2]
2. The Special Theory of Relativity (STR) is part of our current best understanding of physics
3. STR implies that there is no universal present moment (i.e. without an observer's explicit frame of reference)
4. Some set of person-affecting views [P] assume that there is a universal present moment (i.e. that we can clearly separate some people as not in the present and therefore not worthy of moral concern, and others which do have this property)
5. From 3 & 4, STR and P are contradictory
6. From 1 & 5, we ought to reject all person-affecting views that are in P

And I would argue that a common naïve negative reaction to longtermism (along the lines of "potential future people don't matter, and it is evil to do anything for them at the expense of present people since they don't exist") is in P, and therefore ought to be rejected. In fact, the only ways to really get out of this seem to be either that someone's chosen person-affecting views are not in P, or that 1 is false. The former is open to question of course, and the latter seems highly suspect. The point
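(Editorial aside: the 1-6 argument in the comment above is, at its core, a modus tollens, and its propositional skeleton can be checked mechanically. Below is a minimal Lean sketch; all names (`Theory`, `Correct`, `CoheresWithPhysics`, `P`) are illustrative placeholders, not anything from the post, and the sketch deliberately formalises only the logical form, not the substantive premises about STR.)

```lean
-- Propositional skeleton of the argument: if every correct moral theory
-- must cohere with physics (premise 1), and the person-affecting views P
-- fail to cohere with physics (premises 3-5 combined), then P is not
-- among the correct theories (conclusion 6).
variable (Theory : Type)
variable (Correct CoheresWithPhysics : Theory → Prop)
variable (P : Theory)

example
    (h1 : ∀ t, Correct t → CoheresWithPhysics t)  -- premise 1
    (h5 : ¬ CoheresWithPhysics P)                 -- premises 3-5
    : ¬ Correct P :=
  fun hC => h5 (h1 P hC)                          -- modus tollens
```

The validity of the form is trivial; all the philosophical work is in whether premises 1 and 4 are actually true, which the formalisation leaves open.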

Oh hi. Just rubber-ducking a failure mode some of my Forum takes[1] seem to fall into, but please add your takes if you think that would help :)

----------------------------------------------------------------------------

Some of my posts/comments can be quite long - I like responding with as much context as possible on the Forum, but as some of the original content itself is quite long, that means my responses can be quite long! I don't think that's necessarily a problem in itself, but the problem then comes with receiving disagree votes without commen... (read more)

Has anyone else listened to the latest episode of Clearer Thinking? Spencer interviews Richard Lang about Douglas Harding's "Headless Way", and if you squint enough it's related to the classic philosophical problems of consciousness, but it did remind me a bit of Scott A's classic story "Universal Love, Said The Cactus Person", which made me laugh. (N.B. Spencer is a lot more gracious and inquisitive than the protagonist!)

But yeah if you find the conversation interesting and/or like practising mindfulness meditation, Richard has a series of guided meditati... (read more)

In this comment I was going to quote the following from R. M. Hare:

"Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things - there is no difference in the 'subjective' concern which people have for things, only in their 'objective' value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except 'None whatever'?"

I remember... (read more)
