Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which in... (read more)
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!
<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>
This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so and will reflect and respond in greater detail then.
Hey JWS,
These comments were off-hand and unconstructive, have been interpreted in ways I didn't intend, and twitter isn't the best venue for them, so I apologise for posting, and I'm going to delete them. My more considered takes are here. Hopefully I can write more in the future.
Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that and happy to make any edits/retractions that you think aren't fair.
I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.
Many people find the Forum anxiety-inducing because of the high amount of criticism. So, in the spirit of Giving Season, I'm going to give some positive feedback and shout-outs for the Forum in 2023 (from my PoV). So, without further ado, I present the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-2023-Forum-Awards: 🏆✨🎄[1]
Best Forum Post I read this year:
10 years of Earning to Give by @AGB: A clear, grounded, and moving look at what it actually means to 'Earn to Give'. In particular, the 'Why engage?' section really resonated with me.
Honourable Mentions:
Best ... (read more)
Reflections 🤔 on EA & EAG following EAG London (2024):
In any case, I think it's clear that AI Safety is no longer 'neglected' within EA, and possibly outside of it.
I think this can't be clear based only on observing lots of people at EAG are into it. You have to include some kind of independent evaluation of how much attention the area "should" have. For example, if you believed that AI alignment should receive as much attention as climate change, then EAG being fully 100% about AI would still not be enough to make it no longer neglected.
(Maybe you implicitly do have a model of this, but then I'd like to hear more about it.)
FWIW I'm not sure what my model is, but it involves the fact that despite many people being interested in the field, the number actually working on it full time still seems kind of small, and in particular still dramatically smaller than the number of people working on advancing AI tech.
At least historically, very few people travel for EAG. I was surprised by this when I did the surveys and analytics while running EAG in 2015 and 2016.
Here are some numbers from Swapcard for EAG London 2024:
| Country | Attendees |
|---|---:|
| United Kingdom | 608 |
| United States | 196 |
| Germany | 85 |
| Netherlands | 48 |
| France | 44 |
| Switzerland | 34 |
| India | 23 |
| Sweden | 21 |
| Canada | 21 |
| Australia | 21 |
| Norway | 17 |
| Brazil | 15 |
| Belgium | 13 |
| Philippines | 12 |
| Austria | 12 |
| Spain | 11 |
| Poland | 11 |
| Czech Republic | 11 |
| Singapore | 10 |
| Nigeria | 10 |
| Italy | 10 |
| Denmark | 10 |
| South Africa | 9 |
| Kenya | 9 |
| Finland | 8 |
| Israel | 7 |
| Hungary | 7 |
| Mexico | 5 |
| Ireland | 5 |
| Hong Kong | 5 |
| Malaysia | 4 |
| Estonia | 4 |
| China | 4 |
| Turkey | 3 |
| Taiwan | 3 |
| Romania | 3 |
| Portugal | 3 |
| New Zealand | 3 |
| Chile | 3 |
| United Arab Emirates | 2 |
| Peru | 2 |
| Luxembourg | 2 |
| Latvia | 2 |
| Indonesia | 2 |
| Ghana | 2 |
| Colombia | 2 |
| Zambia | 1 |
| Uganda | 1 |
| Thailand | 1 |
| Slovakia | 1 |
| Russia | 1 |
| Morocco | 1 |
| Japan | 1 |
| Iceland | 1 |
| Georgia | 1 |
| Egypt | 1 |
| Ecuador | 1 |
| Cambodia | 1 |
| Bulgaria | 1 |
| Botswana | 1 |
| Argentina | 1 |
Based on Swapcard data, 55% of attendees were not from the UK, and 14% of attendees were from the US.
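A minimal sketch of how those headline figures fall out of the table above (assuming the total of 1,357 attendees is simply the sum of the country counts listed; illustrative only):

```python
# Recompute the headline shares from the Swapcard counts above (illustrative only)
uk_attendees = 608
us_attendees = 196
total_attendees = 1357  # sum of all country counts in the table

non_uk_share = 1 - uk_attendees / total_attendees  # ~0.55
us_share = us_attendees / total_attendees          # ~0.14
print(f"Not from UK: {non_uk_share:.0%}, from US: {us_share:.0%}")
```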
I find this comment quite discouraging that you didn't feel sadness and hesitation about scheduling it at the same time.
I didn't say that I didn't feel sadness or hesitation about scheduling it at the same time. Indeed, I think my comment directly implied that I did feel some sadness or hesitation, because I used the word "more", implying there was indeed a baseline level of sadness or hesitation that's non-zero.
Ignoring that detail, a bit of broader commentary on why I don't feel that sad:
I at the moment think that most EA community building is net-negative for the world. I am still here as someone trying to hold people accountable and because I have contributed to a bunch of the harm this community has caused. I am in some important sense an "EA Leader" but I don't seem to be on good terms with most of what you would call EA leadership, and honestly, I wish the EA community would disband and disappear and expect it to cause enormous harm in the future (or more ideally I wish it would undergo substantial reform, though my guess is the ship for that has sailed, which makes me deeply sad).
I have a lot of complicated opinions about what this implies about how I should relate to stuff... (read more)
I've considered it! My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things (and IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm that the first thing we do is to downvote their comment expressing such sadness and ask them to resign, that really seems like a great recipe for evaporative cooling).
Also separately, I am importantly on the Long Term Future Fund, not the EA Infrastructure Fund. I would have likely left or called for very substantial reform of the EA Infrastructure Fund, but the LTFF seems like it's probably still overall doing good things (though I am definitely not confident).
Precommitting to not posting more in this whole thread, but I thought Habryka's thoughts deserved a response
IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm that the first thing we do is to downvote their comment
I think this is a fair cop.[1] I appreciate the context you've added to your comment and have removed the downvote. Reforming EA is certainly high on my list of things to write about/work on, so I would appreciate your thoughts and takes here, even if I suspect I'll end up disagreeing with the diagnosis/solutions.[2]
My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things
I guess that depends on the theory of change for improving things. If it's using your influence and standing to suggest reforms and hold people accountable, sure. If it's asking for the community to "disband and disappear", I don't know. Like, in how many other movements would that be tolerated from someone with significant influence and funding power?[3] If one of the Lightcone Infrastructure team said "I think lightcone infrastructure in its entirety... (read more)
FWIW Habryka, I appreciate all that I know you’ve done and expect there’s a lot more I don’t know about that I should be appreciative of too.
I would also appreciate it if you'd write up these concerns? I guess I want to know whether I should feel similarly, even though I rather trust your judgment. Sorry to ask, and thanks again
Editing to note I've now seen some of the comments elsewhere
I wish the EA community would disband and disappear and expect it to cause enormous harm in the future.
I would be curious to hear you expand more on this:
What is your confidence level? (e.g. is it similar to the confidence you had in "very few people travel for EAG", or is it something like 90%?)
Extremely unconfident, both in overall probability and in robustness. It's the kind of belief where I can easily imagine someone swaying me one way or another in a short period of time, and the kind of belief I've gone back and forth on a lot over the years.
On the question of confidence, I feel confused about how to talk about probabilities of expected value. My guess is EA is mostly irrelevant for the things that I care about in ~50% of worlds, is bad in like 30% of worlds and good in like 20% of worlds, but the exact operationalization here is quite messy. Also in the median world in which EA is bad, it seems likely to me that EA causes more harm than it makes up for in the median world where it is good.
What scenarios are you worried about? Hastening the singularity by continuing to help research labs, or by making government intervention less likely and less effective?
Those are two relatively concrete things I am worried about. More broadly, I am worried about EA generally having a deceptive and sanity-reducing relationship to the worl... (read more)
OK, your initial message makes more sense given your response here - although I can't quite connect why MATS and Manifest would be net-positive things under this framework while EA community building would be net negative.
My slight pushback would be that EAG London is the most near-term focused of the EAGs, so some of the long-termist potential net negatives you list might not apply so much with that conference.
Quick[1] thoughts on the Silicon Valley 'Vibe-Shift'
I wanted to get this idea out of my head and into a quick-take. I think there's something here, but there's a lot more to say, and I really haven't done the in-depth research for it. There was a longer post idea I had for this, but honestly, diving into it more deeply than I have here is not a good use of my life, I think.
The political outlook in Silicon Valley has changed.
Since the assassination attempt on President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc has claimed political victory, and lots of people have noticed the 'vibe shift'.[2] I think that, rather than this being a change in opinions, it's more an event allowing for the beginning of a preference cascade, but at least in Silicon Valley (if not yet reflected in national polling) it has happened.
So it seems that a large section of Silicon Valley is now openly and confidently supporting Trump, and to a greater or lesser extent aligned with the a16z/e-acc worldview,[3] and we know it's already reached the ears of VP candidate JD Vance.
How did we get here
You could probably write a book on this, so this is a highly ... (read more)
Once again, if you disagree, I'd love to actually hear why.
I think you're reading into twitter way too much.
Edit: Confused about the downvoting here - is it a 'the Forum doesn't need more of this community drama' feeling? I don't really include that much of a personal opinion to disagree with, and I also encourage people to check out Lincoln's whole response 🤷
For visibility, on the LW version of this post Lincoln Quirk - member of the EV UK board made some interesting comments (tagging @lincolnq to avoid sub-posting). I thought it'd be useful to have visibility of them on the Forum. A sentence which jumped out at me was this:
Personally, I'm still struggling with my own relationship to EA. I've been on the EV board for a year+ - an influential role at the most influential meta org - and I don't understand how to use this role to impact EA.
If one of the EV board members is feeling this way and doesn't know what to do, what hope for rank-and-file EAs? Is anyone driving the bus? Feels like a negative sign for the broader 'EA project'[1] if this feeling goes right to the top of the institutional EA structure.
That sentence comes near the end of a longer, reflective comment, so I recommend reading the full exchange to take in Lincoln's whole perspective. (I'll probably post my thoughts on... (read more)
The answer for a long time has been that it's very hard to drive any change without buy-in from Open Philanthropy. Most organizations in the space are directly dependent on their funding, and even beyond that, they have staff on the boards of CEA and other EA leadership organizations, giving them hard power beyond just funding. Lincoln might be on the EV board, but ultimately what EV and CEA do is directly contingent on OP approval.
OP however has been very uninterested in any kind of reform or structural changes, does not currently have any staff participate in discussion with stakeholders in the EA community beyond a very small group of people, and is majorly limited in what it can say publicly due to managing tricky PR and reputation issues with their primary funder Dustin and their involvement in AI policy.
It is not surprising to me that Lincoln would also feel unclear on how to drive leadership, given this really quite deep gridlock that things have ended up in, with OP having practically filled the complete power vacuum of leadership in EA, but without any interest in actually leading.
Sharing some planned Forum posts I'm considering, mostly as a commitment device, but welcome thoughts from others:
A thought about AI x-risk discourse and the debate on how "Pascal's Mugging"-like AIXR concerns are, and where this causes confusion between the concerned and the sceptical.
I recognise a pattern where a sceptic will say "AI x-risk concerns are like Pascal's wager/are Pascalian and not valid" and then an x-risk advocate will say "But the probabilities aren't Pascalian. They're actually fairly large"[1], which usually devolves into a "These percentages come from nowhere!" "But Hinton/Bengio/Russell..." "Just useful idiots for regulatory capture..." discourse doom spiral.
I think a fundamental miscommunication here is that, while the sceptic is using/implying the term "Pascalian", they aren't concerned[2] with the percentage of risk being incredibly small but high impact; they're instead concerned about trying to take actions in the world - especially ones involving politics and power - on the basis of subjective beliefs alone.
In the original wager, we don't need to know anything about the evidence record for a certain God existing or not; if we simply accept Pascal's framing and premises, then we end up with the belief that we ought to believe in God. Similarly, when this term comes... (read more)
I want to register that my perspective on medium-term[1] AI existential risk (shortened to AIXR from now on) has changed quite a lot this year. Currently, I'd describe it as moving from 'Deep Uncertainty' to 'risk is low in absolute terms, but high enough to be concerned about'. I guess atm I'd think that my estimates are moving closer toward the Superforecasters in the recent XPT report (though I'd say I'm still Deeply Uncertain on this issue, to the extent that I don't think the probability calculus is that meaningful to apply)
Some points around this change:
This is an off-the-cuff quick take that captures my current mood. It may not have a long half-life, and I hope I am wrong
Right now I am scared
Reading the tea-leaves, Altman and Brockman may be back at OpenAI, the company charter changed, and the board - including Toner and McCauley - removed from the company
The mood in the Valley, and in general intellectual circles, seems to have snapped against EA[1]
This could be as bad for EA's reputation as FTX
At a time when important political decisions about the future of AI are being made, and potential coalitions are being formed
And this time it'd be second-impact syndrome
I am scared EA in its current form may not handle the backlash that may come
I am scared that we have not done enough reform in the last year from the first disaster to prepare ourselves
I am scared because I think EA is a force for making the world better. It has allowed me to do a small bit to improve the world. Through it, I've met amazing and inspiring people who work tirelessly and honestly to actually make the world a better place. Through them, I've heard of countless more actually doing what they think is right and giving what they can to make the world we find ourse... (read more)
As with Nonlinear and FTX, I think that for the vast majority of people, there's little upside to following this in real-time.
It's very distracting, we have very little information, things are changing fast, and it's not very action-relevant for most of us.
I'm also very optimistic that the people "who work tirelessly and honestly to actually make the world a better place" will keep working on it after this, whatever happens to "EA", and there will still be ways to meet them and collaborate.
Sending a hug
It's hard to see how the backlash could actually destroy GiveWell or stop Moskowitz and Tuna giving away their money through Open Phil/something that resembles Open Phil. That's a lot of EA right there.
It's hard yes, but I think the risk vectors are (note - these are different scenarios, not things that follow in chronological order, though they could):
Basically I think that ideas are more important than funding. And if society/those in positions of power put the ideas of EA in the bin, money isn't going to fix that
This is all speculative, but I can't help the feeling that regardless of how the OpenAI crisis resolves a lot of people now consider EA to be their enemy :(
While I generally agree that they almost certainly have more information on what happened, which is why I'm not really certain of this theory, my main reason here is that, for the most part, AI safety as a cause basically managed to get away with incredibly weak standards of evidence for a long time (until the deep learning era from 2019 onwards), especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it's slowly improving). This probably influenced a lot of EA safetyists like Ilya, who almost certainly imbibed the norms of the AI safety field, one of which is that a very low standard of evidence is needed to claim big things, and that's going to conflict with corporate/legal standards of evidence.
... (read more)But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually g
Thanks for your response Akash. I appreciate your thoughts, and I don't mind that they're off-the-cuff :)
I agree with some of what you say, and part of what I think is your underlying point, but on some other points I'm a bit less clear. I've tried to think through two points where I'm not clear, but please do point it out if I've got something egregiously wrong!
1) You seem to be saying that sharing negative thoughts and projections can lead others to do so, and this can then impact other people's actions in a negative way. It could also be used by anti-EA people against us.[1]
I guess I can kind of see some of this, but I'd view the cure as being worse than the disease sometimes. I think sharing how we're thinking and feeling is overall a good thing that could help us understand each other more, and I don't think self-censorship is the right call here. Writing this out, I think maybe I disagree with you about whether negative memetic spirals are actually a thing causally instead of descriptively. I think people may be just as likely a priori to have 'positive memetic spirals' or 'regressions to the vibe mean' or whatever
2) I'm not sure what 'I was able to reason myself out of many of your ... (read more)
The HLI discussion on the Forum recently felt off to me, bad vibes all around. It seems very heated, not a lot of scout mindset, and reading the various back-and-forth chains I felt like I was 'getting Eulered' as Scott once described.
I'm not an expert on evaluating charities, but I followed a lot of links to previous discussions and found this discussion involving one of the people running an RCT on Strongminds (which a lot of people are waiting for the final results of), who was highly sceptical of SM efficacy. But the counterarguments offered in that thread seemed just as valid to me? My current position, for what it's worth,[1] is:
Some people with high-karma accounts seem to be making some very strong votes on that thread, and very few are making their reasoning clear (though I salute those who are in either direction).
I think this is a significant datum in favor of being able to see the strong up/up/down/strong down spread for each post/comment. If it appeared that much of the karma activity was the result of a handful of people strongvoting each comment in a directional fashion, that would influence how I read the karma count as evidence in trying to discern the community's viewpoint. More importantly, it would probably inform HLI's takeaways -- in its shoes, I would treat evidence of a broad consensus of support for certain negative statements much, much more seriously than evidence of carpet-bomb voting by a small group on those statements.
JWS' quick take has often been in negative agreevote territory and is +3 at this writing. Meanwhile, the comments of the lead HLI critic suggesting potential bad faith have seen consistent patterns of high upvote / agreevote. I don't see much evidence of "shut up and just be nice to everyone else on the team" culture here.
[edit: a day after posting, I think this perhaps reads more combative than I intended? It was meant to be more 'crisis of faith, looking for reassurance if it exists' than 'dunk on those crazy longtermists'. I'll leave the quick take as-is, but maybe clarification of my intentions might be useful to others]
Warning! Hot Take! 🔥🔥🔥 (Also v rambly and not rigorous)
A creeping thought has entered my head recently that I haven't been able to get rid of...
Is most/all longtermist spending unjustified?
The EA move toward AI Safety and Longtermism is often based on EV calculations that show the long term future is overwhelmingly valuable, and thus is the intervention that is most cost-effective.
However, more in-depth looks at the EV of x-risk prevention (1, 2) cast significant doubt on those EV calculations, which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.
But my doubts get worse...
GiveWell estimates around $5k to save a life. So I went looking for some longtermist calculations, and I really couldn't find any robust ones![1] Can anyone point me to some robust calculations for longtermist funds/organisations where they ... (read more)
which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.
Why do you think this?
For some very rough maths (apologies in advance for any errors), even Thorstad's paper (with a 2-century-long time of perils, a 0.1% post-peril risk rate, no economic/population growth, no moral progress, people live for 100 years) suggests that reducing p(doom) by 2% is worth as much as saving 16 x 8 billion lives - i.e. each microdoom is worth 6.4 million lives. I think we can buy microdooms more cheaply than $5,000 * 6.4 million = $32 billion each.
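To make the arithmetic behind that figure easier to check, here's a minimal sketch (the parameter values are the ones stated above, not taken from Thorstad's paper directly; illustrative only):

```python
# Rough sketch of the microdoom arithmetic above (illustrative assumptions only)
value_of_2pct_reduction = 16 * 8e9     # a 2% lower p(doom) valued at 16 x 8 billion lives saved
microdooms_in_2pct = 0.02 * 1_000_000  # 2% = 20,000 microdooms (a microdoom = 1e-6 absolute reduction)
lives_per_microdoom = value_of_2pct_reduction / microdooms_in_2pct  # 6.4 million lives

cost_per_life_saved = 5_000            # GiveWell-style cost to save a life
breakeven_cost_per_microdoom = lives_per_microdoom * cost_per_life_saved  # $32 billion
print(lives_per_microdoom, breakeven_cost_per_microdoom)
```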
(I can't actually find those calculations in Thorstad's paper, could you point them out to me? afaik he mostly looks at the value of fractional reductions in x-risk, while microdooms are an absolute reduction, if I understand correctly? happy to be corrected or pointed in the right direction!)
My concerns here are twofold:
1 - epistemological: Let's say those numbers from the Thorstad paper are correct, i.e. that a microdoom has to cost <= $32bn to be GiveWell cost-effective. The question is, how would we know this? In his recent post, Paul Christiano thinks that RSPs could lead to a '10x reduction' in AI risk. How does he know this? Is this just a risk reduction this century? This decade? Is it a permanent reduction?
It's one thing to argue that, under a set of conditions X, work on x-risk reduction is cost-effective, as you've done here. But I'm more interested in the question of whether conditions X actually hold, because that's where the rubber hits the road. If those conditions don't hold, then that's why longtermism might not ground x-risk work.[1]
There's also the question of persistence. I think the Thorstad model either assumes the persistence of x-risk reduction, or the persistence of a low-risk p... (read more)
Some personal reflections on EAG London:[1]
I think (at least) somebody at Open Philanthropy needs to start thinking about reacting to an increasing move towards portraying OP, either sincerely or strategically, as a shadowy cabal-like entity influencing the world in an 'evil/sinister' way, similar to how many right-wingers across the world believe that George Soros is contributing to the decline of Western Civilization through his political philanthropy.
Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles. In many think pieces responding to WWOTF or FTX or SBF, they get extensively cited as a primary EA-critic, for example.
I think the 'ignore it' strategy was a mistake and I'm afraid the same mistake might happen again, with potentially worse consequences.
Do people realise that they're going to release a documentary sometime soon?
Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles.
You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think "ignore it" is a really really solid heuristic in an era where conflict drives clicks.
I think responding in a way that is calm, boring, and factual will help. It's not going to get Émile to publicly recant anything. The goal is just for people who find Émile's stuff to see that there's another side to the story. They aren't going to publicly say "yo Émile I think there might be another side to the story". But fewer of them will signal boost their writings on the theory that "EAs have nothing to say in their own defense, therefore they are guilty". Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.
Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive.
What makes you think this? I would guess it was pretty negative, by legitimizing Torres, and most of the donations funging heavily against other EA causes.
Suing people nearly always makes you look like the assholes I think.
As for Torres, it is fine for people to push back against specific false things they say. But fundamentally, even once you get past the misrepresentations, there is a bunch of stuff that they highlight that various prominent EAs really do believe and say that genuinely does seem outrageous or scary to most people, and no amount of pushback is likely to persuade most of those people otherwise.
In some cases, I think that outrage fairly clearly isn't really justified once you think things through very carefully: i.e. for example the quote from Nick Beckstead about saving lives being all-things-equal higher value in rich countries, because of flow-through effects which Torres always says makes Beckstead a white supremacist. But in other cases well, it's hardly news that utilitarianism has a bunch of implications that strongly contradict moral commonsense, or that EAs are sympathetic to utilitarianism. And 'oh, but I don't endorse [outrageous sounding view], I merely think there is like a 60% chance it is true, and you should be careful about moral uncertainty' does not sound very reassuring to a norma... (read more)
First, I want to thank you for engaging David. I get the sense we've disagreed a lot on some recent topics on the Forum, so I do want to say I appreciate you explaining your point of view to me on them, even if I do struggle to understand. Your comment above covers a lot of ground, so if you want to switch to a higher-bandwidth way of discussing them, I would be happy to. I apologise in advance if my reply below comes across as overly hostile or in bad-faith - it's not my intention, but I do admit I've somewhat lost my cool on this topic of late. But in my defence, sometimes that's the appropriate response. As I tried to summarise in my earlier comment, continuing to co-operate when the other player is defecting is a bad approach.
As for your comment/reply though, I'm not entirely sure what to make of it. To try to clarify, I was trying to understand why the Twitter discourse between people focused on AI xRisk and the FAact Community[1] has been so toxic over the last week, almost entirely (as far as I can see) from the latter to the former. Instead, I feel like you've steered the conversation away to a discussion about the implications of naïve utilitarianism. I also fee... (read more)
Does anyone work at, or know somebody who works at Cohere?
Last November their CEO Aidan Gomez published an anti-effective-altruism internal letter/memo (Bloomberg reporting here, apparently confirmed as true though no further comment)
I got the vibe from Twitter/X that Aidan didn't like EA, but making an internal statement about it to your company seems really odd to me? Like why do your engineers and project managers need to know about your anti-EA opinions to build their products? Maybe it came after the AI Safety Summit?
Does anyone in the AI Safety Space... (read more)
Going to quickly share that I'm going to take a step back from commenting on the Forum for the foreseeable future. There are a lot of ideas in my head that I want to work into top-level posts to hopefully spur insightful and useful conversation amongst the community, and while I'll still be reading and engaging I do have a limited amount of time I want to spend on the Forum and I think it'd be better for me to move that focus to posts rather than comments for a bit.[1]
If you do want to get in touch about anything, please reach out and I'll try my very best... (read more)
I've generally been quite optimistic that the increased awareness AI xRisk has got recently can lead to some actual progress in reducing the risks and harms from AI. However, I've become increasingly sad at the ongoing rivalry between the AI 'Safety' and 'Ethics' camps[1] 😔 Since the CAIS Letter was released, there seems to have been an increasing level of hostility on Twitter between the two camps, though my impression is that the hostility is mainly one-directional.[2]
I dearly hope that a coalition of some form can be built here, even if it is an... (read more)
Oh hi. Just rubber-ducking a failure mode some of my Forum takes[1] seem to fall into, but please add your takes if you think that would help :)
----------------------------------------------------------------------------
Some of my posts/comments can be quite long - I like responding with as much context as possible on the Forum, but as some of the original content itself is quite long, that means my responses can be quite long! I don't think that's necessarily a problem in itself, but the problem then comes with receiving disagree votes without commen... (read more)
Has anyone else listened to the latest episode of Clearer Thinking? Spencer interviews Richard Lang about Douglas Harding's "Headless Way", and if you squint enough it's related to the classic philosophical problems of consciousness, but it did remind me a bit of Scott A's classic story "Universal Love, Said The Cactus Person", which made me laugh. (N.B. Spencer is a lot more gracious and inquisitive than the protagonist!)
But yeah if you find the conversation interesting and/or like practising mindfulness meditation, Richard has a series of guided meditati... (read more)
status: very rambly. This is an idea I want to explore in an upcoming post about longtermism, would be grateful to anyone's thoughts. For more detailed context, see https://plato.stanford.edu/entries/time/ for debate on the nature of time in philosophy
Does rejecting longtermism require rejecting the B-Theory of Time (i.e. eternalism, the view that the past, present, and future have the same ontological status)? Saying that future people don't exist (and therefore can't be harmed, can't lose out by not existing, don't have the same moral rights as present '... (read more)
In this comment I was going to quote the following from R. M. Hare:
"Think of one world into whose fabric values are objectively built; and think of another in which those values have been annihilated. And remember that in both worlds the people in them go on being concerned about the same things - there is no difference in the 'subjective' concern which people have for things, only in their 'objective' value. Now I ask, What is the difference between the states of affairs in these two worlds? Can any other answer be given except 'None whatever'?"
I remember... (read more)