In my view the basic problem with this analysis is that you probably can't lump all the camps together and evaluate them as one entity. Format, structure, leadership and participants seem to have been very different.
Based on public criticisms of their work, and also on reading some documents about a case where we were deciding whether to admit someone to an event (they forwarded their communication with CH). It's limited evidence, but still some evidence.
This is a bit tangential/meta, but looking at the comment counter makes me want to express gratitude to the Community Health Team at CEA.
I think here we see a 'practical demonstration' of the counterfactuals of their work:
- an insane amount of attention sucked up by this
- the court of public opinion on forums seems basically strictly worse on all relevant dimensions, like fairness, respect for privacy, or compassion for the people involved
As 'something like this' would quite often be the counterfactual to CH trying to deal with this stuff, it makes clear how much value they are creating by dealing with these problems, even if their process is imperfect
While I agree that the discussion here is bad at all those metrics, I'm not sure how you infer that the CH team does better at e.g. fairness or compassion.
Sorry for the delay in response.
Here I look at it from a purely memetic perspective - you can imagine the thinking as a self-interested memplex. Note I'm not claiming this is the main useful perspective, or that it should be the main perspective to take.
Basically, from this perspective
* the more people think about the AI race, the easier it is to imagine AI doom. Also, the specific artifacts produced by the AI race make people more worried - ChatGPT and GPT-4 likely did more for normalizing and spreading worries about AI doom than all the previous AI safety outreach to...
Personally, I think the 1:1 meme is deeply confused.
A helpful analogy (thanks to Ollie Base) is with nutrition. Imagine someone hearing that "chia seeds are the nutritionally most valuable food, top rated in surveys" ... and subsequently deciding to eat just chia seeds, and nothing else!
In my view, sort of obviously, an intellectual conference diet consisting of just 1:1s is poor and unhealthy for almost everyone.
In my view this is a bad decision.
As I wrote on LW
Sorry, but my rough impression from the post is that you seem to be at least as confused about where the difficulties are as the average alignment researcher you think is not on the ball - and the style of somewhat strawmanning everyone & strong words is a bit irritating.
In particular, I don't appreciate the epistemics of these moves taken together:
1. Appeal to seeing things from close proximity. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this o...
Copy-pasting here from LW.
Sorry, but my rough impression from the post is that you seem to be at least as confused about where the difficulties are as the average alignment researcher you think is not on the ball - and the style of somewhat strawmanning everyone & strong words is a bit irritating.
Maybe I'm getting it wrong, but it seems the model you have for why everyone is not on the ball is something like "people are approaching it too much from a theory perspective, and a promising approach is very close to how empirical ML capabilities research works" ...
(crossposted from Alignment Forum)
While the claim - the task ‘predict next token on the internet’ absolutely does not imply learning it caps at human-level intelligence - is true, some parts of the post and reasoning leading to the claims at the end of the post are confused or wrong.
Let’s start from the end and try to figure out what goes wrong.
...GPT-4 is still not as smart as a human in many ways, but it's naked mathematical truth that the task GPTs are being trained on is harder than being an actual human.
And since the task that GPTs are be
You are correct with some of the criticism, but as a side-note, completeness is actually crazy.
All real agents are bounded and pay non-zero costs for bits, and as a consequence don't have complete preferences. Complete agents do not exist in the real world. If they existed, the correct intuitive model of them wouldn't be 'rational players' but an 'utterly scary god, much bigger than the universe they live in'.
In my view this is an example of a mistake in bounded/local consequentialism
From a deontic perspective, there is a coordination problem: posting under at least a consistent handle can be somewhat costly for the poster, but an atmosphere of earnest discussion among real people has large social benefits. Vice versa, a discussion with a large fraction of anonymous accounts - in particular if they are sniping at real people and at each other - decreases trust, and is vulnerable to manipulation by sock puppets and nefarious players.
Also, I think there ...
Seems worth trying
At the same time, I don't think the community post / frontpage attention mechanism is the core of what's going on. Which is, in my guess, often best understood as a fight between memeplexes over hearts and minds.
The quality of reasoning in the text seems somewhat troublesome. Using two paragraphs as an example:
...On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply kno
I think this is a weird response to what Buck wrote. Buck isn't paid either to reform the EA movement or to respond to criticism on the EA Forum, and he decided to spend his limited time expressing how things realistically look from his perspective.
I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express 'I disagree', but 'I don't want to read this'.
Even if you believe EA orgs are horrible and should be completely reformed, in my view, you should be gla...
Thank you for the reply Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, try harder, tone policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate and open to criticism community that I want to see in EA. Hopefully that explains where I'm coming from.
Thanks for all the care and effort which went into writing this!
At the same time, while reading, my reactions were most of the time "this seems a bit confused", "this likely won't help" or "this seems to miss the fact that there is someone somewhere close to the core EA orgs who understands the topic pretty well, and has a different opinion".
Unfortunately, to illustrate this in detail for the whole post would be a project for ...multiple weeks.
At the same time I thought it could be useful to discuss at least one small part in detail, to illustrate how the ...
In the spirit of the communication style you advocate for... my immediate emotional reaction to this is "Eternal September has arrived".
I dislike my comment being summarized as "brings up the "declining epistemics" argument to defend EA orgs from criticism". In the blunt style you want, this is something between distortion and manipulation.
On my side, I wanted to express my view on the Wytham debate. And I wrote a comment expressing my views on the debate.
I also dislike the way my comment is straw-manned by selective quotation.
In the next bul...
I will try to paraphrase, please correct me if I'm wrong about this: the argument is that this particular bikeshed is important because it provides important evidence about how EA works, how trustworthy the people are, and what the levels of transparency are. I think this is a fair argument.
At the same time I don't think it works in this case, because while I think EA has important issues, this purchase does not really illuminate them.
Specifically, the object-level facts about this bikeshed:
(Disclosure about step 2: I had seen the list of candidate venues, and actually visited one other place on the list. The process was in my view competent and sensible, for example in the aspect it involved talking with potential users of the venue)
Was there no less luxurious option available?
In previous discussion, Geoffrey Miller mentioned the benefits of a luxurious venue. In my opinion, the benefits of a non-luxurious venue equal or outweigh those of a luxurious venue -- for example, as a method to deter grifters. The fact that a luxurious venue wa...
For me, unfortunately, the discourse surrounding Wytham Abbey seems like a sign of epistemic decline of the community, or at least of the EA Forum.
It seems extremely uncharitable to call this bikeshedding.
It's just not that small an amount of money, relative to one-off projects and grants in the EA world. It seems perfectly reasonable to expect that projects above a certain size have increased transparency, and it's hard to imagine this wouldn't qualify as big enough.
These things are relative to money in EA space - if a high proportion of the actual money moving around EVF space is going to projects like this, it doesn't help to observe that billions of dollars are going from other sources to...
I think your criticism of bikeshedding somewhat misses the point people are raising. Of course the amount of money spent on WA is tiny compared to other things. The reason it's worth talking about it is that it tells you something about EA culture and how EA operates.
This is in large parts a discussion about what culture the movement should have, what EA wants to be and how it wants to communicate to the world. The reason you care about how someone builds a bike shed is because that carries information about what kind of person they are, how trustwor...
Not sure why this is getting negative votes or w/e, it's basically correct. And even in the PR stakes, the cost of the Abbey on the most pessimistic assumptions is absolutely peanuts compared to FTX! No one will remember, no one will care (whereas they absolutely will remember FTX, that's a real reputational long-term hit).
Nice post, but my rough take is there is
When the discussion is roughly at the level of 'seem to me obviously worth doing', it seems to me fine to state dissent of the form 'often seems bad or not working to me'.
Stating an opinion is not 'appeal to authority'. I think in many cases it's useful to know what people believe, and if I have to choose between a forum where people state their beliefs openly and more often, and a forum, where people state beliefs only when they are willing to write a long and detailed justification, I prefer the first.
I'm curious in which direction you think the suppos...
- If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely" which is obviously not particularly specific, but this would fit my definition of likely.
Why do you think so? Whistleblowers inside FTX would have been protected under US law, and US institutions like the SEC offer them multi-million-dollar bounties. Why would an EA scheme create a stronger incentive?
Also: even if the possible whistleblowers inside ...
The only real option for having much less FTX money in EA was not to accept that much FTX funding. Which was a tough call at the time, in part because the FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying from OP.
And even then, decisions about accepting funding are made by individuals and individual organizations. Would there be someone to kick you out of EA if you accept "unapproved" funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they ...
I was reacting mostly to this part of the post
I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing
...
Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.
I think it's fine for a co...
I think you lack part of the context, where Zoe seems to claim to the media that the suggested reforms would have helped:
- this Economist piece, mentioning Zoe about 19 times
- WP
- this New Yorker piece, with Zoe explaining "My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying ... “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
- this Twitter thread
...
I'm not confident what the whole argument is.
In my reading, the OP updated toward the position "it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail" based on FTX causing large economic damage. One of the conjectures based on this is "Implement Carla Zoe Cremer’s Recommendations".
I'm mostly arguing against the position that 'the update of probability mass on EA community building being negative due to FTX evi...
Unfortunately not in detail - it's a lot of work to go through the whole list and comment on every proposal. My claim is not 'every item on the list is wrong' but 'the list is wrong on average', so commenting on three items does not resolve the possible disagreement.
To discuss something object-level, let's look at the first one
'Whistleblower protection schemes' sound like a good proposal on paper, but the devil is in the details:
1. Actually, at least in the EU and UK, whistleblowers pointing out things like fraud or illegal stuff are protected by the law....
The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation through for example threats or punishments. Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities.
The recent EU whistleblowing directive for example is a rather complex piece of legislation that aims to protect whistleblowers from e.g...
Just wanted to flag that I personally believe
- most of Cremer's proposed institutional reforms are either bad or zero impact, this was the case when proposed, and is still true after updates from FTX
- it seems clear proposed reforms would not have prevented or influenced the FTX fiasco
- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the s...
"It seems clear proposed reforms would not have prevented or influenced the FTX fiasco" doesn't really engage with the original poster's argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer's proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible.
Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that's not really relevant to the discussion of the original post.
I don't think this is a fair comment, and aspects of it read more as a personal attack than as an attack on ideas. This feels especially the case given that the above post has significantly more substance and recommendations to it, but this one comment focuses solely on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.
For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think ...
Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
- Set up whistleblower protection schemes for members of EA organisations
- Transparent listing of funding sources on each website of each institution
- Detailed and comprehensive conflict of interest reporting in grant giving
Sorry for the critical feedback, but
I disagree that the design of the products, or the website, are bad.
This is just my personal aesthetic opinion, but so is yours. I think your comment should have been phrased with more humility & awareness that you were just reporting your aesthetic taste. I also object to the statement about this being a "step back in EA visual culture", which I think is just mean.
ETA: Also, I just checked the prices and they are almost all surprisingly cheap, so the second point seems wrong to me.
Could be time as well: EAGx Prague had dedicated time where it was not possible to book 1:1s on SwapCard, and I think it worked well.
For example, the OP could have contacted the person he talked to at the EAGx and asked her whether his interpretation of what she said was correct. If you read OP's other post about a conflict resulting from asking for feedback about a grant application, you have one datapoint where someone was talking with him about the unsuccessful grant application and was surprised by OP's interpretation.
As I mentioned in the comment, in case of this post, unless the person he talked to decides to reveal herself (and she may have good reasons not to do that), ...
I don't know what the actual grantmakers think, but if I were deciding about the funding:
- you can get funding to do EA localization basically in any country, if you come up with a reasonable strategy & demonstrate competence and understanding of EA
- difficulty of coming up with a reasonable strategy in my view varies between places; e.g., if you wanted to create, for the sake of discussion, EA Norway, a reasonable strategy may be 'just' supporting people in attempts to do impactful work, supporting uni groups, routing donations and maybe engaging with N...
While I don't want to go into discussing specific details, I want to make clear I don't think 'not writing something egregious in the forum' is the standard which should be applied here.
In my view, national-level groups are potentially important and good but need to be able to do a lot of complex work to be really successful. This means my bar for 'who should do this as a funded full-time job' is roughly similar to 'who should work at median-impact roles at CEA' or 'work as a generalist at Rethink Priorities' or similar.
I don't think the original pos...
Downvoted. Let me explain why:
1. I'm not really convinced by your post that what actually happened with your grant applications was at all caused by you applying from Romania. (Personally, if I was a grantmaker, based on your EA Forum presence, I would reject your application to create/grow a new national-level group if you applied from basically anywhere. You can read more of my thinking about the topic of national EA groups in this post)
2. I really dislike the type of discourse exemplified by this paragraph: "I said that I felt Romania was being di...
This is a somewhat serious allegation, but it also seems... a bit like a free-floating rumor, without much fact. Unless the person you talked to decides to reveal herself, and to explain who the people she talked to were and what exactly they said to her, it's really hard to say what happened, and there is a decent chance your guesses are just wrong, or this is some sort of telephone-game scenario.
Assuming the OP is telling the truth, what alternative do you expect them to do here? They have made no specific slander against anyone, they have simply mention...
Personally, if I was a grantmaker, based on your EA Forum presence, I would reject your application to create/grow a new national-level group if you applied from basically anywhere.
Huh? It's a little unfair to say this without substantiating. I looked at OP's Forum history to see if there was something egregious there and didn't see anything that would justify a claim like this. Could you elaborate more?
Rationalists do a lot of argument mapping under the label double crux (and similar derivative names: crux-mapping). I would even argue that double crux approach to argument mapping is better than the standard one, and rationalists integrate explicit argument mapping to their lives more than likely any other identifiable group.
Also: more argument mapping / double-cruxing / ... is currently unlikely to create more clarity around AI safety, because we are constrained by Limits to Legibility, not by the ability to map arguments.
I do agree net positive is too soft, but I don't think this is what anyone is seriously advocating for here.
The main implicit theory of impact for event venues is
venues -> events -> {ideas / intellectual progress / new people engaging with the ideas / sharing of ideas}
I think in domains like "thinking about futures with powerful AI" or "how to improve humanity's epistemics" or "what are some implications of digital minds existing", it seems to be the case that a noticeable amount of the thinking and discussion is happening at various in-person gatherings...
The idea that EAs should take only actions which maximize EV according to some sort of straightforward calculation is wrong.
Argmax(EV(action)) is a stupid decision strategy, and people should not be criticized for not following it.
Agree with your soft max idea but “net positive EV” is too soft - as I’ve said elsewhere donating to the university you went to or your local animal shelter is still net positive EV.
I noted the same in the following sentence: To get realistic comparisons, you would need to adjust for many other factors, sometimes of order 0.3-3x, like occupancy, the costs of adjusting venues to your needs, costs of the staff, ...
For example, if you use your venue 65% of the time, you should multiply the mentioned figure by 0.65.
What's pushing in the opposite direction is, for example, this: if you use a rented venue and often spend 1-2 days before and 1 day after the event setting it up according to your needs / returning it to its original state, you need to accoun...
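The two adjustments described above can be sketched in a few lines of Python. All function names and example figures here are illustrative assumptions, not numbers from the original discussion:

```python
# Sketch of the owned-vs-rented venue cost adjustments mentioned above.
# All names and numbers are illustrative assumptions.

def owned_venue_effective_value(rental_equivalent_per_day: float,
                                occupancy: float) -> float:
    """Value an owned venue delivers per calendar day, given fractional use.

    E.g. with 65% occupancy, multiply the quoted daily figure by 0.65.
    """
    return rental_equivalent_per_day * occupancy

def rented_venue_effective_cost(daily_rate: float, event_days: int,
                                setup_days: int, teardown_days: int) -> float:
    """Cost per actual event day, once setup/teardown days are also paid for."""
    total_days = event_days + setup_days + teardown_days
    return daily_rate * total_days / event_days

print(owned_venue_effective_value(1000.0, 0.65))     # 650.0
print(rented_venue_effective_cost(1000.0, 5, 2, 1))  # 1600.0
```

The two adjustments pull in opposite directions, which is the point of the comment: naive per-day price comparisons can be off by a factor of 0.3-3x either way.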
Thank you for your detailed addition of context, but I'm honestly slightly disappointed.
You accuse me of making false claims yet this response doesn't say which claims in either my post or my comment are false. In fact, you don't even quote my writing at all, and the comment is mainly about why you think this project is a good idea.
You say:
I am not trying to prevent people expressing their opinions, but would ask that any speculation (or, for example, digging for the physical addresses of vaguely related entities)
For the people who got the impression I dig...
I wanted to provide some context on the recently purchased events venue near Prague that has been discussed here in one of the subthreads. The subthread contains some misleading content, and is not clearly visible, so I am posting this as a top level comment to provide more information on the project. It is extremely long, sorry.
1. My relationship to the project
2. Context for Forum readers
3. Context for outsiders
4. About the project
5. About the venue
6. Communication timelines
1. My relationship to the project
The project is led by Irena Kotikova and was...
Since I’m running the project in question (not Wytham Abbey), I would like to share my perspective as well. (I reached out to the author of the comment, Bob, in a DM asking him to remove the previously posted addresses and we chatted briefly about some of these points privately but I also want to share my answers publicly.)
Multiple claims in this post are misleading, incomplete or false.
I'm writing a longer response to a similar comment of the same author under different post, and hope to post it reasonably soon.
Just flagging that the claim in the question is false - CFAR did run 4 public workshops in the autumn (although not in the mentioned venue), and did run various smaller retreats in the venue in 2022.
I'm not going to comment on the emotional/anger/PR side, but here are some numbers so the discussion stays somewhat connected with the reality of Oxford conference accommodation; speaking just in my personal capacity as someone who has run events in Oxford.
According to the first public price list in my Google results, conference accommodation in an Oxford college in 2020 was >£70 (standard room) + >£45 on meals + >~£1,000 for 4 lecture rooms, per day. With a 30-person event, that's >£4,500 per day. With 40 people and more lecture rooms, it would be more l...
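The rough per-day arithmetic above can be reproduced as follows. The inputs are the quoted lower-bound figures from the 2020 price list, so the total is itself a lower bound:

```python
# Lower-bound daily cost of a conference in an Oxford college, using the
# 2020 price-list figures quoted above (each is a lower bound, so the
# total understates the real cost).
ROOM_PER_PERSON_GBP = 70    # standard room, per person per day
MEALS_PER_PERSON_GBP = 45   # meals, per person per day
LECTURE_ROOMS_GBP = 1000    # ~4 lecture rooms, per day

def daily_cost(people: int) -> int:
    """Lower-bound daily venue cost in GBP."""
    return people * (ROOM_PER_PERSON_GBP + MEALS_PER_PERSON_GBP) + LECTURE_ROOMS_GBP

print(daily_cost(30))  # 4450
```

With the bare lower bounds this gives £4,450/day for 30 people; since every input carries a ">", this is consistent with the ">£4,500 per day" figure in the comment.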
Yes / I mostly tried to describe "pure" version of the theory, not moderated by applications of other types of reasoning.
I don't think the way I'm using 'deontology ' and 'virtue ethics' reduce to 'conventional morality' either.
For example, I currently have something like a deontic commitment to not do/say things which would predictably damage epistemics - either of me or of others. (Even if the epistemic damage would have an upside like someone taking some good actions)
I think this is ultimately more of an 'attempt at approximating true consequent...
Cf Suggestions for developing national level EA orgs https://forum.effectivealtruism.org/posts/QofjcgYDbCxZQe8nQ/suggestions-for-developing-national-level-effective-altruism
Reposting from twitter: It's a moderate update on the prevalence of naive utilitarians among EAs.
Expanded:
A classic problem with this debate on utilitarianism is that the vocabulary used makes a motte-and-bailey defense of utilitarianism too easy.
1. Someone points to a bunch of problems with an act-consequentialist decision procedure / cases where naive consequentialism tells you to do bad things
2. The default response is "but this is naive consequentialism, no one actually does that"
3. You may wonder that while pe...
Yeah, I think it's a severe problem that if you are good at decision theory you can in fact validly grab big old chunks of deontology directly out of consequentialism including lots of the cautionary parts, or to put it perhaps a bit more sharply, a coherent superintelligence with a nice utility function does not in fact need deontology; and if you tell that to a certain kind of person they will in fact decide that they'd be cooler if they were superintelligences so they must be really skillful at deriving deontology from decision theory and therefore they...
I would suggest actually reading, and trying to understand, the post?
The papers you link mostly use the notion of 'consequentializing' in the sense that you can re-cast many other theories as consequentialist. But often this is almost trivial, if you allow yourself the degree of freedom of 'changing what's considered good' on the consequentialist side (as some of the papers do). For some weird reason, you have a deontic theory prohibiting people to drink blue liquids? Fine, you can represent that in consequentialist terms, by ranking al...
Pretty confident. I typically have a stack of drafts of the type this was, and they end up public at some point.
Btw, I think parts of Ways money can make things worse describe what I think the EA community actually did wrong, even ex post: succumbing to the epistemic distortion field too much.
Crossposting from LW
Here is a sceptical take: anyone who is prone to being convinced by this post to switch from attempts to do technical AI safety to attempts at "buying time" interventions is pretty likely not a good fit to try any high-powered buying-time interventions.
The whole thing reads a bit like "AI governance" and "AI strategy" reinvented under a different name, seemingly without bothering to understand what the current understanding is.
Figuring out that AI strategy and governance are maybe important, in late 2022, after spending substanti...
FWIW ... in my opinion, retaining the property might have been a more beneficial decision.
Also, I think some people working in the space should not update against plans like "have a permanent venue", but plausibly should make some updates about the "major donors". My guess is this almost certainly means Open Philanthropy, and it's also likely they had most of the actual power in this decision.
Before delving further, it's important to outline some potential conflicts of interest and biases:
- I co-organized or participated in multiple events a...