You said you were looking for "when the ideas started gathering people". I do suspect there's an interesting counterfactual where in-person gathering wasn't a major part of the EA movement. I can think of some other movements where in-person gathering is not focal. In any case, I'm not hung up on the distinction, it just seemed worth mentioning.
> early EA summits were pretty important
The first EA summit was the one you linked in summer 2013, so it just wasn't early enough.
(You could argue that it was important for the movement's growth)
I think the "fulltime job as a scientist" situation could be addressed with an "apply for curation" process, as outlined in the second half of this comment.
Thanks a lot for writing this post!
Personal experience: When I tried a vegan diet, I experienced gradually decreasing energy levels and gradually increasing desire for iron-rich animal products (hamburgers). My energy levels went back to normal when I went ahead and ate the hamburgers.
So, I'm really excited about the potential of nutritional investigations to improve vegan diets!
For bivalvegans, note that some bivalves are rich in heme iron (heme iron, from animals, is more easily absorbed than the non-heme iron found in plants).
Again, personal experienc...
Thanks for all your hard work, Megan.
I'm reminded of this post from a few months ago: "Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely."
And this point from a post Peter Wildeford wrote: "I think criticism of EA may be more discouraging than it is intended to be and we don't think about this enough."
In theory, the EA movement isn't about us as EAs. It's about doing good for others. But in practice, we're all humans, and I think it's human nature to have an expectation of recognition/gratitude when we've ...
I wonder if a good standard rule for prizes is that you want a marketing budget which is at least 10-20% of the size of the prize pool, for buying ads on podcasts ML researchers listen to or subreddits they read or whatever. Another idea is to incentivize people to make submissions publicly, so your contest promotes itself.
Title: Prizes for ML Safety Benchmark Ideas
Author: Joshc, Dan H
URL: https://forum.effectivealtruism.org/posts/jo7hmLrhy576zEyiL/prizes-for-ml-safety-benchmark-ideas
Why it's good: Benchmarks have been a big driver of progress in AI. Benchmarks for ML safety could be a great way to drive progress in AI alignment, and get people to switch from capabilities-ish research to safety-ish research. The structure of the prize looks good: They're offering a lot of money, there are still over 6 months until the submission deadline, and all they're asking for is a br...
There are hundreds of startup incubators and accelerators -- is there a particular reason you like Entrepreneur First?
Interesting points.
> I think we had a bunch of good shots of spotting what was going on at FTX before the rest of the world, and I think downplaying Sam's actual involvement in the community would have harmed that.
I could see this going the other way as well. Maybe EAs would've felt more free to criticize FTX if they didn't see it as associated with EA in the public mind. Also, insofar as FTX was part of the "EA ingroup", people might've been reluctant to criticize them due to tribalism.
...I also think that CEA would have very likely approved any reques
> I think it would be terrible if EA updated from the FTX situation by still giving fraudsters a ton of power and influence, but now just don't publicly associate with them.
I don't think fraudsters should be given power and influence. I'm not sure how you got that from my comment. My recommendation was made in the spirit of defense-in-depth.
I can see how a business founder trying to conceal their status as an EA might create an adversarial relationship, but that's not what I suggested.
Put it another way: SBF claimed he was doing good with lots of fanfar...
> Our laws are the end result of literally thousands of years of experimentation
The distribution of legal cases involving technology over the past 1000 years is very different from the distribution of legal cases involving technology over the past 10 years. "Law isn't keeping up with tech" is a common observation nowadays.
> a literal random change to the status quo
How about we revise to "random viable legislation" or something like that? Any legislation pushed by artists will be in the same reference class as the "thousands of years of experimen...
> ...their regulations will probably not, except by coincidence, be the type of regulations we should try to install.
A priori, I'd expect a randomly formulated AI regulation to be about 50% likely to be an improvement on the status quo, since the status quo wasn't selected for being good for alignment.
> Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave.
I don't see good arguments supporting this point. I tend to think the opposite -- building a coalition to pass a regulation now makes i...
Suppose you saw a commercial on TV. At the end of the commercial a voice says "brought to you by Effective Altruism". The heart-in-lightbulb logo appears on screen for several seconds.
I actually did hear of a case of a rando outside the community grabbing a Facebook page for "Effective Altruism", gaining a ton of followers, and publishing random dubious stuff.
You can insist EA isn't a brand all you want, but someone still might use it that way!
I'm not super attached to getting permission from CEA in particular. I just like the idea of EAs starting more ...
With recent FTX news, EA has room for more billionaire donors. For any proposed EA cause area, a good standard question to ask is: "Could this be done as a for-profit?" Quoting myself from a few years ago:
> ...There are a few reasons I think for-profit is generally preferable to non-profit when possible:
> - It's easier to achieve scale as a for-profit.
> - For-profit businesses are accountable to their customers. They usually only stay in business if customers are satisfied with the service they provide. Non-profits are accountable to their donors. The impression
> However, that doesn't really change my point that usually the reason a new idea seems wacky and strange is because it's wrong.
I think seeming wacky and strange is mainly a function of difference, not wrongness per se.
I'd argue that the best way to evaluate the merits of a wacky idea is usually to consider it directly. And discussing wacky ideas is what brings them from half-baked to fully-baked.
If you can find a good way to count up the historical reference class of "wacky and strange ideas being explored by highly educated contrarians" and quantify the...
Interesting argument!
I'm not fully persuaded, because I think we're dealing with heterogeneous sub-populations.
Consider the statement "As a non-EA, I believe that EA funders don't allocate enough capital to funding development econ research". I don't think we can conclude from this statement that the opposite is true, and EA funders allocate too much capital to development econ research.
The heterogeneous subpopulations perspective suggests that people who think development econ research is the most promising cause may be self-selecting out of the "dedicat...
My sense is if you look at "wacky and strange ideas being explored by highly educated contrarians" as a historical reference class, they've been important enough to be worth paying attention to. I would put pre-WWW discussion & exploration of hypermedia in this category, for instance. And the first wiki was a rather wacky and strange thing. I think you could argue that the big ideas underpinning EA (RCTs, veganism, existential risk) were all once wacky and strange. (Existential risk was certainly wacky and strange about 10-15 years ago.)
> One extremely under-rated impact of working harder is that you learn more. You have sub-linear short-term impact with increasing work hours because of things like burnout, or even just using up the best opportunities, but long-term you have super-linear impact (as long as you apply good epistemics) because you just complete more operational cycles and try more ideas about how to do the work.
Working more hours could help learning in the sense of helping you collect data faster. But if you want to learn from the data you already have, I'd suggest working...
Variant: "EA funds should do small-scale experiments with mechanisms like quadratic voting and prediction markets that have some story for capturing crowd wisdom while avoiding both low-info voting and single points of failure. Then do blinded evaluation of grants to see which procedure looks best after X years."
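To make the quadratic voting half of that concrete, here's a minimal toy sketch (the grant names, ballots, and credit budget are all invented for illustration, not a proposal for actual fund mechanics):

```python
# Toy quadratic voting tally for grant selection (illustrative only).
# Each voter gets a fixed budget of "voice credits"; casting v votes on a
# grant costs v**2 credits, so concentrating votes is quadratically expensive.

def qv_cost(votes: int) -> int:
    """Credits consumed by casting `votes` votes on one grant."""
    return votes ** 2

def tally(ballots: list[dict[str, int]], budget: int = 100) -> dict[str, int]:
    """Sum votes per grant, skipping any ballot that exceeds its credit budget."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        if sum(qv_cost(v) for v in ballot.values()) > budget:
            continue  # invalid ballot: spent more credits than allowed
        for grant, votes in ballot.items():
            totals[grant] = totals.get(grant, 0) + votes
    return totals

ballots = [
    {"grant_a": 5, "grant_b": 3},  # 25 + 9 = 34 credits
    {"grant_a": 2, "grant_c": 9},  # 4 + 81 = 85 credits
    {"grant_b": 10},               # 100 credits, exactly on budget
]
print(tally(ballots))  # {'grant_a': 7, 'grant_b': 13, 'grant_c': 9}
```

The quadratic cost is what's doing the work: voters can express intensity of preference, but it's expensive for any single voter (or brigade) to dominate one grant, which is the "avoiding single points of failure" story.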
One consideration is that for some of those names, their "conversation" with EA is already sorta happening on Twitter. The right frame for this might be whether Twitter or a podcast is a better medium for that conversation.
You could argue podcasts don't funge against tweets. I think they might -- I think people are often frustrated and want to say something, and a spoken conversation can be more effective at making them feel heard. See The muted signal hypothesis of online outrage. So I'd be more concerned about e.g. giving legitimacy to inaccurate criticis...
You make good points, but there's no boolean that flips when "sufficient quantities of data [are] practically collected". The right mental model is closer to a multi-armed bandit IMO.
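As a toy illustration of the bandit framing (the arm names and payout probabilities below are invented), note that the agent acts on every round with whatever data it has so far; there's no threshold where data suddenly becomes "sufficient":

```python
import random

# Toy epsilon-greedy bandit: the agent decides every round with whatever
# data it has so far, and the data it gets depends on the actions it takes.
true_payout = {"talk_to_press": 0.4, "decline": 0.6}  # unknown to the agent

counts = {arm: 0 for arm in true_payout}
total_reward = {arm: 0.0 for arm in true_payout}

def estimate(arm: str) -> float:
    """Estimated payout; untried arms get +inf so each is tried at least once."""
    return total_reward[arm] / counts[arm] if counts[arm] else float("inf")

for step in range(1000):
    if random.random() < 0.1:              # explore 10% of the time
        arm = random.choice(list(true_payout))
    else:                                  # exploit the best estimate so far
        arm = max(true_payout, key=estimate)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    total_reward[arm] += reward

# Estimates converge toward the true payouts as rounds accumulate.
print({arm: round(estimate(arm), 2) for arm in counts})
```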
Great points.
There's an unfortunate dynamic which has occurred around discussions of longtermism outside EA. Within EA, we have a debate about whether it's better to donate to nearterm vs longterm charities. A lot of critical outsider discussion on longtermism ends up taking the nearterm side of our internal debate: "Those terrible longtermists want you to fund speculative Silicon Valley projects instead of giving to the world's poorest!"
But for people outside EA, nearterm charity vs longterm charity is generally the wrong counterfactual. Most people ou...
In terms of understanding the causal effect of talking to journalists, it seems hard to say much in the absence of an RCT.
Someone ought to flip a coin for every interview request, in order to measure (a) the causal effect of accepting an interview on probability of article publication, and (b) the direction of any effects on article accuracy, fairness, and useful critique.
(That was meant as a bit of a joke, but I would honestly be delighted to see a bunch of articles about EA which include sentences like "Person X did not offer any comment because we weren...
It is a joke, but it's an appropriate one.
EA has a pathology of insisting that we defer to data even in situations where sufficient quantities of data can't be practically collected before a decision is necessary.
And that is extremely relevant to EA's media problem.
Say it takes 100 datapoints over 10 years to make an informed decision. During that time:
> I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.
Do you have thoughts about the idea of creating a thread on a site like the EA Forum or Less Wrong where someone takes questions from the media and responds ...
I think something like that is a better idea. Or separately, for people to just write up their takes in comments and posts themselves. I've been reasonably happy with the outcomes of me doing that during this FTX thing. I think I've been quoted in one or two articles, and I think those quotes have been fine.
Is there somewhere we can see how the winners of donor lotteries have been donating their winnings?
Thanks for all your hard work in EA!
I think you (and lots of other EAs who feel the same way you do) are totally correct that you don't deserve the response you've been seeing to the FTX situation. You deserve a huge pat on the back for doing so much for the world.
Separately, I also agree with these paragraphs Oliver wrote a few days ago, and I'm (tentatively) glad that there's been more criticism than usual on the forum right now (even if it's ultimately unrelated to FTX):
...I do think it is indeed really sad that people fear reprisal for disagreement. I
This is a really important point. It might make sense to talk to journalists in order to contextualize what you said on the EA Forum -- or to ask them not to use something!
Answering in writing should help with the "foot in mouth" problem. You can ask them to send questions, and say you don't promise to answer all of them.
A journalist reached out to me recently and this is basically what I did; no regrets so far at least.
IMO "try to respond in writing" should be standard advice when dealing with journalists. Past that, I remember a Less Wrong user once created a (public) thread specifically for taking journalist questions; that seems like a good way to discourage misrepresentation.
Any chance we can get an interview with Nishad or Caroline? I feel like their answers would be a lot more informative in terms of what EA should take away from all this.
Fair enough!
You're correct that the EA Forum isn't as democratic as "one person one vote". However, it is one of the more democratic institutions in EA, so it provides evidence re: whether moving in a more democratic direction would've helped.
I'd be interested if people can link any FTX criticism on reddit/Facebook prior to the recent crisis to see how that went. In any case, "one person one vote" is tricky for EA because it's unclear who counts as a "citizen". If we start deciding grant applications on the basis of reddit upvotes or Facebook likes, that creates a cash incentive for vote brigades.
Not saying I disagree with this, but it may be worth noting that "democracy" as an alternative didn't exactly do great either -- Stuart Buck wrote this comment, and it got downvoted enough that he deleted it.
Indeed. I actually am inclined to agree that more democracy in distributing funds and making community decisions is safer overall and prevents bad tail risks, and I think Zoe Cremer's suggestions should be taken seriously. But let's remember that democracy in recent years has given us Modi, Bolsonaro, Trump, Duterte and Berlusconi as leaders of countries with millions of citizens, on the basis of millions of votes, and that Hitler did pretty well in early 1930s German elections. Democracy is not just "not infallible" but has led to plausibly bad decis...
I agree dense housing would help. Another idea is more group houses. It seems that there's an excess of big houses in the US right now: https://www.wsj.com/articles/a-growing-problem-in-real-estate-too-many-too-big-houses-11553181782
More thoughts on roommates as a solution for loneliness in this post I wrote: How to Make Billions of Dollars Reducing Loneliness. (Have learned more about the topic since writing that post; can share if people are interested)
> ...A small probability of a big future win. The world today has lots of governments, but they seem to mostly follow a very small number of basic governance templates. At some point, there will be new states with new Constitutions - maybe via space settlements, maybe via collapse of existing states, etc. - but I expect these moments to be few and far between. A significant literature and set of experts on "ideal governance" could lead to a radically different kind of state government, potentially with radically different policies that the rest of the world co
Holden Karnofsky has some interesting thoughts on governance:
One theme is that good governance isn't exactly a solved problem. IMO EA should use a mix of approaches: copying best practices for high-stakes scenarios, and pioneering new practices for lower-stakes scenarios. (For example, setting up a small fund to be distributed according to some experimen...
(Upvoted)
> Events are not evidence to the truth of philosophical positions.
Are you sure? How about this position from Richard Chappell's post?
> (3) Self-effacing utilitarian: Ex-utilitarian, gave up the view on the grounds that doing so would be for the best.
Psychological effects of espousing a moral theory are empirical in nature. Observations about the world could cause a consequentialist to switch to some other theory on consequentialist grounds, no?
Not sure there's a clean division between moral philosophy and moral psychology.
I agree hastily jum...
I'd be interested to know if there's any psychological research on how niceness and being ethical may be related.
For example, prior to the FTX incident, I didn't usually give money to beggars, on the grounds that it was ineffective altruism. But now I'm starting to wonder if giving money to beggars is an easy way to cultivate benevolence in oneself, and cultivating benevolence in oneself is an important way to improve as an EA.
Does walking past beggars & rehearsing reasons why you won't give them money end up corroding your character over time, such t...
> I'd be interested to know if there's any psychological research on how niceness and being ethical may be related.
There is a plethora of research on the subject, including a growing body of evidence which suggests we are born with a sense of compassion, empathy, and fairness. Paul Bloom has done some amazing research with babies at the Yale psych lab, and more recently the University of Washington published a study suggesting altruism is innate.
A brief overview of Paul Bloom's work:
The Moral Life of Babies, Yale Psychology Professor Paul B...
Thanks!
I'm not sure I share your view of that post. Some quotes from it:
> ...he just believed it was really important for humanity to make space settlements in order for it to survive long-term... From what I could tell, [my professor] probably spent less than 10 hours seriously figuring out if space settlements would actually be more valuable to humanity than other alternatives.
> ...
> ...Take SpaceX, Blue Origin, Neuralink, OpenAI. Each of these started with a really flimsy and incredibly speculative moral case. Now, each is probably worth at least $10 Bil
I like how Hacker News hides comment scores. Seems to me that seeing a comment's score before reading it makes it harder to form an independent impression.
I fairly frequently find myself thinking something like: "this comment seems fine/interesting and yet it's got a bunch of downvotes; the downvoters must know something I don't, so I shouldn't upvote". If others also reason this way, the net effect is herd behavior. What if I only saw a comment's score after voting/opting not to vote?
Maybe quadratic voting could help, by encouraging everyone to focus t...
Perhaps ditch the "Your intellectual contributions are poorly regarded" thread; at best, it is unsupported & off-topic.
Morale is low right now, senior EA figures are occupied, and some have come under direct criticism, whether justified or not. In this environment, it's difficult to communicate or express leadership. Only the CEA community health team seems to be taking the initiative, which must be very difficult and is heroic.
In situations like this, gardening of the online space often tends to be performed by marginal actors. LW and MIRI have been left mostly unscathed by the FTX disaster, and now, Eliezer and Rob B (professional communicator employed by MI...
Seems plausible. I think it would be good to have a dedicated "translator" who tries to understand & steelman views that are less mainstream in EA.
Wasn't sure about the relevance of that link?
(from phone) That was an example of an EA being highly upvoted for dismissing multiple extremely smart and well-meaning people's life's work as 'really flimsy and incredibly speculative', because he wasn't satisfied that they could justify their work within a framework that the EA movement had decided is one of the only ones worth contemplating. As if that framework itself isn't incredibly speculative (and therefore, if you reject any of its many suppositions, really flimsy).
I'm not sure what you mean by "the principles have little room for errors in implementing them".
That quote seems scarily plausible.
EDIT: Relevant Twitter thread
I think your first paragraph provides a potential answer to your second :-)
There's an implicit "Sam fell prey to motivated reasoning, but I wouldn't do that" in your comment, which itself seems like motivated reasoning :-)
(At least, it seems like motivated reasoning in the absence of a strong story for Sam being different from the rest of us. That's why I'm so interested in what people like nbouscal have to say.)
Well, that's the thing -- it seems likely he didn't see his actions as contradicting those principles, suggesting that they're actually a dangerous set of principles to endorse, even if they sound reasonable. That's what's really got me thinking.
I wonder if part of the problem is a consistent failure of imagination on the part of humans to see how our designs might fail. Kind of like how an amateur chess player devotes a lot more thought to how they could win than how their opponent could win. So if the principles Sam endorsed are at all recoverable, ma...
Thanks for the reply!
In terms of public interviews, I think the most interesting/relevant parts are him expressing willingness to bite consequentialist/utilitarian bullets in a way that's a bit on the edge of the mainstream Overton window, but that I believe would've been within the EA Overton window prior to recent events (unsure about now). BTW I got these examples from Marginal Revolution comments/Twitter.
This one seems most relevant -- the first question Patrick asks Sam is whether the ends justify the means.
In this interview, search for "So why then
This one is tricky, because it seems bad to tell people who already experience Chidi Anagonye-style crippling self-doubt that they should self-doubt even more.
EA self-doubt has always seemed weirdly compartmentalized to me. Even the humblest of people in the movement is often happy to dismiss considered viewpoints by highly intelligent people on the grounds that it doesn't satisfy EA principles. This includes me - I think we are sometimes right to do so, but probably do so far too much nonetheless.
This comment seems to support the idea that a whistleblowing system would've helped: https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1?commentId=NbevNWixq3bJMEW7b
I'm curious if you (or any other "SBF skeptic") have any opinion regarding whether his character flaws should've been apparent to more people outside the organizations he worked at, e.g. on the basis of his public interviews. Or alternatively, were there any red flags in retrospect when you first met him?
I'm asking because so far this thread has discussed the problem in terms of private info not propagating. But I want to understand if the problem could've been stopped at the level of public info. If so that suggests that a solution of just getting bette...
One of the biggest lessons I learned from all of this is that while humans are quite good judges of character in general, we do a lot worse in the presence of sufficient charisma, and in those cases we can't trust our guts, even when they're usually right. When I first met SBF, I liked him quite a bit, and I didn't notice any red flags. Even during the first month or two of working with him, I kind of had blinders on and made excuses for things that in retrospect I shouldn't have.
It's hard for me to say what people should have been able to detect fro...
Trying to brainstorm... I noticed this tweet from CZ, which states:
> We gave support before, but we won't pretend to make love after divorce. We are not against anyone. But we won't support people who lobby against other industry players behind their backs.
Maybe SBF can hire an apology coach (if that exists? I might know someone kinda like that actually -- but someone SBF knows is probably better) and find it in his heart to apologize to CZ for "lobbying against other industry players behind their backs", and anything else he may have done that CZ resen...
> Why? To light 5 billion on fire because...?
When Full Tilt Poker collapsed in 2011, after it turned out they also had not segregated customer funds, PokerStars bought them out and made their depositors whole. But PokerStars did this because they were getting kicked out of the US market by the regulators and needed to buy some goodwill so they'd be let back in when regulation eventually changed (which is slowly happening, state by state). No one actually has a meaningful incentive to save FTX unless either a) you want to curry favour with crypto regulators ...
Not necessarily a deliberate strategy though -- my model is that EA started out fairly cause-neutral, people had lots of discussions about the best causes, and longtermist causes gradually emerged as the best.
E.g. in 2012 Holden Karnofsky wrote:
...