Köln does have a somewhat active local group currently (see here https://forum.effectivealtruism.org/groups/6BpGMKtfmC2XLeih8 ) - I think they mostly coordinate via Signal, which interestingly is hidden behind the "Join us on Slack" button on the forum page. Don't think this had much to do with this post though.
I'm not aware of anything having happened in Dortmund or the general Ruhrgebiet in the last year or so, with the exception of the Doing Good Together Düsseldorf group.
why restarting your device works to solve problems, but it does (yes, I did look it up, so no need to explain it
I'm now stuck in "I think I know a decent metaphor but you don't want me to share it" land... but then maybe I'll just share it for other people. :P
Basically it's less about how computers work on any technical level, and more about which state they're in. Imagine you want to walk to your favorite store. If you're at home, you probably know the way by heart and can navigate there reliably. But now imagine you've been up for a while and have bee...
Productivity, perfectionism, and self-leadership increased in the correspondingly themed groups.
I guess "increased" here should be "improved"? Unless perfectionism actually increased as well, but this would seem like a surprising outcome. :)
On the one hand yes, but on the other hand it seems crucial to at least mention these observer effects (edit: probably the wrong term, rather anthropic principle). There's a somewhat thin line between asking "why haven't we been wiped out?" and using the fact that we haven't been wiped out yet as evidence that this kind of scenario is generally unlikely. Of course it makes sense to discuss the question, but the "real" answer could well be "random chance" without having further implications about the likelihood of power-seeking AGI.
Highly agree with the post. I discussed almost the same thing with a friend during the conference. Basically, the typical "don't attend talks, most of them are recorded and you can just watch them online later" advice isn't great imho - it seems like a fake alternative to me, in the sense that you miss out on a talk because you tell yourself "ah I'll just watch it later", but in probably >90% of cases this just won't happen. So the actual alternative you're choosing is not "watch online later", but "don't watch at all". Because by the time the talk is o...
Side note: I read the post on Pocket first, and it simply omitted section 7 without any hint of its existence. I wonder if that happens more frequently.
As for the post itself, I do agree with most of it. I think though that it (particularly point 1) has some risk of reinforcing some people's perception of reaching out to well known people as a potential status violation, which I think is already quite common in EA (although I know of some people who would disagree with me on this). I would guess most people already have a tendency to "not waste important ...
The recent push for productization is making everyone realize that alignment is a capability. A gaslighting chatbot is a bad chatbot compared to a harmless helpful one. As you can see currently, the world is phasing out AI deployment, fixing the bugs, then iterating.
While that's one way to look at it, another way is to notice the arms race dynamics and how every major tech company is now rushing LLMs out to the public head over heels, even when they still have some severe flaws. Another observation is that e.g. OpenAI's safety efforts are not very popular amo...
it's not AI, more code completion with crowd-sourced code
Copilot is based on GPT3, so imho it is just as much AI or not AI as ChatGPT is. And given it's pretty much at the forefront of currently available ML technology, I'd be very inclined to call it AI, even if it's (superficially) limited to the use case of completing code.
This seems like a very cool project, thanks for sharing! I agree that this type of project can be considered a "moonshot", which implies that most of the potential impact lies in the tail end of possible outcomes. Consequently, the estimates become very tricky. If the EV is dominated by a few outlier scenarios, reality will most likely turn out to be underwhelming.
I'm not sure if one can really make a good case that working on such a game is worthwhile from an impact perspective. But looking at the state of things and the community as a whole, it does still...
Sometimes I think that this is the purpose of EA. To attempt to be the "few people" to believe consequentialism in a world where commonsense morality really does need to change due to a rapidly changing world. But we should help shift commonsense morality in a better direction, not spread utilitarianism.
Very interesting perspective and comment in general, thanks for sharing!
Very good argument imo! It shows that a different explanation than "people don't really care about dying embryos" can be derived from this comparison. People tend to differentiate between what happens "naturally" (or accidentally) and deliberate human actions. When it comes to wild animal suffering, even if people believe it exists, many will think something along the lines of "it's not human-made suffering, so it's not our moral responsibility to do something about it" - which is weird to a consequentialist, but probably quite intuitive for ...
This seems very useful! Thank you for the summaries. Some thoughts:
A bit less pressing maybe, but I'd also be interested in seeing some (empirical) research on polyamory and how it affects people. It appears to be rather prevalent in rationality & EA, and I know many people who like it, and also people who find it very difficult and complicated.
Sort of, so firstly I have a field next to each prediction that automatically computes its "bucket number" (which is just FLOOR(<prediction> * 10)
). To then get the average probability of a certain bucket, I run the following: =AVERAGE(INDEX(FILTER(C$19:K, K$19:K=A14), , 1))
- note that this is Google Sheets though and I'm not sure to what degree this transfers to Excel. For context, column C contains my predicted probabilities, column K contains the computed bucket numbers, and A14 here is the bucket for which I'm computing this. Similarly I count ...
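For anyone who'd rather prototype this outside a spreadsheet, here's a rough Python sketch of the same bucketing logic (the prediction/outcome data is made up, purely illustrative):

```python
import math

# Hypothetical (predicted probability, outcome) pairs
predictions = [
    (0.62, True), (0.68, False), (0.65, True),
    (0.91, True), (0.95, True), (0.12, False),
]

# Bucket each prediction via floor(p * 10), mirroring FLOOR(<prediction> * 10)
buckets = {}
for p, outcome in predictions:
    buckets.setdefault(math.floor(p * 10), []).append((p, outcome))

# For each bucket: average predicted probability vs. observed frequency,
# analogous to AVERAGE over the FILTERed rows in the sheet
for b in sorted(buckets):
    items = buckets[b]
    avg_p = sum(p for p, _ in items) / len(items)
    freq = sum(o for _, o in items) / len(items)
    print(b, round(avg_p, 3), round(freq, 3))
```

Comparing `avg_p` against `freq` per bucket then gives the calibration picture.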
Thanks for sharing! I've had the feeling for a while that it would be great if EA managed to make goals/projects/activities of people (/organizations) more transparent to each other. E.g. when I'm working on some EA project, it would be great if other EAs who might be interested in that topic would know about it. Yet there are no good ways that I'm aware of to even share such information. So I certainly like the direction you're taking here.
I guess one risk would be that, however easy to use the system is, it is still overhead for people to have their proj...
I'd be up for the reading and comment writing part (will see if it works out time-wise), probably not so much for zoom. Nice idea and thanks for taking the initiative!
Is your post deliberately categorized as question? The four questions included in it all seem to be of the rhetorical kind. :P
Thanks for the post though! I think I'm in a very similar situation and you basically convinced me. I didn't expect five minutes ago to be just one minute of reading away from being convinced of applying to an 80,000 Hours career advice call, yet here we are.
Great write-up! The "many people are happy to donate to effective charities as long as they also donate to their favorite charity" point did indeed come as a surprise. Seems like a very valuable insight for certain types of outreach.
I think that you should consider connecting and collaborating with key parties who have interdependent goals & similar incentives
A small addition to your list would be this post about a study on a depression related intervention that I believe originated from within the EA community. Might well be worth contacting the author.
Interesting project! It reminds me a bit of Huberman Lab, the existence and apparent popularity of which could be taken as an argument in favor of ESH being worthwhile (although format, target audience and focus might of course differ quite a bit).
One thing I personally find very interesting is the point you mentioned as a counter argument: "Individual differences in benefit significantly outweigh the general differences in value between interventions" - in my opinion, this could even be viewed as quite the opposite: my impression is that in most eas...
This is great, thanks for sharing!
I found the "let's assume humanity remains at a constant population of 900 million" notion particularly interesting. On some level I still have this (obviously wrong) intuition that humanity's knowledge about its own history just grows continuously based on what happens at any given time. E.g. I would have implicitly assumed that a person living in 1888 must have known how the population numbers had developed over the preceding centuries. This is of course not necessarily the case for a whole bunch of reasons, but seeing that he w...
Nice! :)
Also, I think a few links are missing here:
David Chapman for inspiring us with these two posts in the Meaningness blog, Raemon for inspiring us with this LessWrong post
Some thoughts (not to say ideas) regarding 3:
There are definitely many coincidence-of-wants problems, where someone has a good idea that someone else would do or fund, but that person never hears of it.
Very much agree with your points, this one in particular. I think in a perfect world we would all have a way of knowing what others in the EA community are thinking about, working on, and what they need help with. I'd love to have a way to share more openly (but without wasting others' attention) what I'm focusing on, so that others who think about similar things could be made aware of this opportun...
One thing I could imagine being very helpful is some kind of ongoing local group "mentoring". So instead of one or two single calls on strategy or bottlenecks, having some experienced person more deeply invested with any particular local group in need. Somebody who might (occasionally) participate at our virtual meetups, our planning/strategy calls, gets to know our core members, our situation, needs and problems, and can provide actionable insights on all of them.
The problem with calls I've had in the past is that it's quite difficult to get across every...
The distinction reminds me of the foxes vs hedgehogs model from Superforecasting / Tetlock. Hedgehogs are the "great idea thinkers", seeing everything in the light of that one great idea they're following, whereas foxes are more nuanced, taking in many viewpoints and trying to converge on the most accurate beliefs. I think he mentioned in the book that while foxes tend to make much better forecasters, hedgehogs are not only more entertaining but also good at coming up with good questions to forecast in the first place.
An entirely different thought: The ...
One thing I could imagine happening in these situations is that people close themselves off to object level arguments to a degree, and maybe for (somewhat) good reason.
I remember once when I was younger talking to a Christian fanatic of sorts, who kept coming up w...
The cost of this seems pretty low, but in a way the expected value too seems limited (to me at least from the context you provided): I'd assume that unless this turns out to be so good that it becomes a "standard" of sorts (that people always tend to mention whenever organizational ineffectiveness comes up), it would likely end up as a relatively short lived project that doesn't reach too many people and organizations. Although this could partially be mitigated if it's stored in a persistent, easy to search and find way, so that future people on the lookout for such a guide would stumble upon it and immediately see its value.
Thank you Michael!
Given I just received a link to this article in the 80,000 Hours newsletter: https://80000hours.org/make-a-difference-with-your-career/ -- that article seems like something that a lot of students might potentially be interested in. So something like a brief description of the key idea plus a link to the article would be one option.
Recently I've been thinking a lot about the flow and distribution of information (as in facts/ideas/natural language) as a meta level problem. It seems to me that "ensuring the most valuable information finds its way to the people who need it" could make a huge difference to a lot of things, including productivity, well-being, and problem-solving of any kind, particularly for EAs. (if anybody reading this is knowledgeable in this broad area, please reach out!)
Your post appears to focus on a very related issue, which is how EAs source their EA information a...
Great post, thanks for sharing! Pretty much exactly the type of post I had been hoping for for a while. Just hearing that one success story of a local group that was in a more or less similar state as mine (albeit arguably in a higher potential environment), but made it into something so impressive, is very inspiring.
Given I only have ~10h per week available to spend on EA things (and not all of them go into community building), I was particularly happy to hear your 80/20 remark. I do wonder if it's possible to move a local group onto a kind of growth traj...
It sounds interesting, albeit to be fair a bit gimmicky as well. To me at least, which may not mean much: I can imagine taking a few minutes to play around with such a tool if it existed, maybe find some contradiction in my beliefs (probably after realizing that many of my beliefs are pretty vague and that it's hard to put these hard labels on them), and get to the conclusion that really my beliefs weren't that strong anyway and so the contradiction probably doesn't matter all that much. I can imagine others would have a very different experience though (a...
Just wanted to say I very much like the idea, although I'll probably not get involved myself. I was very happy about the anki deck of EA key numbers that was published two months ago, and would find it great if there were more ways to easily add important EA ideas to one's anki deck (e.g. you mention the 80,000 Hours key ideas in the google doc, great idea!).
It would be quite surprising to me if your idea did not work out, simply because doing good for animals via donations tends to be really low cost (though this might depend on what "a lot more money" really means in your case). Imagining for instance that for each and every restaurant in the world some non-negligible cut of the rent (say 5%) went to effective animal charities, my super rough 3 minute Fermi estimate says that would amount to something in the order of $10 billion per year. Given that about 80 billion land animals are slaughtered each year, that w...
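To make that "3 minute Fermi estimate" reproducible, here's one set of illustrative assumptions that lands in the same ballpark (the restaurant count and average rent are rough guesses, not researched figures):

```python
# All inputs are rough, illustrative guesses
restaurants = 15e6        # assumed number of restaurants worldwide
annual_rent = 13_000      # assumed average annual rent per restaurant, USD
cut = 0.05                # hypothetical 5% of rent going to animal charities
animals = 80e9            # land animals slaughtered per year (as in the comment)

total = restaurants * annual_rent * cut
per_animal = total / animals

print(f"~${total / 1e9:.0f}B per year, ~${per_animal:.2f} per animal")
```

The point being that even modest per-restaurant numbers multiply out to billions per year at global scale.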
Some random thoughts from me as well:
I recently read Can't Hurt Me by David Goggins, as well as Living with a SEAL about him, and found both pretty appealing. Also wondered whether EA could learn anything from this approach, and am pretty sure that this is indeed the case, at least for a subset of people. There is surely also some risk of his "no bullshit / total honesty / never quit" attitude to be very detrimental to some, but I assume it can be quite helpful for others.
In a way, CFAR workshops seem to go in a similar-ish direction, don't they? Just much more compressed. So one hypothetical...
Thanks for making this public, found it really interesting to follow your train of thought. Also, despite hearing about it in the past, I had completely forgotten about Julia's book. Added it to my reading list now. :)
How much time should a participant roughly allocate for this? How much time are we supposed to spend on each of the questions? For how many days/weeks/months will this be running?
Is "start by finding someone to practice with" something one should do before signing up, i.e. should people sign up in groups of 2? Or does that matching of participants happen once you've got enough together? If the latter, do you have control over which of the two roles you get? I couldn't yet make that much sense of the descriptions of what backcaster and retriever are doing e...
Thanks a lot for the thorough post Emily! I like the framing of staying up late as a high-interest loan a lot. And I agree that reading Why We Sleep may indeed be quite useful for certain people, despite its shortcomings. You make a lot of good points and provide several interesting ideas, plus the post is written in a very readable way, and the drawings are great.
Not that much else to add, except two tiny nitpicks regarding your estimation:
Neat! Small mistake: "What is the probability that it will still be working after eight twenty years" should probably be "after twenty years". And multiple data points are exciting indeed!
Perfect! In the end the impact will of course be orders of magnitude higher, as a slightly better name of any particular organization will affect tens if not hundreds of thousands of people in the long run. And there may even be a tail chance of better names increasing the community's stability and thus preventing collapse scenarios. I think overall you really undersold your project with that guesstimate model focusing on this post only, as if that was all there is to it.
Note that A or B decisions are often false dichotomies, and you may be overlooking alternative options that combine the advantages. So narrowing in on given options too soon may sometimes be a mistake, and it can be useful to try to come up with more alternatives.
Also, in my experience many of the decisions I get stuck with fall somewhere between 2 and 3: I know their implications and have most of the information, but the results differ on various dimensions. E.g. option 1 is safe and somewhat impactful, while option 2 is potentially higher impact but much...