Soaking screams food poisoning to me, especially with unclean water. Perhaps this isn't a risk if done right, but it could be why it's not done.
Definitely, for example if people are bikeshedding (vigorously discussing something that doesn't matter very much)
Another proposal: Visibility karma remains 1 to 1, and agreement karma acts as a weak multiplier when either positive or negative.
So:
Could also give karma on that basis.
However thinking about it, I think the result would be people would start using the visibility vote to express opinion even more...
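The multiplier proposal above could be sketched like this (a hypothetical implementation; the saturating function shape and the `alpha` strength are my own assumptions, not part of the proposal):

```python
import math

def effective_karma(visibility_karma, agreement_karma, alpha=0.1):
    # Visibility karma stays 1:1; agreement karma nudges the score up or
    # down as a weak multiplier (with alpha=0.1 the effect is capped at
    # roughly +/-10%, so sorting stays dominated by the visibility vote).
    multiplier = 1 + alpha * math.tanh(agreement_karma / 10)
    return visibility_karma * multiplier
```

Under these assumed parameters, a comment at +20 visibility would land between roughly 18 and 22 depending on agreement, so agreement shifts ties but never drowns out the visibility vote.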
It's a little ambiguous between "disagree karma and upvote karma should have equal weight" and "karma should have equal weight between people"
I think because the sorting is solely on karma, the line is "Everything above this is worth considering" / "Everything below this is not important" as opposed to "Everything above this is worth doing"
One situation I use strong votes for is whenever I do "upvote/disagree" or "downvote/agree". I do this to offset others who tend not to split their votes.
I would have expected the opposite corner of the two-axis voting (because I think people don't like the language)
There seem to be two different conceptual models of AI risk.
The first is a model like the one in his report "Existential risk from power-seeking AI", in which he lays out a number of things which, if they happen, will cause AI takeover.
The second is a model (which stems from Yudkowsky & Bostrom, and more recently from Michael Cohen's work: https://www.lesswrong.com/posts/XtBJTFszs8oP3vXic/?commentId=yqm7fHaf2qmhCRiNA ) where we should expect takeover by malign AGI by default, unless certain things happen.
I personally think the second model is much more reasonable. Do you have any rebuttal?
Likewise, I have a post from January suggesting that crypto assets are over-represented in the EA funding portfolio.
Probably the number of people actually pushing the frontier of alignment is more like 30, and for capabilities maybe 3000. If the 270 remaining alignment people can influence those 3000 (biiiig if, but) then the odds aren't that bad
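The rough numbers in that comment imply the following back-of-the-envelope ratio (a sketch only; all the counts are the comment's own guesses, and the "270 remaining" follows from assuming ~300 alignment people total):

```python
frontier_alignment = 30       # people actually pushing the alignment frontier
capabilities = 3000           # rough count of capabilities researchers
remaining_alignment = 270     # alignment people not on the frontier

# If influence were spread evenly, each non-frontier alignment person
# would need to reach roughly 11 capabilities researchers.
people_to_influence_each = capabilities / remaining_alignment
```

On those assumed numbers the coverage ratio is about 11:1, which is why the comment calls the odds "not that bad" despite the big if.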
Not sure what Rob is referring to, but there are a fair few examples of orgs'/people's purposes slipping from alignment to capabilities, e.g. OpenAI
I myself find it surprisingly difficult to focus on ideas that are robustly beneficial to alignment but not to capabilities.
(E.g. I have a bunch of interpretability ideas. But interpretability can only have no impact on timelines, or accelerate them)
Do you know if any of the alignment orgs have some kind of alignment-research NDA, with a panel that allows alignment-only ideas to be made public but keeps the maybe-capabilities ideas private?
I think this post should probably be edited, with "focus on low-risk interventions first" put in bold in the first sentence and placed right next to the pictures, because the most careless people (possibly like me...) are the ones who will read that and not read the current caveats
An addendum is then:
If "buying time" interventions are conjunctive (i.e. one can cancel out the effect of the others), but technical alignment is disjunctive,
And if the distribution of people performing both kinds of intervention is mostly towards the lower end of thoughtfulness/competence (which imo we should expect),
Then technical alignment is a better recommendation for most people.
In fact it suggests that the graph in the post should be reversed (though the axis at the bottom should be social competence rather than technical competence)
Rob puts it well in his comment as "social coordination". If someone tries "buying time" interventions and fails, then, largely because of social effects, poorly done "buying time" interventions have the potential both to fail at buying time and to preclude further coordination with mainstream ML. So the net effect is negative.
On the other hand, technical alignment does not have this risk.
I agree that technical alignment has the risk of accelerating timelines though.
But if someone tries technical alignment and fails to produce results, that has no impact compare...
I see! Yes, I agree that more public "buying time" interventions (e.g. outreach) could be net negative. However, for the average person entering AI safety, I think there are less risky "buying time" interventions that are more useful than technical alignment.
That's a reasonable point - the way this would reflect in the above graph is then wider uncertainty around technical alignment at the high end of researcher ability
I would push back a little: the main thing is that "buying time" interventions obviously have significant sign uncertainty. E.g. in your graph of median researcher "buying time" vs technical alignment, I think there should be very wide error bars at the low end of "buying time", going significantly below 0 within the 95% confidence interval. Technical alignment is much less risky to that extent.
Out of interest, were you considering students working together and thus submitting similar work as being plagiarism? Or was it more just a lot of cases of some students fully copy/pasting another's work against their wishes?
Sounds like it was a very successful program!
I think this should be accompanied by a message/prompt in the comment text field that tells people this post was a draft and that they should err on the side of not giving negative feedback
Sorry for a negative comment, but I think that all of these interventions fail to really address wild animal suffering, and that this is pretty clear already. This is simply because pretty much all interventions on WAW have only a temporary positive effect, or worse, are zeroed out completely by the Malthusian trap.
Thanks for engaging with the report. I'll offer a response since Tapinder's summer fellowship has ended and I was her manager during the project. I've made a general comment in response to Tristan that applies here too.
On your comment specifically, the "Malthusian trap" is empirically not always supported. A population can approach or be at its carrying capacity and still have adequate resources, for instance if individuals simply do not reproduce as much when there is less resource surplus.
Wow, people really downvoted it. I just ignored it; in general I don't like to downvote people who are talking about their poor mental health 🤷
Not sure why people were using the main downvote button on this one, and not just the disagree downvote.
I have been attending a Secular Buddhist group for a couple of years and I have also seen this similarity.
My main idea about how to link EA and Buddhism is as follows:
No idea how to go about finding information on this, but by my personal priors I would weight various kinds of evidence as follows:
Since this is related to diet, my prior is that people are usually overthinking it. However, I have always agreed that it seems unlikely that a fully vegan diet has no nutritional downsides without supplementation.
I've done a cursory search, just Wikipedia; here are my thoughts on the biological plausibi...
A defense of the inner ring, excerpts from the original.
......
I must now make a distinction. I am not going to say that the existence of Inner Rings is an Evil. It is certainly unavoidable. There must be confidential discussions: and it is not only a bad thing, it is (in itself) a good thing, that personal friendship should grow up between those who work together. And it is perhaps impossible that the official hierarchy of any organisation should coincide with its actual workings. If the wisest and most energetic people held the highest spots, it might coinci
Hey, appreciate your response. Perhaps we should discuss the meaning of the word "hub" here? To me, it is about 1) having enough EAs to establish beneficial network effects, and 2) having a reason why the EAs living there aren't constantly incentivised to move elsewhere (which also means they can live and work there if they choose)
I think that your value proposition of a beautiful, cheap location for remote work is a great reason for a hub! This fulfills condition 2). Then, having enough people fulfills 1).
However, network effects cause increasing ret...
Hi!
I have to say I strongly disagree with this idea, for one particular reason. If we successfully establish a new hub with cheap living costs and beautiful nature, it MUST be outside the USA. The USA is notoriously hard to immigrate to from most countries!
It is unfortunate that we already have one hub (SF Bay Area / Berkeley) in the USA, although I definitely am OK with D.C. becoming a hub. However, I'd ask any Americans who want to be in an EA hub, but don't want to be in those two places, to go to someone else's hub (Mexico City, Cape Town), or if still wanting to set one up, to do so in a jurisdiction with permissive immigration.
Yeah, the example above of choosing not to get promoted or not to receive funding is a more realistic scenario.
I agree these situations are somewhat rare in practice.
Re. AI Safety, my point was that these situations are especially rare there (among people who agree it's a problem, which is about states of knowledge anyway, not about goals)
Thanks for this post, I think it's a good discussion.
Epistemic status: 2am ramble.
It's about trust, although it definitely varies in importance from situation to situation. There's a very strong trust between people who have strong shared knowledge that they are all utilitarian. Establishing that is where the "purity tests" get value.
Here's a little example.
Let's say you had some private information about a problem/solution that the ea community hadn't yet worked on, and the following choice: A) reveal it to the community, with near certainty that the problem will be solved at least as well as if you yoursel...
I agree that high-trust networks are valuable (and therefore important to build or preserve). However, I think that trustworthiness is quite disconnected from how people think of their life goals (whether they're utilitarian/altruistic or self-oriented). Instead, I think the way to build high-trust networks is by getting to know people well and paying attention to the specifics.
For instance, we can envision "selfish" people who are nice to others, but utilitarians who want to sabotage others over TAI timeline disagreements or disagreements about population eth...
This doesn't address the elephant in the room, which is the "quality" of talent. EA has a funding overhang with respect to some implicit "quality line" above which people will be hired. Getting more people who can demonstrate talent above that line (where the placement of each specific line is very context-dependent) lowers the funding overhang, but only getting more people below the line doesn't change anything.
No no, I still believe it's a great idea. It just needs people to want to do it, and I was just sharing my observation that there don't seem to be many people who want it enough to offset other things in their life (everyone is always busy).
Your comment about "selecting for people who don't find it boring" is a good re-framing, I like it.
I've had quite a few people ask me "What's altruism?" when running university clubs fair stalls for EA Wellington.
I've been very keen to run "deep dives" where we do independent research on some topic, with the aim that the group as a whole ends up with significantly more expertise than at the start.
I've proposed doing this with my group, but people are disappointingly unreceptive to it, mainly because of the time commitment and "boringness".
For an overview of most of the current efforts into "epistemic infrastructure", see the comments on my recent post here https://forum.effectivealtruism.org/posts/qFPQYM4dfRnE8Cwfx/project-a-web-platform-for-crowdsourcing-impact-estimates-of
Buying coal mines to secure energy production post-global-catastrophe is a much more interesting question.
Seems to me that buying coal, rather than mines, is a better idea in that case.
I'm really hoping we can get some better data on resource allocation and estimated effectiveness to make it clearer when funders or individuals should return to focusing on global poverty etc.
There are a few projects in the works for "EA epistemic infrastructure"
Ok - this is a good critique of my comment.
I was kind of off-topic and responding to something a bit more general. Since writing my comment I have found someone on the forum summarizing my perspective better.
and relatedly re. funding
Just posting my reactions to reading this:
That's really high?? Oh, this is not the Giving What We Can pledge
At what stage of YC? I guess that will be answered later. EDIT: