Following the launch events in Melbourne and Sydney, there will be an AI Safety Brisbane Launch in May.
We here in Brisbane will not be left behind! We may be turned into paperclips but we will not be left behind!
Further details such as speakers and the precise venue will be announced shortly.
It’s often really helpful to know who might be good at, or might some day be willing to join, exciting projects.
We’re collecting that information, so we can share it with >20 longtermist orgs and >100 longtermist entrepreneurs and funders.[1]
All you need to do is put your name, email and LinkedIn on this form. This will be quick if you have an up-to-date LinkedIn or online CV.
And if you want to, you can give us a bunch more information that might help connect you up with future projects.
Link to census of everyone who could ever see themselves doing longtermist work.
______
This is a census of people who might (now or any time in the future) be interested in joining or launching a project aimed at improving...
Thanks for making this!
In the future, you may want to ask just one full name question for people who don't fit neatly into the first name/last name split.
Note — I’m writing this in a personal capacity, and am not representing the views of my employer.
I’m interested in the EA red-teaming contest as an idea, and there are lots of interesting critiques I’d want to read. But I haven’t seen any of those written yet. I put together a big list of critiques of EA I’d be really interested in seeing come out of the contest. I personally would be interested in writing some of these, but don’t really have time to right now, so I am hoping that by sharing these, someone else will write a good version of them. I’d also welcome people to share other critiques they’d be excited to see written in the comments here!
I think that if someone wrote all...
There are multiple examples of EA orgs behaving badly I can't really discuss in public. The community really does not ask for much 'openness'.
As many of you know, on LessWrong there is now:
two axes on which you can vote on comments: the standard karma axis remains on the left, and the new axis on the right lets you show much you agree or disagree with the content of a comment.
I was thinking we should have this on EA Forum for the same reasons ... to avoid (i) agreement with the claim/position being confounded with (ii) liking the contribution to the discussion/community.
Reading the comments over there, it seems there are mixed reviews. Some key critiques:
My quick takes:
A. We might consider this for EAFo after LW works out the bugs...
I can't think of many examples where I agreed with a position but didn't want to see it, or wanted to see a position I disagreed with. I think I've only experienced the latter when I want to see discussion of the topic. In those cases, I feel you should weigh the good against the bad when voting and choose between the five levels the current system provides (counting strong votes and not voting at all). Also, if you believe that a topic you want to talk about (and believe others do too) is going to be divisive,...
Epistemic status: Dwelling on the negatives.
From 2013 to 2021, Open Philanthropy donated $200M to criminal justice reform. My best guess is that, from a utilitarian perspective, this was likely suboptimal. In particular, I am fairly sure that it was possible to realize sooner that the area was unpromising, and to act on that realization earlier.
In this post, I first present the background for Open Philanthropy's grants on criminal justice reform, and the abstract case for considering it a priority. I then estimate that criminal justice grants were distinctly worse than other grants in the global health and development portfolio, such as those to GiveDirectly or AMF.
I speculate about why Open Philanthropy donated to criminal justice in the first place, and why it continued donating. I end up uncertain...
Sorry, I did not realize that OP doesn't solicit donations from non-megadonors. I agree this recontextualizes how we should interpret transparency.
Given the lack of donor diversity, though, I am confused why their cause areas would be so diverse.
Bivalve aquaculture means the marine farming of scallops, oysters, clams, mussels, and similar species. We will be focusing on these four because they account for the largest share of edible bivalve production. In contrast to intensive fish aquaculture, bivalve aquaculture is an extensive form of aquaculture; bivalves feed on algae...
Thanks so much for your response, Rockwell; I really appreciate it. The detailed inspection of the supporting evidence is really valuable for me, because it helps improve the quality of the thesis and gives us all a more accurate understanding of the benefits and drawbacks. I’d also like to share my thoughts in more detail on some of the points you raised.
...You appear to be comparing different animal menu items against each other, rather than sources of protein. Globally, looking at consumption by grams of protein, plants continue to comprise the l
Welcome!
For inspiration, you can see the last open thread here.
New member here. I teach American government and learned about effective altruism through The Scout Mindset and Julia Galef's podcast.
Posts about the use of crowdsourcing, or other means of eliciting estimates from other people, in order to produce better aggregate estimates.
Argh, bugger. This conflicts with EAGx, so I won't make this, but I'd love to be kept in the loop.