You make some great points. If you think humanity is so immoral that a lifeless universe is better than one populated by humans, then yes, it would indeed be bad to colonize Mars, from that perspective.
I would be pretty horrified at humans taking fish aquaculture with us to Mars, in a manner as inhumane as current fish farming. However, I opened the Deep Space Food Challenge link, and it's more like what I expected: the winning entries are all plants or cellular manufacturing. (The Impact Canada page you linked to is broken.)
If we don't invent any morally ...
Interesting argument. However, I don't think this point about poverty is right.
The problem is that [optimistic longtermism is] based on the assumption that life is an inherently good thing, and looking at the state of our world, I don’t think that’s something we can count on. Right now, it’s estimated that nearly a billion people live in extreme poverty, subsisting on less than $2.15 per day.
Poverty is arguably a relic of preindustrial society in a state of nature, and is being eliminated as technological progress raises standards of living. If we were to ...
Thanks for your engagement.
That’s an interesting point with respect to poverty. Intuitively I don’t see any reason why there won’t be famine and war and poverty in the galaxies, as there is and presumably will continue to be on Earth, but I’ll think on it more. I really doubt folks out there will live in peace, provided they remain human. One could articulate all sorts of hellscapes by looking at what it is like for many to live on Earth.
Humans by nature are immoral. For example, most members want to eat animals, and even if they know that it is wrong to e...
Shrimpify Mentoring? Shrimping What We Can? Future of Shrimp Institute?
Oh, and we can't forget about 1FTS: One for the Shrimp.
I'm very disappointed that Rethink Priorities has chosen to rebrand as Rethink Shrimp. I really think we should have gone with Reshrimp Priorities. That said, I will accept the outcome, whatever is deemed to be most effective, and in any case redouble my efforts to forecast timelines to the shrimp singularity.
I don't see Shapley values mentioned anywhere in your post. I think you've made a mistake in attributing the value of work that multiple people contributed to, and Shapley values would help you fix that mistake.
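To make the suggestion concrete, here's a minimal sketch of computing Shapley values over all coalitions. The toy value function (two funders, a $100 project that only happens if both contribute) is a made-up example, not anything from the post:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Compute each player's Shapley value.

    `value` maps a frozenset of players to the total value
    that coalition produces on its own.
    """
    players = list(players)
    n = len(players)
    result = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Probability that exactly this coalition precedes p
                # in a uniformly random ordering of all players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # p's marginal contribution on joining this coalition.
                total += weight * (value(s | {p}) - value(s))
        result[p] = total
    return result

# Two funders who jointly enable a $100 project that needs both of them:
v = lambda s: 100 if len(s) == 2 else 0
print(shapley_values({"A", "B"}, v))  # each gets 50.0
```

Note how the naive attribution ("each funder counterfactually enabled the whole $100, so each caused $100 of value") double-counts, while Shapley values split the credit.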
I don't really see anything in the article to support the headline claim, and the anonymous sources don't actually work at NIST, do they?
Rather than farmers investing more profits from growing plants into animal farming, I think the main avenue of harm is that animal feed is an input to meat production, so if the supply of feed increases, production of meat would increase.
Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.
I'm guessing preference utilitarians would typically say that only the preferences of conscious entities matter. I doubt any of them would care about satisfying an electron's "preference" to be near protons rather than ionized.
So you think your influence on future voting behavior is more impactful than your effect on the election you vote in?
Gina and I eventually decided that the data collection process was too time-consuming, and we stopped partway through.
Josh You and I wrote a Python script that searches Google for a list of keywords, saves the text of the web pages in the results, and then prompts GPT with questions about each page. This would quickly automate the rest of your data collection if you already have the pledge signers in a list. Email me if you want a copy.
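For anyone curious, the kind of pipeline described above looks roughly like this. This is a sketch, not the actual script: `search_fn` and `ask_gpt_fn` are placeholders for whatever search API and LLM client you have access to.

```python
from urllib.request import urlopen

def build_prompt(questions, page_text, max_chars=8000):
    """Combine the extraction questions with a page's text into one prompt."""
    qs = "\n".join(f"- {q}" for q in questions)
    return (
        "Answer the following questions using only the web page below.\n"
        f"Questions:\n{qs}\n\n"
        f"Page:\n{page_text[:max_chars]}"
    )

def fetch_page_text(url):
    # Naive fetch; a real script would strip HTML and handle errors.
    with urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def process_keyword(keyword, questions, search_fn, ask_gpt_fn):
    """Search for a keyword, then ask the LLM about each result page."""
    answers = []
    for url in search_fn(keyword):  # placeholder search API
        prompt = build_prompt(questions, fetch_page_text(url))
        answers.append((url, ask_gpt_fn(prompt)))  # placeholder LLM call
    return answers
```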
The social value of voting in elections is a question where I've seen a lot of good arguments on both sides; it remains unresolved and has substantial implications for how I should behave. I would really love to see a debate between Holden Karnofsky, Eric Neyman, and Toby Ord against Chris Freiman and Jacob Falkovich.
Context for people who don't follow the authors:
"Why Swing-State Voting is not Effective Altruism" by Jason Brennan and Chris Freiman: https://onlinelibrary.wiley.com/doi/abs/10.1111/jopp.12273
Eric Neyman on voting:
I don't think this is empirically true. US speed limits are typically set lower than the safest driving speeds for the roads, so micromurders from speeding are often negative in areas without pedestrians.
I agree; however, isn't there still the danger that, as scientific research is augmented by AI, nanotechnology will become more practical? The steelmanned case for nanotech x-risk would probably argue that various things that are intractable for us now face no theoretical barriers, and could be done if we were slightly better at adjacent techniques.
they were trying to do was place two carbon atoms onto a carbon surface, and they failed, as they didn't have the means to reliably image diamond surfaces
Has this limitation been ameliorated by advancements in imaging? I used to work in materials science and don't anymore, but my understanding is that scientists have very recently sharpened probe tips to a single atom at the apex, which should improve the resolution of scanning tunneling microscopy. Someone correct me if I'm wrong.
a prosecutor showing smiling photos of a couple on vacation to argue that he couldn’t have possibly murdered her
I think you meant a defense attorney, not a prosecutor.
It's not clear the anecdotes in that section are real and not made up. Kat is dodging questions about it, so for all we know, everyone referenced in that section could be a Nonlinear employee who feels bad because of Ben's post. Some people elsewhere in this thread theorized that it's Kat describing herself, and conspicuously, she hasn't denied it.
David probably meant "overall character of Nonlinear management" there. And in that case you might not interview the managers themselves, although you'd probably want to interview other employees to see if they were treated like Alice and Chloe.
Can you just confirm that it's something someone else told you, and not referring to yourself in third person?
Phrasings like
"if $58,000 of all inclusive world travel plus $1000 a month stipend is a $70,000 salary"
for what is evidently a fully paid, luxurious work & travel experience... tank the quality of the comment.
Huh? No, that is a succinct and accurate description of a disputed interpretation, and I think Nonlinear's interpretation is wrong there. They keep saying in their defense that they paid Alice (the equivalent of) $72,000 when they didn't - it's really not the same thing at all if 80% of it is comped flights, food, and hotels. At least for me, the amount of cash that would be an equivalent value to Alice's compensation package is something like $30-40,000.
I’m less interested in “debating whether a person in a villa in a tropical paradise got a vegan burger delivered fast enough” or “whether it’s appropriate for your boss to ask you to pick up their ADHD medication from a Mexican pharmacy” or “if $58,000 of all inclusive world travel plus $1000 a month stipend is a $70,000 salary” than in interrogating whether EA wouldn’t be better off with more “boring” organisations
Though the degree of un-professionalism displayed by all parties involved in this saga is startling, I actually think EA has a great mix of "b...
I think it's not actually accurate to say that
The vast majority of what they gave is disputing the evidence
as it's constantly interspersed with stuff like how great it is to work in a hot tub.
- [Alice] chose to pay herself an annualized ~$72,000 per year - more than anyone else at the org, and far more than the ~minimum wage she earned in previous jobs.
- This is more than most people make at OpenPhil, according to Glassdoor.
This seems unlikely - these numbers on Glassdoor are way lower than I'd expect for most of these job titles. Can anyone from OP corroborate?
The Glassdoor numbers are outdated. We share salary information in our job postings; you can see examples here ($84K/year plus a $12k 401k contribution for an Operations Assistant) and here (a variety of roles, almost all of which start at $100k or more per year — search "compensation:" to see details).
I am confident many of these salaries are inaccurate. I don't know the pay scales for operations roles, since I've interfaced more with the grantmakers and research associates, but I would be very surprised if these are the current numbers.
When will we learn? I feel that we haven't taken seriously the lessons from SBF given what happened at OpenAI and the split in the community concerning support for Altman and his crazy projects.
Huh? What's the lesson from FTX that would have improved the OpenAI situation?
What are some EA/LW/etc coworking spaces that could accommodate ~10 people for ~5 days? I'm aware of Constellation and Lighthaven (Berkeley, CA), HAIST and MAIA (Cambridge, MA), Wytham Abbey and Trajan House (Oxford, UK), CEEALAR (Blackpool, UK), LEAH and LISA (London, UK), Epistea and Fixed Point (Prague, Czechia). Are there any others?
If you're trying to maximize computational efficiency, instead of building a Dyson sphere, shouldn't you drop the sun into a black hole and harvest the Hawking radiation?
Upvoted your post because you made some good points, but I think your analogy between human cloning and AI training is totally wrong.
Take for example, human reproductive cloning. This is so morally abhorrent that it is not being practised anywhere in the world. There is no black market in need of a global police state to shut it down. AGI research could become equally uncool once the danger, and loss of sovereignty, it represents is sufficiently well appreciated.
There is no black market in human cloning, and no police state trying to stop it, because n...
Can you name some of the red flags to watch for? I'd also be interested in hearing who some of the bad actors are (perhaps in a DM if you don't want them to know they've been spotted).
You mean this?
If so, what part of it do you object to?
This doesn't answer your question, but: I've heard several people opine that "fiscal sponsorship" is a really bad name for what it entails. I work at Epoch, which is a fiscal sponsee of Rethink Priorities (and yes, RP uses the word "sponsee" for us and all their sponsees). My understanding is that we (Epoch) pay some kind of fee to RP (annual? maybe a percentage of our budget? idk), and in return, RP's HR people handle our HR stuff and some of their ops people spend some time doing ops work for us. This is almost the complete opposite of being "fiscally sp...
We just finished hiring a data analyst for October. It's possible that we'll hire another candidate in the future, but the position is not currently taking applications.
I don't think this speaks badly of their skill level, and certainly not of their potential; they just start out in a really unfair circumstance, with a head filled with a bunch of bullshit that needs to be thrown out as cleanly as possible, and Mearsheimer is a great way to do that.
I'm out of the loop; what's the bullshit from high school civics class that needs to be thrown out of my head, and why is Mearsheimer unbalanced but also a good starting point?
Edited to clarify that my experiences were all with the same organization.
Some personal examples:
I worked for an EA-adjacent organization and was repeatedly asked, and witnessed co-workers being asked, to use campaign donation data to solicit people for political and charitable donations. This is illegal[1]. My employer openly stated they knew it was illegal, but said that it was fine because "everyone does it and we need the money". I was also asked, and witnessed other people being told, to falsify financial reports to funders to make it look like we had...
- Voluntary human challenge trials
- Run a real money prediction market for US citizens
- Random compliance stuff that startups don't always bother with: GDPR, purchased mailing lists, D&I training in California, ...
Here are some illegal (or gray-legal) things that I'd consider effectively altruistic though I predict no "EA" org will ever do:
- Produce medicine without a patent
- Pill-mill prescription-as-a-service for certain medications
- Embryo selection or human genome editing for intelligence
- Forge college degrees
- Sell organs
- Sex work earn-to-give
- Helping illegal immigration
I am very happy to clarify topics around nuclear, coming from the energy industry myself.
What part of the energy industry do you work in?
For example, with $2k, I expect I could hire a pub in central London for an evening (or maybe a whole day), with perhaps around 100 people attending. So that's $20 per person, or 1% of the cost of EAG. Would they get as much benefit from attending my event as attending EAG? No, but I'd bet they'd get more than 1% of the benefit.
Actually, I'm not sure this is right. An evening has around 1/10 of the networking duration of a weekend, and number of connections are proportional to time spent networking and to number of participants squared. If this is 1/...
and number of connections are proportional to time spent networking and to number of participants squared
This seems wrong, 1-1s are gated by the fact that there are only so many 30 minute slots in a day. Doubling the number of attendees might allow someone to be slightly more selective in who they network with but it doesn't let them do 4x as many meetings.
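The disagreement between the two models can be made explicit with some made-up numbers (100 vs. 200 attendees, 5 hours of networking, two 30-minute slots per hour — all illustrative assumptions, not figures from the thread):

```python
def quadratic_model(attendees, hours):
    # Claim above: connections scale with time and attendees squared.
    return attendees ** 2 * hours

def capped_model(attendees, hours, slots_per_hour=2):
    # Counter-claim: each person can only hold so many 30-minute 1-1s,
    # no matter how many other attendees are present.
    meetings_per_person = hours * slots_per_hour
    return attendees * meetings_per_person // 2  # each meeting pairs two people

# Doubling attendance quadruples connections under the quadratic model,
# but only doubles them under the slot-capped model:
print(quadratic_model(100, 5), quadratic_model(200, 5))  # 50000 200000
print(capped_model(100, 5), capped_model(200, 5))        # 500 1000
```

The capped model is the binding constraint once the event is large enough that people's calendars fill up; below that size, more attendees really can mean more meetings per person.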
Furthermore, a lion becomes more dangerous as it becomes more intelligent and capable, even if its terminal goal is not "maximize number of wildebeests eaten".
I think you're committing a typical mind fallacy if you think most people would benefit from reading HPMOR as much as you did.
There are lots of these! I saw several orgs that provide that service while I was working on EAGxNYC. I can look them up later and get back to you (but we're really busy with the conference coming up so it might take a while).
I don't expect people would move into such an area for a tiny chance at receiving a payment of this size
This isn't something I expect either, and I think you may be slightly misunderstanding the mechanism by which moral hazard leads to bad outcomes.
When moral hazard hurts regular people who have their money in the banking system, it's not because a bank executive specifically tried to bankrupt their corporation to collect bailout funds from the government. Rather, it is the toxic incentive structure caused by privatized payoffs and socialized losses. These...
I expect, with around 75% confidence, that rapid and unregulated growth and development of AI partners will become a huge blow to society, on a scale comparable to the blow from unregulated social media.
Isn't social media approximately not a problem at all, at least on the scale of other EA causes? There are some disputed findings that it may cause increased anxiety, depression, or suicide among some demographic groups (e.g. Jonathan Haidt claims it is responsible for mental illness in teenage girls and there is an ongoing scientific debate on this) but ev...
If the flooding is predictable, are we causing moral hazard by subsidizing farming in flood-prone areas?
Scott's analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It's not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it's not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.
Hey Andreas! The conference capacity is around 500, we've admitted 404 people so far, and CEA have told me that usually 95% of accepted applicants register and 95% of registered attendees show up to the conference. Therefore we have 500 − 404×0.95×0.95 ≈ 135 spots left, so ideally we'd like to admit another ~150 people (since only about 90% of them would end up attending).
Our acceptance rate is currently 80%, with another 10% waitlisted and 9% rejected.
Beef cattle are not that carbon-intensive. If you're concerned about the climate, the main problem with cattle is their methane emissions.
If I eat beef, my emissions combined with other people's emissions do some amount of harm. If I don't eat beef, other people's emissions do approximately the same amount of harm as there would have been if I had eaten it. The marginal harm from my food-based carbon emissions is really small compared to the marginal harm from my food-based contribution to animal suffering.