Quick takes

Linch · 1d
"There are more things in heaven and earth, Horatio, / Than are dreamt of in your philosophy"

One thing I've been floating for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, that are qualitatively, not just quantitatively, different from anything we've observed to date.

For background, I think normal, secular humans live in three conceptually distinct but overlapping worlds:

1. The physical world: matter, energy, atoms, stars, cells. A detached external observer might think that's all there is to our universe.

2. The mathematical world: mathematics, logic, abstract structure, rationality, "natural laws." Even many otherwise-strict "materialists" can see how the mathematical world is conceptually distinct from the physical one: mathematical truths seem conceptually different from, and perhaps deeper than, mere physical facts. If you're a robot or a present-day LLM, you might live in only the first two worlds[2]. Some Kantians try to ground morality entirely within this world, in the logic of cooperation and strategic interaction.

3. The world of consciousness: the experiential realm. Qualia, subjective experience, "what it's like to be me." Most secular moral philosophers treat this as where the real moral action is. A pure hedonic utilitarian might think conscious experience is the only thing that matters, but even other moral philosophies consider conscious experience extremely important (usually the most important thing).

For the purposes of this post, I'm not that interested in delineating whether these worlds are truly different or just conceptually useful ways to talk about things (i.e., I'm not taking a strong position on mathematical platonism or consciousness dualism). What interests me is how these different worlds ground morality and value, what some philosophers would call "axiology." When people try to solely ground morality
If you work in an office with other EAs or interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations, and even better, some counterfactual forum posts.

PS: I'm aware this looks a bit like 'people selling mirrors'.
Linch · 1d
I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on trying to make their research popular. E.g., they don't do enough to actively promote their research, or to write it in a way that can easily become popular.

I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.

I think this is a systematic mistake most of the time. It's true that your impact often routes through a small number of people. However, only some of the time would you know who the decisionmakers are ahead of time (e.g., X philanthropic fund should fund Y project, B regulator should loosen regulations in C domain) and have a plan for directly reaching them. In the other cases, you probably need to reach at minimum thousands of vaguely-related/vaguely-interested people before the ~5 people most relevant to your research come across it.

Furthermore, popularity has other advantages:

* If many people read your writing, it's more likely someone will discover empirical mistakes, logical errors, or (on the upside) unexpected connections. If 100 randos read your article, it's unlikely any of them will catch a critical mistake. This becomes much more likely at 10,000+ randos.
* Writing for a semi-popular audience forces some degree of simplicity and a different type of rigor. If you write for "informed people" or "vaguely related experts" as opposed to people in your subsubfield, you have fewer shared assumptions, and are forced to use less jargon and be more precise about your claims.
* Recruitment and talent attraction. If your research agenda is good, you want other people to work on it. Popular writing is one of the best ways to get other smart people (with or without directly relevant expertise) to notice a problem
quinn · 2d
I'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to do it, since I took roughly a 50% pay cut to do AI safety nonprofit work. It seems weird to make such a financial commitment, which implicates my future wife, whom I have presumably not met yet, especially when I'm scraping by without many savings per paycheck.

Is there a sense in which I am already diamond-emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.
Linch · 2d
On the off chance anybody is both interested in AI news and missed it: Anthropic sued the DoW and other government officials/agencies over the supply chain risk designation, in the D.C. and Northern California circuits. The full text of the Northern California complaint is here:

The primary complaints:

1. First Amendment retaliation. Anthropic alleges that Pentagon officials illegally retaliated against the company for its position on AI safety. It argues that Trump, Hegseth, and others wanted to punish Anthropic for protected speech, citing public social media posts and other dialogue as evidence that the punishment is ideological in nature.

2. Misuse of the supply chain risk designation. Anthropic was officially designated a supply chain risk, which requires defense contractors to certify that they don't use Claude in their Pentagon work. Anthropic argues that this is a misuse of the SCR designation, which Congress intended for foreign actors, and that Anthropic clearly does not pose a supply-chain risk on a plain reading of the law.

3. Lack of due process (Fifth Amendment violation). "The Challenged Actions arbitrarily deprive Anthropic of those interests without any process, much less due process."

4. Ultra vires. Anthropic alleges that the Presidential Directive requiring every federal agency to immediately cease all use of Anthropic's technology exceeds the limits of the President's authority as granted by Congress.

5. Administrative Procedure Act. Similar to the above, Anthropic argues that the administration violated the Administrative Procedure Act, and that the sanctions exceed the authority Congress granted to the relevant agencies.

IANAL, etc. In my personal opinion, #2 seems very clear-cut on a common-language and precedent-based reading. #1 also seems strong. Sources I skimmed online thought #3–#5 had a good case too, but I don't have an independent view. The D.C. complaint looks less meaty (and I didn't read it).