All of NicholasKross's Comments + Replies

I'm a Definooooor! I'm gonna Defiiiiiiine! AAAAAAAAAAAAAAAA

I like circles, though my favorites are (of course) boxes and arrows.

3
Devin Kalish
15d
Pinea did complain about how many dimensions I wanted in my ethics...

TIL that a field called "argumentation theory" exists, thanks!

Reading this quickly on my lunch break, seems accurate to most of my core points. Not how I'd phrase them, but maybe that's to be expected(?)

Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.

(There's a question of who would actually read those responses, and correspondingly where they'd be published, but that's a key question that all persuasive-media-creators should be answering anyway.)

7
Evan_Gaensbauer
3mo
I'm considering writing a reply to one or more of Bordelon's reports. Aiding others who might want to do so is one of the main reasons I shared the document. Given my understanding that POLITICO is widely read by policymakers in DC, another reason I shared it is for more EAs to at least be aware of how they're being perceived in DC, for better or worse. If I wind up writing a response, I'm not sure where I might publish it, though the EA Forum would likely be one platform. Beyond EAs, it could also serve as a resource to share with those outside of EA.

Yeah I get that, I mean specifically the weird risky hardcore projects. (Hence specifying "adult", since that's both harder and potentially more necessary under e.g. short/medium AI timelines.)

Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.

Why hasn't e.g. OpenPhil funded this project? https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing

3
Ian Turner
3mo
A number of global health and development interventions achieve a significant part of their benefits through higher adult IQ. This is especially true of deworming, somewhat true of anti-malaria projects, and possibly true of childhood vaccinations.
2
Karthik Tadepalli
3mo
Here's a point in favor of reference class skepticism. (see top comment)
3
titotal
3mo
Looking at the linked post, one paragraph jumps out at me. Ignoring the spin, what that paragraph actually says is "I sent this proposal to a bunch of experts and they said it probably wouldn't work". So my guess as to why nobody is funding this is that it probably wouldn't work.

There's a new chart template that is better than "P(doom)" for most people.

Have long hoped someone would do this thoroughly, thank you.

3
trammell
8mo
Thanks! Hardly the first version of an article like this (or the most clearly written), but hopefully a bit more thorough…!

Much cheaper, though still hokey, ideas that you should have already thought of at some point:

  • A "formalization office" that checks and formalizes results by alignment researchers. It should not take months for a John Wentworth result to get formalized by someone else.
  • Mathopedia.
  • Alignment-specific outreach at campuses/conventions with top cybersecurity people.

Maybe! I'm most interested in math because of its utility for AI alignment and because math (especially advanced math) is notoriously considered "hard" or "impenetrable" by many people (even people who otherwise consider themselves smart/competent). Part of that is probably lack of good math-intuitions (grokking-by-playing-with-concept, maths-is-about-abstract-objects, law-thinking, etc.).

Yeah, we'd hope there's a good bit of existing pedagogy that applies to this. Not much stood out to me, but maybe I haven't looked hard enough at the field.

We ought to have a new word, besides "steelmanning", for "I think this idea is bad, but it made me think of another, much stronger idea that sounds similar, and I want to look at that idea now and ignore the first idea and probably whoever was advocating it".

Good points, thanks! (Mainly the list part)

Thank you! Another person pointed this out on LW.

This post/cause seems sorely underrated; e.g., what org exists that someone can donate to for mass case detection? It has such a high potential lives-saved-per-$1,000!

4
Jon Servello
1y
Thanks Nicholas. I'm still advocating for this and submitted a more specific project proposal to several EA-affiliated organisations in late 2022. I understand at least two of these organisations are exploring TB as a potential cause area. I would love to join the Charity Entrepreneurship Incubation Programme with this idea, and have also considered founding a charity independently. Perhaps one day I can direct you to an organisation I'm directly involved in.

Until then, a good first reference might be the grantees of the TB REACH programme, listed (alongside the results of their grants) in this PDF: https://stoptb.org/assets/documents/resources/publications/technical/TB_Case_Studies.pdf. Another idea would be to donate directly to the Stop TB Partnership, part of UNOPS. These are mostly larger organisations that have directed some of their resources to case detection, rather than dedicated charities. As far as I know, no such single-focus charity exists (yet).

OK, thanks! Also, after more consideration and object-level thinking about the questions, I will probably write a good bit of prose anyway.

How would you respond to essays that are substantially or mostly in the form of bullet-points, lists, tables, and other information organization methods besides prose? (Prior discussion here, here, and here, to get a sense of why I'm interested in doing this.)

6
Jason Schukraft
1y
Hi Nicholas,

The details and execution probably matter a lot, but in general I'm fine with bullet-point writing. I would, however, find it hard to engage with an essay that was mostly tables with little prose explaining the relevance of the tables.

I have a question.

IF:

  • we can submit multiple entries (but only one will win), AND
  • judging is based on 67% uncovering considerations and 33% clarifying concepts,

THEN, would you prefer if I:

  • make one large entry that puts all my research/ideas/information in one place, OR
  • make several smaller entries, each one focusing on a single idea?

(Assuming this is for answering one question. Presumably, since multiple entries are allowed, I could duplicate this strategy for the other question, or even use a different one for each. But if I'm wrong about this, I'd also like to know that!)

6
Jason Schukraft
1y
Hi Nicholas,

Thanks for your question. It's a bit difficult to answer in the abstract. If your ideas hang together in a nice way, it makes sense to house them in a single entry. If the ideas are quite distinct and unrelated, it makes more sense to house them in separate entries.

Another consideration is length. Per the contest guidelines, we're advising entrants to shoot for a submission length of around 5,000 words (though there are no formal word limits). All else equal, I'd prefer three 5,000-word entries to one 15,000-word entry, and I'd prefer one 5,000-word entry to ten 500-word entries.

Hope this helps.

Jason

Enough cheating at business, we must cheat at League next

It is more dramatic to break the curfew tho

I was almost too lazy to even write my post this year, please TLDR this setup and explain how I can receive money and social status and other personal gains thank you

I hereby request funding for more overwrought posts about the community's social life, as they are a cost-effective way to do this.

I feel targeted 📝📝📝📝📝📝📝

TLDR: 📝📝📝📝📝📝📝📝

Buy enough darkweb stimulants to move up a rank in League.

Way ahead of you, but 6 months of stimulants cost less than a catered dinner--only a few hundred thousand dollars. 

And League is impossible! It is so hard! How do people work hard to accomplish things the normal way?

We can stop Big Chicken with Big Fox. Big Malaria can be prevented with Big Net.

This is interesting, but I'm not sure I'll have the time to listen to it. Maybe make transcripts of these audio versions?

Buy illicit nootropics for everyone in the community. They can't stop us all!

3
I_machinegun_Kelly
1y
Okay, send me their addresses.

Guys i fixed the formatting :3

Wait is the lazy susan built into the table itself? Now that's flexible career capital!

We've also ordered ten custom lazy-susan tables from Japan.

I want to ask for a source, but I'm not sure how to source this (maybe like an FLI tax form?). Where did that news outlet's document come from? Did they make it up? EDIT: nvm, found their actual statement.

Agreed, with the caveat that people (especially those inexperienced with the media and/or the specific sub-issue they're being asked about) go in with decent prep. This is not the same as being cagey or reserved, which would probably lower the "momentum" of this whole thing and make change less likely. Yudkowsky, at some points, has been good at balancing "this is urgent and serious" with "don't froth at the mouth", and plenty of political activists work on this too. Ask for help from others!

3
RedStateBlueState
1y
Part of the motivation for this post is that I think AI Safety press is substantially different from EA press as a whole. AI safety is inherently a technical issue which means you don’t get this knee-jerk antagonism that happens when people’s ideology is being challenged (ie when you tell people they should be donating to your cause instead of theirs). So while I haven’t read the whole EA press post you linked to, I think parts of it probably apply less to AI.

This font is coursing through my veins and increasing my IQ thank you

Strong agree, hope this gets into the print version (if it hasn't already).

Personal feelings: I thought Karnofsky was one of the good ones! He has opinions on AI safety, and I agree with most of them! Nooooooooooo!

Object-level: My mental model of the rationality community (and, thus, some of EA) is "lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also have predictable downsides."

Given this, I'm pessimistic that, in our current setup, we're able to attract the absolute "best and brightest and also most ethical and also most e... (read more)

I think a common maladaptive pattern is to assume that the rationality community and/or EA is unusually good at "increasing our rationality, comprehending big problems", and I really, really, really doubt that "the most 'epistemically rigorous' people are writing blog posts".

I think I agree with both of these, actually: EA needs unusually good leaders, possibly better than we can even expect to attract.

(Compare EA with, say, being an elite businessperson or politician or something.)

Ah, thank you!

paraphrased: "morality is about the interactions that we have with each other, not about our effects on future people, because future people don't even exist!"

If that's really the core of what she said about that... yeah maybe I won't watch this video. (She does good subtitles for her videos, though, so I am more likely to download and read those!)

Agree, I don't see many "top-ranking" or "core" EAs writing exhaustive critiques (posts, not just comments!) of these critiques. (OK, they would likely complain that they have better things to do with their time, and they often do, but I have trouble recalling any aside from (debatably) some of the responses to AGI Ruins / Death With Dignity.)

4
Denkenberger
1y
As was said elsewhere, I think Holden’s is an example. And I think Will questioning the hinge of history would qualify as a deep critique of the prevailing view in X risk. There are also examples of the orthodoxy changing due to core EAs changing their minds, like switching to the high fidelity model, away from earning to give, towards longtermism, towards more policy.  

Agreed. When people require literally everything to be written in the same place by the same author/small-group, it disincentivizes writing potentially important posts.

Strong agree with most of these points; the OP seems to not... engage on the object-level of some of its changes. Like, not proportionally to how big the change is or how good the authors think it is or anything?

Reminder for many people in this thread:

"Having a small clique of young white STEM grads creates tons of obvious blindspots and groupthink in EA, which is bad."

is not the same belief as

"The STEM/techie/quantitative/utilitarian/Pareto's-rule/Bayesian/"cold" cluster-of-approaches to EA, is bad."

You can believe both. You can believe neither. You can believe just the first one. You can believe just the second one. They're not the same belief.

I think the first one is probably true, but the second one is probably false.

Thinking the first belief is true, is nowhere ne... (read more)

Who should do the audit? Here are some criteria I think could help:

  • Orgs that don't get a high/any % of their funding from the individuals/groups under scrutiny.
  • People who've been longtime community members with some level of good reputation in it.
  • Orgs that do kinda "meta" things about the EA movement, like CEA or Nonlinear (disclosure: I used to volunteer for Nonlinear).

3
Devin Kalish
1y
I think if we do an audit, we shouldn’t hire someone for it who’s part of EA at all.