My knowledge of Christians and stem cell research in the US is very limited, but my understanding is that they achieved a real slowdown.
Has anyone looked to that movement for lessons about AI?
Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach?
One particularly worrying disagreement is in the range of values: Moorhouse’s range spans 5.1 orders of magnitude, whereas Leech’s spans 12.6 (the participants’ average is 7.6).
What about taking exp(normalize(log(x))) for some normalization function that behaves roughly like vector normalization?
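As a hedged sketch of what I mean (assuming x is a vector of positive estimates, and using ordinary Euclidean vector normalization in log space as the normalize function):

```python
import numpy as np

def log_space_normalize(x):
    """Map estimates into log space, scale the log-vector to unit
    length (plain vector normalization), then map back with exp."""
    logs = np.log(x)
    return np.exp(logs / np.linalg.norm(logs))

# Estimates spanning 7 orders of magnitude get compressed
# into a narrow band while preserving their ordering.
print(log_space_normalize([1e2, 1e5, 1e9]))
```

The point is that wildly divergent order-of-magnitude estimates (like the ranges above) end up on a comparable scale without losing their relative ranking.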
“You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!”
I can speak of one EA institution, which I will not name, that suffers from this. Math and cognitive science majors can get a little too far in EA circles just by mumbling something about AI Safety, without actually engaging with the literature or the community.
So, thanks for posting.
Have you told the institution about this? Seems like a pretty important thing for them to know!
I am commenting to create public knowledge, in a form stronger than a mere upvote, that I think this post is on the right track, and that wellbeing gains from directly tackling loneliness, lack of affection, lack of validation, etc. ought to be a serious cause candidate.
idea: taboo "community building", say "capacity building" instead.
"At least existential"
How do I get into the Groups slack?
Sub-extinction event drills, games, exercises
Civilizational resilience to catastrophes
Someone should build up expertise and produce educational materials / run workshops on questions like
Stitcher please? Many shows are named "after hours" of some kind; I can't find 80k's on there.
Related: the term I've been using lately in thinking about this sort of thing is epistemic public goods, which I think was prompted when I saw Julia Galef tweet about the "epistemic commons".
I think you missed a disadvantage: I think there's a free rider problem where everyone reaps the benefits of the research and it's too easy for a given org to decline funding it.
Overall I like the idea a lot and
Some mechanism may be required to ensure that multiple organisations do not fund the same work.
I hope to find time for this exercise later today.
We need a name for the following heuristic. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable as part of a literature. If you come up with a name I'll certainly credit you in a top-level post!
I heard it from Abram Demski at AISU'21.
Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever LA, which will be worth 100 if you end up in world A, or you ... (read more)
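Since the comment is cut off, here's a minimal sketch of the setup as I read it (the payoffs for a second lever LB are my assumption, not from the original):

```python
def expected_value(p_world_a, payoff_in_a, payoff_in_b):
    """Expected payoff of pulling a lever, given the probability
    that we end up in world A rather than world B."""
    return p_world_a * payoff_in_a + (1 - p_world_a) * payoff_in_b

# Lever LA: worth 100 in world A, assumed worthless in world B.
# Lever LB (hypothetical): worth 60 in either world.
p = 0.5
ev_la = expected_value(p, 100, 0)   # 50.0
ev_lb = expected_value(p, 60, 60)   # 60.0 -- the robust lever wins here
```

Under these assumed numbers, the lever that pays off in both worlds beats the higher-upside lever that only pays off in one.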
One downside of decentralization you missed is the idea that protocols are slower to update than any other software, which in some scenarios leads to a lock-in risk.
To be more specific, suppose mechanism designer A encodes beliefs/values/aesthetics X into a mechanism M, which gets deployed in a robustly decentralized fashion. Then, upon philosophical breakthroughs totally updating X into X', A encodes X' into a new mechanism M'. The troubling idea I'm pointing to is that coordinating the pivot from M to M' seems exceedingly difficult, likely much mor... (read more)
Is there an econ major or geek out there who would like to
something like 5 hours/week, something like $20–40/hr
(EA Forum DMs / firstname.lastname@example.org / disc @quinn#9100)
I'm aware that there are contractor-coordinating services for each of these asks, I just think it'd be really awesome to have one person to do both and to keep the money in the community, maybe meet a future collaborator!
This is odd. I audited/freeloaded at a perfectly mediocre university math department, and they seemed careful to assign the prof whose dissertation was in functional analysis to teach real analysis, and the prof whose dissertation was in algebraic geometry to teach group theory. I only observed the 3rd/4th-year courses, though. For 1st/2nd-year courses, intuitively you'd want the analysts teaching calculus and the logicians teaching discrete, perhaps something like this, but I don't expect a disaster if they crossed the streams, in the way that I so... (read more)
post idea: based on interviews, profile scenarios from software of exploit discovery, responsible disclosure, coordination of patching, etc. and try to analyze with an aim toward understanding what good infohazard protocols would look like.
(I have a contact who was involved with a big patch, if someone else wants to tackle this reach out for a warm intro!)
What if pedant was a sort of "backend" to a sheet UX? A compiler that takes sheet formulae and generates pedant code?
The central claim is that sheet UX is error prone, so why not keep the UX and add verification behind it?
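As a hypothetical sketch of the compiler idea (the function name, the cell-to-variable mapping, and the output syntax are all my inventions for illustration, not pedant's actual format):

```python
import re

def sheet_to_pedant(name, formula, cell_vars):
    """Translate a sheet formula like '=A1*B1' into a named,
    human-readable definition that a checker could then verify."""
    body = formula.lstrip("=")
    # Swap each cell reference for a meaningful variable name.
    for cell, var in cell_vars.items():
        body = re.sub(rf"\b{cell}\b", var, body)
    return f"{name} = {body}"

print(sheet_to_pedant("total_cost", "=A1*B1",
                      {"A1": "unit_price", "B1": "quantity"}))
# -> total_cost = unit_price*quantity
```

The user keeps writing formulae in the sheet; the backend names the cells and hands the resulting definitions to the verifier.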
To partially rehash what was on Discord and partially add more:
great link btw, thanks!
Can people please list who they think should be asked if an EA org wanted to host a debate? I don't know who an excellent moderator would be. My first guess, not being very informed about the space, is that Tyler Cowen would make a good participant on the growth side; I don't know enough about degrowth to even guess who would participate for the degrowth side.
It's in my plans for the next few years. Only thing that would stop me would be a truly fantastic group house community in the states. Cash increasing the margin that goes to donation is one reason, basic cosmopolitan values implying an urge to actually form inside views about other cultures is another.
My shortlist is Singapore, Taipei, or several South American cities. Open to having my mind changed about my neglect of African cities and South Asian cities.
I think it's plausible that it's hard to notice this issue if your personal aesthetic preferences happen to be aligned with TUA. I tried to write here a little questioning how important aesthetic preferences may be. I think it's plausible that people can unite around negative goals even if positive goals would divide them, for instance, but I'm not convinced.
the word "techbros" signals you have a kind of information diet and worldview that I think people have bad priors about
IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn't seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.
If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP's point for them...
Techno-utopian approach (via paper abstract)
This review is great and has gotten a lot of my friends excited about science and being a human. Just watched the movie last night, absolutely loved it.
https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=yLG8yWWHhuTKLbdZA seems like an "I didn't hear about it" kind of thing
Why have I heard about Tyson investing in lab-grown meat, but I haven't heard about big oil investing in renewables?
Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)
It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company", when they could instead be going around saying "we're a powering stuff company"? Being a powering stuff company means you hav... (read more)
Strange. Everyone I watched it with (the second time when I watched it with non-EAs) was impressed and touched. My sister, who has mostly climate change epistemics, was emotionally moved into thinking more about her own extinction concerns (and was very amenable when I explained that pandemics and some AI scenarios are greater threats than climate change).
Reasons not to identify as EA to me are just nuances about identities altogether and trying to keep them small (http://www.paulgraham.com/identity.html).
Don't Look Up might be one of the best mainstream movies for the xrisk movement. Eliezer said it's too on the nose to bear/warrant actually watching. I fully expect to write a review for the EA Forum and LessWrong about xrisk movement building.
CC'd to lesswrong.com/shortform
I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.
In this shortform, I'm going to take a polarity approach. I'm going to bring each pole to its extreme, probably each beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.
Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets... (read more)
I'm imagining myself having a 6+ figure net worth at some point in a few years, and I don't know anything about how wills work.
Do EAs have hit-by-a-bus contingency plans for their net worths?
Is there something easy we can do to reduce the friction of the following process: ask five EAs with trustworthy beliefs and values to form a grantmaking panel in the event of my death. This grantmaking panel could meet for thirty minutes and make a weight-allocation decision in the Giving What We Can app, or they could accept applications and run it ... (read more)
(cc'd to the provided email address)
In Think Tank Junior Fellow, OP writes
Recently obtained a bachelor’s or master’s degree (including Spring 2022 graduates)
How are you thinking about this requirement? Is there something flex about it (like when a startup says they want a college graduate) or are there bureaucratic forces at partner organizations locking it in stone (like when a hospital IT department says they want a college graduate)? Perhaps describe properties of a hypothetical candidate that would inspire you to flex this requirement?
We're writing to let you know that the group you tried to contact (techpolicyfellowship) may not exist, or you may not have permission to post messages to the group. A few more details on why you weren't able to post:
* You might have spelled or formatted the group name incorrectly.
* The owner of the group may have removed this group.
* You may need to join the group before receiving permission to post.
* This group may not be open to posting.
If you have questions related to this or any other Google Group, visit the Help Center at https://support.google.com/a
Ah, just saw email@example.com at the bottom of the page. Sorry, will direct my question to there!
Hi Luke, could you describe a candidate that would inspire you to flex the bachelor's requirement for Think Tank Jr. Fellow? I took time off from credentialed institutions to do Lambda School and work (I didn't realize I wanted to be a researcher until I was already in industry), but I think my overall CS/ML experience is higher than a ton of the applicants you're going to get (I worked on cooperative AI at AI Safety Camp 5 and I'm currently working on multi-multi delegation, hence my interest in AI governance). If possible, I'd like to hear from you how you're thinking about the college requirement before I invest the time into writing a cumulative 1400 words.
Awesome! I probably won't apply as I lack political background and couldn't tell you the first thing about running a poll, but my eyes will be keenly open in case you post a broader data/analytics job as you grow. Good luck with the search!
I'm thrilled about this post - during my first two to three years of studying math/CS and thinking about AGI, my primary concern was the rights and liberties of baby agents (but I wasn't giving suffering nearly adequate thought). Over the years I became more of an orthodox x-risk reducer, and while the process has been full of nutritious exercises, I fully admit that becoming orthodox is a good way to win colleagues, not get shrugged off as a crank at parties, etc., and this may have played a small role, if not motivated reasoning then at least humbly deferring... (read more)
Hey, glad you liked the post! I don't really see a tradeoff between extinction risk reduction and moral circle expansion, except insofar as we have limited time and resources to make progress on each. Maybe I'm missing something?
When it comes to limited time and resources, I'm not too worried about that at this stage. My guess is that by reaching out to new (academic) audiences, we can actually increase the total resources and community capital dedicated to longtermist topics in general. Some individuals might have tough decisions to face about where they ... (read more)
I've been increasingly hearing advice to the effect that "stories" are an effective way for an AI x-safety researcher to figure out what to work on, that drawing scenarios about how you think it could go well or go poorly and doing backward induction to derive a research question is better than traditional methods of finding a research question. Do you agree with this? It seems like the uncertainty when you draw such scenarios is so massive that one couldn't make a dent in it, but do you think it's valuable for AI x-safety researchers to make significant (... (read more)
So I read Gwern and I also read this Dylan Matthews piece, I'm fairly convinced the revolution did not lead to the best outcomes for slaves and for indigenous people. I think there are two cruxes for believing that it would be possible to make this determination in real-time:
One of my core assumptions, which is up for debate, is that EAs ought to focus on outcomes for sla... (read more)
I'm puzzled by the lack of push to convert Patrick Collison. Paul Graham once tweeted that Stripe would be the next Google, so if Patrick Collison doesn't qualify as a billionaire yet, it might be a good bet that he will someday (I'm not strictly basing that on PG's authority; I'm also basing that on my personal opinion that Stripe seems like world domination material). He cowrote the piece "We need a science of progress", and from what I heard in this interview, signs point to a very EA-sympathetic person.
My first guess, based on the knowledge I have, is that the abolitionist faction was good, and that supporting them would be necessary for an EA in that time (but maybe not sufficient). Additionally, my guess is that I'd be able to determine this in real time.
Technical AI Safety Podcast
AI X-Risk Podcast
Maybe! I'm only going after a steady stream of 2-3 chapters per week. Be in touch if you're interested: I'm re-reading the first quarter of PLF, since they published a new version after I finished that part.
Thanks for the comment. I wasn't aware of yours and Rohin's discussion on Arden's post. Did you flesh out the inductive alignment idea on lw or alignment forum? It seems really promising to me.
I want to jot down notes more substantive than "wait until I post 'Going Long on FV' in a few months" today.
As Rohin's comment suggests, both aiming proofs about properties of models toward today's type theories and aiming tomorrow's type theories toward ML have two classes of obstacles: 1. is it possible? 2. can it be made co... (read more)