There are definitely a lot of examples of places where some rich people wanted to try to create a kinda dumb, socially-useless tax haven, and then they accomplished that goal, and then the resulting entity had either negative impact or close-to-zero impact on the surrounding area. (I don't know much about Monaco or the Cayman Islands, but these seem like potentially good examples?) But there have also been times when political leaders have set out to create sustained, long-term, positive-sum economic growth, and this has also occasionally been achiev...
Hyperbolic discounting, despite its reputation for being super-short-term and irrational, is actually better in this context, and doesn't run into the same absurd "value an extra meal in 10,000 years more than a thriving civilization in 20,000 years" problems of exponential discounting.
Here is a nice blog post arguing that hyperbolic discounting is actually more rational than exponential: hyperbolic discounting is what you get when you have uncertainty over what the correct discount rate should be.
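A quick numerical sketch of that argument (my own illustration, not taken from the linked post; the Gamma prior and its parameters are assumed purely for convenience): if you average exponential discount factors over an uncertain rate, the resulting curve is hyperbolic rather than exponential.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertain discount rate: r ~ Gamma(shape, scale) -- an assumed prior,
# chosen because it gives a clean closed form, not the post's exact model.
shape, scale = 1.0, 0.05
r = rng.gamma(shape, scale, size=200_000)

t = np.array([1.0, 10.0, 100.0])  # time horizons

# Average the exponential discount factor e^(-r*t) over the uncertain rate.
mixture = np.exp(-np.outer(t, r)).mean(axis=1)

# Closed form for a Gamma prior: E[e^(-r*t)] = (1 + scale*t)^(-shape),
# which is a hyperbolic discount curve.
hyperbolic = (1 + scale * t) ** (-shape)

print(np.round(mixture, 3))
print(np.round(hyperbolic, 3))
```

The mixture decays like 1/(1 + 0.05t) rather than e^(-0.05t), so distant futures are discounted far less harshly: at t=100 the hyperbolic factor is about 0.17, versus roughly 0.007 for a fixed 5% exponential rate.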
Nice! I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.
Very interested to find out some of the details here:
Side note: Greg held two roles: chair of the board, and president. It sounds like he was fired from the former role and resigned from the latter.
Regarding the second question, I made this prediction market: https://manifold.markets/JonasVollmer/in-a-year-will-we-think-that-sam-al?r=Sm9uYXNWb2xsbWVy
Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't. (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc? Even tiny mammals like mice/rats display sophisticated social behaviors...)
I tend to assume that some kind of panpsychism is true, so you don't...
Why would showing that fish "feel empathy" prove that they have inner subjective experience? It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy. Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?
Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy? Fish are very simple crea...
April fools' day request:
I was reading the OpenAI blog post "Learning to Summarize with Human Feedback" from the AI Safety Fundamentals course (https://openai.com/research/learning-to-summarize-with-human-feedback), especially the intriguing bit at the end about how if they try to fully optimize the model for maximum reward, they actually overfit and get lower-quality responses.
My ill-advised request is that I would just LOVE to see the EA Forum's "summaryBot" go similarly haywire for a day and start summarizing every post in the same repetitive / aggressi...
Here is a post of mine where I try to explore what a consanguinity-based intervention might look like, and what some of the benefits (cultural as well as health!) might be.
This is great; I've used it a few times over the past month and it's been interesting/helpful!
Here is a suggestion for a very similar tool: I would love to use some kind of "arbitrage calculator". If I think that two markets with different prices have substantially the same criteria (for example, these three markets, which were priced at 24%, 34%, and 53% before I stepped in), obviously I can try to arbitrage them! But there are many complications that I haven't been able to think through very clearly:
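Setting those complications aside, the zero-fee core of such a calculator is easy to sketch (the function name and the no-fees/no-price-impact assumptions are mine):

```python
# Sketch: guaranteed profit from two markets on the SAME event trading at
# different prices. Ignores fees, AMM price impact from your own trades,
# and the capital being locked up until resolution.

def arbitrage_per_contract(p_low: float, p_high: float) -> float:
    """Buy YES in the cheaper market and NO in the dearer one.

    Cost: p_low (the YES share) + (1 - p_high) (the NO share).
    Exactly one of the two shares pays out $1 at resolution, so the
    guaranteed profit per contract pair is 1 minus the total cost.
    """
    cost = p_low + (1 - p_high)
    return 1.0 - cost

# The outermost pair of the three example prices from the comment: 24% and 53%.
profit = arbitrage_per_contract(0.24, 0.53)
print(f"{profit:.2f}")
```

With the quoted markets, pairing the 24% and 53% ones locks in $0.29 per $0.71 staked, about a 41% return on capital at resolution, before any of the real-world complications above (fees, the AMM moving prices as you trade, and resolution risk if the criteria turn out not to be identical after all).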
The framing of this does indeed sound like an accusation, and I kind of agree with Matthew Barnett that if you actually asked for "comment on the general trend", Caplan would just respond that he thinks he's right on all those things and that libertarianism is simply a good ideological lens.
But I totally agree that it would be great to ask for "examples of views he holds that are most inconvenient for his politics" -- this seems like a generally interesting/underrated interview question!
I would be interested in hearing more from Caplan about "stable totalitarianism", but not if it's just going to be a retread of the abstract concept that stable totalitarianism seems bad, what if Stalin had lived forever, etc. Some questions I'd be interested in:
Caplan has a lot of kids, wrote a book about why it's a good idea to have kids, and puts a lot of special effort into parenting (ie, teaching his children advanced economics). To some extent this has been talked about in previous podcasts, but it would be interesting to hear some more from him about the ups and downs of parenting, what his advice would be to prospective parents, etc.
I would be interested to hear Bryan Caplan's take on Georgism (Tyler Cowen, for instance, thinks it's a bad idea) -- in general, Caplan is opposed to Pigouvian taxation, despite its appealing efficiency on paper, because he thinks it's all too easy for government to start calling anything it doesn't like a "negative externality", thus eroding people's freedoms. I can see where he's coming from. But maybe land value taxes could be a good idea even if we don't jump all the way to "tax everything we can think of that strikes us as a negative...
Kind of a repetitive stream-of-consciousness response, but I found this both interesting as a philosophical idea and also annoying/cynical/bad-faith:
This is interesting but also, IMO, kind of a strawman -- what's being attacked is some very specific form of utilitarianism, whereas I think many/most "longtermists" are just interested in making sure that we get some kind of happy long-term future for humanity and are fuzzy about the details. Torres says that "Longtermists would surely argue...", but I would like to see some real longtermists quoted as a...
I think the downvotes are coming from the fact that Émile P. Torres has been making similar-ish critiques on the concept of longtermism for a while now. (Plus, in some cases, closer to bad-faith attacks against the EA movement, like I think at one point saying that various EA leaders were trying to promote white supremacism or something?) Thus, people might feel both that this kind of critique is "old news" since it's been made before, and they furthermore might feel opposed to highlighting more op-eds by Torres.
Some previous Torres content whi...
Various "auto-GPT" schemes seem like a good demonstration of power-seeking behavior (and perhaps very limited forms of self-preservation or self-improvement), insofar as auto-GPT setups will often invent basic schemes like "I should try to find a way to earn some money in order to accomplish my goal of X", or "I should start a twitter account to gain some followers", or other similarly "agenty" actions/plans.
This might be a bit of a stretch, but to the extent that LLMs exhibit "sycophancy" (ie, telling people what they want to hear in response to stuff lik...
I'm definitely not deeply familiar with any kind of "official EA thinking" on this topic (ie, I don't know any EAs that specialize in nuclear security research / grantmaking / etc). But here are some things I just thought up, which might possibly be involved:
reposting a reply by Omnizoid from Lesswrong:
"Philosophy is pretty much the only subject that I'm very informed about. So as a consequence, I can confidently say Eliezer is eggregiously wrong about most of the controversial views I can fact check him on. That's . . . worrying."
And my reply to that:
Some other potentially controversial views that a philosopher might be able to fact-check Eliezer on, based on skimming through an index of the sequences:
I suggest maybe re-titling this post to:
"I strongly disagree with Eliezer Yudkowsky about the philosophy of consciousness and decision theory, and so do lots of other academic philosophers"
or maybe:
"Eliezer Yudkowsky is Frequently, Confidently, Egregiously Wrong, About Metaphysics"
or consider:
"Eliezer's ideas about Zombies, Decision Theory, and Animal Consciousness, seem crazy"
Otherwise it seems pretty misleading / clickbaity (and indeed overconfident) to extrapolate from these beliefs, to other notable beliefs of Eliezer's -- such as cryonics, quantum mec...
I agree, and I think your point applies equally well to the original Eliezer Zombie discussion, as to this very post. In both cases, trying to extrapolate from "I totally disagree with this person on [some metaphysical philosophical questions]" to "these people are idiots who are wrong all the time, even on more practical questions", seems pretty tenuous.
But all three parts of this "takedown" are about questions of philosophy / metaphysics? How do you suggest that I "follow the actual evidence" and avoid "first principles reasoning" when we are trying to learn about the nature of consciousness or the optimal way to make decisions??
For what it's worth, I didn't mean to come off as being hostile to the idea that EA should pay more attention to the exploitation of poor countries. I really just don't know much about this area -- for example I've never before heard the idea that France has an extractive relationship with former colonies that use the French currency. Maybe to you, "neoliberalism" is a very clear-cut concept. But personally, I would have a hard time telling apart which loans are good and which are ill-intentioned debt traps. (Is the IMF mostly good and Belt-And-Road mo...
"what about Haiti? How many people are in poverty there that can easily be solved, how many lives saved?"
I agree that there is probably a lot more that rich western countries should be doing to make life better for ordinary Haitians. Unfortunately, Step One in almost every conceivable plan is "help establish some kind of functional government in Haiti, to end the ongoing gang-fueled anarchy and violence." And that would involve sending western soldiers to take temporary control of the country, which (justly or not) would be derided by the press and public...
Some discussion of the "career security" angle, in the form of me asking how 1-year-grant recipients are conceiving of their longer-term career arcs: https://forum.effectivealtruism.org/posts/KdhEAu6pFnfPwA5hf/grantees-how-do-you-structure-your-finances-and-career
Radical life extension is IMO a big part of the rationalist worldview, if not the EA movement. (Although recent progress in AI has taken attention away from anti-aging, on the grounds that if we get AI alignment wrong, we're all dead, and if we get alignment right, the superintelligent AI will easily solve aging for us.)
One of the problems with radical life extension as an EA cause area is that it seems like other people ought to be rationally self-interested in funding anti-aging research, so it's not clear why EA should foot the bill:
Health interve...
Roll to disbelieve? 50-100 words is only, like, a couple of tweets, so it is really not much space to communicate many new ideas. Consider some of the most influential tweets you've ever read (either influential on you personally, or on societal discourse / world events / etc). I think that the most impactful/influential tweets are gonna be pretty close to the limit of what's possible when you are just blasting out a standard message to everyone -- even with a superintelligence, I doubt there is much room for improvement.
Now, if you were ...
Seems like this might be a cause that "Canning What We Give" is well-positioned to address! https://forum.effectivealtruism.org/posts/ozyQ7PwhRgP7D9b2w/new-org-canning-what-we-give
(seriously though, this is a really cool post and I enjoyed reading it)
I think the main response would be to try and scale up alternative food supplies such as those being explored by https://allfed.info/, as well as trying to scale up cultivation of whatever plants are immune to the virus (it would presumably be difficult to make a single disease which could wipe out all plants, so if perhaps wheat crops start failing worldwide, maybe we start re-seeding those farms with potatoes/corn/rice or whatever is climatically appropriate).
In general, I think out-of-the-box GCBR ideas are not discussed much in public due to the infoh...
The following is way more speculative and wacky than the proven benefits of family planning that you point out above, but I think it's interesting that there is some evidence that changing family norms around marriage / children / etc might have large downstream effects on culture, in a way that potentially suggests "discouraging cousin marriage" as an intervention to increase openness / individualism / societal-level trust: https://forum.effectivealtruism.org/posts/h8iqvzGQJ9HiRTRja/new-cause-radio-ads-against-cousin-marriage-in-lmic
Well, I wrote this script and not the other one, and I think the idea of publishing draft scripts on the Forum just never occurred to RationalAnimations before? (Even though it seemed like a natural step to me, and indeed it has been helpful.) So naturally I will advocate doing it with future scripts. We already have a couple of other ways of getting feedback, including internal feedback and working with relevant organizations (eg, for this Charter Cities script we got some comments from people associated with the Charter Cities Institute), but the more the merrier when you are trying to get something finalized before animating it for a large audience.
Yes, I think RationalAnimations is actually planning to do an episode on GiveDirectly sometime soon, which is why I nod towards the idea of interventions like cash transfers and bednets at the very beginning of the script -- the GiveDirectly video will probably come out first, so then in the beginning of the Charter Cities video we'll be able to make some visual allusions and put an on-screen title-card link to the GiveDirectly video.
@MvK and @titotal , here is the new section about political tractability:
..."A bigger problem is political feasibility. The whole point of giving a city the ability to write its own rules is to make reform easier, but in order to get that ball rolling, you first need to find a nation willing to give away lots of its own regulation-writing authority in order to enable your charter city project. This isn’t completely unheard of -- in many ways, charter cities are just a bigger and bolder version of “Special Economic Zones”, where a port might be
Yes, in response to MvK's comment, I am reworking the script to add a section (in-between "objection: why whole new cities?" and "wider benefits") about political feasibility, where I will talk about how Paul Romer abandoned the idea after delays and failed projects in Honduras and Madagascar. I'll add another comment here when I update this Forum post with the new draft.
Do you have any suggestions as to which parts of the draft could be cut or made shorter? The current post is already getting a little long compared to our ideal video length of 10-15 mins.
As I mention in my reply to MvK above, I agree that I don't think charter city efforts should literally be funded by EA megadonors or ranked as top charities by GiveWell; I just think they are a potentially helpful idea that the EA movement should support when convenient / be friendly towards. Instead, since charter cities double as a "let's all get rich" thing, they can be mostly funded by investors (just like how investors already fund lots of international development projects -- factories, etc).
Also agree that the benefit vs tractability of charte...
Thanks for this feedback! For more context, the tone of the video is intended to be a kind of middle ground between persuasion and EA-Forum-style information, which I'd describe as "introducing people to a cool and intriguing new idea, as food for thought". (I also see this video as "making sure RationalAnimations has enough interesting variety to keep pulling in new viewers, even though many of our upcoming videos are going to be about more-technical AI alignment topics".) So, the video is definitely trying to be informative and somewhat...
So, the video is definitely trying to be informative and somewhat evenhanded rather than a purely persuasive advertisement for charter cities.
If this is your goal, I'm afraid to say you have not succeeded. I apologise if the following sounds harsh, but you have a platform and a commensurate responsibility towards accuracy.
Ask yourself the question: "Is the viewer coming away from this video with a broadly accurate picture of the facts and the most relevant expert opinions on this topic"? I think the answer is a clear no. If an audience member i...
Just brainstorming here, I have zero experience with actual psychology research:
- It might be interesting to try and identify some psychological traits that lead people to becoming EAs / becoming alignment researchers, in order to aid future recruitment/education/community-building efforts.
- This is a medium-term concern rather than about alignment itself, but I would be interested to get a clearer picture on how "botpocalypse" concerns will play out. (See this ACX article for more detail, as well as the relevant recurring section of theZvi's AI news...
Yeah, as a previous top-three winner of the EA Forum Creative Writing Contest (see my story here) and of Future of Life Institute's AI Worldbuilding contest (here), I agree that it seems like the default outcome is that even the winning stories don't get a huge amount of circulation. The real impact would come from writing the one story that actually does go viral beyond the EA community. But this seems pretty hard to do; perhaps better to pick something that has already gone viral (perhaps an existing story like one of the Yudkowsky essays, or...
This post felt vague and confusing to me. What is meant by a "game board" -- are you referring to the world's geopolitical situation, or the governance structure of the United States, or the social dynamics of elites like politicians and researchers, or some kind of ethereum-esque crypto protocol, or internal company policies at Google and Microsoft, or US AI regulations, or what?
How do we get a "new board"? No matter what kind of change you want, you will have to get there starting from the current situation that the world is in right now...
I think at the very least, I'd expect non-neglected AI safety to look like the global campaigns against climate change, or the US military-industrial complex:
I think it is widely acknowledged that virtue ethics is perhaps easier to live by / more motivating / produces better incentives / etc, on an individual level, than trying to be a hardcore utilitarian in all your daily-life actions. And I agree with Stefan Schubert's linked posts.
But when people look at morality from the perspective of what works best on an individual level, they miss some of the most advantageous things about utilitarianism as it pertains to EA:
Community notes seem like a genuinely helpful improvement on the margin -- but coming back to this post a year later, I would say that on net I am disappointed. (Disclaimer -- I don't use twitter much myself, so I can't evaluate people's claims of whether twitter's culture has noticeably changed in a more free-speech direction or etc. From my point of view just occasionally reading others' tweets, I don't notice any change.)
During the lead-up to the purchase, people were speculating about all kinds of ways that Twitter could try to change its s...
For totally selfish, non-historical reasons, I feel like May 8 is a better date:
December 9 is too close to other rationalist/EA holidays, like Solstice, Giving Tuesday, and Petrov Day.
December 9 is right at the START of the typical cold/flu season, when infectious diseases are the worst. (Although idk if smallpox, plague, typhus, etc, were also seasonal in this way?) Maybe this makes it thematically resonant? But personally, just as Christians celebrate Easter at the end of winter, I feel like smallpox eradication is a good seasonal match as a sp
I agree with the idea that nuclear wars, whether small or large, would probably push human civilization in a bad, slower-growth, more zero-sum and hateful, more-warlike direction. And thus, the idea of civilizational recovery is not as bright a silver lining as it seems (although it is still worth something).
I disagree that this means that we should "try to develop AGI as soon as possible", which connotes to me "tech companies racing to deploy more and more powerful systems without much attention paid to alignment concerns, and spurred on by a sense ...
I agree that hoping for ideal societies is a bit of a pipe dream. But there is some reason for hope. China and Russia, for instance, were both essentially forced to abandon centrally-planned economies and adopt some form of capitalism in order to stay competitive with a faster-growing western world. Unfortunately, the advantages of democracy vs authoritarianism (although there are many) don't seem quite as overwhelming as the advantages of capitalism vs central planning. (Also, if you are the authoritarian in charge, maybe you don't...
Yes, it is definitely a little confusing how EA and AI safety often organize themselves via online blog posts instead of papers / books / etc like other fields! Here are two papers that seek to give a comprehensive overview of the problem:
Hi! Some assorted thoughts on this post:
Yeah, I wondered what threshold to set things at -- $10m is a pretty easy bar for some of these areas, since of course some of my listed cause areas are more niche / fringe than others. I figure that for the highest-probability markets, where $10m is considered all but certain, maybe I can follow up with a market asking about a $50m or $100m threshold.
I agree that $10m isn't "mainstream" in the sense of joining the pantheon alongside biosecurity, AI safety, farmed animal welfare, etc. But it would still be a big deal to me if, say, OpenPhil doubled their...