In my understanding, Pascal's Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to give to charity, in order to accomplish something. Let's assume that I don't have money sufficient for that donation, and have no other way to get that money. Ever. I don't care to spend the money I do have on anything else. Then, thinking altruistically, I'll keep negotiating with Pascal's Mugger until we agree on an amount for the mugger to return that, if I receive it, is sufficient to make that charitable do...
Hmm. Interesting, but I don't understand the locality problem. I suspect that you think of consequences not as local but as far-flung, thus involving you in weighing interests of greater significance than you would prefer for your decisions. Is that the locality problem to you?
What an interesting and fun post! Your analysis goes many directions and I appreciate your investigation of normative, descriptive, and prescriptive ethics.
The repugnant conclusion worries me. As a thought experiment, it seems to contain an uncharitable interpretation of principles of utilitarianism.
You take total and average utility to measure increases in individual utility across an existing, constant population. However, those measures, total and average, are not adequate to capture the intuitions people associate with them. Therefore, they sh...
The ConcernedEAs state:
"People with heterodox/'heretical' views should be actively selected for when hiring to ensure that teams include people able to play 'devil’s advocate' authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view"
I disagree. Ability to accurately evaluate the views of the heterodox minority depends on developing a charitable interpretation (not necessarily a steel-manning) of those views. Furthermore, if the majority cannot or wi...
Yeah. I'll add:
There are more but I'm not finished reading them. I can't say that I've learned what I should from all those books, but I got the right idea, more than once, from them.
effectivealtruism.org suggests that EA values include:
Cargill Corporation lists its values as:
Hm, ok. Couldn't Pascal's mugger make a claim to actually being God (with some small probability or very weakly plausibly) and upset the discussion? Consider basing dogmatic rejection on something other than the potential quality of claims from the person whose claims you reject. For example, try a heuristic or psychological analysis. You could dogmatically believe that claims of godliness and accurate probabilism are typical expressions of delusions of grandeur.
My pursuit of giving to charity is not unbounded, because I don't perceive an unbounded need. I...
I think identifying common modes of inference (e.g., deductive, inductive, analogy) can be helpful, if argument analysis takes place. Retrodiction is used to describe a stage of retroductive (abductive) reasoning, and so has value outside a Bayesian analysis.
If there's ever an equivalent in wider language for what you're discussing here (for example, "important premise" for "crux"), consider using the more common form rather than specialized jargon. For example, the EA use of "counterfactual" confuses me about the meaning of what I think are discussio...
I'm not sure I'm understanding. It looks like at some K, you arbitrarily decide that the probability is zero, at a smaller value than the table in the paper suggests. So, in the thought experiment, God decides what the probability is, but you decide that at some K the probability is zero, even though the table lists the N at which the probability is zero, where N > K. Is that correct?
Another way to look at this problem is with respect to whether what is gained through accepting a wager for a specific value is of value to you. The thought experiment assumes that y...
Do you have specific concerns about how the capital is spent? That is, are you dissatisfied and looking to address concerns that you have or to solve problems that you have identified?
I'm wondering about any overlap between your concerns and the OP's.
I'd be glad for an answer or just a link to something written, if you have time.
Well, thank you for the helpful follow-up. I went ahead and bought the book, and will read it. I have browsed three articles and read two through.
The first article was "Animal advocacy's Stockholm Syndrome", written by several authors. The tone of that article is positive toward EA, starting off with "It's time for Effective Altruists in the farmed animal protection movement to expand their strategic imagination, their imagination of what is possible, and their imagination of what counts as effective. ... Effective Altruist support has brought new respect ...
Thank you for the chapter pointers.
You mention obvious reasons. The reasons are not obvious to me, because I am ignorant about this topic. Do you mean that these critics are being self-serving and that some animal advocacy orgs lost funding for other reasons than EA competition or influence?
The book's introduction proposes:
I wrote:
"You need to rock selfishness well just to do charity well (that's my hunch)."
Selfishness, so designated, is not a public health issue nor a private mental health issue, but does stand in contrast to altruism. To the extent that society allows your actualization of something you could call selfishness, that seems to be your option to manifest, and by modern standards, without judgement of your selfishness. Your altruism might be judged, but not your selfishness, like, "Oh, that's some effective selfishness" vs "Oh, that's a poser's selfishness righ...
I understand, Henrik. Thanks for your reply.
The karma system works similarly to highlight information, but there are these edge cases: posts appear on and disappear from the front page based on karma from first page views, and new comments that get negative karma are not listed, by default, among the new comments on the homepage.
The peer review system in scientific research is truly different from a forum of second-tier researchers producing summaries, arguments, or opinions. In the forum there should be encouragement ...
Right, the first class are the use cases that the OP put forward, and vote brigading is something that the admins here handle.
The second class is more what I was asking about, so thank you for explaining why you would want a conversation bubble. I think if you're going to go that far for that reason, you could consider an entrance quiz. Then people who want to "join the conversation" could take the quiz, or read a recommended reading list and then take the quiz, to gain entrance to your bubble.
I don't know how aversive people would find that, but if lack of te...
Hmm. I've watched the scoring of topics on the forum, and have not seen much interest in topics that I thought were important for you, whether because the perspective, the topic, or the users were unpopular. The forum appears to be functioning in accordance with the voting of users, for the most part, because you folks don't care to read about certain things or hear from certain people. It comes across in the voting.
I filter your content, but only for myself. I wouldn't want my peers, no matter how well informed, deciding what I shouldn't read, though I don...
EAs should read more deep critiques of EA, especially external ones
Yes, I gave David my wish list of stuff he could discuss in a comment when he announced his blog. So far he hasn't done that, but he's busy with his chosen topics, I expect. I wrote quite a lot in those comments, but he did see the list.
In an answer to Elliot Temple's question "Does EA Have An Alternative To Rational Written Debate", I proposed a few ideas, including one on voting and tracking of an EA canon of argument...
Thought experiments are usually intended to stimulate thinking, rather than be true to life. Newcomb's problem seems important to me in that it leads to a certain response to a certain kind of manipulation, if it is taken too literally. But let's assume we're all too mature for that.
In Newcomb's problem, a person is given a context, and a suggestion that their behavior has been predicted beforehand, and that the person with that predictive knowledge is telling them about it. There are hypothetical ...
Regarding decision theory: I responded to you on substack. I'll stand by my thought that real-world decisions don't allow accurate probabilities to be stated, particularly in some life-or-death decision. Even if some person offered to play a high-stakes dice game with me, I'd wonder if the dice are rigged, if someone were watching us play and helping the other player cheat, etc.
Separately, it occurred to me yesterday that a procedure to decide how many chances to take depends on how many will meet a pre-existing need of mine, and what costs are associated...
On policy, there's Annie Duke's idea of "resulting": that a policy's success or failure doesn't necessarily speak to whether it was the strategically best choice. Causes of policy failure go beyond the policy specifics; bad luck, for example, is a cause of policy failure. Accordingly, you can be certain your policy choice is the best and still be doubtful that the intended outcome will occur.
There's a bit of irony in that we should also recognize our ignorance of what others want from policy: stated goals are not necessarily shared goals.
There's no agreement that there is a meta-crisis. Yes, there are multiple sources of danger, and they can interact synergistically and strongly (or so I believe), but that's not the same as saying that there must be root causes for those (global, existential) dangers that humanity can address.
If you asked a different question, like: "What are the underlying drivers of the multiple anthropogenic existential threats that we all face, like nuclear war, engineered pandemics, climate destruction, etc?"
You could get some interesting answers from people who think...
There's this thing, "the repugnant conclusion". It's about how, if you use aggregate measures of utility for people in a population, and consider it important that more people each getting the same utility means more total utility, and you think it's good to maximize total utility, then you ought to favor giant populations of people living lives barely worth living.
Yes, it's a paradox. I don't care about it because I can see no reason to want to maximize total utility by increasing a population's size. However, by thinking so, I'm led down a d...
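The arithmetic behind the conclusion can be sketched in a few lines; the welfare numbers and population sizes here are invented purely for illustration:

```python
# Illustrative sketch of the repugnant conclusion's arithmetic.
# Population A: a small population of very well-off people.
# Population Z: a huge population of people with lives barely worth living.
pop_a = [80] * 1_000      # 1,000 people, welfare 80 each
pop_z = [1] * 100_000     # 100,000 people, welfare 1 each

total_a, total_z = sum(pop_a), sum(pop_z)
avg_a = total_a / len(pop_a)
avg_z = total_z / len(pop_z)

# Total utility favors the huge, barely-happy population...
assert total_z > total_a   # 100,000 > 80,000
# ...even though average utility there is far lower.
assert avg_z < avg_a       # 1 < 80
```

Under pure total-utility maximization, Z beats A; that preference for Z is exactly what the thought experiment calls repugnant.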
Directly address the substance of all criticisms of EA.
Use the source's language as much as you can, rather than adding your own jargon. Using your jargon and writing for other EAs makes you less credible and legitimate. It looks like obfuscation to the source of the criticism and to other outsiders reviewing your response.
Avoid going meta. Going meta t...
Well, I've been noodling on the idea that human physiology defines our senses, our senses limit our ability to represent information to ourselves, and correcting for differences in the sensory representation of different sets of information from the same class allows for better comparisons and other reasoning about each (for example, interpreting). A classic example is television pharmaceutical drug ads: typically, the ads present verbal information about the dangers of a medication in tandem with visual information showing happy people benefiting from the same medication.
Does "intuition" have a specific, carefully guarded meaning in moral philosophy? Intuition as I understand it is vague. The term "intuition" captures lots of opinions, preferences, and conclusions that share the attribute of having only a feeling or partial representation to the person holding them. For example, some moral intuitions could develop through, or depend on, personal experience but still have this property of vague representation. For someone using my definition of "intuition", a discussion of whether all moral intuitions are evolutionarily driven seems clearly wrong.
I made a critique of EA that I think qualifies as "deep" in the sense that it challenges basic mechanisms established for Bayesianism as EAs practice it (what you call IBT), but also epistemic motives and attitude. This was not my red-team, but something a bit different.
The Scout Mindset offers a partitioning of attitudes relevant to epistemics if its categories of "scout" and "soldier" are interpreted broadly. If I have an objection to Julia Galef's book "The Scout Mindset", it is in its discussion of odds. Simply the mention of "odds." I see it as a mi...
EDIT: Oh! It was Rockström, but the actual quote is: "The richest one percent must reduce emissions by a factor [of] 30, while the poorest 50% can actually increase emissions by a factor [of] 3", from Johan Rockström at #COP26: 10 New Insights in Climate Science | UN Climate Change. There he is talking about fair and just carbon emissions adjustments. The other insights he listed have economic implications as well, if you're interested. The accompanying report is available here.
The quote is:
"Action on climate change is a matter of intra- and intergeneration...
I read the whole post. Thanks for your work. It is extensive. I will revisit it. More than once. You cite a comment of mine, a listing of my cringy ideas. That's fine, but my last name is spelled "Scales" not "Scale". :)
No. Scout mindset is not an EA problem. Scout and soldier mindsets partition the space of mindsets and prioritize truth-seeking differently. To reject scout mindset is to accept soldier mindset.
Scout mindset is intellectual honesty. Soldier mindset is not. Intellectual honesty aids epistem...
You don't know yet how Shell's ownership affects what Sonnen does in the marketplace. If you think home batteries are a net positive morally, then it's just a matter of comparing the impact of Sonnen with the impact of other companies where you could work.
Home batteries are part of the energy transition at small scale but I don't believe they matter at large scale in terms of reducing climate destruction. However, home batteries are great for buffering against blackouts and if I were a homeowner, I would be grateful to have a battery technology like Sonnen's.
Oh, I see. So by "benign" you mean shaming from folks holding common-sense but wrong conclusions, while by "deserved" you mean shaming from folks holding correct conclusions about consequences of EA actions. And "compromise" is in this sense, about being a source of harm.
I have read the Democratizing Risk paper that got EA criticism and think it was spot on. Not having ever been very popular anywhere (I get by on being "helpful" or "ignorable"), I use my time here to develop knowledge.
Your work and contributions could have good timing right now. You also have credentials and academic papers, all useful to establish your legitimacy for this audience. It might be useful to check to what extent TUA had to do with the FTX crisis, and whether a partitioning of EA ideologies combines or separates the two.
I believe that appeti...
It could be that EA folks:
Is that the compromise you're alluding to when you write:
...But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and
Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.[4] But this is the normal course of a life, including highly moral lives.... But the greater part of it being normal is that all action incurs risk, including moral risk.
It's not correct to say that action deserves criticism, but maybe correct to say that action receives criticism. The relevant distinction to make is why the action brought criticism on it, and that is different case-by-case. The c...
If I understand you:
Survival (resilience) traits and sexual attractiveness (well-being) traits diverge. Either can lead to reproduction. Selection for resilience inhibits well-being. More selection for well-being implies less selection for resilience. Reproduction implies selection for resilience or well-being but not both.
There's some argument about specific examples available like attractiveness of peacocks:
...Surprisingly, we found that peahens selectively attend to only a fraction of this display, mainly gazing at the lower portions of the male train a
Sure, I agree. Technically it's based on OpenAI Codex, a descendant of GPT-3. But thanks for the correction, although I will add that its output is alleged to be copied from, more than inspired by, its training data. Here's a link:
...Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly-used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs
I see the impact of AGI as primarily in the automation domain, and near-term alternatives are every bit as compelling, so no difference there. In fact, AGI might not serve in the capacity that some imagine them, full replacements for knowledge-workers. However, automation of science with AI tools will advance science and engineering, with frightening results rather than positive ones. To the extent that I see that future, I expect corresponding societal changes:
Sure. I'm curious how you will proceed.
I'm ignorant of whether AGI Safety will contribute to safe AGI or to AGI development. I suspect that researchers will shift to capabilities development without much prompting. I worry that AGI Safety is more about AGI enslavement. I've not seen much defense or understanding of rights, consciousness, or sentience assignable to AGI. That betrays a lack of concern over social justice and related workers' rights issues. The only scenarios that get attention are the inexplicable "kill all humans" scenarios, but not the more...
I am interested in early material on version space learning and decision-tree induction, because they are relatively easy for humans to understand. They also provide conceptual tools useful to someone interested in cognitive aids.
Given the popularity of neural network models, I think finding books on their history should be easier. I know so little about genetic algorithms: are they part of mainstream ML now, or have they been abandoned? No idea here. I could answer that question with 10 minutes on Wikipedia, though, if my experience follows what is typical.
You seem to genuinely want to improve AGI Safety researcher productivity.
I'm not familiar with resources available on AGI Safety, but it seems appropriate to:
The 6th Assessment Reports
Key Climate Reports: The 6th (latest) Assessment Reports and additional reports covering many
You wrote
Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.
and
...Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, persona
When society includes widespread use of life extension technology, a few unhealthy trends could develop.
the idea of being "forced to live" will take on new and different meanings for folks in a variety of circumstances, testing institutional standards and norms that align with commonly employed ethical heuristics. Testing the applicability of those heuristics will result in numerous changes to informed and capable decision-making in ethical domains.
life-extension technology will become associated with lo
Sizable government rebates on purchase of new human-powered vehicles, including but not limited to bicycles and electric bicycles.
Cluster thinking could provide value. It's not quite the same as moral uncertainty, in that cluster thinking has broader applicability, but it involves the same type of "weighted" judgement. I disagree with moral uncertainty as a personal philosophy, given the role I suspect self-servingness plays in personal moral judgements. However, cluster thinking applied in limited decision-making contexts appeals to me.
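One way to picture that kind of "weighted" judgement is a simple weighted aggregation across independent perspectives; the perspectives, scores, and trust weights below are hypothetical, not taken from any particular account of cluster thinking:

```python
# Hypothetical sketch of cluster-style weighted judgement: several
# independent perspectives each score an option in [0, 1], and the
# final judgement weights each perspective by how much we trust it.
perspectives = {
    "expected-value model": (0.9, 0.5),  # (score, trust weight)
    "common-sense check":   (0.2, 0.3),
    "expert opinion":       (0.4, 0.2),
}

weighted = sum(score * w for score, w in perspectives.values())
total_w = sum(w for _, w in perspectives.values())
judgement = weighted / total_w
print(round(judgement, 2))  # -> 0.59: a blend, not any single model's verdict
```

The point of the sketch is that no one perspective dominates: a high expected-value score gets pulled down by the other clusters rather than deciding the question by itself.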
A neglected area of exploration in EA is selfishness, and self-servingness along with it. Both influence worldview, sometimes on the fly, and are not necessari...
Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:
Although I'm not a fan of subjective probabilities, that could be because I don't make a lot of wagers.
There are other ways to qualify or quantify differences in expectation of perceived outcomes before they happen. One way is by degree or quality of match of a prototypical situation to the current context. A prototypical situ...
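One minimal way to quantify "degree of match" to a prototype is feature overlap (Jaccard similarity); the situations and features here are invented for illustration:

```python
# Hypothetical sketch: score how well the current context matches a
# prototypical situation by the overlap of their salient features.
def match_score(prototype: set, context: set) -> float:
    """Jaccard similarity: |intersection| / |union|, in [0, 1]."""
    return len(prototype & context) / len(prototype | context)

prototype = {"high stakes", "one-shot", "untrusted counterparty"}
context   = {"high stakes", "repeated game", "untrusted counterparty"}

print(match_score(prototype, context))  # 2 shared features of 4 total -> 0.5
```

A richer version might weight features by salience, but even this crude score gives a graded "quality of match" without stating a subjective probability.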