All of Noah Scales's Comments + Replies

Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:

  • probability that you assign to the Mugger's threat
  • probability that the Mugger or a third party assigns to the Mugger's threat

I'm not a fan of subjective probabilities, though that could be because I don't make a lot of wagers.

There are other ways to qualify or quantify differences in expectation of perceived outcomes before they happen. One way is by degree or quality of match of a prototypical situation to the current context. A prototypical situ... (read more)

Simple and useful, thanks.

In my understanding, Pascal's Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to give to charity in order to accomplish something. Let's assume that I don't have money sufficient for that donation, and have no other way to get it. Ever. I don't care to spend the money I do have on anything else. Then, thinking altruistically, I'll keep negotiating with Pascal's Mugger until we agree on an amount for the mugger to return that, if I earn it, is sufficient to make that charitable do... (read more)

1
tobycrisford
1y
When you write: "I decide what the probability of the Mugger's threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger) because no matter how many people that the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines with the increasing number of people that the mugger threatens, or so I believe. In that case, aren't people better off if I give that money to charity after all?" This is exactly the 'dogmatic' response to the mugger that I am trying to defend in this post! We are in complete agreement, I believe! For possible problems with this view, see other comments that have been left, especially by MichaelStJules.
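A minimal sketch of the arithmetic behind this dogmatic response, with an assumed decay rate (p = 1/N² is purely illustrative; any prior shrinking faster than 1/N gives the same result):

```python
# Illustrative sketch: if your credence in the mugger's threat decays
# faster than 1/N, expected harm stays bounded (here, it shrinks) no
# matter how many people N the mugger claims to threaten.

def dogmatic_prior(n_threatened: float) -> float:
    """Credence p < 1/N, e.g. p = 1/N^2 (an assumed decay rate)."""
    return 1.0 / n_threatened**2

for n in [10, 10**3, 10**6, 10**9]:
    expected_harm = dogmatic_prior(n) * n  # p * N
    print(f"N = {n:>10}: expected harm = {expected_harm:.6f}")

# Expected harm shrinks as N grows, so the mugger's escalating threats
# never dominate a donation with a known positive payoff.
```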

Hmm. Interesting, but I don't understand the locality problem. I suspect that you think of consequences not as local but as far-flung, thus involving you in weighing interests of greater significance than you would prefer for your decisions. Is that the locality problem to you?

1
Markus Bredberg
1y
So, in a general sense, the locality problem is the problem of deciding how much room we should allow for personal decisions, in contrast to impersonal decisions. As I've argued, the personal -- or local, or subjective -- decisions are intuitive and feel meaningful, but are also intrinsically linked with biases. Impersonal -- or Universal, or objective -- decisions are always more correct, but don't allow for any self-expression. The locality problem becomes more urgent -- there is more at stake -- when we increase the scope of our decisions, which we naturally do when we try to achieve more of any (good) thing. Increasing the scope, in turn, tends to make more people affected by any action, and it tends to do so in a less intuitive way, making Universal decision-making better suited. The downside is that less subjective decision-making leaves people feeling less important, and also suppressing their emotions. The dilemma of how personal we can allow decisions to be: that is the locality problem.

What an interesting and fun post! Your analysis goes many directions and I appreciate your investigation of normative, descriptive, and prescriptive ethics.

The repugnant conclusion worries me. As a thought experiment, it seems to contain an uncharitable interpretation of principles of utilitarianism.

  1. You increase total and average utility to measure increases in individual utility across an existing and constant population. However, those measures, total and average, are not adequate to handle the intuition people associate with them. Therefore, they sh

... (read more)

About steel-manning vs charitably interpreting

The ConcernedEAs state:

"People with heterodox/'heretical' views should be actively selected for when hiring to ensure that teams include people able to play 'devil’s advocate' authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view"

I disagree. The ability to accurately evaluate the views of the heterodox minority depends on developing a charitable interpretation (not necessarily a steel-manning) of those views. Furthermore, if the majority cannot or wi... (read more)

Yeah. I'll add:

  • Single-sourcing: Building Modular Documentation by Kurt Ament
  • Dictionary of Concise Writing by Robert Hartwell Fiske
  • The Elements of Style by William Strunk Jr.
  • A Rulebook for Arguments by Anthony Weston

There are more but I'm not finished reading them. I can't say that I've learned what I should from all those books, but I got the right idea, more than once, from them.

6
Geoffrey Miller
1y
I'd also add 'The sense of style: The thinking person's guide to writing in the 21st century' (2015) by Steven Pinker (Harvard Psychologist who focuses on language) -- an excellent book.

effectivealtruism.org suggests that EA values include:

  1. proper prioritization: appreciating scale of impact, and trying for larger scale impact (for example, helping more people)
  2. impartial altruism: giving everyone's interests equal weight
  3. open truth-seeking: including willingness to make radical changes based on new evidence
  4. collaborative spirit: involving honesty, integrity, and compassion, and paying attention to means, not just ends.

Cargill Corporation lists its values as:

  1. Do the Right Thing
  2. Put People First
  3. Reach Higher

Lockheed-Martin Corporation... (read more)

Hm, ok. Couldn't Pascal's mugger claim to actually be God (with some small probability, or very weak plausibility) and upset the discussion? Consider basing dogmatic rejection on something other than the potential quality of claims from the person whose claims you reject. For example, try a heuristic or psychological analysis. You could dogmatically believe that claims of godliness and accurate probabilism are typical expressions of delusions of grandeur.

My pursuit of giving to charity is not unbounded, because I don't perceive an unbounded need. I... (read more)

1
tobycrisford
1y
I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger. But you haven't really avoided the problem, just re-phrased it slightly. Whatever the amount of money you would be willing to risk for others, then on expected utility terms, it seems better to give it to the mugger, than to an excellent charity, such as the Against Malaria Foundation. In this framing of the problem, the mugger is now effectively robbing the AMF, rather than you, but the problem is still there.

I think identifying common modes of inference (e.g., deductive, inductive, analogy) can be helpful, if argument analysis takes place. Retrodiction is used to describe a stage of retroductive (abductive) reasoning, and so has value outside a Bayesian analysis.

If there's an equivalent in wider language for what you're discussing here (for example, "important premise" for "crux"), consider using the more common form rather than specialized jargon. For example, I find that the EA use of "counterfactual" confuses me about the meaning of what I think are discussio... (read more)

I'm not sure I'm understanding. It looks like at some K, you arbitrarily decide that the probability is zero, sooner than the table that the paper suggests. So, in the thought experiment, God decides what the probability is, but you decide that at some K, the probability is zero, even though the table lists the N at which the probability is zero where N > K. Is that correct?

Another way to look at this problem is with respect to whether what is gained through accepting a wager for a specific value is of value to you. The thought experiment assumes that y... (read more)

2
tobycrisford
1y
If we know the probabilities with certainty somehow (because God tells us, or whatever) then dogmatism doesn't help us avoid reckless conclusions. But it's an explanation for how we can avoid most reckless conclusions in practice (it's why I used the word 'loophole', rather than 'flaw'). So if someone comes up and utters the Pascal's mugger line to you on the street in the real world, or maybe if someone makes an argument for very strong longtermism, you could reject it on dogmatic grounds. On your point about diminishing returns to utility preventing recklessness, I think that's a very good point if you're making decisions for yourself. But what about when you're doing ethics? So deciding which charities to give to, for example? If some action affecting N individuals has utility X, then some action affecting 2N individuals should have utility 2X. And if you accept that, then suddenly your utility function is unbounded, and you are now open to all these reckless and fanatical thought experiments. You don't even need a particular view on population ethics for this. The Pascal mugger could tell you that the people they are threatening to torture/reward already exist in some alternate reality.
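A toy sketch of the structural point, with invented probabilities and payoffs: a linear (unbounded) utility in the number of people affected lets the mugger's offer dominate any real charity, while bounding the utility function removes that dominance:

```python
# Invented numbers for illustration: a charity helps 1000 people with
# probability 0.9; the mugger claims to help 10**15 people with tiny
# probability 10**-9.

def expected_value(prob, people, bound=None):
    utility = people if bound is None else min(people, bound)
    return prob * utility

charity = expected_value(0.9, 1_000)
mugger = expected_value(1e-9, 1e15)
print(charity, mugger)  # 900.0 vs 1000000.0: the mugger "wins"

# With a bounded utility function, the ranking flips:
print(expected_value(0.9, 1_000, bound=10_000),    # 900.0
      expected_value(1e-9, 1e15, bound=10_000))    # 1e-05
```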

Do you have specific concerns about how the capital is spent? That is, are you dissatisfied and looking to address concerns that you have or to solve problems that you have identified?

I'm wondering about any overlap between your concerns and the OP's.

I'd be glad for an answer or just a link to something written, if you have time.

Well, thank you for the helpful follow-up. I went ahead and bought the book, and will read it. I have browsed three articles and read two through.

The first article was "Animal advocacy's Stockholm Syndrome", written by several authors. The tone of that article is positive toward EA, starting off with "It's time for Effective Altruists in the farmed animal protection movement to expand their strategic imagination, their imagination of what is possible, and their imagination of what counts as effective. ... Effective Altruist support has brought new respect ... (read more)

Thank you for the chapter pointers.

You mention obvious reasons. The reasons are not obvious to me, because I am ignorant about this topic. Do you mean that these critics are being self-serving and that some animal advocacy orgs lost funding for other reasons than EA competition or influence?

The book's introduction proposes:

  1. sanctuary X lost funding because of EA competition and other EA influence.
  2. legal cases to free animals lose some impetus because without a sanctuary for a freed animal, an abused animal could suffer a worse fate than their current abus
... (read more)
9
zchuang
1y
Disclaimer: I follow animal welfare news as a hobby and out of curiosity, so I definitely am getting things wrong on the object level. Please feel free to push back. A lot of the analysis relies on empirically unproven claims about the counterfactual. A few examples:

  1. In arguments against alt-proteins, the authors argue that a theoretical problem with Impossible meats is that they drive more meat consumption, because people who would have become vegans become flexitarians (a transitive consumption jump I don't buy), or that a family with a vegan teenager is more likely to go to Burger King because Impossible burgers mean the teenager won't throw up a fuss about going. I just don't think the substitution effect works like that, because I can't imagine the condition to hold: (no. of vegan consumers × success rate of moving family) > (no. of reducetarians × consumption reduction by substitution of meal).
  2. On the institutional level, it's something like: sanctuaries lose funding, and the value of cows roaming is a value that can't be measured in QALYs compared to chickens. The EA memeplex around animal welfarism means young would-be activists go toward EA rather than animal sanctuaries. But my thinking is that EA brought in new people and didn't counterfactually reduce the number of people there.
  3. On the funding level, a lot of it is anecdotes about being put off that EAs won't fund people they meet at conferences. For instance, a lot of the sanctuary arguments follow the logic: ACE brought in 3.5 million, and therefore the counterfactual is that if ACE had included sanctuaries and ranked my sanctuary highly, then I would be rolling in it. But I think the mistake lies in the fact that ACE isn't moving donors by designation signalling alone, but by showing the research, and the donors already held EA-type priors on welfare and therefore donate.

The legal case stuff I found the most confusing counterfactually:

  1. A lot of the argumen

I wrote:

"You need to rock selfishness well just to do charity well (that's my hunch)."

Selfishness, so designated, is not a public health issue nor a private mental health issue, but does stand in contrast to altruism. To the extent that society allows your actualization of something you could call selfishness, that seems to be your option to manifest, and by modern standards, without judgement of your selfishness. Your altruism might be judged, but not your selfishness, like, "Oh, that's some effective selfishness" vs "Oh, that's a poser's selfishness righ... (read more)

I understand, Henrik. Thanks for your reply.

Forum karma

The karma system works similarly to highlight information, but there are edge cases. Posts appear and disappear from view based on the karma they receive from first page views. New comments that get negative karma are not listed among the new comments on the homepage, by default.

This forum in relation to the academic peer review system

The peer review system in scientific research is truly different from a forum of second-tier researchers producing summaries, arguments, or opinions. In the forum there should be encouragement ... (read more)

Right, the first class covers the use cases that the OP put forward, and vote brigading is something that the admins here handle.

The second class is more what I was asking about, so thank you for explaining why you would want a conversation bubble. I think if you're going to go that far for that reason, you could consider an entrance quiz. Then people who want to "join the conversation" could take the quiz, or read a recommended reading list and then take the quiz, to gain entrance to your bubble.

I don't know how aversive people would find that, but if lack of te... (read more)

Can you explain with an example when a bubble would be a desirable outcome?

7
Writer
1y
One class of examples could be when there's an adversarial or "dangerous" environment. For example:

  • Bots generating low-quality content.
  • Voting rings.
  • Many newcomers entering at once, outnumbering the locals by a lot. Example: I wouldn't be comfortable directing many people from Rational Animations to the EA Forum and LW, but a karma system based on EigenKarma might make this much less dangerous.

Another class of examples could be when a given topic requires some complex technical understanding. In that case, a community might want only to see posts that are put forward by people who have demonstrated a certain level of technical knowledge. Then they could use EigenKarma to select them. Of course, there must be some way to enable the discovery of new users, but how much of a problem this is depends on implementation details. For example, you could have an unfiltered tab and a filtered one, or you could give higher visibility to new users. There could be many potential solutions.
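I don't know EigenKarma's actual implementation, but a minimal sketch of the general idea (trust as the stationary vector of a vote-weighted graph, PageRank-style, with invented numbers) might look like:

```python
import numpy as np

# Hypothetical trust graph: T[i, j] = how much user i has upvoted user j.
# (Invented numbers; not EigenKarma's real data model.)
T = np.array([
    [0, 3, 1, 0],
    [2, 0, 4, 0],
    [1, 2, 0, 1],
    [0, 0, 5, 0],
], dtype=float)

# Row-normalize so each user distributes one unit of trust.
row_sums = T.sum(axis=1, keepdims=True)
P = T / np.where(row_sums == 0, 1, row_sums)

# Power iteration: karma is the stationary distribution of trust flow,
# with a damping term (as in PageRank) so new users aren't invisible.
n = len(T)
karma = np.full(n, 1 / n)
for _ in range(100):
    karma = 0.85 * (karma @ P) + 0.15 / n

print(karma.round(3))  # higher = trusted by users who are themselves trusted
```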

Hmm. I've watched the scoring of topics on the forum, and have not seen much interest in topics that I thought were important for you, whether because the perspective, the topic, or the users were unpopular. The forum appears to be functioning in accordance with the voting of users, for the most part, because you folks don't care to read about certain things or hear from certain people. It comes across in the voting.

I filter your content, but only for myself. I wouldn't want my peers, no matter how well informed, deciding what I shouldn't read, though I don... (read more)

2
Henrik Karlsson
1y
I think maybe the word "filter" which I use gives the impression that it is about hiding information. The system is more likely to be used to rank order information, so that information that has been deemed valuable by people you trust is more likely to bubble up to you. It is supposed to be a way to augment your abilities to sort through information and social cues to find competent people and trustworthy information, not a system to replace it.

EAs should read more deep critiques of EA, especially external ones

For instance this blog and this forthcoming book

 

Yes, I gave David my wish list of stuff he could discuss in a comment when he announced his blog. So far he hasn't done that, but he's busy with his chosen topics, I expect. I wrote quite a lot in those comments, but he did see the list.

In an answer to Elliot Temple's question "Does EA Have An Alternative To Rational Written Debate", I proposed a few ideas, including one on voting and tracking of an EA canon of argument... (read more)

0
Noah Scales
1y
I wrote: "You need to rock selfishness well just to do charity well (that's my hunch)." Selfishness, so designated, is not a public health issue nor a private mental health issue, but does stand in contrast to altruism. To the extent that society allows your actualization of something you could call selfishness, that seems to be your option to manifest, and by modern standards, without judgement of your selfishness. Your altruism might be judged, but not your selfishness, like, "Oh, that's some effective selfishness" vs "Oh, that's a poser's selfishness right there" or "That selfishness there is a waste of money". Everyone thinks they understand selfishness, but there don't seem to be many theories of selfishness, not competing theories, nor ones tested for coherence, nor puzzles of selfishness. You spend a great deal of time on debates about ethics, quantifying altruism, etc, but somehow selfishness is too well-understood to bother? The only argument over selfishness that has come up here is over self-care with money. Should you spend your money on a restaurant meal, or on charity? There was plenty of "Oh, take care of yourself, you deserve it" stuff going around, "Don't be guilty, that's not helpful" but no theory of how self-interest works. It all seems relegated to an ethereal realm of psychological forces, that anyone wanting to help you with must acknowledge. Your feelings of guilt, and so on, are all tentatively taken as subjectively impactful and necessarily relevant just by the fact of your having them. If they're there, they matter. There's pop psychology, methods of various therapy schools, and different kinds of talk, really, or maybe drugs, if you're into psychiatric cures, but nothing too academic or well thought out as far as what self-interest is, how to perform it effectively, how or whether to measure it, and its proper role in your life. I can't just look at the problem, so described, and say, "Oh, well, you're not using a helpful selfishness

What about testing code for quality, that is, verifying code correctness, thereby reducing bugs?
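As a minimal illustration of the kind of verification I mean (the function and test here are hypothetical, not from any discussion above):

```python
# A unit test verifies one concrete claim about the code's behavior,
# catching regressions before they become bugs.

def normalize(scores):
    """Scale a list of numbers so they sum to 1 (hypothetical example)."""
    total = sum(scores)
    if total == 0:
        raise ValueError("scores sum to zero")
    return [s / total for s in scores]

def test_normalize():
    assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]
    assert abs(sum(normalize([3, 7, 90])) - 1.0) < 1e-12

test_normalize()
```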

Newcomb's problem, honesty, evidence, and hidden agendas

Thought experiments are usually intended to stimulate thinking, rather than be true to life. Newcomb's problem seems important to me in that it leads to a certain response to a certain kind of manipulation, if it is taken too literally. But let's assume we're all too mature for that.

In Newcomb's problem, a person is given a context, and a suggestion that their behavior has been predicted beforehand, and that the person with that predictive knowledge is telling them about it. There are hypothetical ... (read more)

Regarding decision theory: I responded to you on substack. I'll stand by my thought that real-world decisions don't allow accurate probabilities to be stated, particularly in some life-or-death decision. Even if some person offered to play a high-stakes dice game with me, I'd wonder if the dice are rigged, if someone were watching us play and helping the other player cheat, etc.

Separately, it occurred to me yesterday that a procedure to decide how many chances to take depends on how many will meet a pre-existing need of mine, and what costs are associated... (read more)

On policy, there's Annie Duke's idea of "resulting": that a policy's success or failure doesn't necessarily speak to whether it was the strategically best choice. Causes of policy failure go beyond the policy specifics. For example, bad luck is a cause of policy failure. Accordingly, you can be certain your policy choice is the best and still be doubtful that the intended outcome will occur.

There's a bit of irony in that we should also recognize our ignorance of what others want from policy: stated goals are not necessarily shared goals.

There's no agreement that there is a meta-crisis. Yes, there are multiple sources of danger, and they can interact synergistically and strongly (or so I believe), but that's not the same as saying that there must be root causes for those (global, existential) dangers that humanity can address.

If you asked a different question, like: "What are the underlying drivers of the multiple anthropogenic existential threats that we all face, like nuclear war, engineered pandemics, climate destruction, etc?"

You could get some interesting answers from people who think... (read more)

1
Mars Robertson
7mo
This question (Jan 29), your comment (Feb 4)... I think many things have changed now (Sep 16). There is much more written material and much more understanding about the metacrisis. It is clear to me that it exists. I think that your approach of enumerating the factors, the "underlying drivers of the multiple anthropogenic existential threats", does not do it justice. The whole concept of the metacrisis is that the threats are interconnected and need to be addressed as a whole.

There's this thing, "the repugnant conclusion". It's about how, if you use aggregate measures of utility for people in a population, and consider it important that more people each getting the same utility means more total utility, and you think it's good to maximize total utility, then you ought to favor giant populations of people living lives barely worth living.

Yes, it's a paradox. I don’t care about it because there's no reason to want to maximize total utility by increasing a population's size that I can see. However, by thinking so, I'm led down a d... (read more)
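The arithmetic that generates the conclusion, with invented numbers, is just population times per-capita utility:

```python
# Invented numbers: a small, very happy population vs. a huge population
# of lives "barely worth living".
happy = {"people": 1_000_000, "utility_each": 90.0}
crowded = {"people": 100_000_000_000, "utility_each": 0.01}

for world in (happy, crowded):
    total = world["people"] * world["utility_each"]
    print(world["people"], "people: total =", total,
          "| average =", world["utility_each"])

# Totals: 9e7 vs 1e9, so maximizing total utility favors the huge,
# barely-happy population. That is the "repugnant conclusion".
```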

Directly address the substance of all criticisms of EA.

  • if a criticism contains a faulty premise, identify it and rebut it.
  • if a criticism uses poor reasoning, identify it and reject it.
  • if a criticism contains valid elements, identify and acknowledge them all.

Use the source's language as much as you can, rather than adding your own jargon. Using your jargon and writing for other EAs makes you less credible and legitimate. It looks like obfuscation to the source of the criticism and to other outsiders reviewing your response.

Avoid going meta. Going meta t... (read more)

Well, I've been noodling on the idea that human physiology defines our senses, our senses limit our ability to represent information to ourselves, and correcting for differences in sensory representation of different sets of information from the same class allows for better comparisons and other reasoning about each (for example, interpreting). A classic example is television pharmaceutical drug ads. The ads present verbal information about the dangers of a medication in tandem with visual information showing happy people benefiting from the same medication. Typically.

Does "intuition" have a specific, carefully-guarded meaning in moral philosophy? Intuition as I understand it is vague. The term "intuition" captures examples of lots of opinions and preferences and conclusions that share the attribute of having a feeling or partial representation to the person holding them. For example, some moral intuitions could develop through or depend on personal experience but have this property of having a vague representation. For someone using my definition of "intuition", a discussion of whether all moral intuitions are evolutionarily-driven seems clearly wrong.

4
David Mathers
1y
'Does "intuition" have a specific, carefully-guarded meaning in moral philosophy? ' Quite possibly not:  a bit over 15 years ago Timothy Williamson famously argued (in effect, that's not quite how he frames it)  that "intuition" as philosophers use it just isn't very well-defined: http://media.philosophy.ox.ac.uk/assets/pdf_file/0008/1313/intuit3.pdf   Rather, philosopher say "intuitively, P" when they can't be bothered arguing for "P" or "that's just an intuition, why would they be reliable" when someone says "P" and they disagree, but something about the terminology convinces people that we know what "intuitions" are in some substantive theoretical sense, when at most it just means something like a judgment that people in the current conversational context think feels "natural"', which, as Tim points out, actually covers pretty much any time a human being quickly and easily applies a word to something on the basis of pretty much any kind of evidence. 
3
Geoffrey Miller
1y
Noah - 'intuition' does seem pretty vague. I would expect evo-debunking arguments to be most relevant to 'moral intuitions' that are relatively universal across humans and cultures and historical epochs -- and there are many such intuitions studied by moral psychologists, evolutionary anthropologists, evo psych people, etc. Whereas, 'moral intuitions' that are more culture-limited or idiosyncratic probably aren't as open to evo-debunking -- although they might be subject to other kinds of debunking (e.g. cultural/historical analysis of where the cultural 'intuition' originated; psychological analysis of how an individual's traumatic experiences shaped their moral judgments, etc.)

I made a critique of EA that I think qualifies as "deep" in the sense that it challenges basic mechanisms established for Bayesianism as EAs practice it, what you call IBT, but also epistemic motives and attitude. This was not my red-team, but something a bit different.

The Scout Mindset offers a partitioning of attitudes relevant to epistemics if its categories of "scout" and "soldier" are interpreted broadly. If I have an objection to Julia Galef's book "The Scout Mindset", it is in its discussion of odds. Simply the mention of "odds." I see it as a mi... (read more)

EDIT: Oh! It was Rockström, but the actual quote is: "The richest one percent must reduce emissions by a factor [of] 30, while the poorest 50% can actually increase emissions by a factor [of] 3", from Johan Rockström at #COP26: 10 New Insights in Climate Science | UN Climate Change. There he is talking about fair and just carbon emissions adjustments. The other insights he listed have economic implications as well, if you're interested. The accompanying report is available here.

The quote is:

"Action on climate change is a matter of intra- and intergeneration... (read more)

Great fun post!

I read the whole post. Thanks for your work. It is extensive. I will revisit it. More than once. You cite a comment of mine, a listing of my cringy ideas. That's fine, but my last name is spelled "Scales" not "Scale". :)

About scout mindset and group epistemics in EA

No. Scout mindset is not an EA problem. Scout mindset and soldier mindset partition the space of mindsets, and they prioritize truth-seeking differently. To reject scout mindset is to accept soldier mindset.

Scout mindset is intellectual honesty. Soldier mindset is not. Intellectual honesty aids epistem... (read more)

You don't know yet how Shell's ownership affects what Sonnen does in the marketplace. If you think home batteries are a net positive morally then it's just a matter of comparing the impact of Sonnen with the impact of other companies where you could work.

Home batteries are part of the energy transition at small scale but I don't believe they matter at large scale in terms of reducing climate destruction. However, home batteries are great for buffering against blackouts and if I were a homeowner, I would be grateful to have a battery technology like Sonnen's.

Oh, I see. So by "benign" you mean shaming from folks holding common-sense but wrong conclusions, while by "deserved" you mean shaming from folks holding correct conclusions about consequences of EA actions. And "compromise" is, in this sense, about being a source of harm.

I have read the Democratizing Risk paper that got EA criticism and think it was spot on. Not having ever been very popular anywhere (I get by on being "helpful" or "ignorable"), I use my time here to develop knowledge.

Your work and contributions could have good timing right now. You also have credentials and academic papers, all useful to establish your legitimacy for this audience. It might be useful to check to what extent TUA had to do with the FTX crisis, and whether a partitioning of EA ideologies combines or separates the two.

I believe that appeti... (read more)

It could be that EA folks:

  1. risk criticism for all actions. Any organization risks criticism for public actions.
  2. deserve criticism for any immoral actions. Immoral actions deserve criticism.
  3. risk criticism with risky actions whose failure has unethical consequences and public attention. EA has drawn criticism for using expected value calculations to make moral judgments.

Is that the compromise you're alluding to when you write:

But the greater part of it being normal is that all action incurs risk, including moral risk. We do our best to avoid them (and

... (read more)
3
Gavin
1y
Good analysis. This post is mostly about the reaction of others to your actions (or rather, the pain and demotivation you feel in response) rather than your action's impact. I add a limp note that the two are correlated. The point is to reset people's reference class and so salve their excess pain. People start out assuming that innocence (not-being-compromised) is the average state, but this isn't true, and if you assume this, you suffer excessively when you eventually get shamed / cause harm, and you might even pack it in. "Bite it" = "everyone eventually does something that attracts criticism, rightly or wrongly". You've persuaded me that I should have used two words:

  • benign compromise: "Part of this normality comes from shame usually being a common sense matter - and common sense morals correlate with actual harm, but are often wrong in the precise ways this movement is devoted to countering!"
  • deserved compromise: "all action incurs risk, including moral risk. We do our best to avoid them (and in my experience grantmakers are vigilant about negative EV things), but you can't avoid it entirely. (Again: total inaction also does not avoid it.)"

Lots of people on this forum have struggled with the feeling of being compromised. Since FTX. Or Leverage. Or Guzey. Or Thiel. Or Singer. Or Mill or whatever.[4] But this is the normal course of a life, including highly moral lives.... But the greater part of it being normal is that all action incurs risk, including moral risk.

It's not correct to say that action deserves criticism, but maybe correct to say that action receives criticism. The relevant distinction to make is why the action brought criticism on it, and that is different case-by-case. The c... (read more)

3
Gavin
1y
We're not disagreeing.

If I understand you:

Survival (resilience) traits and sexual attractiveness (well-being) traits diverge. Either can lead to reproduction. Selection for resilience inhibits well-being. More selection for well-being implies less selection for resilience. Reproduction implies selection for resilience or well-being but not both.

There's some argument about specific examples available like attractiveness of peacocks:

Surprisingly, we found that peahens selectively attend to only a fraction of this display, mainly gazing at the lower portions of the male train a

... (read more)
2
Sherry
1y
I think you got it, if you meant that either resilience or well-being could lead to selection, and it's well-being that amplifies reproduction. Thank you for the feedback.

Sure, I agree. Technically it's based on OpenAI Codex, a descendant of GPT-3. But thanks for the correction, although I will add that its code is alleged to be more copied from than inspired by its training data. Here's a link:

Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly-used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs

... (read more)

I see the impact of AGI as primarily in the automation domain, and near-term alternatives are every bit as compelling, so no difference there. In fact, AGI might not serve in the capacity that some imagine for it, as a full replacement for knowledge-workers. However, automation of science with AI tools will advance science and engineering, with frightening results rather than positive ones. To the extent that I see that future, I expect corresponding societal changes:

  1. collapsing job roles
  2. increasing unemployment
  3. inability to repay debt
  4. dangerously distracting t
... (read more)

Sure. I'm curious how you will proceed.

I'm ignorant of whether AGI Safety will contribute to safe AGI or to AGI development. I suspect that researchers will shift to capabilities development without much prompting. I worry that AGI Safety is more about AGI enslavement. I've not seen much defense or understanding of rights, consciousness, or sentience assignable to AGI. That betrays a lack of concern over social justice and related workers' rights issues. The only scenarios that get attention are the inexplicable "kill all humans" scenarios, but not the more... (read more)

I am interested in early material on version space learning and decision-tree induction, because they are relatively easy for humans to understand. They also provide conceptual tools useful to someone interested in cognitive aids.

Given the popularity of neural network models, I think finding books on their history should be easier. I know so little about genetic algorithms: are they part of ML algorithms now, or have they been abandoned? No idea here. I could answer that question with 10 minutes on Wikipedia, though, if my experience follows what is typical.

You seem to genuinely want to improve AGI Safety researcher productivity.

I'm not familiar with resources available on AGI Safety, but it seems appropriate to:

  • develop a public knowledge-base
  • fund curators and oracles of the knowledge-base (library scientists)
  • provide automated tools to improve oracle functions (of querying, summarizing, and relating information)
  • develop ad hoc research tools to replace some research work (for example, to predict hardware requirements for AGI development).
  • NOTE: the knowledge-base design is intended to speed up the researc
... (read more)
2
Minh Nguyen
1y
Strong upvoted, because this is indeed an approach I'm investigating in my work and personal capacity. For other software fields/subfields, upskilling can be done fairly rapidly, by grinding knowledge bases with high feedback loops. It is possible to become as good as a professional software engineer quickly, independently, and in a short timeframe. If AI Safety wants to develop its talent pool to keep up with the AI Capabilities talent pool (which is probably growing much faster than average), researchers, especially juniors, need an easy way to learn quickly and conveniently. I think existing researchers may underrate this, since they're busy putting out their own fires and finding their own resources. Ironically, it has not been quick and convenient for me to develop this idea to a level where I'd work on it, so thanks for this.

Resources on Climate Change

IPCC Resources

... (read more)

You wrote

Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate more restrictive IP laws around AI generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.

and

Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, persona

... (read more)
2
Sharmake
1y
This is the most important paragraph in a comment where I strongly agree. Thanks for saying it.

Life extension and Longevity Control

When society includes widespread use of life extension technology, a few unhealthy trends could develop.

  1. the idea of being "forced to live" will take on new meaning and different meaning for folks in a variety of circumstances, testing institutional standards and norms that align with commonly employed ethical heuristics. Testing of the applicability of those heuristics will result in numerous changes to informed and capable decision-making in ethical domains.

  2. life-extension technology will become associated with lo

... (read more)

Sizable government rebates on purchase of new human-powered vehicles, including but not limited to bicycles and electric bicycles.

Cluster thinking could provide value. Not quite the same as moral uncertainty, in that cluster thinking has broader applicability, but the same type of "weighted" judgement. I disagree with moral uncertainty as a personal philosophy, given the role I suspect that self-servingness plays in personal moral judgements. However, cluster thinking applied in limited decision-making contexts appeals to me.

A neglected area of exploration in EA is selfishness, and self-servingness along with it. Both influence worldview, sometimes on the fly, and are not necessari... (read more)
