All of MakoYass's Comments + Replies

Creating Individual Connections via the Forum

Yeah, reddit ends up getting a really huge quantity of useful information about its users this way.

I wouldn't expect LW/EA to reliably get that info with the tag subscription feature as it currently stands: I'm probably not going to subscribe to most of the tags I'm interested in, because receiving a notification for every single post to a tag isn't generally desirable. The only tags for which that sort of subscription is the right thing are the tags that get too little activity to be useful for matchmaking.

New: use The Nonlinear Library to listen to the top EA Forum posts of all time

I'm glad this exists.

Feedback:

  • Why no links to the post in the episode notes? If I find a post interesting, I'm basically always going to want to click through to read comments or vote on it, so I need that.
  • I think there shouldn't be a preamble before reading the title. It kinda destroys the use case of listening to article titles and skipping to decide whether you want to hear each one, and it makes that take two or three times as long as it needs to by forcing the listener to sit through the same intro again.
    I'd suggest... just reading the tit
... (read more)
3Kat Woods25d
Thanks for the suggestions! Yeah, the links in the episode notes are the most requested feature. We have them in all of our channels except for the static playlists (such as the top of all time lists) and the main channel, for technical reasons. We're working on the main channel, but it might take a bit because it's surprisingly difficult. Kind of reminds me of this comic [https://imgs.xkcd.com/comics/tasks.png]. For the intros, at least on PocketCast you can set it to skip the first X seconds, which I recommend.
1Max Clarke1mo
Agree
The Future Fund’s Project Ideas Competition

This is a really hard problem that people have been working on for decades

What problem are you referring to? Face tracking and remote presence didn't have a hardware platform at all until 2016, and weren't a desirable product until maybe this year (mostly due to covid), and won't be a strongly desirable product until hardware starts to improve dramatically next year. And due to the perversity of social software economics, it won't be profitable in proportion to its impact, so it'll come late.

There are currently zero non-blurry face tracking headsets with that... (read more)

This innovative finance concept might go a long way to solving the world's biggest problems

If an investor has a finger in every pie, then it will mean that they are invested in a company and also that company's competitors...

BlackRock generally are that way, although I don't know whether they actually intervene in governance decisions as often as people sometimes fear. I'd guess there are a lot of industry-specific ETFs that intervene more often than they do, though?

... but this doesn't seem that important -- they had an incentive to create cartels, Universal Owner or no.

Yeah I guess I'm not saying UO will make this worse, more that there could b... (read more)

This innovative finance concept might go a long way to solving the world's biggest problems

There's an interesting tension between:

  • having a group of people with their fingers in every pie is good because it makes them care about the whole world.
  • and having a group of people with their fingers in every pie is bad because it will lead to anti-consumer, anti-competitive corporate governance interventions.

How does the balance weigh, in your view?

2Sanjay1mo
It's not clear to me that this is true. If an investor has a finger in every pie, then it will mean that they are invested in a company and also that company's competitors... ... but this doesn't seem that important -- they had an incentive to create cartels, Universal Owner or no. What it does mean is that they are also invested in the company's consumers -- i.e. if one company acts to harm all consumers, this too will harm the wider economy and hence (for a universal owner) the wider portfolio. So if anything, it seems that the opposite is true.
Announcing What The Future Owes Us

Hah!

I think it's worth discussing the straight answer to this, though: The future gives back simply by creating many of the things that I want to exist, which is a class of service that encompasses most of my values (I think).
This illuminates an interesting and surprising fact: Not all trade requires an exchange of physical objects, or even information. It is, in some cases, possible to evidence that something will occur, without ever entirely confirming it, which we will later find to be a foundational resolution in inter-universal moral trade schemes.

Will "impact certificates" value only impact?

Can you imagine a way to get a person to engage well with an impact market (or any market) when they don't understand money/beneficial self-reifying games or whatever?

2Denis Drescher2mo
I replied to this in private, but maybe it’s helpful to put it here too: > Dann suggested to just see to it that investing is really profitable in the beginning. :-3 That’s a bit in tension with my hope to limit various risks by [initially, for the first experiments] not attracting non-altruists to the market, but I think experiments on the EA Forum can safely be made quite profitable.
Who is protecting animals in the long-term future?

I don't see a way for it to go on forever.

  • We should expect the efficiency of farming to improve until no suffering is involved.
    (See the cellular agriculture/cultured meat projects.)
  • We should expect humans to change for the better.

    It would be deliriously conservative to expect as many as 10 thousand years to pass before humans begin to improve their own minds, to live longer while retaining mental clarity and flexibility, to be aware of more, to be more as they wish to be, to reckon deeply with their essential, authentic values, to learn to live in accordan
... (read more)

I disagree here. Even though I think it's more likely than not that space factory farming won't go on forever, it's not impossible that it will stay, and the chance isn't, like, vanishingly low. I wrote a post on it.

Also, for cause prioritization, we need to look at the expected values from the tail scenarios. Even if the chances could be as low as 0.5%, or 0.1%, the huge stake might mean the expected values could still be astronomical, which is what I argue for space factory farming. I think what we need to do is to prove why factory farming will go away in the... (read more)

Will "impact certificates" value only impact?

A contract is also not usually transferable. It doesn't really have an owner.

But the implied analogy, that the owner of the impact cert is morally credited for the work, is actually good and clarifying. If you buy the credit for the act, then you get credit for the act. Yes. That's how it's supposed to work. And if the worker wants to retain credit, then they should retain a fraction of the impact cert, because selling the entire thing genuinely is supposed to mean that they relinquish to the buyer all right to be retroactively rewarded.

2Denis Drescher2mo
Contract: I was thinking of the contract as specifying the ownership, sort of like a page in the land register or the list of owners in a company that uses a paper list for that purpose. So you don’t really care about owning the piece of paper or the abstract idea of the list, but you care about owning the part of the company or the plot of land that is specified there. Credit: It’s interesting that you find it clarifying. If it has that effect, then I suppose that works? But in my experience people who encounter the term “moral credit” or just “credit” in this context start to think about these contracts in mystical terms: “Why would I value buying and holding impact certs [gold, euros, Bitcoin, Apple shares] if I don’t inherently value impact certs [gold, euros, Bitcoin, Apple shares]?” “Are other people going to respect me for buying and holding impact certs [gold, euros, Bitcoin, Apple shares]?” These questions probably seem weird and beside the point to anyone with any of the bracketed items, but I’ve heard them repeatedly when it comes to impact certs. One of the tests I want to do is to run the system while making the concept of the impact cert very nonobvious in the UI. I’m hoping that that’ll make it easier for people to grasp the idea of the impact market without being distracted by these mystical thoughts. But maybe it’ll be confusing again in some other way…
Will "impact certificates" value only impact?

Well, I think that would be an extremely uninformative and fairly confusing thing to call it. It's only an agreement insofar as any exchange of anything is an agreement; the class of agreement is uncharacteristically open-ended relative to most contracts, and reducible to a transfer of ownership.
I'm supportive of Ryan's suggestion of "credit" at this point. The difference between "an amount of credit" and "an amount of credits" might resolve the ambiguity it might have had with carbon credits.

2Denis Drescher2mo
I don’t understand the critique of “attribution contract.” Could you try rephrasing the second sentence? I’ve seen “credit” interpreted as “this person is to be (morally) credited,” which leads to all sorts of complications in thinking about impact markets. I suppose no one goes through the mental gymnastics anymore to remember that their credit card provider is to be credited temporarily for the food one buys with the credit card, but impact markets are not as commonplace yet, and it’s something I’ve come across already. So I prefer terms that simply have something like “certificate” or “contract” in them.
Simple comparison polling to create utility functions

I've been thinking about this sort of preference aggregation problem for a few years. I think the best way to do it, that we have right now, is to form a graph with edges weighted by the comparison strength, then do pagerank, or something like it. Rank entries by their pagerank scores.
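
A minimal sketch of that pipeline, assuming comparisons arrive as (loser, winner, strength) triples; the data and the networkx usage here are illustrative, not from any existing tooling:

```python
# Rank entries by PageRank over a comparison graph: each comparison adds a
# weighted edge from the less-preferred entry toward the more-preferred one,
# so rank "mass" flows to entries that win strong comparisons.
import networkx as nx

comparisons = [          # hypothetical data: (loser, winner, strength)
    ("B", "A", 2.0),
    ("C", "A", 1.0),
    ("C", "B", 3.0),
]

G = nx.DiGraph()
for loser, winner, strength in comparisons:
    # Accumulate weight if the same pair has been compared more than once.
    if G.has_edge(loser, winner):
        G[loser][winner]["weight"] += strength
    else:
        G.add_edge(loser, winner, weight=strength)

scores = nx.pagerank(G, weight="weight")
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # entries ranked by PageRank score, best first
```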

But I've been working towards something more precise, and this might be novel: Parallel and serial reducer functions (and another one, a "crosslink" reducer, I believe), sort of like the reducer functions you'd use to make judgements about electronic circuit graphs, which, gi... (read more)

2NunoSempere2mo
My sense is that the mathematized version would be much more valuable (for instance, I could incorporate it into my tooling), but also harder to obtain than you might realize.
Certificates of impact

That's compatible with the systems being built, I believe. Impact certs would be aggregated/componentized into impact class pools. If I grow a bunch of forests, the impact cert I file this as could then be submitted to some qualified authority and, if found legitimate, permanently locked/ingested to produce a corresponding quantity of fungible Carbon credits and JobCreator credits, which I could then sell to whoever likes those.
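
To make the pooling step concrete, here is a hypothetical sketch of that reading; every name in it (ImpactCert, CreditPool, ingest) is invented for illustration and is not from any system actually being built:

```python
# A qualified authority ingests (permanently locks) an impact cert and issues
# a corresponding quantity of fungible credits per impact class.
from dataclasses import dataclass, field

@dataclass
class ImpactCert:
    description: str
    impact_by_class: dict   # e.g. {"Carbon": 120.0, "JobCreator": 3.0}
    locked: bool = False    # once locked, the cert itself can never be resold

@dataclass
class CreditPool:
    issued: dict = field(default_factory=dict)

    def ingest(self, cert):
        """Deem the cert legitimate, lock it permanently, issue fungible credits."""
        assert not cert.locked, "cert already ingested"
        cert.locked = True
        for cls, qty in cert.impact_by_class.items():
            self.issued[cls] = self.issued.get(cls, 0.0) + qty
        return dict(cert.impact_by_class)   # credits now belong to the filer

pool = CreditPool()
forest_cert = ImpactCert("grew a bunch of forests", {"Carbon": 120.0, "JobCreator": 3.0})
print(pool.ingest(forest_cert))  # {'Carbon': 120.0, 'JobCreator': 3.0}
```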

EA Projects I'd Like to See

I think funding good criticism is a really good idea.

As a meetup organizer, I'm becoming very aware that preserving a culture of criticism is in tension with building a strong social fabric, or making friends. Maintaining the culture is really hard. It would help a bit to have this very clear signal that we materially value good criticism and that we protect our critics: even though we're normally so moderate and agreeable when we meet in person, do not be fooled, we know the value of disagreement too.

Another thing is, I think this prize would convince a... (read more)

AI Risk is like Terminator; Stop Saying it's Not

I was about to write this exact comment, yes. I think the OP is making a necessary point: AI Risk is like Terminator lore, but it is importantly unlike the depictions that make up the bulk of the movies. We've been pretty absurdly miscommunicating our thoughts on this, but I think after this course correction we're still going to want to complain about Terminator.

Terminator lore contains the alignment problem, but the movie is effectively entirely about humans triumphing in physical fights against robots, which is a scenario that is importantly incompatibl... (read more)

FLI launches Worldbuilding Contest with $100,000 in prizes

I checked again and, yeah, that's right, sorry about the misunderstanding.

I think the root of my confusion on this is that most of my thinking about prediction platform designs is situated in the genre of designs where users can create questions without oversight, and in this genre I'm hoping to find something highly General and Robust. These sorts of designs always seem to collapse into being prediction markets.
So it comes as a surprise to me that just removing user-generated questions seems to turn out to prevent that collapse[1], and this thing it bec... (read more)

The Future Fund’s Project Ideas Competition

I think everyone will adapt. I vaguely remember hearing that there might be a relatively large contingent of people who never do adapt, though I was unable to confirm this with 15 minutes of looking just now. Every accessibility complaint I came across seemed to be a solvable software problem rather than anything fundamental.

Will "impact certificates" value only impact?

Regarding intellectual property connotations, it seems to me, at this point, that all of the systems we're making for trading impact certs would also be useful for trading intellectual property, so we might have to tolerate that.

I like "credit"... It might introduce too many ambiguities. I initially overlooked it because it also means "money"... it's also used in "carbon credits", which would exist in the impact cert system.. but another ambiguity is introduced there in that the impact cert for a carbon capture job and a "carbon credit" would be subtly dif... (read more)

5RyanCarey3mo
Interesting points. I agree that impact certs differ from carbon credits by corresponding to a fraction of the impact of a whole project, or at least an amount of work (inputs), rather than a quantum of impact (outputs). But it does strike me that carbon credits still might be the most closely related among well-known existing concepts. This could suggest "project credits". If you say - on your resume for example - that you sold "project credits" for a company, or a research project, it seems this would give a naive reader more of an idea of what has gone on than many other terms - they are a way of assigning credit to patrons of the project. The main downside, as you allude to, is that they sort of sound like someone might be owed something. But if talking to a naive outsider, you can just say that the credit is a certificate that commemorates their patronage of the project, similar to a carbon credit, which would seem to be clear enough... I think the problem with blame is that it sounds too negative - you won't want to write that on your resume. And if the term isn't used by recipients, then it's unlikely to catch on. Re "transferable attribution", to be a bit more concrete, if I say that I have sold the attribution for a paper I wrote, it sounds a bit like I am giving away the authorship to a funder, which would be some kind of academic malpractice. Since that's not always the case, it seems like we don't want the general term to sound like it is...
The Future Fund’s Project Ideas Competition

Note, VR is going to get really good in the next three years, so I wouldn't personally recommend getting too invested in any physical offices, but I guess as long as we're renting it won't be our problem.

4Jeff Kaufman1mo
I think it is pretty unlikely that VR improvements on the scale of 3y make people stop caring about being actually in person. This is a really hard problem that people have been working on for decades, and while we have definitely made a lot of progress, if we were 3y from "who needs offices?" I would expect to already see many early adopters pushing VR as a comfortable environment for general work (VR desktop) or meetings.
2Chris Leong3mo
You think they'll get past the dizziness problem?
The Future Fund’s Project Ideas Competition

Lower cost of living, meaning you can have more people working on less profitable stuff.

I'm not sure 5000 free staters (out of 20k signatories) should be considered failure.

2RyanCarey3mo
Right, but it sounds like it didn't go well afterwards? https://www.google.com/amp/s/newrepublic.com/amp/article/159662/libertarian-walks-into-bear-book-review-free-town-project
Will "impact certificates" value only impact?

I proposed renaming them to "Transferable Attribution", usually shortened to just "Attribution". I like this, because the point of selling these things really is to transfer credit, and attribute the act to the buyer.

This would make it less awkward to refer to fractions of the tokens than "impact certificates", as "attribution" is a quantity word, rather than a discrete object, you can have amounts of it. Sort of gets rid of the speedbump of having to explain how it's possible to own a portion of a token or a certificate. (I guess "patronage" does this too... (read more)

2Denis Drescher2mo
I like it! What do you think about “(Transferable) attribution contract”? “Contract” clarifies that it’s an agreement rather than something nature-given. On the downside, it’s not as much of a quantity word. Then again I’ve been imagining certificates (just like contracts) also as the thing that lists the fractions rather than something that gets cut up.
5RyanCarey3mo
"Transferable attribution" is exactly the right sort of idea! Or what about "responsibility certificates" or "Recs" for short? Arguably that is even better because 1) it keeps "certs", 2) "responsibility" has the connotation of being divisible, even moreso that "attribution", 3) "responsibility" gets closer to the relevant concept of "causal blame"/ "the cause of an effect" by Joe Halpern 4) doesn't have as many connotations of intellectual property as "attribution'. Other ideas: acclaim, approval, credit, blame, or attribution certificate.
Being an individual alignment grantmaker

expected impact of some action can be neutral even if it is bound to turn out (ex post) either extremely positive or extremely negative

I would recommend biting the decision theoretic bullet that this is not a problem. If you feel that negative outcomes are worse than positive outcomes of equal quantity, then adjust your units; they're miscalibrated.

Pot

So would The Pot be like, an organization devoted especially to promoting integrity in the market? I'm not sure I can see why it would hold together.

Maybe the goal (at least for the start) should be to create

... (read more)
2Denis Drescher3mo
I’m on board with that, and the section that you’re quoting seems to express that. Or am I misunderstanding what you’re referring to? (The quoted section basically says that, e.g., +100 utility with 50% probability and -100 utility with 50% probability cancel out to 0 utility in expectation. So the positive and the negative side are weighed equally and the units are the same.) Generally, this (yours) is also my critique of the conflict between prioritarianism and classic utilitarianism (or some formulations of those). Yeah, that’s how I imagine it. You mean it would just have a limited life expectancy like any company or charity? That makes sense. Maybe we could try to push to automate it and create several alternative implementations of it. Being able to pay people would also be great. Any profit that it could use to pay staff would detract from its influence, but that’s also a tradeoff one could make. Oh, another idea of mine was to use Augur markets. But I don’t know enough about Augur markets yet to tell if there are difficulties there. I still need to read it, but it’s on my reading list! Getting investments from selfish investors is a large part of my motivation. I’m happy to delay that to test all the mechanisms in a safe environment, but I’d like it to be the goal eventually when we deem it to be safe. Yeah, it would be interesting to get opinions of anyone else who is reading this. So the way I understand this question is that there may be retro funders who reward free and open source software projects that have been useful. Lots of investors will be very quick and smart about ferreting out what the long-tail of the 10,000s of tiny libraries are that are holding all the big systems like GPT-3 together. Say, maybe the training data for GPT-3 is extracted by custom software that relies on cchardet to detect the encoding of the websites it downloads if they’re not declared, misdeclared, or ambiguously declared. That influx of funding to these tiny projec
Being an individual alignment grantmaker

Regarding public goods funding, and how it relates to AI Risk: Although I'm really interested in impact certificate markets and I have some mechanisms for it, I'd expect our current designs to make the alignment problem worse, on net.

The alignment problem is exactly the kind of problem that public goods funding markets we know of will totally mishandle. Progress in AI is largely about knowledge production and improving software infrastructure, which are non-excludable goods (which are generally always public goods), so public goods markets will tend to be ... (read more)

4Denis Drescher3mo
Thank you for the great comment! Correct me if I’m wrong, but this is close to what I’ve termed the “distribution mismatch” problem (see this unpublished draft [https://docs.google.com/document/d/1Ik5KoRITLOFN6-lulykgMoiLab5m_Jp_yslWMw2H-8g/edit#heading=h.gdxijpdl6tby]). Ofer pointed it out here [https://forum.effectivealtruism.org/posts/HFBJMyCiuPyshRvWq/impact-certificates-on-a-blockchain?commentId=NZ4B5vHu7gA4XMcCD], and it’s been the main problem that caused me headaches over the past months. I’m not confident that the solutions I’ve come up with so far are sufficient, but there are five, and I want to use them in conjunction if at all possible: ATTRIBUTED IMPACT “Attributed impact [https://docs.google.com/document/d/1Ik5KoRITLOFN6-lulykgMoiLab5m_Jp_yslWMw2H-8g/edit#heading=h.1dbw25yqlzw]” is a construct that I designed to (1) track our intuitions for what impact is, but (2) exclude pathological cases. The main problem as I see it is that the ex ante expected impact of some action can be neutral even if it is bound to turn out (ex post) either extremely positive or extremely negative. For hypothetical pure for-profit investors that case is identical to that of a project that can turn out either extremely positive or neutral because their losses are always capped at their investment. Attributed impact is defined such that it can never exceed the ex ante expected impact as estimated by all market participants. If adopted, it’ll not penalize investors later for investing in projects that turned out negative, but it’ll make investments into projects that might turn out very negative unattractive, to avoid the investments in the first place. Attributed impact has some more features and benefits, so that I hope that it’ll be in the rational self-interest of every retro funder to adopt it (or something like it) for their impact evaluations. Issuers who want to appeal to retro funders will then have to make a case why their impact is likely to be valuable in attri
Nuclear attack risk? Implications for personal decision-making

It should be mentioned that the border to NZ will be closed to most people (due to covid) until July, with a few exceptions opening up around March  https://covid19.govt.nz/international-travel/travel-to-new-zealand/when-new-zealand-borders-open/

  • Skilled workers earning at least 1.5x the median wage may be eligible to be granted an ‘other critical worker’ border exception.
  • [critical services include]
    • food production and its supply chain
    • key public services like health and emergency services
    • lifeline utilities such as power and water supplies
    • transport
    • c
... (read more)
thank machine doggo

thank machine doggo.

The second image link is broken.

Metaverse democratisation as a potential EA cause area

Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely?

Fair prompt. I get the impression that the most impactful thing you can do is to make sure that the people leading the standards dialog have strong technical vision and good taste. That'll also make it more likely to even succeed at establishing a standard. I guess that's something that EA (with so much software engineering acumen) could probably do better than most NGOs! But yeah, it looks like that might already be ... (read more)

Metaverse democratisation as a potential EA cause area

I think the risk of a VR monopoly is probably low. A metaverse is just an office-compatible VRChat with external app screensharing, and portals. Lots of people are capable of making those. It's also not apparent to me that Meta are dangerously better at making the hardware than anyone else (Apple, HTC and Varjo are all stern competitors; even HP are in the game). Also, by the way, you personally would probably be interested in SimulaVR: they're about to start making the first portable, self-contained VR workstation computer, and it runs NixOS. It is going t... (read more)

2Gavin3mo
Wonderful comment.
2NunoSempere3mo
Nice
3Pablo3mo
I wouldn't call it 'the' open source alternative to Roam, since the wording suggests it is the only such alternative. I personally use and recommend org-roam [https://www.orgroam.com/].
1Paul_Lang3mo
Thanks for sharing your insights Mako! After reading your response and the IEEE Spectrum article you mentioned, I am much more optimistic that the metaverse can/will move in the right direction. Is there anything that could be done (by governments, companies, NGOs, the general public, or whatever player) to make this even more likely? I also liked your example of Twitter, where addictiveness was not designed into the system, but happened accidentally. Accidents usually prompt investigations to improve regulations, for instance in the aircraft industry. Do you think there are any concrete key learnings from the Twitter case about how to prevent similar accidents in the future of the internet or metaverse? If so, could or should some of these be baked into better designs, and are current incentives aligned with this or would it require some governmental regulation (since you are worried about liberalisation)? I still believe that Meta is a major player on the market. And while I do agree that they have no direct interest in destroying democracy or creating an unliveable world, I think they act in line with Milton Friedman and would just try to maximise their profits. I am not sure if there is anything wrong with that in principle, as long as the rules of the game ensure that maximizing profits aligns well with overall utility. In the past, I don’t think the rules of the social media game aligned well with overall utility. And I am not sure that the need for and support of open standards by players like Meta alone is sufficient to align profit maximization with overall utility in the metaverse. If this assessment is correct, it would make sense to brainstorm ideas for such an alignment as the metaverse develops. Btw. thanks also for sharing your LW article on Webs of Trust (on my reading list) and your thoughts on RoamResearch (pm’d you with a question on Roam vs. Obsidian).
FLI launches Worldbuilding Contest with $100,000 in prizes

Correction: Metaculus's currency is just called "points"; tachyons are something else. Aside from that, I have double-checked, and it definitely is a play-money prediction market (well, is it wrong to call it a prediction market if it's not structured as an exchange, even if it has the same mechanics?) (Edit: I was missing the fact that, though there are assets, they are not staked when you make a prediction), and you do in fact earn points by winning bets.

and have an excellent forecasting track record

I'm concerned that the bettors here may be the types who h... (read more)

FLI launches Worldbuilding Contest with $100,000 in prizes

Metaculus currently gives ~20% probability to >60 months

I'd expect the bets there to be basically random. Prediction markets aren't useful for predictions about far-out events: betting in them requires tying up your credit for that long, which is a big opportunity cost, so you should expect that only fools are betting here. I'd also expect it to be biased towards the fools who don't expect AGI to be transformative, because the fools who do expect AGI to be transformative have even fewer incentives to bet: There's not going to be any use for Metaculus po... (read more)

4aaguirre3mo
I'd note that Metaculus is not a prediction market and there are no assets to "tie up." Tachyons are not a currency you earn by betting. Nonetheless, as with any prediction system there are a number of incentives skewing one way or another. But for a question like this I'd say it's a pretty good aggregator of what people who think about such issues (and have an excellent forecasting track record) think — there's heavy overlap between the Metaculus and EA communities, and most of the top forecasters are pretty aware of the arguments.
Side-event, Discussion: The Qualia Research Institute: Reducing Consciousness

(Why is the location marker wrong? Did it demand a street address?)

1Group Organizer4mo
Yeah I think it needs to match a location on Google Maps. I updated the address/location so hopefully it's correct now - you should also be able to edit the event location yourself if it's still wrong.
FLI launches Worldbuilding Contest with $100,000 in prizes

For the sake of coordination, I declare an intent to enter.

(It's beneficial to declare intent to enter, because if we see that the competition is too fierce to compete with, we can save ourselves some work and not make an entry, while if we see that the competition is too cute to compete with, we can negotiate and collaborate.)

I'll be working under pretty much Eliezer's model, where general agency emerges abruptly and is very difficult to align, inspect, or contain. I'm also very sympathetic to Eliezer's geopolitical pessimism, but I have a few tricks for ... (read more)

Is EA compatible with technopessimism?

If it was the only thing we wanted we could actually work to explicitly specify that as the AI's goal, and that's CEV and hence problem solved.

This is just an aside, but it might be informative. I actually think that

  • single alignment: "This specific blob of meat here "is" an "agent". Figure out its utility function and do that"

is going to be simpler to program than

  • Hard-coded: "make a large number of many "humanlike" things "experience" "happiness""

I think it's clear that... there seem to be more things in the hard-coded solution that we don't know how to for... (read more)

-1acylhalide5mo
For me the very intuitive response to "figure out its utility function" is "this blob of meat doesn't have a utility function". A model that has a utility function is going to be a bad model of human behaviour. The module that causes humans to have desires is the very same module that causes humans to lose pennies [https://arbital.com/p/coherence_theorems/], you can't really have one without the other. I can try justifying this - I have a bunch of different sets of intuitions that all point in the same direction - but it isn't that well written up. Maybe I'll do a better job conversing with you :) https://kroma.substack.com/p/messy-post-intuitions-on-agi
Is EA compatible with technopessimism?

Hm. Well feel free to notify me if you ever write it up.

2acylhalide5mo
Tbvh I'm not sure how to engage with alignment folk because I feel like I'm missing a lot of existing knowledge and mental models that they have. Like I don't get what people think of when they think of "aligned AI", that they go - you know what, we don't even know how to rigorously define this thing, yet we're convinced it exists and it is a good thing. But I'll try: -- Bostrom mentions stuff like populating the cosmos with digital minds generating positive experiences; that seems like an unusual thing, and one of the things we could possibly want, but not necessarily the only thing we want. If it was the only thing we wanted we could actually work to explicitly specify that as the AI's goal, and that's CEV and hence problem solved. Basically humans want a lot of different things, and we're confused about which things we want more. That doesn't necessarily mean there exists some objective answer as to which of those we want more, that we need to "solve" it. Instead it could just be random - there are neurochemicals firing from different portions of the brain and sometimes one thing wins out, sometimes the other thing wins out. And if you input a sufficiently persuasive sequence of words into this brain it will prioritise some things more, but if you input a different sequence it will prioritise different things more. (An AI with sufficient human manipulation skills can find this easy imo.) Turing machines don't have "values" by default, they have behaviours based on their input. -- I also wrote up some stuff here a month back but idk how coherent it is: https://www.lesswrong.com/posts/uswN6jyxdkgxHWi7F/samuel-shadrach-s-shortform?commentId=phwddxuNNuumAbSxC
Is EA compatible with technopessimism?

Yeah, that objection does also apply to humans, which is why, despite it being so difficult to extract a coherent extrapolated volition from a mammal brain, we must find a way of doing it, and once we have it, although it might not produce an agenty utility function for things like spiders or bacteria, there's a decent chance it'll work on dogs or pigs.

1acylhalide5mo
Right, I understand your opinion now. I do still have some intuitions on why CEV shouldn't exist (even for humans), but I'm not sure what the best place to discuss is; it's probably not here.
Is EA compatible with technopessimism?

I'm referring to https://en.wikipedia.org/wiki/Uplift_(science_fiction) , sort of under the assumption that if we truly value animals we will eventually give them voice, reason, coherence. On reflection, I guess the most humane form of this would probably consist of just aligning an AI with the animals and letting it advocate for them. There's no guarantee that these beings adapted to live without speech will want it, but an advocate couldn't hurt.

1acylhalide5mo
I don't know, a lot of what you're saying still feels anthropocentric to me. (I'll assume mammals again since most animals don't have a brain or spinal cord.) Language isn't just about speech, it's about cognitive processing. "Do you want speech?" is likely not an object you can input to the algorithm of a mammal's brain and get a meaningful response, and the same applies to most of the mental objects humans manipulate as part of using language. Also there's no guarantee mammals even "want" consistent things; plenty of algorithms "want" nothing in particular, they just run. If at all they run in specific directions, it's because outside forces terminated those running differently. (To some extent this also applies to humans.) Hence I have difficulty understanding what it means to align an AI to a mammal.
Is EA compatible with technopessimism?

...assuming that particular example is a concern of such an impact primarily on humans, could that be articulated as anthropocentric technopessimism?

  1. Why would you want to describe it that way?
  2. On reflection, I don't think it can be called anthropocentric, no. There are four big groups of beings involved here: Humanity, Animals, Transhumanist post-humanity (hopefully without value-drift), and Unaligned AI. Three of those groups are non-human. Those concerned with AI alignment tend to be fighting in favor of more of those non-human groups than they are fight
... (read more)
1acylhalide5mo
In my mind animals are defined by their inability to use language. Check out this list of things mostly unique to human language [https://en.wikipedia.org/wiki/Animal_language#Aspects_of_human_language]. P.S. I assume you mean higher animals such as mammals here.
Reasons and Persons: Watch theories eat themselves

There’s no answer for this

Sure there is. Just implement the decision theory whose nature is that which would have been the optimal nature for it to have always had.

That is, implement Logical Decision Theory.

I'm only being a little bit facetious. Logical Decision Theory often seems to me more like a mostly formal statement about the (arguably) perfect policy about coordination and pre-commitment and superrationality, rather than a method for actually unearthing it.

But pondering this statement does seem to have progressed my thinking a lot and I would genera... (read more)

1Charles_Guthmann5mo
Right, I had similar thoughts. The desert hitchhiker: My intuition here is that if you are completely rational, you realize that if you don’t believe you will pay later you won’t get a ride now. In this sense the question feels similar to simply going to the store and the clerk says, you have to pay for that, and you say no I don't, and they say yes you do, and you say, no really you can't make me, and they say, yes I can. At this point, you pay if you are rational. The only difference being, in this case, you don't actually have to pay, you just have to convince yourself you are going to pay. The same can be said for the firefighting example if you know they have a lie detector. Once you know you can’t lie, this simplifies down to a non temporal problem IMO other than you don't actually have to change your brainstate to make you help, you just have to convince yourself that that is the brainstate you have. For kate the writer, it feels like she isn't actually being selfish, but rather just not thinking long term. Would she really quit writing or just not write as much? Schelling’s answer to armed robbery: Is bluffing irrational? Only when the costs outweigh the gains. If bluffing is rational but you are too scared to bluff, simply change your brain to be less scared :). The alien virus I’m confused- so the virus makes us do good things but we don’t enjoy doing those things? So are we being mind controlled? What does it feel like to have this alien virus? It seems like the claim is more selfish = greater potential valence. Humans are mostly unique in that we are both able to have utility and have profound influences on others utility, hence there is an equilibrium where past which as consequentialists we need to change our worldview towards being selfish (but we are not close to this equilibrium imo, if you consider future humans plus animals probably have much more potential utility than us). If there is one human and one dog in the world thing that doesn’
Response to Phil Torres’ ‘The Case Against Longtermism’

We actually do have a good probability for a large asteroid striking the earth within the next 100 years, btw. It was the product of a major investigation; I believe it was 1/150,000,000.

Probabilities don't have to be a product of a legible, objective or formal process. It can be useful to state our subjective beliefs as probabilities to use them as inputs to a process like that, but also generally it's just good mental habit to try to maintain a sense of your level of confidence about uncertain events.

7James Kim5mo
If Ord is giving numbers like a 1/6 chance, he needs to back them up with math. Sure, the chance of asteroid extinction can be calculated by astronomers, but estimating the probability of extinction by climate change or rogue AI is a highly suspect endeavor when one of those things is currently purely imaginary and the other is a complex field with uncertain predictive models that generally only agree on pretty broad aspects of the planet.
What posts do you want someone to write?

Regarding "change from within", I have since found confirmation from the excellent growth economist Mushtaq Kahn https://80000hours.org/podcast/episodes/mushtaq-khan-institutional-economics/ people within an industry are generally the best at policing others in the industry, they have the most energy for it, they know how to measure adherence, and they often have inside access. Without them, policing corruption often fails to happen.

How does Amazon deforestation actually work? It's not about soy.

Maybe a moratorium concerning soy and beef from the Amazon region would be enough to settle this issue; even so, given that the first driver of deforestation is speculation with land prices (besides illegal timber and mining),  I'm afraid such a ban wouldn't be enough to stop it.

The question then is: where is the value of the land coming from, and how much of it is coming from each possible use (loggers, soy farmers, or meat farmers)? If you stop those uses, won't speculation stop?

All Possible Views About Humanity's Future Are Wild

Crazyism about a topic is the view that something crazy must be among the core truths about that topic. Crazyism can be justified when we have good reason to believe that one among several crazy views must be true but where the balance of evidence supports none of the candidates strongly over the others

Eric Schwitzgebel, Crazyism

A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare

I am really puzzled by those graphs, mm. But as to the Easterlin paradox, it's still alive: http://repec.iza.org/dp7234.pdf. Happiness has been increasing, and so has GDP, but the rates of increase still don't seem to have much of a relationship.

1bfinn1y
Thanks for this. There's been quite a bit more research since that paper, including by Easterlin, so not sure how relevant it is now. The latest I know FWIW is from last year's book by Richard Layard, Can We Be Happier?, which says it's unclear but maybe economic growth often increases happiness but not always.
Report on Running a Forecasting Tournament at an EA Retreat

I was there and I can report that T is awesome in that particular way consistently.

Ranking animal foods based on suffering and GHG emissions

I'm not sure the maceration of male chicks induces any suffering. IIRC, it's approved as a humane killing method by the SPCA or someone like that.

Ranking animal foods based on suffering and GHG emissions

I'm glad to see the inclusion of anthropic units as a function of neuron count/brain mass. Turns out that makes a huge difference. Ideally I'd use brain mass*square(neuron count), but that would be overkill...
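
For illustration, a tiny sketch of that "overkill" weighting; the brain figures below are rough placeholder numbers, not sourced estimates:

```python
# Anthropic weight as brain mass times the square of neuron count, the
# formula floated above as overkill.
def anthropic_weight(brain_mass_g, neuron_count):
    return brain_mass_g * neuron_count ** 2

chicken = anthropic_weight(4.0, 2.2e8)   # placeholder figures, not estimates
cow = anthropic_weight(450.0, 3.0e9)
print(cow / chicken)  # how many chicken-units one cow counts for under this weighting
```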

In building this, did you come across literature about this question of how anthropic measure relates to mass and neuron configuration? I'd love to see any if you have that. I've got quite an interest in the anthropic measure binding question; my somewhat unconventional stance influences my decisions regarding animal welfare, so I really ought to read whatever's out there.

What are some high impact companies to invest in?

Anything in the genomic medicine space, that is to say, the ARKG ETF.

A lot of new opportunities opened up in this field recently due to CRISPR, and they haven't been realized yet, and the stocks are generally too low due to, I dunno, Theranos maybe. Some of the treatments are really amazing: cures for previously incurable genetic diseases, better cancer treatments.

We should pause, though, and ask whether accelerating the realization of these technologies will accelerate the realization of extinctive biological weapons. I have not paused long enough over this question, myself. I can't really argue that the benefits outweigh the costs.

Things I recommend you buy and use.

I was a little concerned about the bid sniping recommendation; bad things often happen when a technique for subverting a system and getting an edge over others is widely adopted. But it occurred to me that all that would happen is eBay auctions would become, like, one-shot simultaneous blind bids, which might well be an improvement. Auction processes, currently, are selected to benefit sellers, to the detriment of buyers, and to the detriment of pricing efficiency? (I'd expect the winner's curse to lead to overpricing), so it wouldn't be that surprising if... (read more)

1BenSchifman1y
Hey Mako! As to seasoning cast iron, here is the most in-depth source [http://sherylcanter.com/wordpress/2010/01/a-science-based-technique-for-seasoning-cast-iron/] I have seen on the science. In general it is the same as the seasoning on a wok. If yours is flaking off you could try applying another coat or two. This wacky YouTube guy [https://www.youtube.com/watch?v=vsthDhOodDs] gets into the science and seasons the crap out of his wok; you might enjoy watching this! I agree with your assessment of B12 for vegans--certainly a good idea to take! Although there are some vegan sources like nutritional yeast, which has 5 mcg of B12 per tablespoon, about double the daily recommended amount for adults. Cheers, Ben
Would you buy from an altruistic shop?

One reason I'd have difficulty donating through this channel is that I'm not sure I'd be able to get tax credits. If we get something in return, it might not count as a charitable contribution any more.

I wonder if you'd be able to just only sell your stuff (at reasonable prices) to people who can show you a big donation receipt in their name. That would behave similarly, and they'd still be able to claim the tax credits.

Would you buy from an altruistic shop?

I don't think you should be so defensive in the face of accusations of promoting a bragging culture. Own it. If someone asked me "Isn't it unethical to brag?" I would tell them that, no, on the contrary, it's positively ethical to brag.

The following is opinion and probably contains inaccuracies, but would be important if true.

Bragging (well) about how good you are is a good norm.

If credibly signalling our goodness is normalized, there will emerge social pressures to do more goodness than we otherwise would have. If you normalize the right sort of bragging, it will cr... (read more)
