Austin

Hey there~ I'm Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold!

Comments

Impact is very complicated

Haha thanks for pointing this out! I'm glad this isn't an original idea; you might say robustness itself is pretty robust ;)

Impact is very complicated

It becomes clear that there's a lot of value in nailing down your intervention as best you can: having tons of different reasons to think something will work. In this case, we've got:

  1. It's common sense that not being bitten by mosquitoes is nice, all else equal.
  2. The global public health community has clearly accomplished lots of good for many decades, so their recommendation is worth a lot.
  3. Lots of smart people recommend this intervention.
  4. There are strong counterarguments to all the relevant objections, and these objections are mostly shaped like "what about this edge case" rather than taking issue with the central premise.

Even if one of these fails, there are still the others. You're very likely to be doing some good, both probabilistically and in a more fuzzy, hard-to-pin-down sense.


I really liked this framing, and think it could be a post on its own! It points at something fundamental and important, like "Prefer robust arguments".

You might visualize an argument as a toy structure built out of building blocks. Some kinds of arguments are structured as towers: one conclusion piled on top of another, capable of reaching tremendous heights. But: take out any one block and the whole thing comes crumbling down.

Other arguments are like those Greek temples with multiple supporting columns. They take a bit more time to build, and might not go quite as high; but no single column has to hold the entire weight. I call such arguments "robust".

One example of a robust argument that I particularly liked: the case for cutting meat out of your diet. You can make a pretty good argument for it from a bunch of different angles:

  • Animal suffering
  • Climate/reducing emissions
  • Health and longevity
  • Financial cost (price of food)

By preferring robustness, you are more likely to avoid Pascalian muggings, more likely to work on true and important areas, and more likely to have your epistemic failures be graceful.

Some signs that an argument is robust:

  • Many people who think hard about this issue agree
  • People with very different backgrounds agree
  • The argument does a good job predicting past results across a lot of different areas

Robustness isn't the only, or even main, quality of an argument; there are some conclusions you can only reach by standing atop a tall tower! Longtermism feels shaped this way to me. But also, this suggests that you can do valuable work by shoring up the foundations and assumptions that are implicit in a tower-like argument, eg by red-teaming the assumption that future people are likely to exist conditional on us doing a good job.

What share of British adults are vegetarian, vegan, or flexitarian?

Thanks, this was really interesting; I love the visualization of how diets are changing over time!

I was inspired to start a prediction market on how my own diet will change (I'm currently pescatarian): https://manifold.markets/Austin/what-will-my-diet-look-like-over-th

Norms and features for the Forum

Sinclair has been working on allowing authors to embed Manifold prediction markets inside a LessWrong/EA Forum post! See: https://github.com/ForumMagnum/ForumMagnum/pull/4907

So ideally, you could set up a prediction market for each of these things, eg:

  • "How many  epistemic corrections will the author issue in the next week?"
  • "Will this post win an EA Forum prize?"
  • "Will this post receive >50 karma?"
  • "Will a significant critique of this post receive >50 karma?"
  • "Will this post receive a citation in major news media?"

And then bet on these directly from within the Forum! A rough sketch of what an embed might look like is below.
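To make this concrete, here's a minimal sketch in TypeScript/React (ForumMagnum is a React codebase) of what an embed component might look like. The `ManifoldEmbed` name, its props, and the `/embed/` URL scheme are my assumptions for illustration, not the actual API from Sinclair's PR:

```tsx
import React from "react";

// Assumption: Manifold serves an embeddable view of each market at
// https://manifold.markets/embed/<creator>/<market-slug>. The component
// name and props here are hypothetical, for illustration only.
const EMBED_BASE = "https://manifold.markets/embed";

export function ManifoldEmbed({ marketPath }: { marketPath: string }) {
  // marketPath is e.g. "Austin/will-this-post-receive-50-karma"
  return (
    <iframe
      src={`${EMBED_BASE}/${marketPath}`}
      title="Manifold prediction market"
      width="100%"
      height={400}
      frameBorder={0}
    />
  );
}
```

The forum integration would presumably render something like this wherever an author pastes a market link, with the bet buttons living inside the embedded view.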

Why Helping the Flynn Campaign is especially useful right now

I actually do think that getting Flynn elected would be quite good, and would be open to other ways to contribute. Eg, if phonebanking seems to be the bottleneck, could I pay for my friends to phonebank, or is there some rule about them needing to be "volunteers"?

Why Helping the Flynn Campaign is especially useful right now

I have donated $2900, and I'm on the fence about donating another $2900. Primarily, I'm not sure what a marginal dollar to the campaign will accomplish -- is the campaign still cash-constrained?

My very vague outsider sense was that the Flynn campaign had already blanketed the area with TV ads (eg per local coverage from a somewhat hostile source), so additional funding might not do that much.

What We Owe the Past

Thank you so, so much for writing up your review & criticism! I think your sense of vagueness is very justified, mostly because my own post is more "me trying to lay out my intuitions" and less "I know exactly how we should change EA on account of these intuitions". I had just not seen many statements from EAs, and even fewer among my non-EA acquaintances, defending the importance of (1), (2), or (3) - great breakdown, btw. I put this post up in the hopes of fostering discussion, so thank you (and all the other commenters) for contributing your thoughts!

I actually do have some amount of confidence in this view, and do think we should think about fulfilling past preferences - but I totally agree that I have not made those counterpoints, alternatives, or further questions available. Some of this is: I still just don't know - and to that end, your review is very enlightening! And some is: there's a tradeoff between post length and clarity of argument. On a meta level, EA Forum posts have been ballooning to somewhat hard-to-digest lengths as people try to anticipate every possible counterargument; I'd push for a return to shorter, Sequences-style chunks.


I think (2) is just false, if by utility we have in mind experiences (including experiences of preference-satisfaction), for the obvious reason that the past has already happened and we can't change it. This seems like a major error in the post. Your footnote 1 touches on this but seems to me to conflate arguments (2) and (3) in my above attempted summary. 

I still believe in (2), but I'm not confident I can articulate why (and I might be wrong!). Once again, I'd draw upon the framing of deceptive or counterfeit utility. For example, I feel that involuntary wireheading, or being tricked into staying in a simulation machine, is wrong, because the utility provided is not a true utility. The person would not actually realize that utility if they were cognizant that this was a lie. So too would the conservationist laboring to preserve biodiversity feel deceived, and not gain utility, if they were aware of the future supplanting their wishes.

Can we change the past? I feel like the answer is not 100% obviously "no" -- I think this post by Joe Carlsmith lays out some arguments for why:

Overall, rejecting the common-sense comforts of CDT, and accepting the possibility of some kind of “acausal control,” leaves us in strange and uncertain territory. I think we should do it anyway. But we should also tread carefully.

(But it's also super technical, and I'm at risk of having misunderstood his post in service of my own arguments.)


In terms of one specific claim: large EA funders (OpenPhil, FTX FF) should consider funding public goods retroactively instead of prospectively. More bounties and more "this was a good idea, here's your prize", and less "here's some money to go do X".

I'm not entirely sure what % of my belief in this comes from "this is a morally just way of paying out to the past" vs "this will be effective at producing better future outcomes"; maybe 20% compared to 80%? But I feel like many people would put only 10% or even less on the first.

To this end, I've been working on a proposal for equity for charities -- still at a very early stage, but since you work as a fund manager, I'd love to hear your thoughts (especially your criticism!).

Finally (and to put my money where my mouth is): would you accept a $100 bounty for your comment, paid in Manifold Dollars aka a donation to the charity of your choice? If so, DM me!

What We Owe the Past

I deeply do not share the intuition that younger versions of me are dumber and/or less ethical. Not sure how to express this but:

  • 17!Austin had much better focus/less ADHD (possibly as a result of not having a smartphone all the time), and more ability to work through hard problems
  • 17!Austin read a lot more books
  • 17!Austin was quite good at math
  • 17!Austin picked up new concepts much more quickly, had more fluid intelligence
  • 17!Austin had more slack, ability to try out new things
  • 17!Austin had better empathy for the struggles of young people

This last point is a theme in my all-time favorite book, Ender's Game - that the lives of children and teenagers are real lives, but society kind of systematically underweights their preferences and desires. We stick them into compulsory schooling, deny them the right to vote and the right to work, and prevent them from making their own choices.

What We Owe the Past

Thanks - this comparison was clarifying! The point about past people being poorer was quite novel to me.

Intuitively for me, the strongest weights are for "it's easier to help the future than the past", followed by "there are a lot of possible people in the future", so on balance longtermism is more important than "pasttermism" (?). But I'd also intuit that pasttermism is under-discussed compared to long/neartermism on the margin - which is basically the reason I wrote this post at all.

Market Design Meets Effective Altruism

Yup, I think that should be possible. Here's a (very WIP) writeup of how this could work: https://manifoldmarkets.notion.site/Charity-Equity-2bc1c7a411b9460b9b7a5707f3667db8
