Samuel Shadrach's Shortform


Does anyone think that maximising utility = log(population), rather than utility = population itself, comes closer to approximating how humans typically think about creating new happy lives? Same as how finance often uses utility = log(wealth), not utility = wealth.

A world with 10,000 humans seems intrinsically better than a world with zero humans - a lot better than 7 billion versus 7.000001 billion. And not just because the 10,000 can grow back to billions (let's assume they can't), but because they still embody the human spirit. In the same way, 10 million people on earth seems better than 10,000, but not by as much as 7.01 billion is better than 7 billion.
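A minimal sketch of what the log-utility framing says about those comparisons (the natural log and the specific population figures are just illustrative choices):

```python
import math

def linear_gain(a, b):
    """Extra utility if utility = population."""
    return b - a

def log_gain(a, b):
    """Extra utility if utility = log(population)."""
    return math.log(b) - math.log(a)

# 10,000 -> 10 million people, versus 7.00 billion -> 7.01 billion people.
print(linear_gain(1e4, 1e7), log_gain(1e4, 1e7))        # ~10 million extra people, log gain ~6.9
print(linear_gain(7e9, 7.01e9), log_gain(7e9, 7.01e9))  # ~10 million extra people, log gain ~0.0014
```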

This is just psychology about how we think about large numbers when forced to. I think it's because our brain doesn't naturally code values with quantitative metrics; it codes for heuristics ("more people good", "more wealth good"), which then interact with a completely different cognitive module that knows how to quantify things and makes these quick log-like estimates.

Early brainstorm on interventions that improve life satisfaction, without directly attempting to improve health or wealth.

Will compare these on ITN (importance, tractability, neglectedness) afterwards; for now I'm just listing them out:

 - mental health treatment

 - generating optimism in media and culture

 - reducing polarisation and hate in media and culture - could be via laws or content generation or creation of new social platforms or something else

 - worker protection laws, laws that promote healthy work-life balance, culture that does same

 - reducing financial anxiety - could be via job security, UBI or other incentives, appeal for reduction of consumerist lifestyle

 - social environments and co-living spaces designed to reduce loneliness

 - ....

Has anyone tried making a comprehensive map of possible solutions to "saving the world"? There are a lot of problems and a lot of solutions, but I think you can classify them into very broad brackets. I have a map of sorts in my head; I could consider penning it down if nobody has done this yet.

Examples of solution brackets: "scientific innovation", "better institutional decision-making", "voluntary donations", "better individual education", etc.

I have voted for two posts in the decadal review prelim thingie.

https://forum.effectivealtruism.org/posts/FvbTKrEQWXwN5A6Tb/a-happiness-manifesto-why-and-how-effective-altruism-should

9 votes

https://forum.effectivealtruism.org/posts/hkimyETEo76hJ6NpW/on-caring

4 votes

These seem to me like perspectives I strongly agree with, but not everyone in the EA community does.

Anyone here wanna suggest a good static site generator for my blog?

I can write code if I absolutely have to, but would prefer a solution that requires less effort, as I'm not a web dev specifically.

I don't want something clunky like WordPress; I like gwern.net's philosophy of not locking into one platform. I've used pandoc markdown and it seems cool.

This is literally the main thing I'm procrastinating on that's keeping me from the next step in my EA career (make blog -> post ideas and work -> apply for summer internship).

[This comment is no longer endorsed by its author]

https://github.com/daattali/beautiful-jekyll

Thank you for this, I found jekyll + github pages easiest to use too :)

Random idea: Social media platform where you are allowed to "enter" a finite number of discussion threads per day. Threads can only be read if you enter them, until then you just see the top-level discussion. The restriction can be hard-coded or enforced indirectly via social norms (like you are supposed to introduce yourself when you enter a thread).

Basically explores how public discussion could transition to semi-private. Right now it's usually public versus (or transitioning to) fully private between a tiny number of people. But semi-private is what happens irl.
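A toy sketch, purely to make the mechanics concrete - the daily limit, the names, and the data structures are all assumptions I'm making up, not a real platform's design:

```python
from dataclasses import dataclass, field

# Hypothetical daily limit on how many threads a user may "enter".
DAILY_ENTRY_LIMIT = 3

@dataclass
class Thread:
    top_level_post: str
    replies: list[str] = field(default_factory=list)

@dataclass
class User:
    name: str
    entered_today: set[int] = field(default_factory=set)

    def enter(self, thread_id: int) -> bool:
        """Spend one of today's entries on a thread (hard-coded restriction here;
        it could instead be enforced by social norms, e.g. introducing yourself)."""
        if len(self.entered_today) >= DAILY_ENTRY_LIMIT:
            return False
        self.entered_today.add(thread_id)
        return True

    def view(self, thread_id: int, threads: dict[int, Thread]) -> list[str]:
        """Threads you haven't entered only show their top-level post."""
        thread = threads[thread_id]
        if thread_id in self.entered_today:
            return [thread.top_level_post] + thread.replies
        return [thread.top_level_post]
```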

What is the minimum network effect or benefits that a nation must have before it can mandate a tax to fund public goods? And not end up pushing everyone to leave. Can this effect be achieved purely virtually? (No physical land or services)

P.S. Just realised not all public goods are equal. City/state/national public goods (like roads) get funded more easily than global ones (like a carbon tax) for this reason.

Also this kinda makes public goods excludable. Pay tax or else leave the city/state/nation.

UNFILTERED THOUGHTS

Official posts might be coming later. Or never. I may never have enough epistemic certainty to post on this site lmao. And nobody looks at shortforms. Yay me.

 

tl;dr: focus on social and economic incentives to design competitive systems that reduce competition.

Idk if I'm really accepting very critical comments yet, this is a work in progress. But no harm if you wanna post them anyway, worst case I ignore them.


What are we trying to do here?


 

Create competitive systems that reduce competition and shift the status quo.


 

Competitive = some fraction of people should actually find these systems desirable, and they should not require the approval of govts and corporations to implement them. Govts and corporations are suboptimal and may not recognise the value of your system.


 

Reduce competition = if the system wins, it should have better properties than the existing ones, in how the people inside it interact and in how subsystems and sub-institutions are created.



 

On Incentives


 

People’s desires are governed by Maslow’s hierarchy. Except there is no hierarchy; it’s just basic needs and sociopsychological needs, both equally important.


 

Sociopsychological needs are a huge set that deserves further analysis. For instance, the desire to help others and the desire to have higher social status than one’s neighbour both belong in this bracket. Once this is done, one can think about how social incentives and penalties can be engineered. Social incentives and penalties are essentially the moral judgements, social status, praise, etc. that people confer on each other. Maslow, for instance, classifies sociopsychological needs into deficiency and growth needs depending on whether motivation increases or decreases as the need is fulfilled. I have no clue if this classification makes sense - further analysis needed.


 

People are driven both extrinsically and intrinsically. Extrinsic = make money which then lets you do X, intrinsic = get X directly.


 

Intrinsic motivation could fulfill sociopsychological needs

Extrinsic motivation could fulfill basic or sociopsychological needs.


 

In other words, everyone needs to earn money to fulfill basic needs in the current system. Not many people get basic needs for *free*, and not many get them purely in exchange for the social rewards they can confer on others.


 

---


 

On Value capture


 

Value capture is relevant to extrinsic motivation. People may want the promise of a certain minimum X amount of money if they succeed at doing something that is useful for society, in order to want to try.


 

In some industries X is far higher than the minimum required to motivate people - some companies, and more specifically their C-suites, fill this bracket. In others, such as research, X is too low (even zero) and you don’t get enough extrinsically motivated people, which means only those who are intrinsically motivated end up working there.


 

X just emerges naturally as very different values in very different classes of problems. This is arbitrary, not entirely “fair”. The problem is that allowing anyone to centrally mandate values of X for different industries gives them a lot of power and responsibility: responsibility to deeply understand those industries, to be competent, and to want to be honest. Govts aren’t always good at this.


 

Honesty is intrinsically motivated, in some humans more so than in others. A govt or judiciary with no intrinsic motivation for honesty will fall apart, i.e., all systems that make deliberate use of extrinsic motivation may need a core layer of intrinsically motivated actors to survive. Even neanderthals had intrinsic motivations; it is foolish not to take advantage of this when designing systems - just set wide error bands around how many honest people exist and how honest they are.


 

Assuming govts are capable of some of this, they do in fact try to fix markets - hence the spectrum of how socialist they are.


 

A lot of this fixing happens on the consumer side, i.e., making sure consumers do not overpay for basic needs. That is an indirect way of reducing the amount of profit X the producer can make, but it is not directly optimising the producer’s side of the equation and how motivated producers are.


 

Fixing happens in anti-trust lawsuits.


 

Fixing happens in patents, where govts try to determine the optimal number of years a patent should be valid, such that there is sufficient but not excessive value capture by the producer.


 

All this stuff is probably called consumer and producer surplus in economic theory, I should go read that stuff when I’m more intrinsically motivated :p


 

---


 

Fungibility can be evil

Coordination purely on extrinsic motivations can default to evil, because there is no longer coordination on intrinsic motivations


 

Some thoughts on non-fungible stock and money


 

As a stockholder, assume I refuse to cross some moral line in order to make more profit. I will be misaligned with other board members who are in fact willing to cross that line. Now let’s say someone willing to cross that line wishes to buy my stock. They will likely value the stock higher than I do, so I will end up selling my stock to them. Even if I sell it to someone else, because of the anonymous, fungible nature of stock purchases, the stock will finally end up in the hands of the person who values it most - possibly because they are willing to cross those moral lines to maximise profit.


 

When selling stock I do not get to specify what the next owner of the stock is or isn’t allowed to do. And if I am selling my stock at a price determined by the discounted cashflow (DCF) estimate of crossing the moral line, I’m as good as enabling the moral line to be crossed in the first place. (Although, tbh, I’m enabling it no matter what price I sell at, as long as the stock eventually ends up in the hands of whoever has the highest DCF estimate.)
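A toy DCF comparison, with made-up cashflows and discount rate, just to make the mechanism concrete - the buyer willing to cross the moral line projects higher cashflows, so their DCF estimate (and maximum bid) is higher:

```python
# Hypothetical numbers, for illustration only.
def dcf(cashflows, discount_rate=0.10):
    """Discounted cashflow value: sum of CF_t / (1 + r)^t."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows, start=1))

cashflows_without_crossing = [100, 100, 100, 100, 100]
cashflows_with_crossing = [130, 130, 130, 130, 130]  # extra profit from crossing the moral line

print(round(dcf(cashflows_without_crossing)))  # ~379: what the scrupulous holder thinks the stock is worth
print(round(dcf(cashflows_with_crossing)))     # ~493: what the line-crosser is willing to pay up to
```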


 

Maybe it is my moral duty to sell my stock to someone who will respect the same values I do. There should be a way to codify this as a legal contract. Obviously I will get a lower price for the stock if I do this. It also reduces the fungibility and liquidity of my stock.


 

The same can apply to money at large. Once I pay someone money in return for a good, there is no way for me to specify what they do with that. They could donate it to charities or burn some finite good whose real value is not priced in. Money is more liquid and fungible than anything you can purchase with it.


 

Maybe this is a problem, and the solution is for me to convert my money into money-with-obligations (MWO): I give people tokens that can only be spent on some finite set of classes of things I deem morally acceptable. Obviously the purchasing power of my money reduces if I do this, and I am personally paying to restrict the next person from a certain class of actions. However, this reduction now depends on who the recipient is; it is not a universal fungible reduction. Someone who anyway wants to do morally good things with the money-with-obligations will be more willing to accept it as payment, compared to someone who wants to do morally questionable things with it. Even the person who wants to do morally good things with it may deem it less valuable than money (without obligations), though, because they can further only pay it to other people who accept the obligations without a discount.


 

This is assuming the obligations are permanent, no matter how many times the MWO changes hands. Then the (local) exchange rate between MWO and money will carry a discount based on the expected number of times it changes hands. An economy full of moral people will assign no discount. An economy that is 50% moral and 50% immoral means even the moral people would assign a discount, because they may have desires they want to pay the immoral people for.


 

Assume instead that obligations are not permanent, and decay after the money has changed hands some pre-defined number of times. (Need to look into ways people will game whatever system counts the number of hand-changes.) Now the discount might be smaller even in societies with a mix of moral and immoral people.
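A toy model of that discount, under assumptions I'm inventing purely for illustration: suppose that for each remaining obligated hand-change, the current holder loses the fraction of spending opportunities represented by counterparties who won't accept the token at par.

```python
def mwo_exchange_rate(share_unwilling: float, remaining_obligated_transfers: int) -> float:
    """Value of 1 unit of money-with-obligations (MWO) in ordinary money,
    under the crude assumption of a fixed per-hop discount."""
    return (1 - share_unwilling) ** remaining_obligated_transfers

print(mwo_exchange_rate(0.0, 10))  # fully moral economy: no discount at all
print(mwo_exchange_rate(0.5, 10))  # 50/50 economy, long-lived obligations: steep discount
print(mwo_exchange_rate(0.5, 2))   # obligations that decay after 2 transfers: much milder discount
```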


 

----


 

Stockholder systems are majority-trample-minority systems; only legal checks and balances restrain them.


 

(Maybe DAOs suck.)


 

This is inevitable. One of the worst cases is when some stakeholders are maximally charitable (say they literally want no profit) and some are maximally profit-seeking. All real-life circumstances are less extreme, but a) there can never be 100% moral value alignment between stockholders, and b) there can never be 100% alignment on the best course of action, even after assuming some set of values, including profit of course.


 

Legal systems try to curtail the worst end of the spectrum. For instance, if you register as for-profit and take investment from VCs, it is illegal for you to vote to convert into a charity that wants no profit, even if you have the majority vote. It is illegal for the majority to vote to distribute profits only among themselves and kick out the minority. Less blatant misalignments are acceptable, and somebody gets the short end of the stick.


 

This fact might also apply to government bodies, election systems, cooperatives and literally every institution that grants formal voting rights, be it one person one vote or one unit of stock one vote. And maybe even bodies without formalised votes. But I haven’t analysed those yet.


 

---


 

On social proximity and circles


 

Sociopsychological need fulfillment is very strongly determined by social circles, a lot of which are local not global.


 

Individuals can change social circles to find those with norms more conducive to their need fulfillment. They might still need to change global norms to ensure sufficient liquidity of people for local social circles with norms they deem desirable. But this is a much weaker requirement, often individuals don’t need to impact global norms. This makes creating new social circles with different social incentives a powerful tool.


 

An example of a social circle would be one where making money is looked down on (a social cost) and some class of intrinsic motivations is rewarded, perhaps those pertaining to virtue ethics.


 

Another circle would be one where making money is okay, but not demanded, and instead the focus is on hedonism - people are happy for others in the circle who are also happy - no matter the means. Here the social rewards and penalties are being conferred accordingly. 


 

Another circle would be one where being richer, or having consumer goods signalled as socially important, is looked up to: those who have these goods earn the envy of those who don’t, and in turn feel pride, which is a positive social reward.


 

Thoughts on cost-effectiveness of voting versus not voting?

It's mostly a fun exercise but meh

Cost: Some amount of time, mental energy, physical effort expended

Benefit: Some marginal increase in probability of the better candidate out of some finite set being elected, and subsequent marginal benefits to you and other people you care about.

How much is the probability increase?

If you assume your voting behaviour is completely private and influences nobody else's voting behaviour, this is very tiny. Assume a population of N.

Assuming the probability of a person voting for a party is constant, the total number of votes expected for each candidate roughly follows a binomial distribution. This means very strong concentration around the mean.

https://www.wolframalpha.com/input/?i=binomial+distribution+10000%2C+0.6

More specifically, if there are a million voters, and the odds of each person voting for candidate A are 0.6, the odds of getting between 590,000 and 610,000 votes are as good as unity.
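A quick sanity check of that concentration claim (a sketch using scipy, with the same illustrative numbers):

```python
from scipy.stats import binom

n, p = 1_000_000, 0.6  # a million voters, each voting for candidate A with probability 0.6

# Probability the vote count for A lands within +/- 10,000 of the mean of 600,000.
prob_within_band = binom.cdf(610_000, n, p) - binom.cdf(589_999, n, p)
print(prob_within_band)  # effectively 1.0 (the standard deviation is only ~490 votes)
```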

However, in the real world we don't know the probability of a person voting for A (which we assumed to be 0.6 here). So you'll have to assume some distribution over that probability too - perhaps a normal distribution. This distribution is what finally matters. For instance, if the probability that (the probability of a person voting for A is 0.6) is 0.3, then the probability of (0.6 of the population voting for A) is also roughly 0.3, because any other value of the per-person probability contributes negligibly to the probability of (exactly 0.6 of the population voting for A).

Idk the notation to formalise this, but it seems intuitive to me.

Anyway, now that you've got a distribution for the expected number of votes, you need to ask the next question - what is the probability that this number crosses some threshold that gets that candidate into power? (Of course this threshold is also contingent on the number of votes the other candidate gets; I'll get to that later, for now assume it is a constant set in stone.)

If the threshold is very close to the mean of your distribution, I'd assume your vote can change the probability of the candidate winning by more than 1/N. Probably not by more than 10/N though. Might be worth calculating. If the threshold is not close to the mean, your impact is less than 1/N, sometimes even less than 1/(10N). Basically, the more sure you are about the result of the election, the lower the utility of your own vote. But either way it lies close to 1/N, or more strictly 1/N' where N' is the number of people who actually voted.
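One way to actually calculate this: with the per-person probability assumed known, your vote only changes the outcome when the other votes land roughly on the threshold, so the change in win probability is approximately the binomial pmf at that threshold. A sketch with made-up illustrative numbers; compare the outputs with 1/N = 1e-6:

```python
from scipy.stats import binom

n = 1_000_000
threshold = 500_000  # assume a fixed 50% threshold, as above

print(binom.pmf(threshold, n, 0.5001))  # threshold close to the mean of the distribution
print(binom.pmf(threshold, n, 0.6))     # threshold ~200 standard deviations from the mean
```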


If you're an impartial utilitarian, a 1/N impact on the probability is sufficient, because the election also impacts N people. Consider a world where yours was the only vote: you could pick which candidate out of the finite set would win, and you'd only have to expend the effort and time of visiting the election booth. Assuming the marginal benefits in your personal life from the better candidate are non-negligible, you would want to vote.

Now if you're an impartial utilitarian, that marginal benefit roughly gets multiplied by N, because everybody else also faces roughly the marginal benefit or harm that you do - or if not you, the "average person". With a bunch of caveats, but there's likely not an order of magnitude of difference between the benefit or harm to two randomly selected people, i.e. the variance in individual harm or benefit across individuals shouldn't be too high.

Humans are not impartial utilitarians of course (even if they want to be), so more analysis is needed.

I'm too lazy to type out the rest and I have an exam, sorry.

Are there any posts on short-term versus long-term costs for poverty alleviation?

For instance, spending $2 per person every 2 years to send mosquito nets to people in Congo, versus a $100* upfront cost to upskill people in Congo so they can earn enough to afford their own nets versus $10* to educate those who can already afford nets on why nets are worth buying.

*these figures are fake but I wonder what the real figures are
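A back-of-the-envelope way to compare them, using the fake placeholder figures above; the 40-year horizon and 5% discount rate are additional assumptions of mine:

```python
def present_value_recurring(cost_per_payment, years_between_payments, horizon_years, discount_rate=0.05):
    """Present value of a cost paid every `years_between_payments` years over `horizon_years`."""
    return sum(
        cost_per_payment / (1 + discount_rate) ** year
        for year in range(0, horizon_years, years_between_payments)
    )

nets_recurring = present_value_recurring(2, 2, 40)  # $2 per person every 2 years
upskill_once = 100                                  # one-time upskilling cost per person
educate_once = 10                                   # one-time education cost per person

print(round(nets_recurring, 2), upskill_once, educate_once)
```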

Is x-risk really more important than building better human alignment tools? (Not to be confused with AI)

People state x-risk as important because if we go extinct, none of the other work matters. But the same goes for institutions - if humans don't find better ways to align with each other, we will inevitably create new scientific capabilities and x-risks faster than we can deal with those x-risks or ensure equitable impact of those capabilities.

Alignment tools could be better institutions, better governments, better markets and incentive structures, better epistemic standards, better moral training, so on and so forth.

I get the criticism that this is hard, because finding a solution on paper is different from being in a position to implement it. But that's just another problem (which might be solvable) - create solutions that are sufficiently appealing right now, i.e., that simultaneously succeed within the malthusian trap and beat it. You want solutions that can both outcompete other existing solutions, and create environments that reduce this competition once they become dominant.

In favour of population reduction
----


Has anyone in EA put forth arguments in favour of reducing population size by having fewer children? Either out of pure individual choice or with incentives from the state.

Consider a population with 100 million versus one with 7 billion. Some thoughts:

  1. Solving coordination problems is hugely important to our long-term survival.
  2. Solving coordination problems is harder the more people you have. We don't have global governance yet, and we have principal-agent problems at every level of government, be it community, district, national or international. A smaller population will be a lot more coordinated.
  3. A smaller population (of 100M people) does not have significantly higher odds of extinction than a larger one (of 7B people). Which means both can eventually create a large number of offspring at some point in the future if desired.
  4. We haven't even solved coordination with 100M people yet; at least we'll get a chance to try.

Cons:

  1. Fewer people to do anything - be it scientific innovation or thinking about coordination problems
  2. How to transition to this society

The really big con is that people are awesome, and 1/70th of the people is way, way less awesome than the current number of people. Far, far fewer people reading fan fiction, falling in love, watching sports, creating weird contests, arguing with each other, etc. is a really, really big loss.

Assuming it could be done, and that it would be an efficient way (in utility loss/gain terms) to improve coordination, I think it would probably go way too slowly to be relevant to the current risks from rapid technological change. It seems semi-tractable, but in the long run I think you'd end up with the population evolving resistance to any memetic tools used to encourage population decline.

I don't actually share the intuition that more people is better, but I can totally see that there are people who do. But yes, as you say, it can be efficient over the long term even for people who really want larger populations.

You're right that it'll take at least 3-4 generations to properly happen, assuming we don't kill people. So some existential risk is not avoided, but we will avoid those risks which are created, say, 2 generations in the future.

> I think you'd end up with the population evolving resistance to any memetic tools used to encourage population decline.

Why do you feel this?

Assuming that some people respond to these memetic tools by reducing the amount of children they have more than other people do, the next generation of the population will have an increased proportion of people who ignore these memetic tools. And then amongst that group, those who are most inclined to have larger numbers of children will be the biggest part of the following generation, and so on.

The current pattern of low fertility due to cultural reasons seems to me to be very unlikely to be a stable pattern. Note: There are people who think it can be stable, and even if I'm right that it is intrinsically unstable, there might be ways to plan out the population decline to make it stable without the substantial use of harsh coercive measures.

But really, fewer people being a really, really bad thing is the core of my value structure, and promoting any sort of anti natalism is something I'd only do if I was convinced there was no other path to get the hoped for good things.

Makes sense that groups who want to have more children will grow bigger. I'm not sure what stable means here though. Even if only some groups choose to have fewer children, that's still net fewer children compared to a world where no group chooses this.

Consider India (1.4B population, 2.2 births per woman) and the US (300M population, 1.7 births per woman). The US population could shrink over generations while India's increases. That doesn't mean Americans will become more susceptible to being convinced by Indians to have more children than vice versa. It doesn't mean India will see a bigger increase in gross or per-capita wealth than the US either.

Eventually countries may hit economic limits where people have fewer children simply because they cannot sustain them economically. Wealth is not being freely distributed to them from richer countries - aid is tiny. Idk how soon these limits are hit though. Maybe once these limits are hit and people are forced to change their behaviour, they might be more amenable to retaining that behaviour even when it later becomes cheaper to have more children.

Also, economic incentives can be controlled by states themselves, as has been shown in China. China deliberately incentivised having fewer children because they figured that's what would bring them more wealth and more prosperity per person.

Pro/anti-natalism in general - I guess let's skip that, it's a whole topic by itself.
