
Puggy Knudson

41 karma · Joined Dec 2021

Posts: 1


Comments: 11

I think there’s a case to be made for exploring the wide range of mediocre outcomes the world could end up in.

Recent history would indicate that things are getting better faster, though. I think MacAskill’s bias towards a range of positive future outcomes is justified, and I suspect you agree with that too.

Maybe you could turn this into a call for more research into the causes of mediocre value lock-in: why we have had periods of growth and collapse, why some regions regress, and what tools society can use to protect against sinusoidal growth rates.

This will be like a March Madness bracket for chess! Come join us.

Thank you for your help, Lorenzo.

A question for the community

Is there a web page which gives rewards for solving problems in philanthropy? I’m imagining something like Fiverr, or a website which lists bounties for problems.

If this hasn’t been made yet, it should be! Imagine someone had a lot of free time, or an organization was looking for a new project to work on. They could go to the CharityBounty website and select a problem to work on.

You could have problems as small as “build a website for our new charity” or as large as “create a medicine which cures this neglected tropical disease”.

Donors would contribute to project ideas like “XYZ Foundation has attached a $300 million bounty for any team which cures malaria” or “XYZ individual is offering $5,000 to whoever can make a spreadsheet which discovers regions likely to be hit by an earthquake”.

Bounty hunters would go on this website and work on projects to solve problems and get money. These bounty hunters could sort problems by difficulty or reward amount.

Metaculus bettors could bet on the likelihood that each bounty would be solved, or on when it would be solved.

The likelihood that a project would be completed would increase as more money was donated towards the award.

The website could list all of the active people working on a project so teams could get formed in real time (and they could work together to get the money).

Everyday people could become grantmakers, and people who are passionate about a particular cause area could target the cause they find most important.

Forums could list and organize knowledge related to each bounty, encouraging potential solvers and lowering the barriers for whoever might end up solving the problem.

There could be grand challenges. Maybe Open Philanthropy creates a major-problems list: “$100 million questions” like “cure Alzheimer’s” or something similarly ambitious. And researchers could list minor problems which their field sees as important steps towards solving a major problem (e.g. find the mechanism which causes Alzheimer’s).

There could even be a bounty attached to creating the bounty list.
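
Purely as an illustration of the mechanics sketched above (donations growing a pot, hunters sorting by reward or difficulty, teams forming around a listing), here is a minimal data-model sketch. Nothing like this exists yet, and every name in it (Bounty, addPledge, sortBounties) is hypothetical.

```typescript
// Hypothetical data model for a charity-bounty listing site (illustration only).
interface Bounty {
  title: string;                              // e.g. "Build a website for our new charity"
  rewardUsd: number;                          // total pledged by donors so far
  difficulty: "small" | "medium" | "grand";   // rough size of the problem
  donors: { name: string; pledgeUsd: number }[];
  activeHunters: string[];                    // people currently working on it, so teams can form
}

// Donors add pledges, which grow the pot (and, per the idea above,
// presumably raise the odds the bounty ever gets solved).
function addPledge(bounty: Bounty, donor: string, amountUsd: number): void {
  bounty.donors.push({ name: donor, pledgeUsd: amountUsd });
  bounty.rewardUsd += amountUsd;
}

// Bounty hunters could sort the listing by reward size or by difficulty.
function sortBounties(bounties: Bounty[], by: "reward" | "difficulty"): Bounty[] {
  const rank = { small: 0, medium: 1, grand: 2 };
  return [...bounties].sort((a, b) =>
    by === "reward" ? b.rewardUsd - a.rewardUsd : rank[a.difficulty] - rank[b.difficulty]
  );
}
```

A real site would of course also need escrow, verification that a bounty was actually solved, and some way to split a reward among a team.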

Yeah, absolutely. It takes planning and discipline, but you can go to the gym after a 10- or 12-hour work day. Occasionally having dry snacks like nuts or Clif bars helps when working 50-60 hours. I like picking up fruit from the store every third day or so.

I think the wheels come off at 70+, and the type of work that can be done for 70+ hours is probably work that isn’t cognitively demanding.

50-68 hours is my sweet spot, where I don’t compromise my diet and I can work out 4 times a week.

I like the emphasis on working hard, and I think working longer hours is good. Something happens when you start working 60+ hours a week where (in my experience) you develop blinders to everything else outside of that work.

For me it becomes the only thing I think about for weeks on end, and it becomes something in my life that I’m subconsciously working on even when I’m not doing the task. Like the mathematician who gets the answer to a proof she’s working on when swimming laps at the pool.

But I’m very, very pessimistic about hard stimulants. Nicotine, caffeine, Adderall, etc. have diminishing returns, and tolerance increases the dose required to get the original stimulating effect. We have heard it before, but it is worth mentioning. I would not advise my bright 16-year-old cousin to become reliant on any stimulant.

Weight lifting is underrated. Consciously placing yourself in positive environments is underrated. Maintaining strong, mutually beneficial relationships is underrated. And eating a wide variety of fruits and vegetables is underrated.

Marketing AI reform:

You might be able to have a big impact on AI reform by changing the framing. Right now, framing it as “AI alignment” sells the idea that there will be computers with agency, or something like free will, or that they will choose to act like a human.

It could instead be marketed as something like preventing “automated weapons” or “computational genocide”.

By emphasizing the fact that a large part of the reason we work on this problem is that humans could use computers to systematically cleanse populations, we could win people to our side.

Proposal: change the framing from “Computers might choose to kill us” to “Humans will use computers to kill us” regardless of whether either potential outcome is more likely than the other.

You could probably get more funding, more serious attention, and better reception by just marketing the idea in a better way.

Who knows, maybe some previously unsympathetic billionaire or government would be willing to commit hundreds of millions to this area just by changing the way we talk about it.

[This comment is no longer endorsed by its author]

Has anyone ever considered the possibility that, if the cyclic theory of the universe is correct, agents could encode messages into the Big Bang seconds before it occurs?

Kind of a bizarre idea, but I wrote up something quick and choppy to get it out. It must have been thought of before. This is all almost certainly incorrect.

Carrick Flynn lost the nomination, and over $10 million from EA-aligned individuals went to supporting his campaign.

So these questions may sound pointed:

There was surely a lot of expected value in having an EA-aligned thinker in Congress supporting pandemic preparedness, but there were a lot of bottlenecks he would have had to get through to make a change.

He would have been one of hundreds of congresspeople. He would have had to get bills passed. He would have had to win enough votes to make it past the primary. His policies would have had to be churned through the bureaucratic agencies, and it’s not entirely clear that any bill he supported would have kept its form through that process.

What can we learn from the political gambling that was done in this situation? Should we try this again? What are the long-term side effects of aligning EA with any political side, or of making EA a political topic?

Could that $10+ million wasted on Flynn have been better used just trying to get EA or longtermist bureaucrats into the CDC or other important decision-making institutions?

We know the path individuals take to get these positions, and we know which people usually get selected to run pandemic preparedness for the government, so why not spend $10 million on gaining the attention of bureaucrats, or on placing bureaucrats in federal agencies?

Should we consider political gambling in the name of EA a type of intervention meant to give us warm fuzzies rather than to do the most good?
