I think the axis of Imaginary Time has been entirely neglected. It is time chauvinism to prefer one dimension of time over any other.
This post claims the financial system could collapse due to Reasons. I am pretty skeptical but haven't looked at it closely. Signal-boosting due to the low chance it's right. Can someone else who knows more finance analyze its claims?
Is this in-person or virtual? (I haven't clicked on the link yet.) Edit: I found my answer, but leaving this for others' benefit.
I am seeking funding so I can work on my collective action project over the next year without worrying about money so much. If this interests you, you can book a call with me here. If you know nothing about me, one legible accomplishment of mine is creating the EA Focusmate group, which has 395 members as of writing.
The URL has a period at the end that needs removing. :)
DonyChristie (a programmer who already built a prototype website)
I don't know if I'm enough of a programmer yet to be called one. That Google Site was just a quick attempt at a proof-of-concept and explanation of my thoughts back in June.
The trappings of organized religion are a hollow shell of the mystic states at their core. Make sure the texts you focus on constitute the heart of that system's spiritual practice.
This was something I needed, thank you!
Do you ever feel this?
It's terrifying to really begin building an organization, especially one with as grand an ambition as saving the world, with a good chance of failure from any number of directions.
And to wonder... Am I taking the right action with this choice? Is this even the right choice to be focusing on?
Past me precommitted to work on this for a year for a reason. He knew I would face self-doubt.
Knowing that of all the sources of failure, the biggest ones are endogenous.
And it sucks, going through a metaruminatory loop, knowing that I can't just fix my errors. That my own awareness of my bugs is itself an impediment.
To be uncertain whether the uncertainty is the kind to accept, or the kind to change.
To be frozen in fear, imagining others observing my frozenness and feeding back to me that this is unacceptable if I want to strive to perform at the tempo that they feel confident is symbolic and symbiotic of progress.
I am a longtermist. Moreover I am a ponderer, a dilettante, an explorer. Yet I am also supposed to "move fast and break things". I need to be hypercompetent in 47 different ways, yet I need to expose my incompetency to learn how to be competent.
And the Pointed Questions from Projections Of Mine.
"What's your plan?"
Well, uh, like, it's a fractal ball of wibbly wobbly stuff. Very ambitious endgoal as compass. I have a very clear plan but it's not descriptively legible to you and it very quickly decomposes into a bunch of question marks.
"How is your thing different from X?"
It's, uh, it's not all about X. That's just a necessary utility to start with that I want to play with partly because it's aesthetically interesting--
"Things aren't working at this pace, you should take on this collaborator for increased motivation and success."
Well, how will that result in 20 years? Are all of their incentives aligned? Cofoundership is like a marriage.
"Given you're sharing all of this lack of confidence, maybe it's a sign this isn't the thing to work on?"
That one's just in my head, I think (unless there's an illegible memory behind it). But I'll respond: the correct answer is probably that I would feel the same regardless of what I was working on, once I took the thing seriously enough to start feeling these feelings when hitting roadblocks. The hypothetical asker is probably typicalminding from their own differing psychology.
It's also highest-impact to work on uncertain, counterfactually neglected projects! Probably. In my worldview, at least.
"Is it really neglected though? What if there are competitors better than us? There are more competent organizations already out there, aren't there? Shouldn't we just go work for them?"
Uh. Well. I mean. I don't know. Can I work for another human being? Typically not? Most people can't take most jobs though, right? Neither of us knows how to fly a jet plane.
"That's a problem then. You should fix what's keeping you from getting a Real Job."
Oh...kay? Like I haven't debugged bits and pieces of that? And why would it matter? The world is burning and if you want to stop that you have to git gud at things that are related to putting the fire out.
"You should just go to college."
And put off working on important things for years? How am I going to learn more than through a startup? The option palette that comes to mind for you is conveniently shaped from a high-level ghost perspective that doesn't take into account that I am in the territory, not the map, and am navigating trailhead by trailhead. Your statement has no skin in the game. It sounds like you're saying you're not confident in my ability to bite off and chew high-variance objectives. (Maybe everyone in the process of constructing success gets shit-tested by people with bad advice.)
"My point is to do a scoped-down version of the thing you want in a training environment with plenty of slack."
Which is... sort of what I'm doing, with my R&D and slow MVP-building?
"But you should speed up and go faster."
Wha--? But you just said--
"Work on a different side-project that will make more money faster."
That's Goodharting! That's Mara and/or Moloch! Why the hell would you think that's more impactful than directly working on a thing of actual value rather than perceived value?
"Well you need money to comfortably work on an altruistic project."
So then shouldn't I... ask for fundraising?
"Justify why you think you deserve fundraising over ALL the other effective altruists asking for it, who are clearly more competent than you and have way more stuff on their resume."
I... okay. I'll eke out as small an income as I can to survive.
"You should get funding if this is a serious project. Also where is your website, why aren't you writing a nicely-worded whitepaper, where is your Github repo with code you've written, where is your stamp of approval from Prestigious Institution? Where the hell is your funding, how can I take this seriously if no one else has confidence in you enough to fund it?"
Well I have my Patreon...
"Are you really providing enough benefits to the world to justify that? Haven't you bought too many Taco Bell burritos, with meat in them to boot? Is it okay that your Patreon has grown statistically bigger than most people's yet you don't slave to provide artistic compensations like they do?"
Well I was just applying for foodstamps during the previous Focusmate session earlier...
"Do we deserve foodstamps!? We are taking from the collective coffer. We are privileged and if we're taking benefits then we're lazy, good-for-nothing--"
YES. WE DO. SHUT UP. WE ARE WORKING ON SECURING CIVILIZATION'S CONTINUED EXISTENCE. I THINK WE HAVE JUSTIFIED THAT RELATIVE TO OTHER BENEFICIARIES.
"Isn't that pretty entitled of you to assume that your project is 99.99% more valuable than anything else? Your ego seems pretty involved in this."
Yes. I am pretty emotionally invested with the mission. Critiques of it are bucket-errored with attacks on my survival. Perhaps this is a crux of my insecurity. I feel I have to succeed at saving the world and therefore this project. I would like to be more dispassionately objective about the situation.
"Why not just maintain a portfolio of projects?"
Which ones, of the 20-50 ideas I have? Where does the buck stop in one's decision to invest various amounts of resources in different things? Aren't we supposed to focus on high-potential things and Kelly bet with our resources?
(The solution actually I think is that you can multitask if it's constructing a vertical of synergistic components.)
Why is failure so bad anyway? We just keep trying until we hit a home run; the costs of swinging are basically nil, all perceptions otherwise notwithstanding.
Few want to hear vulnerable talk such as this, at least in my broader culture. Evolutionarily, we don't want to have leaders who are losers, or we will, well, lose in the zero-sum games if we are part of their coalition. We want certain answers to important questions. Even if it's a lie, as long as it's confident and leads to strong coordination, and we believe others believe it even if we don't, then we'll accept lies from strong leaders who don't apologize for their bullshit. I find the levels of intentionality fascinating as a lens into understanding much social behavior.
Well, my serotonin isn't high enough, bucko. If you ask me anything skeptical I'll basically assume I'm about to be ostracized from the tribe forever for my flagrant stupidity.
I suppose therefore my only recourse is to register my Forever Incompetency in the face of all possible agents. There is always a human or an AI that is better. There is always a more advantaged comparator. I'm never not going to be this way against the biggest challenge I can, so I might as well get used to it and half ass it with everything I've got.
None of this is to say I don't have plenty of confidence, or longterm grit. I just wanted to get this out there before I was tempted to make progress reports look shinier than they are, constantly adjusting the rough draft to make it look nice and unimpeachable, a possibility which becomes so costly that I end up not writing it.
We aren't in as many zero-sum games as we think. I think there is a fierce, blazing positive-sum game ahead of us, ready to be built. On the Ethereum blockchain, naturally. ;)
I have made quite a bit of progress in the past month, subjectively speaking. Just don't ask me to quickly justify that statement in a few words. :)
And since I notice I didn't Declare it in the previous post, I will note:
I am committing a year of my life to making this work.
If this post resonated with you in some way and you want to talk, you can book a call here.
Better collective decisionmaking could lead a group to cause more harm to the world than good via entering a valley of bad decisionmaking. This presumes that humans tend to have a lot of bad effects on the world and that naively empowering humans can make those effects worse.
For example, a group with better epistemics and decisions could decide to take more effective action against a hated outgroup. Or it could lead to better economic and technological growth, leading to more meat eating or more CO2 production.
Humans tend to engage in the worst acts of violence when mobilized as a group that thinks it's doing something good. Therefore, helping a single group improve its epistemics and decisionmaking could make that group commit greater atrocities, or amplify negative unintended side effects.