niplav

300 · Joined Jun 2020

Comments (50)

epistemic status: I saw the best minds of my generation

Limply balancing mile-deep cocktail glasses and clamoring for laughable Thielbucks on the cramped terraces of Berkeley,
who invite you to the luring bay, promising to pay for your entire being while elegantly concealing the scarlet light evaporating from their tearducts and nostrils,
who down 3¾ bottles of Huel™ and sprint to their next fellowship or retreat or meetup or workshop, in the pore-showingly lit hallways with whiteboards and whiteboards and whiteboards and whiteboards,
who want to crown random nerds with aureal sigmas fished from the manifold crevices of self-denying deities,
who write epics of institutionalized autists bent on terraforming these overpopulated hypothetical hells,
pointing out dynamic inconsistency in the comments and melting hectoton steel-marbles around civilizational recipes,
who improve their aim hurling yeeting philosophical tomes at unsuspecting passersby and ascend urban lampposts to loudly deduce their mechanical liturgies,
who cut down oaks and poppies in the parks to make oil and space for those ones yet to come and who hunker down in fortified coalmines in Montana and the Ruhrgebiet, propelled into the rock by an Ordite reactor, a Bostromite dynamo,
who excitedly pour & plaster tabulated exponents into webscripted canisters just below the teal-tinted ceiling,
who can name the 11 AI forecasting organisations and the 4 factors of successful nonprofits and the 7 noble ways of becoming more agentic and might even find Rwanda on a map not made in Silicon Valley,
contemplating hemispherectomies to purify their nascent idealism on the verge of a hope-ash London dawn,
who catch a feral heart in the garden behind the A100 rack and save it into a transparent domicile, injecting it with 17000 volts to illuminate all the last battery cages equally,
who empty out their pockets with uncountable glaring utilons onto innocent climate activists, promising to make them happy neutron stars one day,
Microscopically examining the abyssal monstrosities their oracles conjure up out of the lurching empty chaos,
who fever towards silver tendrils bashing open their skulls and eating up their brains and imaginations, losslessly swallowed into that ellipsoid strange matter gut pulsing out there between the holes

May I ask why you started by learning category theory?

As far as I've heard, learning category theory makes the most sense if one already knows a lot of mathematics, because it establishes equivalences between different parts of mathematics. I think humans learn somewhat better going from examples to abstract patterns rather than the other way around, so I'd personally have put category theory relatively late when learning mathematics.

But maybe your mind works differently from most humans' in that regard?

Updating existing content is great :-D

That sounds promising! I might get back to you on that :-)

The ones that come to my mind are Momentum, Gravity well, Embedding and Pulsar.

But you might want to contact Naming What We Can for further suggestions (maybe you could even get "Constellation" or "Lightcone" and they get another name!)

Another way to do this is the way the rationality community does it: its highest-status members are often pseudonymous internet writers, sometimes with no visible credentials and sometimes with active disdain for credentials (following the observation that argument screens off authority).

Gwern has no (visible) credentials (unless you count the huge & excellent website as one), Yudkowsky disdains them, Scott Alexander sometimes brings them up, Applied Divinity Studies and dynomight and Fantastic Anachronism are all pseudonymous and probably prefer to keep it that way…

I think it's much easier to be heard & respected in the EA community purely through online writing & content production (for which you "only" need intelligence, conscientiousness & time, but rarely connections) than in most other communities (and especially academia).

You seem to have abandoned Ergo. What were the reasons for doing so, and what would be the maximal price you'd be willing to pay (in person-consulting-hours) to keep it maintained?

I'm asking because I have worked on a tentative prototype for a library intended to make it easier to deal with forecasting datasets, which, if finished, would duplicate some of the work that went into Ergo, and I was considering merging the two.

epistemic status: Borderline schizopost, not sure I'll be able to elaborate much better on this, but posting anyway, since people always write that one should post on the forum. Feel free to argue against. But: Don't let this be the only thing you read that I've written.

Effective Altruism is a Pareto Frontier of Truth and Power

In order to be effective in the world one needs to coordinate (exchange evidence, enact plans in groups, find shared descriptions of the world) and to interact with hostile entities (people who lie, people who want to steal your resources, subsystems of otherwise aligned people who want to do those things or engage in public relations or zero-sum conflict). Doing both often requires trading off truth for "power" on the margin, e.g. by nudging members to "just accept" conclusions believed to be a basis for effective action (since making elaborate arguments common knowledge is costly, and agreement converges only slowly, to within some ε, with the number of bits of evidence shared), by misrepresenting beliefs to other actors to make them more favorable towards effective altruism, or by choosing easily communicable Schelling categories that minmax utility to the lowest-bounded agents.
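(As a toy illustration of the "agreement converges slowly with bits of evidence shared" point, and not anything from the original comment: a minimal sketch in which two Bayesian agents with different priors over a coin's bias update on the same shared stream of flips. The gap between their posterior means shrinks only gradually as more evidence bits become common. The coin, the priors, and all numbers are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.01, 0.99, 99)  # candidate biases for the coin

# Two agents with different priors: one leans towards a high bias, one towards a low one.
prior_a = grid ** 2
prior_b = (1 - grid) ** 2
prior_a /= prior_a.sum()
prior_b /= prior_b.sum()

true_bias = 0.6
flips = rng.random(1000) < true_bias  # shared stream of evidence bits


def posterior_mean(prior, data):
    """Posterior mean of the coin's bias after updating `prior` on `data`."""
    heads = int(data.sum())
    tails = len(data) - heads
    # Work in log space so long evidence streams don't underflow.
    log_post = np.log(prior) + heads * np.log(grid) + tails * np.log(1 - grid)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return (grid * post).sum()


for n_bits in (0, 10, 100, 1000):
    gap = abs(posterior_mean(prior_a, flips[:n_bits])
              - posterior_mean(prior_b, flips[:n_bits]))
    print(f"{n_bits:4d} bits shared: posterior gap ≈ {gap:.4f}")
```

The gap starts near 0.5 (pure prior disagreement) and only closes as hundreds of bits are shared, which is the sense in which ε-agreement is expensive for groups.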

On one side of the Pareto frontier one would have an even more akrasia-plagued version of the rationality community, with excellent epistemics but universally hated; on the other, one would have the attendees of this party.

Members of effective altruism seem not to be explicitly aware of this tradeoff or tension between truth-seeking and effectiveness/power (maybe for power-related reasons?), or at least don't talk about it, even though it appears to be relevant.

In general, the thinking that has come out of LessWrong in the last couple of years strongly suggests that while (for ideal agents) there's no such tension in individual rationality (because true beliefs are convergently instrumental), this does not hold for groups of humans (and maybe also not for groups of bounded agents in general, although some people believe strong coordination is easy for highly capable bounded agents).
