
tobyj

314 karma · Working (6-15 years) · Mile End, London, UK

Posts: 4

Comments: 23 · Topic contributions: 1

I wanted to get some perspective on my life, so I wrote my own obituary (in a few different ways).

They ended up being focussed on my relationship with ambition. The first is below and may feel relatable to some here!

Auto-obituary attempt one:

Thesis title: “The impact of the life of Toby Jolly”
a simulation study on a human connected to the early 21st century’s “Effective Altruism” movement

Submitted by:
Dxil Sind 0239β
for the degree of Doctor of Pre-Post-humanities
at Sopdet University 
August 2542

Abstract
Many (>500,000,000) papers have been published on the Effective Altruism (EA) movement, its prominent members and their impact on the development of AI and the singularity during the 21st century’s time of perils. However, this is the first study of the life of Toby Jolly; a relatively obscure figure who was connected to the movement for many years. Through analysing the subject’s personal blog posts, self-referential tweets, and career history, I was able to generate a simulation centred on the life and mind of Toby. This simulation was run 100,000,000 times with a variety of parameters and the results were analysed. In the thesis I make the case that Toby Jolly had, through his work, a non-zero, positively-signed impact on the creation of our glorious post-human Emperium (Praise be to Xraglao the Great). My analysis of the simulation data suggests that his impact came via a combination of his junior operations work, and minor policy projects but also his experimental events and self-deprecating writing.

One unusual way he contributed was by consistently trying to draw attention to how his thoughts and actions were so often the product of his own absurd and misplaced sense of grandiosity, a delusion driven by what he himself described as a “desperate and insatiable need to matter”. This work marginally increased self-awareness and psychological flexibility within the EA community. This flexibility subsequently improved the movement's ability to handle its minor role in the negotiations needed to broker power during the Grand Transition, thereby helping avoid catastrophe.

The outcomes of our simulations suggest that, through his life and work, Toby decreased the likelihood of a humanity-ending event by 0.0000000000024%. He is therefore responsible for an expected 18,600,000,000,000,000,000 quality-adjusted experience years across the light-cone, before the heat-death of the universe (using typical FLOP standardisation). Toby mattered.
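
(For the curious, the headline figure follows the standard expected-value arithmetic, impact = Δp × value at stake; the total light-cone value below is back-calculated from the two quoted numbers rather than stated anywhere in the thesis.)

$$
\Delta p = 0.0000000000024\% = 2.4 \times 10^{-14}, \qquad
\mathbb{E}[\text{QAEYs}] = \Delta p \times V_{\text{light-cone}} \approx 1.86 \times 10^{19}
$$

$$
\Rightarrow \; V_{\text{light-cone}} \approx \frac{1.86 \times 10^{19}}{2.4 \times 10^{-14}} \approx 7.8 \times 10^{32} \ \text{QAEYs}
$$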

Ethics note: as per standard imperial research requirements, we asked the first 100 simulations of Toby if they were happy being simulated. In all cases, he said: “Sure, I actually kind of suspected it… look, I have this whole blog about it.”

See my other auto-obituaries here :)

I really enjoyed this and found it clarifying. I like the term "deep atheism". I'd been referring to the thing you're describing as nihilism, but this is a much better framing.

I wrote up my career review recently! Take a look. (Also, did you know that Substack doesn't change the URL of a post, even if you rename it?!)

As a counter-opinion to the above, I would be fine with the use of GPT-4, or even paying a writer. The goal of most initial applications is to assess some of the skills and experience of the individual. As long as that information is accurate, any system that turns it into a readable application (human or AI) seems fine, and more efficient seems better.

The information this loses is how someone would communicate their skills and experience unassisted, but I'm skeptical that this is valuable in most jobs (and I suspect it's better to test for these kinds of skills later in the process).

More generally, I'm doubtful of the value of any norms that are very hard to enforce and that disadvantage scrupulous people (e.g. "don't use GPT-4" or "only spend x hours on this application").

I have now turned this diagram into an angsty blog post. Enjoy!

Pareto priority problems

Thanks for this, Akash. I like this post and these examples.

One thing I've found helpful here is to stop talking and thinking about "status" so much and to focus more on acknowledging my own emotional experience. The bullets at the start of this post, for example, are all examples of someone feeling shame.

Shame can be a useful signal about how it would be best for you to act in any given situation, but it often isn't particularly accurate in its predictions. Focusing on concepts like "status" instead of the emotions that inform the concept has, at times, pushed me towards seeing status as way more real than it actually is.

This isn't to say that status is not a useful model in general. I'm just skeptical that it's anywhere near as useful for understanding your own experience as it is for analyzing the behavior of groups more objectively.

Thanks, Michael. I connect with the hope of this post a lot, and EA still feels unusually high-trust to me.

But I suspect that a lot of my trust comes via personal interactions in some form. And it's unclear to me how much of the high level of trust in the EA community in general is due to private connections vs. public signs of trustworthiness.

If it's mostly the former, then I'd be more concerned. Friendship-based trust isn't particularly scalable, and reliance on it seems likely to maintain diversity issues. The EA community will increasingly need to pay bureaucratic costs to keep trust high as it grows.

I'd be interested in any attempts at quantifying the costs to orgs of different governance interventions and their impact on trust/trustworthiness.

Thanks Jeff - this is helpful! 

I don't know who would be best placed to do this, but I imagine it would be really helpful to have more expansive versions of these diagrams, especially ones that make the specific nature of the relationships between orgs clear (i.e. going beyond fiscal sponsorship). A lot of the discussion on this post seems to be speculation about the specific nature of these relationships.

Here is what I imagined:

 

I suspect doing something like this would end up being pretty subjective, it would change over time, and there would be disagreement between the people involved; e.g. things like "strength of strategic oversight" are going to be pretty ambiguous. But the value of attempting to create some common knowledge here seems high given the current level of confusion.

(And alongside increasing trustworthiness, this kind of transparency would also be valuable for people setting up new orgs in the ecosystem. Currently, if you want to figure out how your org can/should fit in, you have to try to build a picture like the one above yourself.)

Thank you for this post! One thing I wanted to point out is that this post talks about governance failures by individual organizations. But EA orgs are unusually tightly coupled, so I suspect a lot more work needs to be done on governance at the ecosystem level.

I most recently worked for a government department. This single organisation was bigger, more complex, and less internally value-aligned than the ecosystem of EA orgs. EA has fuzzier boundaries but, for the most part, functions more cohesively than a single large organisation.

I haven't thought a tonne about how to do this in practice, but I read this report on "Constellation Collaboration" recently and found it compelling. I suspect there is a bunch more thinking that could be done at the ecosystem level.
