Right, I had similar thoughts.
The desert hitchhiker: my intuition here is that if you are completely rational, you realize that if you don't believe you will pay later, you won't get a ride now. In that sense the question feels similar to going to the store: the clerk says you have to pay for that, you say no I don't, they say yes you do, you say no, really, you can't make me, and they say yes I can. At this point, you pay if you are rational. The only difference is that in this case you don't actually have to pay; you just have to convince yourself that you are going to pay.
The same can be said for the firefighting example if you know they have a lie detector. Once you know you can't lie, this simplifies down to a non-temporal problem IMO, except that you don't actually have to change your brainstate to make yourself help; you just have to convince yourself that that is the brainstate you have.
For Kate the writer, it feels like she isn't actually being selfish, just not thinking long term. Would she really quit writing, or just not write as much?
Schelling’s answer to armed robbery: Is bluffing irrational? Only when the costs outweigh the gains. If bluffing is rational but you are too scared to bluff, simply change your brain to be less scared :).
The alien virus
I'm confused: so the virus makes us do good things, but we don't enjoy doing those things? Are we being mind-controlled? What does it feel like to have this alien virus?
It seems like the claim is more selfish = greater potential valence.
Humans are mostly unique in that we both have utility and can profoundly influence others' utility. Hence there is an equilibrium past which, as consequentialists, we would need to shift our worldview toward being selfish (but we are not close to that equilibrium IMO, since future humans plus animals probably have much more potential utility than we do).
If there is one human and one dog in the world that don't get the virus (and let's say they live forever), and we say the dog has up to 2 potential units of utility and the human has 0 when unselfish and 2 when selfish, the virus should regulate my behavior to switch between being selfish and unselfish to maximize the sum of the utility. I guess you might run into problems of getting stuck in local equilibria here, though.
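The switching idea can be put in a few lines of Python. The per-period framing and the exact dog payoff when the human is selfish are my own assumptions, not from the post:

```python
# Toy model of the virus choosing the human's mode each period.
# Assumed per-period payoffs (hypothetical numbers): a selfish
# human gets 2 utils while the dog gets 0; an unselfish human
# gets 0 utils while the dog gets its full 2.
PAYOFFS = {
    "selfish":   {"human": 2, "dog": 0},
    "unselfish": {"human": 0, "dog": 2},
}

def total_utility(mode):
    """Summed human + dog utility for one period in a given mode."""
    return sum(PAYOFFS[mode].values())

def best_mode():
    """The mode a sum-maximizing virus would pick for one period."""
    return max(PAYOFFS, key=total_utility)
```

With these symmetric numbers the two modes tie at 2 utils per period, so the virus is indifferent; it's only once payoffs vary over time (e.g. diminishing returns to staying selfish) that switching behavior, and the risk of getting stuck in a local optimum, actually shows up.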
Also, I enjoyed the post a lot; thought experiments are always fun.
The link for the trustworthy AI wasn't broken for me? https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Use-of-AI-by-the-Federal-Government
But unsurprisingly, it mostly seems like they are talking about bigoted algorithms and not singularity.
However it did link this:
Find their abridged 2021 report here:
Personally, this looked more promising than anything else I had seen. There was a section titled "adversarial AI", which I thought might be about AGI, but upon further reading it wasn't. So this also appears to be in the vein of what Ozzie is saying. However, it seems they have events semi-frequently; I think someone from EA should really try to go if they are allowed. The second link is the closest chapter in the report to AGI stuff if anyone wants to take a look, though again it's not that impressive.
And Also I found this: https://www.dod-coe4ai-ml.org/leadership-members
But I can't really tell if this is the DoD's org or Howard University's; it seems like they only hire Howard professors and students, so probably the latter.
Closest paper I could find from them to anything AGI related: https://www.techrxiv.org/articles/preprint/Recent_Advances_in_Trustworthy_Explainable_Artificial_Intelligence_Status_Challenges_and_Perspectives/17054396/1
https://www.ai.gov/ What do you make of this?
Fair point. First let me add another piece of info about Congress: "The dominant professions of Members are public service/politics, business, and law."
Now on to your point.
I think there is also more to say about the variety of reasons people feel more comfortable giving their input on economic, social, and foreign policy issues (even if they have no business doing so), which could lead to leaders naturally trending toward dealing with those issues. But that is a much more delicate argument that I don't feel comfortable fleshing out right now.
I think aogara's point above is reasonable and mostly true, but I don't think it fully explains the discrepancy. This is incredibly skewed because of who I associate with (not all of my friends are EAs, though), but anecdotally I think AGI is starting to gain some recognition as a very important issue among people my age (early 20s), specifically those in STEM fields. Not a lot, but certainly more than it is talked about in the mainstream. Let's be real, though: none of my friends will ever be in the military or run for office, nor do I believe they will work for the intelligence agencies. My point is that, in addition to age, we have a serious problem with under-representation of STEM in high-up positions and over-representation of lawyers. It would be interesting to test the leaders of various government departments on their level of computer science competency/comprehension.
"The average age of Members of the House at the beginning of the 117th Congress was 58.4 years; of Senators, 64.3 years."
I have the same feeling. I have an aversion to utility tiling as you describe it, but I can't exactly pinpoint why, other than that I guess I am not a utilitarian. As consequentialists, perhaps we should focus more on the ultimate ends, i.e. aesthetically how much we like the look of potential future universes, rather than on their expected utility. E.g. Star Wars is prettier to me than an expansive von Neumann probe network, so I should prefer that. Of course, this is just rejecting utilitarianism again.
"The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change."
If you're a total utilitarian, you don't care about these things except as tools for utility. By the structure of the repugnant conclusion, there is no amount of appreciating life that will make the total utility in the smaller world greater than the total utility in the bigger world.
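A toy calculation (the numbers are mine, purely illustrative) shows the structure: however high the per-person utility in the small world, a large enough population of barely-worth-living lives has a greater sum.

```python
# Hypothetical numbers: a small world of 1,000 very happy people
# vs. a huge world of 10 billion people with lives barely worth living.
small_world_total = 1_000 * 100.0         # 100 utils per person
big_world_total = 10_000_000_000 * 0.01   # 0.01 utils per person

# Total utilitarianism compares only the sums, so the big world wins:
print(small_world_total, big_world_total)  # 100000.0 vs 100000000.0
```

And raising the small world's per-person utility to any fixed value doesn't help: the repugnant conclusion's construction just picks a correspondingly bigger population.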
I see your point, but my response is that I don't need historians to study history. Again, you keep saying that history is useful; I'm not contesting that (though it seems like you may value it more than I do). I'm contesting that the way history is taught in the classroom is specifically useful. I've personally found reading macro-history-type blogs and doing very general overviews on Wikipedia to be more useful than taking specific courses on a topic in school, in terms of understanding my place in the world/trajectory of the world. You say historians are not supposed to be predictive. That is literally my point. If historians are just a source of data, what makes a historian/history different from Wikipedia in any real sense, outside of the motivation to actually do the material because of grades? Why would I take a history class that adds no value over reading the sources, when I could have a professional writer coach me on writing skills instead?
Again, how do you use historical data when attempting to predict things?
Take, for example, predicting which politician wins some election. You might use historical data on how previous elections went to make a prediction (hopefully your model isn't based purely on historical data with no account of how things have changed). However, it just doesn't seem like taking academic history provides you with anything here. Maybe historians are the people who combed the primary sources so that the data is on the internet in the first place, but absent them having a monopoly on that data, I'd trust a CS/rationalist type more to use it in a useful way. Historians will probably claim some story about why something happened; IMO that is antithetical to what we are trying to do here, unless that story is more predictive.
Again, if some history professor at your school teaches in a really quantitative way, or teaches a class about large-scale historical trends, that seems like it could be useful, but that has not been my experience taking history classes.
To be clear, I think history is important. My point was that I don't believe college history classes are the best forum for learning history, or the aspects of history that matter for prediction. Also, to reiterate: if history as taught by academics is so important for prediction, shouldn't we expect academic historians to be the best forecasters? (To be fair, maybe they are; I'm not an expert on who the best forecasters are, but I kind of assume it's going to be CS people/rationalists.) My comment is currently sitting at -7, but no one has even contested that point or said why it doesn't make sense. Also, I condone taking macro classes.