All of Samuel Shadrach's Comments + Replies

Linch (7h): No need to un-endorse your comment. Clarification questions are good!
Linch (7h): No worries, glad to help!
How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

I think you're misunderstanding this question. I am not asking how much funding per unit of x-risk reduction is moral in the abstract; I'm asking to get a sense of what the current margin of funding looks like, as a way to help researchers and others prioritize our efforts.

Sorry. I still haven't understood the question though.

When you say "current margin of funding", what is the cost-effectiveness of x-risk reduction being compared against? Is it near-termist donations like AMF? (If so, I feel like trading off units of x-risk reduced against units of near-term suffering prevented is inherently a moral debate.)

Linch (10h): Mostly other xrisk interventions (usually this means extinction risk, or things that are similar to extinction risk, but some people think about things like stable totalitarianism or other forms of non-extinction lock-in).
How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

I'm new to the space so forgive me if I'm getting things wrong.

a) Genuinely curious - do there exist interventions where billions of dollars can reduce x-risk at the rate of $100M per 0.01% with very high certainty? I somehow had the impression that once, say, $1-10B is used, all the high-certainty opportunities are consumed. And when that happens, the uncertainty in the moral debate of "how much funding per unit of x-risk reduction is moral?" would get overshadowed by the uncertainty in the more practical debate of "how much x-risk reduction could this interve... (read more)

[This comment is no longer endorsed by its author]
Linch (12h): a) I don't think "very high certainty" interventions exist for xrisk, no. But I think there exist interventions where people can produce relatively robust estimates if given enough time, in the sense that further armchair thinking and near-term empirical feedback are unlikely to affect the numbers by more than, say, 0.5 orders of magnitude.

I think you're misunderstanding this question. I am not asking how much funding per unit of x-risk reduction is moral in the abstract; I'm asking to get a sense of what the current margin of funding looks like, as a way to help researchers and others prioritize our efforts.

Now in theory, with perfect probabilistic calibration, assessment and coordination, EA should just fund the marginally most cost-effective thing to do until we are out of money. But in practice we just have a lot of uncertainty, etc. Researchers often have a sense (not necessarily very good!) of how cost-effective a few of the projects they are investigating are, and maybe a larger number of other projects, but may not have a deep sense of at what margin funders are sufficiently excited to fund (I know I at least didn't have a good idea before working through this question! And I'm still somewhat confused).

If we have a sense of what the margin/price point looks like (or even rough order-of-magnitude estimates), then it's easier to be actively excited to do research or incubate new projects much below that price point, to deprioritize research much above that price point, and to work hard on figuring out more accurate pricing for projects around that price point.
Scott Alexander – Nobody Is Perfect, Everything Is Commensurable

Agree with most of the post, but I feel like you might be underestimating the cost-effectiveness of advocacy among friends and acquaintances. (And in general, some classes of activist approaches.) Devoting 10-40 hours of your life to converting someone towards EA, or to being less classist, is a pretty good return. (Although I do feel that 1-on-1 conversation works a lot better than reblogging on tumblr.)

Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go?

Thanks for replying. I should probably have spent more time on my answer; I can still delete it if that'll help.

a) I think proportion of bonus to base rate would be the same,

I didn't get the meaning of this.

b) why would this be different if they were on (differing) flat salaries?

Fair enough, but the social incentives can be different. It's possible that nobody in the social group is explicitly telling you to try your best to move into a better paying EA position for the pay or recognition, and creating a competitive environment around it. (And even if you ... (read more)

Linch (1d): I will encourage you to apply, see the landscape, and decide based on that. I'm personally not too worried about the financial status of EAs doing FT work at fairly well-established orgs. I'm at least somewhat worried about people who have career uncertainties, e.g. going from internships/contracts to contracts with some gaps in between, and/or people who go through multiple rounds of rejections, and think people in roughly that position should seriously consider taking up non-EA jobs for a while before coming back to direct work.
Arepo (2d): This is opening quite a can of worms! Personally I'd say you should get industry experience first and only apply for EA jobs if you think you might be uniquely qualified for them, but others here may disagree.
Arepo (2d): I wouldn't delete it, just clarify anything you think was unclear, so we don't have to restart the conversation :)

I mean that compared to a normal flat payment system, where person A got an annual $x salary and person B got, say, $2x salary, in the graduated system you'd just have person A get, say, $0.9x salary and a bonus of between $0-0.2x, where person B got $1.8x salary and a bonus between $0-0.4x. In other words, their relative salaries wouldn't be noticeably different.
Samuel Shadrach's Shortform

I have voted for two posts in the decadal review prelim thingie.

https://forum.effectivealtruism.org/posts/FvbTKrEQWXwN5A6Tb/a-happiness-manifesto-why-and-how-effective-altruism-should

9 votes

https://forum.effectivealtruism.org/posts/hkimyETEo76hJ6NpW/on-caring

4 votes

These seem to me like perspectives I strongly agree with, but not everyone in the EA community does.

Profit maximisation and obligations on shareholders

I didn't understand the 'share-specific obligations' idea. What might such an obligation look like?

People who use the voting rights of those shares can't vote for decisions that violate those obligations.

--

Nobody might want founding members / CEOs to retain a controlling interest in all matters, especially after they exit. The founding members themselves may not want this responsibility once they exit the company. People buying shares may not want it, and regulators may not like it because the agency problem becomes extreme here. (Because here, in the extreme, n... (read more)

Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go?

Imo the implications for social norms need to be thought through. Shifting economic incentives also has a tendency to shift social incentives, norms and behaviours.

An ideal EA is probably past the point where they have any significant personal needs or desires that are unmet - and if given more money they would donate it because they believe someone else needs it more than they do. Of course in practice idk how many that holds true for (including myself).

Your proposal might create status dynamics where people look at their coworkers' pay and think they need to ma... (read more)

[This comment is no longer endorsed by its author]
Arepo (2d): 'An ideal EA is probably past the point where they have any significant personal needs or desires that are unmet - and if given more money they would donate it because they believe someone else needs it more than they do. Of course in practice idk how many that holds true for (including myself).'

This seems meaningfully false for all the people I have known working at EA nonprofits, except perhaps the founders of the biggest-name ones. The salaries range from 'pretty damn stingy' to 'lower end of what you'd expect for market rate'. Obviously they're enough to live on, so what you say is true in an 'at least halfway up Maslow's hierarchy of needs' kind of way, but they're certainly not enough to dispense with all financial worries.

'Your proposal might create status dynamics where people look at their coworkers' pay and think they need to make more - not in an absolute sense but relative to their coworker.'

I might be missing something here - how would you get this effect if all the staff were compensated this way? We might eventually end up paying some staff more than others, but a) I think proportion of bonus to base rate would be the same, and b) why would this be different if they were on (differing) flat salaries?
Scihub backups for open research

Yup this makes sense.

Whoever works on this could probably accept cryptocurrency deposits directly from individual donors. And EA orgs could possibly add this information to their websites with a notice like "do as you see fit". Would that model work?

Wikipedia editing is important, tractable, and neglected

I'd like to link my post on maintaining Scihub backups here, if that's alright.

Scihub is probably at least 1% as impactful as Wikipedia, and shouldn't take more than $1M to save forever (the actual figure is likely lower).

EA orgs should integrate with The Giving Block for cryptocurrency donations

This is useful info. The button HTML embed is a nice touch.

But this still doesn't satisfy the requirements I'm suggesting, namely 1 (no website visit) and 4 (can be used by DAOs and smart contracts), and to some extent 3 (anonymous).

I am personally excited by the potential use cases with DAOs and smart contracts. Essentially the list of actions under "Reason 4: One can experiment ...". There is a difference between opening a yield-earning app that has a button that takes you to a different site that requires you to fill in an e-mail address and donate, versus th... (read more)

SamDeere (4d): We do, although it looks like it's not showing on the site. I'm just fixing the issue, will update when it's there.
Announcing EffectiveCrypto.org, powered by EA Funds

This is great! I was going to suggest a UI on your earlier post, but refrained because it felt outside of the window of things that can be done easily. I will share this with people I know.

I'd still like to link my post here on features beyond this that may be worth supporting at some point, for anyone else here.

Samuel Shadrach's Shortform

Thank you for this, I found Jekyll + GitHub Pages easiest to use too :)

Samuel Shadrach's Shortform

Can anyone here suggest a good static site generator for my blog?

I can write code if I absolutely have to, but would prefer a solution that requires less effort, as I'm not a web dev specifically.

I don't want something clunky like WordPress; I like gwern.net's philosophy of not locking into one platform. I've used pandoc markdown and it seems cool.

This is literally the main thing keeping me from the next step in my EA career that I'm procrastinating on (make blog -> post ideas and work -> apply for summer internship).

[This comment is no longer endorsed by its author]
quinn (8d): https://github.com/daattali/beautiful-jekyll
Samuel Shadrach's Shortform

Random idea: Social media platform where you are allowed to "enter" a finite number of discussion threads per day. Threads can only be read if you enter them; until then you just see the top-level discussion. The restriction can be hard-coded or enforced indirectly via social norms (like you are supposed to introduce yourself when you enter a thread).

Basically explores how public discussion could transition to semi-private. Right now it's usually public versus (or transitioning to) fully private between a tiny number of people. But semi-private is what happens irl.
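As a rough illustration of how the hard-coded version of that restriction could work, here is a minimal sketch of a per-user daily quota on entering threads. The class name, the quota of 5, and the reset-per-calendar-day rule are my own assumptions, purely for illustration:

```python
from datetime import date

class ThreadAccess:
    """Toy sketch of a per-user daily quota on entering threads
    (illustrative only; names and the quota of 5 are assumptions)."""

    def __init__(self, daily_limit=5):
        self.daily_limit = daily_limit
        self.entries = {}  # user_id -> (date, set of thread_ids entered today)

    def enter_thread(self, user_id, thread_id):
        today = date.today()
        day, threads = self.entries.get(user_id, (today, set()))
        if day != today:
            threads = set()  # quota resets each calendar day
        if thread_id in threads:
            return True  # re-entering an already-entered thread costs nothing
        if len(threads) >= self.daily_limit:
            return False  # quota exhausted: user only sees the top-level discussion
        threads.add(thread_id)
        self.entries[user_id] = (today, threads)
        return True
```

In the social-norms version, the same bookkeeping would just be a convention (introduce yourself when you enter a thread) rather than enforced in code.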

Donating crypto on EA Funds: more coins, low fees

Update: I've written the email.

The Giving Block seems to support the features I am looking for; I would be keen to know your opinion on them.

Donating crypto on EA Funds: more coins, low fees

Happy to take this conversation further over email.

Getting money out of politics and into charity

in the event that it becomes a problem,

It's a problem before it becomes a problem though. As long as donors think this could happen, they'll abandon your system and ensure it never happens. And if the candidates think it's a problem, they too will explicitly try to push money away from your platform.

I think you should try plotting out a utility function U(a, b) for the utility to A given donation amounts a and b to A and B respectively. It's definitely not linear; both parties want a minimum budget, and nobody wants to receive literally zero funds, as that ... (read more)
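For illustration only, here is one toy shape such a utility function might take. The concave log terms, the viability floor, and all of the specific numbers are assumptions I'm introducing for the sketch, not anything from the original post:

```python
import math

def candidate_utility(own_funds, opponent_funds, min_budget=1_000_000):
    # Toy model (assumptions for illustration):
    # - concave benefit from own funding (diminishing returns),
    # - steep penalty for falling below a minimum viable campaign budget,
    # - a smaller, also-concave penalty from the opponent's funding.
    viability_penalty = 0.0
    if own_funds < min_budget:
        viability_penalty = 10.0 * (min_budget - own_funds) / min_budget
    return math.log1p(own_funds) - 0.5 * math.log1p(opponent_funds) - viability_penalty

# Removing equal amounts from both sides is not utility-neutral:
print(candidate_utility(5_000_000, 5_000_000))  # comfortable for both (~7.7)
print(candidate_utility(500_000, 500_000))      # both worse off, below the viability floor (~1.6)
```

The point of the floor term is that matching and diverting equal amounts from both candidates is not neutral in this model: once either side drops below a minimum viable budget their utility falls sharply, which is roughly why candidates (and their donors) might push money away from such a platform.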

Donating crypto on EA Funds: more coins, low fees

Could you add whether you accept anonymous deposits? Or deposits from smart contracts (such as DAOs)?
 

Update: I've made it a separate post because I think it'll be a strong determinant of how much funding you receive this way. Keeping crypto addresses anonymous is valued very highly by many for both ideological and practical reasons.

We currently only require an email address to make deposits under $500k (so if you have an email address that doesn't identify you then this would get you most of the way there). With larger donations we'd likely need to collect more information for KYC purposes. 

The system as described above isn't really set up to take deposits from smart contracts. The main issue is that EA Funds doesn't take 'unrestricted' funding (as a normal charity would), and in order for us to allocate the donation correctly, we need a matching payment record from the do... (read more)

Samuel Shadrach's Shortform

Early brainstorm on interventions that improve life satisfaction, without directly attempting to improve health or wealth.

I'll compare ITN afterwards; right now I'm just listing them out:

 - mental health treatment

 - generating optimism in media and culture

 - reducing polarisation and hate in media and culture - could be via laws or content generation or creation of new social platforms or something else

 - worker protection laws, laws that promote healthy work-life balance, culture that does same

 - reducing financial anxiety - could be via jo... (read more)

Does climate change deserve more attention within EA?

they would ask what the point of preserving the future if climate change will make it much less positive than our current lives.

I find this a tiny bit neglected in EA too. Basically attempts at directly measuring life satisfaction instead of outcomes like wealth and health. There are likely a number of interventions that improve life satisfaction without impacting health or wealth that aren't studied by EA.

Also there's subjectivity - different people can have different impressions when it comes to whose lives are "not worth living", "worth living" or "great", how to compare them, or how to account for hedonic adaptation.

Samuel Shadrach's Shortform

P.S. Just realised that not all public goods are equal. City/state/national public goods (like roads) get funded more easily than global ones (like a carbon tax) for this reason.

Also this kinda makes public goods excludable. Pay tax or else leave the city/state/nation.

We’re Rethink Priorities. Ask us anything!

When will you be accepting internship applications next? Is there anything you'd recommend potential applicants do in the meantime to increase their chances of being selected?

Peter Wildeford (14d): Answered here [https://forum.effectivealtruism.org/posts/D499oMCiFiqHT92TT/we-re-rethink-priorities-ask-us-anything?commentId=cuAf6geHLe8aBCnMe] and here [https://forum.effectivealtruism.org/posts/D499oMCiFiqHT92TT/we-re-rethink-priorities-ask-us-anything?commentId=3Lv5pisWAypCcuoew] and here [https://forum.effectivealtruism.org/posts/D499oMCiFiqHT92TT/we-re-rethink-priorities-ask-us-anything?commentId=AQoHCK7v3TkLvjxgt].
Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

Awesome! Hope I didn't come off too strong; I just wanted to exhaustively list out reasons.

D0TheMath's Shortform

I've read that paper :)

I'll take the standard approach then. Is there any material you'd recommend?

D0TheMath (17d): I don't know what the standard approach would be. I haven't read any books on evolutionary biology. I did listen to a bit of this online lecture series: https://www.youtube.com/watch?v=NNnIGh9g6fA&list=PL848F2368C90DDC3D and it seems fun & informative.
D0TheMath's Shortform

I can't think of any pathways for how a species could increase its inclusive genetic fitness by making acausal trades with its counterparts in non-causally-reachable Everett branches, but I also can't think of any proof for why it's impossible.

Makes sense.

then evolution has just made a minor update towards species which care about their future Everett selves.

Valid.

As an aside, would you recommend any material or book on evolutionary bio? Ideally focussed particularly on human behaviour, cooperation, social behaviours, psychology, that kind of stuff. Just out of curiosity, since you seem more knowledgeable than me.

D0TheMath (18d): I’ve been using the models I’ve been learning to understand the problems associated with inner alignment to model evolution during this discussion, as it is a stochastic gradient descent process, so many of the arguments for properties that trained models should have can be applied to evolutionary processes. So I guess you can start with Hubinger et al’s Risks from Learned Optimization? But this seems a nonstandard approach to trying to learn evolutionary biology.
D0TheMath's Shortform

Great reply!

Evolution doesn't select for that

I'd be keen to know why you say that, although it feels less important in the discussion after reading your other points.

the value "care about yourself, and others" is simpler than the value "care about yourself, and others except those in other Everett branches", so we should expect people to generalize "others" as including those in Everett branches, in the same way that they generalize "others" as including those in the far future.

Yep this is valid. If I had a deeper understanding of Everett branches maybe... (read more)

D0TheMath (18d): It likely depends on what it means for evolution to select for something, and for a species to care about its copies in other Everett branches. It's plausible to imagine a very low-amplitude Everett branch which has a species that uses quantum mechanical bits to make many of its decisions, which decreases its chances of reproducing in most Everett branches, but increases its chances of reproducing in very very few.

But in order for something to care about its copies in other Everett branches, the species would need to be able to model how quantum mechanics works, as well as how acausal trade works if you want it to be able to be selected for caring how its decision-making process will affect non-causally-reachable Everett branches. I can't think of any pathways for how a species could increase its inclusive genetic fitness by making acausal trades with its counterparts in non-causally-reachable Everett branches, but I also can't think of any proof for why it's impossible. Thus, I only think it's unlikely.

For the case where we only care about selecting for caring about future Everett branches, note that if we find ourselves in the situation I described in the original post, and the proposal succeeds, then evolution has just made a minor update towards species which care about their future Everett selves.
Why Undergrads Should Take History Classes

I don't need historians to study history.

I might agree, if you're smart. But then you don't need computer scientists if you want to study computer science or economists if you want to study economics either. Everything is available as material online (data, concepts, reasoning patterns, everything); you can self-study. Some people just benefit from a structured environment to learn from, be it for motivation or easier understanding.

There's a difference between studying history purely as a predictive tool and just studying history. A history class prim... (read more)

D0TheMath's Shortform

Got it. I personally find it counter-intuitive to care about infinitely many realities I cannot causally impact. (And there really are practically infinitely many; molecules move at tens if not hundreds of metres per second.) I'm pretty sure many people won't take it seriously for the same reason. But maybe some could, if you post more about it.

I'm not keen to comment on whether we should or shouldn't care in a prescriptivist sense.

What I will note, however, is that we are likely not trained to care for them, as part of our genetic training history, in a purel... (read more)

D0TheMath (18d): Evolution doesn't select for that, but it's also important to note that such tendencies are not selected against, and the value "care about yourself, and others" is simpler than the value "care about yourself, and others except those in other Everett branches", so we should expect people to generalize "others" as including those in Everett branches, in the same way that they generalize "others" as including those in the far future. Also, while you cannot meaningfully influence Everett branches which have split off in the past, you can influence Everett branches that will split off some time in the future.
Why Undergrads Should Take History Classes

Historical examples are often quoted in economics and political science papers. Historians themselves are not supposed to be primarily predictive; history is just one more source of data to combine with a lot of other data and domain knowledge when making predictions in some field.

I'm not sure why computer science is even relevant. And rationality stuff can be useful to everyone, but a lot of it isn't original and it's certainly no substitute for actual domain knowledge of whichever field we're discussing.

Charles_Guthmann (19d): I see your point, but my response is that I don't need historians to study history. Again, you keep saying that history is useful; I'm not contesting that (though it seems like you may think it is more important than me). I'm contesting that the way you are taught history in the classroom is specifically useful. I've personally found reading macro-history type blogs and doing very general overviews on wiki to be more useful than taking specific courses on a topic in school, in terms of understanding my place in the world/trajectory of the world.

You say historians are not supposed to be predictive. That is literally my point. If historians are just a source of data, what makes a historian/history different from Wikipedia in any real sense, outside of the motivation to actually do the material because grades? Why would I take a history class that has no value added from reading sources when I could have a professional writer coach me on writing skills?

Again, how do you use historical data when attempting to predict things? Take for example guessing about what politician wins some election. You might use historical data of how the previous elections went to make a prediction (hopefully your model isn't fully just based on historical data with no account for how things have changed). However, it just doesn't seem like taking academic history provides you with anything here. Maybe they are the people who combed the primary sources so that the data is on the internet in the first place, but absent them having a monopoly on that data, I'd trust a cs/rationalist type more to use that data in a useful way.

Historians will probably claim some story about why something happened; IMO that is antithetical to what we are trying to do here, unless that story is more predictive. Again, like if some history professor at your school teaches in a really quantitative way or teaches a class that is like about large scale historical trends, that seems like it could be useful bu
D0TheMath's Shortform

Do you care about Everett branches other than your own? In a moral sense.

D0TheMath (19d): I’m not certain. I’m tempted to say I care about them in proportion to their “probabilities” of occurring, but if I knew I was on a very low-“probability” branch & there was a way to influence a higher “probability” branch at some cost to this branch, then I’m pretty sure I’d weight the two equally.
Samuel Shadrach's Shortform

Does anyone think maximising utility = log(population), rather than utility = population itself, comes closer to approximating how humans typically think about creating new happy lives? The same way finance often uses utility = log(wealth), not utility = wealth.

A world with 10,000 humans seems intrinsically better than a world with zero humans. A lot better than 7 billion versus 7.000001 billion. And not just because 10,000 can grow back to billions (let's assume they can't), but because they still embody the human spirit. Same way 10 million people on earth seems bet... (read more)
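As a purely illustrative sketch of the difference between the two models (the population figures are taken from the comment above; using 1 instead of 0 to avoid log(0) is my own workaround, and the exact numbers are not meant as a claim about value):

```python
import math

def linear_value(n):
    # Linear model: total value is proportional to the number of people.
    return n

def log_value(n):
    # Log model: additional people matter less as the population grows.
    return math.log(n)

# Going from (almost) nobody to 10,000 people
first_10k_linear = linear_value(10_000) - linear_value(1)  # 9,999
first_10k_log = log_value(10_000) - log_value(1)           # ~9.2

# Going from 7 billion to 7.000001 billion (an extra 1,000 people)
marginal_linear = linear_value(7_000_001_000) - linear_value(7_000_000_000)  # 1,000
marginal_log = log_value(7_000_001_000) - log_value(7_000_000_000)           # ~1.4e-7

print(first_10k_linear / marginal_linear)  # ~10x under the linear model
print(first_10k_log / marginal_log)        # ~64,000,000x under the log model
```

Under the log model, the existence of the first ten thousand people dominates tiny additions to an already-huge population, which matches the intuition in the comment; under the linear model the two cases differ only by a factor of ten.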

Thoughts on an EA bond fund?

I agree, yours should also exist. It's just that equity becomes more opinionated and not everyone agrees on stuff, but maybe that's okay. And I wasn't sure about the crowdsourced decision-making either.

Open Thread: November 2021

Aren't all ethical principles / virtues by default biased towards human beings? Except the ones that explicitly attempt to include animals in the moral circle.

I assume most people value human lives higher than animal lives, even within EA, and even if they believe society currently undervalues animal lives.

Not that that makes it objectively right or wrong, of course; you're free to value animal lives as highly as human lives if that is something you are drawn to.

P.S. Valuing animal lives highly doesn't mean human extinction is neutral; it is still a bad thing beca... (read more)

Samuel Shadrach's Shortform

What is the minimum network effect or set of benefits that a nation must have before it can mandate a tax to fund public goods, without ending up pushing everyone to leave? Can this effect be achieved purely virtually? (No physical land or services.)

Samuel Shadrach (16d): P.S. Just realised that not all public goods are equal. City/state/national public goods (like roads) get funded more easily than global ones (like a carbon tax) for this reason. Also this kinda makes public goods excludable. Pay tax or else leave the city/state/nation.
Reasons for and against posting on the EA Forum

Thank you, I will check those posts out.

I'd also be keen on understanding what you mean by "from the point of view of the universe", as a moral non-realist myself, but I totally understand your time constraint.

What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?

Agreed that you don't know for sure, but "≥ 10% chance of it happening in ≤ 2 years" must have some concrete reasoning backing it. Maybe this reasoning doesn't convince them at first, but it may still be high EV to "explain harder" if you have a personal connection with them. Bring in more expert opinions, appeal to authority - or whatever other mode of reasoning they prefer.

You're right that the second option is hard and messy. I think it depends on: a) are you able to use the code to get any form of strategic advantage without unleashing misaligned AGI, b) wha... (read more)

If I have a strong preference for remote work, should I focus my career on AI or on blockchain?

Regarding AGI, I think even a 10% chance of AGI being developed in this century means the field deserves more people working on it than currently are. What odds do you assign to AGI being developed this century?

Reasons for and against posting on the EA Forum

Thanks for your reply. Yup I meant "good".

And aren't your claims subjective, i.e., don't they come from your perspective? For example:

Person A believes EA is important. Person A's values are not drifting.

Person B believes EA is important. However, person B's values are drifting towards EA being less important.

Your comment comes across as you being person A determining what is good or bad about person B. Which is a valid thing to do, but I was more keen on how person B themselves would reason about whether drift in their own values is a good or bad thing for them.

MichaelA (20d): Yeah, I'm indeed thinking about what's good in a moral sense / for the world / "from the point of view of the universe" / from my perspective, not what's good from another person's perspective. But it can also obviously be the case that from Person B's current perspective, their values drifting would be bad. And we could also think about what people want in the moment vs what they want when looking back / reflecting vs what a somewhat "idealised" version of their values would want, or whatever. In any case, this sort of thing is discussed much further in posts tagged value drift [https://forum.effectivealtruism.org/tag/value-drift], so you might find those posts interesting. (I won't discuss it in detail here since it's a bit of a tangent + due to busyness.)
Reasons for and against posting on the EA Forum

It isn't clear to me why reducing value drift - either in favour or against some EA ideas - is a bad thing universally. Keen to get your viewpoint.

MichaelA (20d): Do you mean "It isn't clear to me why reducing value drift - either in favour or against some EA ideas - is a good thing universally"? Usually - and in my comment above - the term value drift is used to refer to something more like drifting away from EA values as a whole, rather than shifting one's focus between different EA values/ideas/causes, which I think is obviously often good and probably more often good than bad (i.e., people probably update in good directions at least somewhat more often than in bad directions).

I think value drift away from EA values as a whole is usually bad, but even that's obviously not always bad (at least when you consider cases where a person was very focused on EA explicitly and then they move towards pursuing similar goals but with less focus on EA specifically). And indeed, I note above that "increas[ing] your engagement with & retention in EA, & mitigat[ing] value drift [https://forum.effectivealtruism.org/tag/value-drift]" "would often be a good thing, though it's unclear how often, and it may also often be a bad thing".
Discussion with Eliezer Yudkowsky on AGI interventions

Same, but I don't trust my model of him enough, so I thought I should post. I'm happy to be proven wrong.

What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?

Okay. I'm still gonna assume that they have at least read some AI alignment theory. Then I think some options are:

  • convincing them they are close to AGI / convincing them their AGI is misaligned, whichever of the two is important.
  • getting them stopped by force

Convincing them requires opening dialogue with them, not being hostile, and convincing them that you're at least somewhat aligned with them. Money might or might not help with this.

Getting them to stop could mean appealing to the public, or funding a militia that enters their facility by force.

Appealing... (read more)

Greg_Colbourn (20d): I think the main problem is that you don't know for sure that they're close to AGI, or that it is misaligned [https://intelligence.org/2017/10/13/fire-alarm/], beyond saying that all AGIs are misaligned by default, and what they have looks close to one. If they don't buy this argument -- which I'm assuming they won't, given they're otherwise proceeding -- then you probably won't get very far. As for using force (let's assume this is legal/governmental force), we might then find ourselves in a "whack-a-mole" situation, and how do we get global enforcement (/cooperation)?
Should Earners-to-Give Work at Startups Instead of Big Companies?

My focus was more on newer people, who are more likely to update, since it seems like this class of people is more impressionable.

Maybe this should be part of the content of the post - the disclaimers, or who is or isn't a good fit - and someone could link to them whenever a new post comes up, rather than not making the posts because we don't trust people's judgement and don't see scope for improving it.

Diversity by algorithm seems problematic.

My bad, wasn't aware. Editorial digest works too. I just meant that people should be free to post what they... (read more)

Why Undergrads Should Take History Classes

History as an academic field provides basically no tools for understanding this complicated world

I very strongly disagree with this. First, in the very trivial sense, literally everything we learn is from history. That 10-year-old research paper you are studying is history. That 30-year-old failed attempt at communism is history.

Now suppose you say that everything that can predict the future has been invented in the last 30 years. I disagree with this too. Some fields, like politics and economics, just don't allow you to create enough useful data in 30 yea... (read more)

Charles_Guthmann (19d): To be clear, I think history is important. My point was that I don't believe college history classes are the best forum for learning history / the important aspects of history for prediction. Also, to reiterate: if history as taught by academics is so important for prediction, shouldn't we then expect academic historians to be the best forecasters (to be fair, maybe they are; I'm not an expert on who the best forecasters are but kind of assume it's gonna be cs people/rationalists)? The comment is currently sitting at -7 but no one has even contested that point or said why it doesn't make sense. Also, I condone taking macro classes.