My intuition after playing around with many of these models is that GPT-3.5 is probably not good enough at general reasoning to produce consistent results. It seems likely to me that either GPT-4 or Claude 2 would be good enough. FWIW, in a recent video Nathan Labenz said that when people asked him for recommendations, he suggested starting with GPT-4 and going from there. The analysis gets more complicated with Claude 2 (perhaps slightly worse at reasoning, but a longer context window).
Kind of a nerdy point, but I think the 0.8 effect size is likely inflated. A recent analysis of meta-analyses on psychotherapy for depression found that the average summary effect size across meta-analyses was 0.56.
There seemed to be evidence for publication bias:
Meta-analyses that excluded high risk of bias studies, mean g=0.61, 95% CI (0.27, 0.95) with k=2413 included samples, produced larger effect size estimates than meta-analyses including only low risk of bias studies, mean g=0.45, 95% CI (0.19, 0.72) with k=1034.
Also, many studies used improper controls:
Meta-analyses that included samples compared with a wait-list control group, mean g=0.66, 95% CI (0.35, 0.96) with k=836, produced larger effect size estimates than treatments compared with care-as-usual, mean g=0.52, 95% CI (0.22, 0.82) with k=1194.
This analysis was specifically on therapy for depression, but I would expect the main criticisms/reasons for effect size inflation would apply to other mental health problems as well. FWIW, therapy for anxiety seems to have larger effect sizes than therapy for depression, so I would expect better performance on anxiety even when taking into account possible publication bias and inadequate controls.
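For readers less familiar with what numbers like g=0.56 mean: Hedges' g is just a standardized mean difference (Cohen's d) with a small-sample correction. Here's a minimal sketch with made-up toy numbers (purely illustrative, not taken from the meta-analysis):

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled standard deviation across two groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    d = (m1 - m2) / pooled_sd(s1, n1, s2, n2)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Hypothetical trial: treatment arm improves 8 points on a depression
# scale, control improves 3 points, both with SD 9 and n=50 per arm.
g = hedges_g(8, 9, 50, 3, 9, 50)  # roughly 0.55
```

So an effect of g≈0.5 corresponds to the groups differing by about half a standard deviation on the outcome measure, which is why shaving the estimate from 0.8 down to ~0.56 is a meaningful downgrade.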
My understanding is that the technical translation is: 70% of the variance in that trait is attributable to genes, given the time and place of the studied population.
For example, 70% of the variance in intelligence is attributable to genes, given a white American population, living in non-abusive homes, from the 1960s to the 1990s. (The specifics are just to provide a concrete example.)
The farther one gets from the originally studied population, the less one can extrapolate exact findings. And vice versa.
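The "proportion of variance" framing can be made concrete with a toy additive model, where heritability is just Var(genes) / Var(phenotype) in a given population. The numbers below are hypothetical and chosen so genes explain ~70% of variance:

```python
import random

random.seed(0)

# Toy additive model: phenotype = genetic value + environmental noise.
# Variances chosen (hypothetically) so genes explain ~70% of phenotypic
# variance in THIS simulated population.
N = 100_000
genetic = [random.gauss(0, 0.70**0.5) for _ in range(N)]      # Var(G) = 0.7
environment = [random.gauss(0, 0.30**0.5) for _ in range(N)]  # Var(E) = 0.3
phenotype = [g + e for g, e in zip(genetic, environment)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m)**2 for x in xs) / len(xs)

h2 = var(genetic) / var(phenotype)  # heritability estimate, close to 0.70
```

Note that if you re-ran this with a wider environmental spread (say, a population that includes abusive homes), h2 would drop with no change in the genetics at all, which is exactly why estimates don't transfer cleanly across populations.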
It is not clear to me that taking action on non-extinction x-risks would be in conflict with neartermist goals:
Value lock-in -> like an AI singleton locking in a scenario that would not be optimal for longtermist goals? Isn't that akin to the alignment problem, and so directly intertwined with extinction risk?
Irreversible technological regression -> wouldn't this be incredibly bad for present humans and so coincide with neartermist goals?
Any discrete event that prevents us from reaching technological maturity -> wouldn't this essentially translate to reducing extinction risk as well as ensuring we have the freedom and wealth to pursue technological advancement, thus coinciding with neartermist goals?
Am I missing something?
Fwiw, I don't think a crash in the FTT token would've crashed FTX as a business (assuming no funny business with extending loans to other parties collateralized in FTT). Afaik FTT was basically a revenue-share token, essentially like common stock.
Just as Meta shares falling 70% didn't affect their core business of showing users ads, a crash in FTT shouldn't have affected the core exchange business. It's just the going rate for rights to future profits.
A relevant Twitter thread by Dustin Moskovitz:
Two seemingly contradictory things I believe about the SBF situation:
a/ The Effective Altruism community will need to have a strong response/crucible moment
b/ The simplest explanation for his behavior does not imply utilitarianism means-justifying, as widely assumed.
I think the quotes from Sam's blog are very interesting and are pretty strong evidence for the view that Sam's thinking and actions were directly influenced by some EA ideas.
I think the thinking around EA leadership is way too premature and presumptive. EA leadership has a track record of many years (roughly a decade?) of generally being good people and not liars. There are also explicit statements in "official" EA sources that the ends do not justify the means in practice, that honesty and integrity are important EA values, and that pluralism and moral humility matter (which leads to not doing things that would transgress other reasonable moral views).
Most of the relevant documentation is linked in Will's post.
Edit: After reading the full blog post, the quote is actually Sam presenting the argument that one can calculate which cause is highest priority, the rest be damned.
He goes on to say in the very next paragraph:
This line of thinking is implicitly assuming that the impacts of causes add together rather than multiply, and I think that's probably not a very good model.
He concludes the post by stating that the multiplicative model, which he thinks is more likely, indicates that both reducing x-risk and improving the future are important.
None of this proves anything. But it's significantly changed my prior, and I now think it's likely that the EA movement should heavily invest in multiple causes, not just one.
There's another post on that same page where he lists his donations for 2016; they include donations to x-risk and meta-EA orgs, as well as to global health and animal welfare orgs.
So nevermind, I don't think those blog posts are positive evidence for Sam being influenced by EA ideas to think that present people don't matter or that fraud is justified.
I just wanted to note that I agree with everything you've said here.
Edit: This is kinda whatever but I guess I'm getting downvoted for agreeing here? There is no agree vote for the OP, so the only way I can differentiate between upvoting because I thought it was high quality vs actually agreeing with the post is by commenting...
I may be overstating the case here, but I feel like a big reason for the default skepticism about crypto is that in most people's minds, "crypto" equals "bitcoin".
Bitcoin makes up a plurality of total crypto market cap, but imo it's generally boring, uninteresting, and not a particularly high-potential part of the space. If crypto were just bitcoin, I wouldn't think that highly of it either. Bitcoin may still be able to serve as a kind of "digital gold", but that seems neither especially net positive nor that interesting.
Imo by far the most interesting part of the space is "smart contract blockchains", the most prominent being Ethereum. That is where the actual innovation and interesting potential use cases are.
Some examples (sorry for using specific projects, I feel like it's easier to illustrate with concrete examples rather than abstractions):
Aave, the leading borrowing/lending protocol. Users can earn yield on their assets and borrow loans against their collateral. It's basically a bank that runs entirely on code that anyone around the world can access 24/7.
Uniswap, the leading decentralized exchange. Users can swap assets or earn yield by providing liquidity for any token pair. It's basically an exchange that runs entirely on code that anyone around the world can access 24/7.
Lens, the leading decentralized social media protocol (this one is still very early, but it shows the potential of the technology). It enables Twitter/FB-style social networks, but where everything is on a blockchain, so it's impossible to censor or de-platform users, and users can take their followers/friends to other sites (instead of the siloing we see in current social media platforms).
Proof of Humanity, a project that is trying to build a sybil-proof digital identification system that could enable things like a true global UBI (this project is also very early and I have no idea if it will actually end up being valuable).
There are hundreds, maybe thousands of projects like these that are building cool stuff enabled by blockchain and smart contracts.
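To make one of these concrete: Uniswap's core mechanism is a "constant-product market maker", where a pool holds reserves of two tokens and prices swaps so that the product of the reserves stays constant. The sketch below is heavily simplified (it ignores Uniswap's 0.3% trading fee, slippage limits, and liquidity-provider accounting), and the pool sizes are made up:

```python
class ConstantProductPool:
    """Minimal sketch of a Uniswap-v2-style constant-product AMM (x * y = k)."""

    def __init__(self, reserve_x, reserve_y):
        self.reserve_x = reserve_x
        self.reserve_y = reserve_y

    def swap_x_for_y(self, dx):
        """Deposit dx of token X; receive dy of token Y, keeping x * y constant."""
        k = self.reserve_x * self.reserve_y
        new_x = self.reserve_x + dx
        new_y = k / new_x
        dy = self.reserve_y - new_y
        self.reserve_x, self.reserve_y = new_x, new_y
        return dy

# Pool with 1,000 X and 1,000 Y: swapping in 100 X returns ~90.9 Y,
# less than the 1:1 spot price, because large trades move the price.
pool = ConstantProductPool(1_000, 1_000)
out = pool.swap_x_for_y(100)
```

The neat part is that this pricing rule needs no order book or counterparty: it's a few lines of logic that anyone can trade against, which is what people mean by an exchange "running entirely on code".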
Right, I'm trying to say - just like normal fiat currency, crypto is meant to be a money, it's not an end in itself. So using the bar "I wouldn't buy this thing if I couldn't then trade it for something else" doesn't really make sense, because the whole point of the thing is that you can eventually trade it for something else.
The value of money lies in its ability to serve as a store of value, a unit of account, and a medium of exchange. Crypto can, in principle, do all of these things.
Also, insofar as there is value in other blockchain based applications (e.g. DeFi), you need crypto in order to use those applications (for "gas" to pay for transactions).
Also, you can use crypto/blockchain to transact in dollars via so-called "stablecoins". So you can get the international transfer benefits without the volatility that occurs in crypto prices.