All of Charles Dillon 🔸's Comments + Replies

You seem to be hung up on proving comparative advantage "not applying" for some reason, but there is basically no circumstance where it does not apply, in theory.

It simply doesn't matter, because human equilibrium wages can still go to zero regardless of comparative advantage still applying, and that matters far more. Comparative advantage is a distraction - that is the point of this post.

-5
SLermen

Comparative advantage is not at all load bearing here. The only thing that is load bearing is skepticism that AI capabilities will advance sufficiently and be sufficiently cheap. 

 

From a post on my substack @Nathan Young linked below (now on the EA forum)

 

Comparative advantage alone doesn’t mean you get to eat
This is Arthur B’s point above, and mine when I pointed out that there are fewer horses these days than in the past.

It is simply a claim that the market clearing price of labour does not have to be sufficient for people to get as much r

... (read more)

Note that Matthew responded at length on my substack, and there is further discussion there.

If you care about animals I think it is obviously the case that the best bet is to work on finding an angle that Reform will go along with. Given their poll position, the shallowness of their bench and their relatively minimal set of policy commitments, not doing this because they are "problematic" strikes me as putting low-consequence deontological considerations (you aren't going to change the probability they win, it seems unlikely anybody cares about your endorsement) over a potentially huge opportunity for impact.

It's the same thing - if I think the expected value of one thing vs another is 10x, all things considered, then that is what I think the expected value is, already factoring in whatever chance I think there is that I am various versions of wrong, which is very underspecified here.

For example, let's say I do a back of the envelope calculation that says ABC is 20x as valuable as XYZ, but I see lots of people disagree with me. Then my estimate of the relative value of ABC vs XYZ will not be 20x, but probably some lower number, which could be 15x or 2x or 0.5... (read more)

Personally speaking, if I say I think something is 10x as effective, I mean that as an all-things-considered statement, which includes deferring however much I think it is appropriate to the views of others.

3
Milli🔸
That's not what I asked: In percentage points, how likely do you think it is that you are right (and people who value e.g. GHWB over Animal Welfare are wrong)?

This seems likely to be incorrect to me, at least sometimes. In particular I disagree with the suggestion that the improvement on the margin is likely to be only on the order of 5%.

Let's take someone who moves from donating to global health causes to donating to help animals. It's very plausible that they may think the difference in effectiveness there is by a factor of 10, or even more.

They may also think that non-EA dollars are more easily persuaded to donate to global health initiatives than animal welfare ones. In this case, if a non-EA dollar is 80% l... (read more)

3
Milli🔸
How sure are you that you are right and the other EA (who has also likely thought carefully about their donations) is wrong, though? I'm much more confident that I will increase the impact of someone's donation / spending if they are not in EA, rather than being too convinced of my own opinion and causing harm (by negative side effects, opportunity costs or lowering the value of their donation).

Similarly if you think animal charities are 10x global health charities in effectiveness, then you think these options are equally good:

  • Move 10 EA donors from global health to animal welfare
  • Add 9 new animal welfare donors who previously weren't donating at all

To me, the first of these sounds way easier.
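The equivalence claimed above can be checked with one line of arithmetic. A minimal sketch, using units where one global-health donation is worth 1 and taking the 10x figure as given:

```python
# Units: one global-health donation = 1, one animal welfare donation = 10.
# A moved donor swaps a 1 for a 10 (net +9); a new donor adds a 10.
value_from_moving_10_donors = 10 * (10 - 1)  # 10 movers, +9 each
value_from_adding_9_donors = 9 * 10          # 9 new donors, +10 each
print(value_from_moving_10_donors, value_from_adding_9_donors)  # 90 90
```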

I think if there were proven methods to persuade such people to give away the excess, the world would look very different.

I hope you find success in persuading those around you to give, but I don't think the process of giving to neighbours and pooling resources, rather than going directly to supporting specific causes where they could have a lot of impact, makes much sense.

Just had another glance at this and I think the delta vs implied vol piece is consistent with something other than a normal/log normal distribution. Consider: the price is $13 for the put, and the delta is 5. This implies something like - the option is expected to pay off a nonzero amount 5% of the time, but the average payoff when it does is $260 (despite the max payoff definitionally being 450). So it looks like this is really being priced as crash insurance, and the distribution is very non normal (i.e. circumstances where NVDA falls to that price means something weird has happened)

Generally I just wouldn't trust numbers from Yahoo and think that's the Occam's Razor explanation here.

Delta is the value I would use before anything else since the link to models of reality is so straightforward (stock moves $1 => option moves $0.05 => clearly that's equivalent to making an extra dollar 5% of the time)
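The arithmetic behind that "crash insurance" reading can be sketched quickly (illustrative figures from the comment above, not live market quotes):

```python
# Treating |delta| as a rough probability of the put expiring in the money,
# as suggested above (figures from the discussion, not market data).
put_price = 13.0   # put premium in dollars
prob_itm = 0.05    # "delta is 5" read as a ~5% chance of paying off

# Price ~= P(pays off) * E[payoff | pays off], so the implied average
# payoff conditional on paying off is:
avg_payoff_if_itm = put_price / prob_itm
print(avg_payoff_if_itm)  # 260.0
```

That $260 conditional payoff against a $450 maximum is what makes the pricing look like insurance against a severe crash rather than a lognormal tail.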


Right now the IV of June 2025 450 calls is 53.7, and of puts 50.9, per Bloomberg. I've no idea where your numbers are coming from, but someone is getting the calculation wrong or the input is garbage.

The spread in the above numbers is likely to do with illiquidity and bid ask spreads more than anything profound.

The IV for puts and calls at a given strike and expiry date will be identical, because one can trivially construct a put or a call from the other by trading stock, and the only frictions are the cost of carry.

The best proxy for probability an option will expire in the money is the delta of the option.

5
Lorenzo Buonanno🔸
  Thank you. Here's an explanation from Wikipedia for others like me new to this. Looking at the delta here and here, the market would seem to imply a ~5% chance of NVDA going below 450, which is not consistent with the ~15% in the article derived from the IV. Is it mostly because of a high risk-free interest rate? I wonder which value would be more calibrated, or if there's anything I could read to understand this better. It seems valuable to be able to easily find rough market-implied probabilities for future prices.
4
MichaelDickens
Can you explain? I see why the implied vols for puts and calls should be identical, but empirically, they are not—right now calls at $450 have an implied vol of 215% and puts at $450 have an implied vol of 158%. Are you saying that the implied vol from one side isn't the proper implied vol, or something?

"Nvidia’s implied volatility is about 60%, which means – even assuming efficient markets – it has about a 15% chance of falling more than 50% in a year.

And more speculatively, booms and busts seem more likely for stocks that have gone up a ton, and when new technologies are being introduced."

Do you think the people trading the options setting that implied volatility are unaware of this?

3
Benjamin_Todd
Agree it's most likely already in the price. Though I'd stand behind the idea that markets are least efficient when it comes to big booms and busts involving large asset classes (in contrast to relative pricing within a liquid asset class), which makes me less inclined to simply accept market prices in these cases.
2
Lorenzo Buonanno🔸
If I understand correctly, you are interpreting the above as stating that the implied volatility would be higher in a more efficient market. But I originally interpreted it as claiming that big moves are relatively more likely than medium-small moves compared to other options with the same IV (if that makes any sense).

Taking into account volatility smiles and all the things that I wouldn't think about, as someone who doesn't know much about finance and doesn't have a Bloomberg Terminal, is there an easy way to answer the question "what is the option-prices-implied chance of NVDA falling below 450 in a year?" I see that the IV for options with a strike of 450 next year is about ~70% for calls and ~50% for puts. I don't know how to interpret that, but even using 70%, this calculator gives me a ~16% chance, so would it be fair to say that traders think there's a ~15% chance of NVDA falling below 450 in a year?

In general, I think both here and in the accompanying thread finance professionals might be overestimating people's average familiarity with the field.
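For anyone wanting to reproduce the kind of number such calculators give: under the textbook lognormal (Black-Scholes) assumption, the market-implied probability of finishing below a strike is N(-d2). A minimal sketch of that calculation, with the caveat from the thread that the true distribution is likely fatter-tailed than lognormal, so this is a rough approximation at best:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def prob_below(spot, strike, iv, t_years, r=0.0):
    """Risk-neutral lognormal P(S_T < strike), i.e. N(-d2)."""
    d2 = (log(spot / strike) + (r - 0.5 * iv**2) * t_years) / (iv * sqrt(t_years))
    return norm_cdf(-d2)

# The article's example: 60% IV, chance of a >50% fall within a year
# (risk-free rate set to 0 for simplicity):
print(round(prob_below(1.0, 0.5, 0.60, 1.0), 3))  # 0.196
```

With r = 0 this gives roughly 20%; a positive risk-free rate pulls the number down, so it lands in the same ballpark as the ~15% figure quoted in the thread, with the exact value depending on rate and drift assumptions.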

Seems like a rather vague collection of barely connected anecdotes haphazardly strung together.

I am not particularly concerned as I don't see this persuading anybody.

Gonna roll the dice and not click the link, but will guess that Torres and/or Gebru gets cited extensively! https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty - such a shame this excellent piece doesn't get more circulation

Thanks for posting this - on a quick read it looks pretty accurate to me and I'll be glad to have this as a resource to point people to when they seem not to understand exactly why what FTX did was so bad.

I don't understand why you think this is the case. If you think of the "distribution of grants given" as a sum of multiple different distributions (e.g. upskilling, events, and funding programmes) of significantly varying importance across cause areas, then more or less dropping the first two would give your overall distribution a very different shape.

5
Aaron Bergman
Yeah you're right, not sure what I missed on the first read

I think getting enough people interested in working on animal welfare has not usually been the bottleneck, relative to money to directly deploy on projects, which tend to be larger.

2
Aaron Bergman
This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.

Seems pretty unsurprising - the animal welfare fund is mostly giving to orgs, while the others give to small groups or individuals for upskilling/outreach frequently.

8
Michael St Jules 🔸
I think the differences between the LTFF and AWF are largely explained by differences in salary expectations/standards between the cause areas. There are small groups and individuals getting money from the AWF, and they tend to get much less for similar duration projects. Salaries in effective animal advocacy are pretty consistently substantially lower than in AI safety (and software/ML, which AI safety employers and grantmakers might try to compete with somewhat), with some exceptions. This is true even for work in high-income countries like the US and the UK. And, of course, salary expectations are even lower in low- and middle-income countries, which are an area of focus of the AWF (within neglected regions). Plus, many AI safety folks are in the Bay Area specifically, which is pretty expensive (although animal advocates in London also aren't paid as much).
8
Aaron Bergman
Yeah but my (implicit, should have made explicit lol) question is “why this is the case?” Like at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (eg older field -> organizations are better developed) but they’d all be post hoc.

Type 1 diabetic and long time EA here.

Generally when I have donated to help people directly (most of my recent donations have not been of this form, to be clear, in recent years my donations have been focused on research or on helping animals) I am not really thinking about how big the problem is. I am thinking "what will the consequence of this donation be?" If I am donating less than millions of dollars, I'm not likely to solve the whole issue, so the question of if the issue is big or small in a global sense just isn't very important.

For type 1 diabetes... (read more)

2
FionaConner
Great answer, thank you!

Did someone say it would be bad? Where?

-1[anonymous]
I honestly can’t remember.

I think the layout of this post is quite reader unfriendly.

I strongly suggest you start with a full summary rather than just an intro, and don't bury your conclusions midway between the post and some very long appendices which are unlikely to be very useful to 90% of readers.

As it is, anyone wishing to respond in depth would basically have to do the work of summarizing the post themselves, which increases the friction on feedback.

4
trammell
I agree! As noted under Richard’s comment, I’m afraid my only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time to write this before I could make it work. (And I won’t be free again for a while…) If you or anyone else reading this manages to write one in the meantime, send it over and I’ll stick it at the top.

The article gives a magnitude for fish farming. It does not talk about wild fish. Why is the scale of wild fish relevant?

Did you read the article? It is about intensive fish farming, and addresses all your points in detail, which you do not acknowledge.

I was not aware of the enormous weight of aquaculture in final fish production. I was thinking it was around 10%, but it is close to one half.

https://ourworldindata.org/rise-of-aquaculture

Omnizoid is right, and I have retracted my comment.

2
Vasco Grilo🔸
Thanks! In that case, 92.5 % (= 160/173) of the predictions for a population loss of 95 % due to climate change given a 10 % loss due to climate change were made with the 1 % lower limit. So I assume 0.0228 % chance for a 95 % population loss due to climate change is still an overestimate.

This conceptually seems similar to the meat eater problem argument against global health interventions.

You may be aware of this already, but I think there is a clear difference between saving an existing person who would otherwise have died - and in the process reducing suffering by also preventing non-fatal illnesses - and starting a pregnancy, since before a pregnancy begins the person doesn't exist yet.

I think a lot of this coordination is implicit rather than explicit, and I don't think it's very well publicised (there is also room for marginal donations to change whether the org gets funded to their high vs medium target, for example, and there is signalling value in individuals indicating they think this is good, so I do not mean to say that funging is the only consequence of a donation).

I think there is a misconception here - when it is said that these charities will be fully funded anyway, what that can mean is that they will try to fundraise for a certain budget (perhaps with high/medium/low targets) and larger donors will often choose to fill the remaining gap in their fundraising late in the fundraising process.

This means you are often not really giving the charity extra on top of their budget, but in practice funging with the largest donors. The largest donors will then often give slightly less to them and give to their next best opt... (read more)

5
Maxim Vandaele
Hello, thank you for clarifying. I didn't know that the fundraising process is coordinated in this sort of way. I get the impression that many introductory materials on effective altruism don't really explain this too well, leading to the sort of misconception I may have had when I wrote my question.

I think it would follow from this and your radical uncertainty with regard to non long term interventions that you would want to include these donations as positively impactful.

4
Vasco Grilo🔸
I accounted for donations going to the area of "creating a better future" which were tagged as "multiple cause areas". GWWC tagged 11 % going to creating a better future, but I assumed 13.3 % (= 11/(65 + 7 + 11) = "tagged as creating a better future"/("tagged as improving human welfare" + "tagged as improving animal welfare" + "tagged as creating a better future")) went to creating a better future. This may not be accurate if the donations going to "multiple cause areas" are disproportionally going to "creating a better future", so I take the point that it would be better to explicitly analyse where the donations in the bucket of "multiple cause areas" are going to.

Do you know how they tag the cause area of a given donation?

Is EA community building work considered separately, or included in "creating a better future"?

5
Vasco Grilo🔸
Hi Charles, Good question! The correspondence is here. It depends. If such work were funded by the Long-term Future Fund, it would be included in "creating a better future". If it were funded by Centre for Effective Altruism, or the EA Infrastructure Fund, it would be included in "multiple cause areas".

Suggestion: pre-commit to a ranking method for forecasters. Chuck out questions which go to <5%/>95% within a week. Take the pairs (question, time) with 10n+ updates within the last m days for some n,m, and no overlap (for questions with overlap pick the time which maximises number of predictions). Take the n best forecasters per your ranking method in the sample and compare them to the full sample and the "without them" sample.

Can you quantify how much work recency weighting is doing here? I could imagine it explaining all (or even more than all) of the effect (e.g. if many "best" forecasters have stale predictions relative to the community prediction often).

3
nikos
Not sure how to quantify that (open to ideas). But intuitively I agree with you and would suspect it's at least a sizable part of the effect.

I expect the population of users will have similar propensity to update on most questions. The biggest reason for updating some questions more often is new facts emerging which cause changes of mind. This is a massive confounder here, since questions with ex ante surprising updates seem harder to predict almost by definition.

6
titotal
Yes, it seems like more uncertain and speculative questions with less available evidence would have larger swings in beliefs. So it's possible that updating does help, but not enough to overcome the difficulty of the problems. If this is what happened, the takeaway is that we should be more skeptical of predictions that are more speculative and more uncertain, which makes sense.

I could see a way for updating to make predictions worse, if there was systematic bias in whether pro or anti proposition evidence is seen, or a bias in how pro or anti evidence is updated on. To pick an extreme example, if someone was trying to evaluate whether the earth was flat, but only considered evidence from flat earth websites, then higher amounts of updating would simply drag them further and further away from the truth. This could also explain why Metaculus is doing worse on AI predictions than other predictions, if there was a bias specifically in this field.

Unfortunately not - the person never followed up and when I asked them a few months later they did not respond.

I don't have many strong opinions on this topic, but one I do have and think should be standard practice is recusing oneself from decisions involving current or former romantic partners.

That means not being involved in hiring processes and grantmaking decisions involving them, and not giving them references without noting the conflict of interest. This is very standard in professional organisations for good reason.

I think the point is well made by Lorenzo, as someone who understands what the linked text is referring to and doesn't need to click on the link. I think it is good that the link is there for those who do not know what he meant or want clarification.

In general I think it is a bad idea to demand more work from people communicating with you - it discourages them from trying to communicate in the first place. This is similar to the trivial inconvenience point itself.

3
Luca Parodi
To be fair, mine regarding the link-to-articles tendency is not a well-formed opinion, just something I've felt during some online and offline conversations - especially from other fellow rationalists, when they quote one of Scott's articles or an obscure post from the Sequences when it's not absolutely needed.

By the way, I think it's also a bad idea to demand more work from people you are communicating with, like informally requesting them to read a full article instead of trying to explain your point in plain terms.

Let's put it this way: we can have the privilege to link/refer to articles/concepts in our bubble because we kinda know what we're talking about and we are people who like to spend time reading, but what if we have to communicate with someone from outside the bubble? We will not have that privilege and we will have to explain ourselves in plain terms. It's not a trivial inconvenience: if we don't exercise our ability to reduce the inferential distance (yes, I am guilty of the same sin) between "us" and "others", starting from ourselves, we will always be unable to communicate our ideas properly.

But, again, I haven't thought about this issue properly, so I reserve the right to take some time to refine or abandon my arguments.

I think there should be much more focus on the question of whether this is actually a positive intervention than just one paragraph noting that you haven't thought about the benefits.

The claim that most smokers don't seem to want to quit seems really important to me, and could reduce the scale of the problem to the effects of secondhand smoke vs net benefits to smokers, which might be better treated with other policies (like indoor smoking bans for example).

The Gruber paper (linked below in my comment) suggests that reducing smoking actually makes the population of smokers and potential smokers happier.

In any case, it doesn't appear to me true that most smokers don't want to quit - see data on the US; and even in China, where most people don't want to quit, a strong majority (70%) supports the government doing more to control smoking.

Interesting post. I haven't done the in-depth research to verify most of the figures, but I do find the idea that you have a 55% chance of success with a $208k 1-year advocacy campaign pretty implausible, and suspect there's something dubious going on with the method of estimating P(success) here.

I think an appropriate fact to incorporate which I did not see would be "actual costs of lobbying in the US" and "frequency of novel regulations passing" on which I presume there is quite a bit of data available.

3
Joel Tan🔸
The probability of advocacy success is a fairly critical variable, and I agree that the estimate provided could well be too optimistic. It really depends on (a) what reference class you take, and (b) how you weigh it against subjective inside view estimates. For example, my inside view estimate, informed by working in the public sector/politics, is fairly low, but if you do a case study of when sugar taxes were actually advocated (and implemented or not), the success rate is really impressive (~90%), and the real challenge becomes adjusting for selection bias - both with respect to it being tried (in countries where political conditions were more favourable in the first place), and successful attempts being noted in the news (while failed ones die inside the government, unreported).

On the one hand, sugary drinks taxes really aren't that uncommon, so it's not that surprising that it wouldn't be too difficult to advocate for (relative to something like sodium tax advocacy, which is probably a quarter as tractable). I would also caution against using US lobbying costs, since that isn't necessarily representative (i.e. the modal campaign wouldn't be hiring K-street lobbyists in the US, so much as an NGO talking to low- and middle-income-country governments, which tend to defer to NGOs more than Western governments do).

In general, I hope to get a better sense of this by talking to experts (even while noting that the public health experts may well also be overoptimistic due to halo effects/wishful thinking!)
2
Rina
My general sense is that a lot of policy advocacy projects look really well in terms of CEAs as the scope tends to be high but few properly discount for likelihood of success or indeed, as you suggest, actual lobbying costs over time and relevancy, frequency, take up of regulations.

Just a note on Jane Street in particular - nobody at Jane Street is making a potentially multi year bet on interest rates with Jane Street money. That's simply not in the category of things that Jane Street trades. If someone at Jane Street wanted to make betting on this a significant part of what they do, they'd have to leave and go elsewhere and find someone to give them at least hundreds of millions of dollars to make the bet.

A few thoughts, though I wouldn't give them too much weight:

The considerations I can think of look something like:

(1) Sonnen does work with some positive externalities.

(2) Sonnen makes some profit, which either goes to Shell shareholders, net of taxes, or might be used to finance other Shell activities.

(3) Shell might be able to do other things with negative externalities and suffer fewer consequences due to positive PR effects from Sonnen.

Since Shell will probably evaluate other projects on their own merit, and can easily borrow money in financial markets, (2) ... (read more)

I didn't really think it was rude - more a somewhat aggravating tone, which may or may not be a different thing, depending on who you ask. I just granted the point for the sake of not having to litigate it.

I think banning someone for a pattern of comments like this would be overly heavy handed and reflect badly on the forum, especially when many of Sabs' comments are fairly productive (I just glanced through recent comments and the majority had positive karma and made decent points IMO).

To be concrete about it, I think a somewhat rude person with good points to make, coming here and giving their perspective, mostly constructively, is something we should want more of rather than less at the current margin. It's not like the EA forum is in any short term danger of becoming a haven for trolling and rudeness, and if there are concerns it is heading in that direction at any point it should be possible to course correct.

8
Sabs
Thanks for the support, but can I ask a genuine question: how on earth is this comment rude? It does not personally attack the OP, or indeed anyone at all. Indeed it doesn't even criticize OP or their post! It simply gives a warning with a jokey but also sincere reference to the FTX scandal, where I genuinely think that it's quite likely that amphetamine abuse played a fairly important role in what went wrong - both from my own personal information that I've received and from what's been written up on e.g Milkyeggs. I do think the EA cult of productivity is a dangerous thing, or at least it can be! A lot of other people feel the same! 

I agree strongly here re: GWWC. I think it is very odd that they endorse a charity without a clear public explanation of why the charity is effective which could satisfy a mildly skeptical outsider. This is a bar that this clearly does not reach in my opinion. They don't need to have the same evidential requirements as Givewell, but the list of charities they recommend is sufficiently long that they should prefer to have a moderately high bar for charities to make that list.

To admit my priors here: I am very skeptical of Strong Minds effectiveness given th... (read more)

Your "best guess" is that the effect of a deworming treatment on happiness is a sudden benefit followed by a slow decline relative to no treatment? Do you have any theory of action that explains why this would be the case?

Trying to draw conclusions from such a dramatically underpowered study (with regard to this question) strikes me as absurd.

8
Ryan Dwyer
Hi Charles,

Our takeaway from this data is that there is not evidence of an effect (positive or negative). We take these data to be our best guess because there are no prior studies of the effect of deworming on SWB, and the evidence of impact on other outcomes is very uncertain. However, all the effects are non-significant. We don't have a theory of action because we think the overall evidence points to there being no effect (or at least just a very small one).

We ran the cost-effectiveness analysis as an exercise to see how deworming would look if we took the data at face value. The point estimate was negative, but the confidence interval was so wide that the results were essentially uninformative, which converges with our conclusion that there is not a substantial effect of deworming on long-term wellbeing.

That being said, we can make assumptions that are favorable to deworming, such as assuming the effect cannot be negative. This, of course, involves overriding the data with prior beliefs - prior beliefs that we lack strong reasons to hold. In any case, we explore the results under these favorable assumptions in Appendix A2. In all plausible cases, deworming is still less cost-effective than StrongMinds, so even these exploratory analyses - which, again, we don't endorse - don't change our conclusion to not recommend deworming over StrongMinds.

Regarding power: it is unclear what evidence you use to claim the study is underpowered. As Joel mentioned in his comment to MichaelStJules (repasted below), we had 98% power to detect effect sizes of 0.08 SDs, the effect size that would make deworming more cost-effective than StrongMinds.

"However, maybe a small minority happy to do it would gradually build momentum over time." This seems possible, but if the goal is to maximise resources, I would be quite surprised if e.g. the number of billionaires willing to give away 99.99%+ of their wealth was even 1/10th as high as the number willing to give away 90%. Clearly nobody truly needs $100m+, but nonetheless I would be very wary of potentially putting off a Bill Gates (who lives in a $150m house) due to being too demanding, when 99% of his wealth does approximately 99% as much good as all o... (read more)

I think a compelling reason for not doing this is mostly that it is past what I would guess the optimal level of demandingness would be for growing the movement. I would expect far fewer high earners would be willing to take on a prescription that they keep nothing above that sort of level than that they donate a substantial fraction.

I for one would find it too demanding, and I think it would be very bad if others like me (for context, I will be donating over 50% of my income this year) bounced off the movement because it seemed too demanding.

4
Vasco Grilo🔸
Thanks for answering, Charles! I guess the demandingness can be adjusted (downwards or upwards) by adapting the annual consumption and total savings. The numbers I provided are not supposed to be an iron rule. As I said, I tend to agree with you that "maybe a small minority happy to do it would gradually build momentum over time". Happy to know you will be donating over 50 %! It would indeed be sad if people bounced off because of that. That being said, I would expect people to continue to see donation norms as non-binary. In the same way that it is fine to donate less than 10 %, it would be fine to have an annual consumption per person greater than 41.3 k$ (or other), or total savings per person greater than 82.7 k$ (or other).

Weak disagree but upvoted - I think that Kelsey has played this game enough to know what's up

"I genuinely thought SBF was comfortable with our interview being published and knew that was going to happen. "

This is not credible, and anyone who thinks this is credible is engaged in motivated reasoning.

I still think you should have published the interview, but you don't need to lie about this.

There are options between credible and lying. It's possible, for one thing, that Kelsey was engaged in some motivated reasoning herself, trying to make these trade-offs between her values while faced with a clear incentive in one direction.

"Typically, this term refers to a rhetorical strategy where the speaker attacks the character, motive, or some other attribute of the person making an argument rather than addressing the substance of the argument itself."

Jonas said that Nathan was making overblown claims here and on Twitter. In particular, the inclusion of "and on Twitter" points to Nathan as someone engaged in irresponsible conduct, without addressing the substance, and thus meets the definition of an ad hominem IMO.

My second point addresses your point 2. As I said, there are many people w... (read more)

Thanks for the response. I still do not think the post made it clear what its objective was, and I don't think it's really the venue for this kind of discussion.

5
Evan_Gaensbauer
I meant the initial question literally and sought an answer. I listed some general kinds of answers and clarified that I'm seeking answers about what potential factors may be shaping Musk's approaches that would not be so obvious. I acknowledge I could have written that better, and that the tone makes it ambiguous whether I was trying to slag him off disguised as asking a sincere question.

I think this is an irresponsible ad hominem to be posting without any substance or link to substance whatsoever. There are many EAs who know a lot about crypto and read the forum - if there are substantial criticisms to be made I think you can expect them to make them without this vague insinuation.

It's important that this is not an ad hominem.

I'm torn between:

  1. It is pretty annoying when Nathan has come in with a best-guess doc, being very transparent, to get such a blanket and vague statement argued from authority. An EA community that lost its ability to have open discussion and relied on authority like that would be a worse one indeed. And:
  2. If Jonas has received a tip from someone, but does not want to reveal his source, and his source does not want to post more details, this is the best Jonas can do. Jonas has added information to the commons, and been rewarded by losing karma.

Retracted it, didn't mean to attack Nathan personally. Apologies.
