All of Charles Dillon's Comments + Replies

Thanks for posting this - on a quick read it looks pretty accurate to me and I'll be glad to have this as a resource to point people to when they seem not to understand exactly why what FTX did was so bad.

I don't understand why you think this is the case. If you think of the "distribution of grants given" as a sum of multiple different distributions (e.g. upskilling, events, and funding programmes) of significantly varying importance across cause areas, then more or less dropping the first two would give your overall distribution a very different shape.

5
Aaron Bergman
4mo
Yeah you're right, not sure what I missed on the first read

I think getting enough people interested in working on animal welfare has not usually been the bottleneck, relative to money to directly deploy on projects, which tend to be larger.

2
Aaron Bergman
4mo
This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.

Seems pretty unsurprising - the animal welfare fund is mostly giving to orgs, while the others give to small groups or individuals for upskilling/outreach frequently.

8
MichaelStJules
4mo
I think the differences between the LTFF and AWF are largely explained by differences in salary expectations/standards between the cause areas. There are small groups and individuals getting money from the AWF, and they tend to get much less for similar duration projects. Salaries in effective animal advocacy are pretty consistently substantially lower than in AI safety (and software/ML, which AI safety employers and grantmakers might try to compete with somewhat), with some exceptions. This is true even for work in high-income countries like the US and the UK. And, of course, salary expectations are even lower in low- and middle-income countries, which are an area of focus of the AWF (within neglected regions). Plus, many AI safety folks are in the Bay Area specifically, which is pretty expensive (although animal advocates in London also aren't paid as much).
8
Aaron Bergman
4mo
Yeah but my (implicit, should have made explicit lol) question is “why this is the case?” Like at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (eg older field -> organizations are better developed) but they’d all be post hoc.

Type 1 diabetic and long time EA here.

Generally, when I have donated to help people directly (to be clear, most of my recent donations have not been of this form; in recent years they have focused on research or on helping animals), I am not really thinking about how big the problem is. I am thinking "what will the consequence of this donation be?" If I am donating less than millions of dollars, I'm not likely to solve the whole issue, so the question of whether the issue is big or small in a global sense just isn't very important.

For type 1 diabetes... (read more)

2
FionaConner
7mo
Great answer, thank you!

Did someone say it would be bad? Where?

-1
wes R
7mo
I honestly can’t remember.

I think the layout of this post is quite reader-unfriendly.

I strongly suggest you start with a full summary rather than just an intro, and don't bury your conclusions midway between the post and some very long appendices which are unlikely to be very useful to 90% of readers.

As it is, anyone wishing to respond in depth would basically have to do the work of summarizing the post themselves, which increases the friction on feedback.

4
trammell
8mo
I agree! As noted under Richard’s comment, I’m afraid my only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time to write this before I could make it work. (And I won’t be free again for a while…) If you or anyone else reading this manages to write one in the meantime, send it over and I’ll stick it at the top.

The article gives a magnitude for fish farming. It does not talk about wild fish. Why is the scale of wild fish relevant?

Did you read the article? It is about intensive fish farming, and addresses all your points in detail, which you do not acknowledge.

I was not aware of the enormous weight of aquaculture in final fish production. I was thinking it was around 10%, but it is close to one half.

https://ourworldindata.org/rise-of-aquaculture

Onmizoid is right, and I have retracted my comment.

2
Vasco Grilo
10mo
Thanks! In that case, 92.5 % (= 160/173) of the predictions for a population loss of 95 % due to climate change given a 10 % loss due to climate change were made with the 1 % lower limit. So I assume 0.0228 % chance for a 95 % population loss due to climate change is still an overestimate.

This conceptually seems similar to the meat eater problem argument against global health interventions.

You may be aware of this already, but I think there is a clear difference between saving an existing person who would otherwise have died - and in the process reducing suffering by also preventing non-fatal illnesses - and starting a pregnancy, since before a pregnancy is started the person doesn't exist yet.

I think a lot of this coordination is implicit rather than explicit, and I don't think it's very well publicised. (There's also room for marginal donations to change whether the org gets funded to its high vs. medium target, for example, plus the signalling value of individuals indicating they think this work is good, so I do not mean to say that funging is the only consequence of a donation.)

I think there is a misconception here - when it is said that these charities will be fully funded anyway, what that can mean is that they will try to fundraise for a certain budget (perhaps with high/medium/low targets) and larger donors will often choose to fill the remaining gap in their fundraising late in the fundraising process.

This means you are often not really giving the charity extra on top of their budget, but in practice funging with the largest donors. The largest donors will then often give slightly less to them and give to their next best opt... (read more)

5
Maxim Vandaele
1y
Hello, thank you for clarifying. I didn't know that the fundraising process is coordinated in this sort of way. I get the impression that many introductory materials on effective altruism don't really explain this too well, leading to the sort of misconception I may have had when I wrote my question.

I think it would follow from this, and your radical uncertainty with regard to non-long-term interventions, that you would want to include these donations as positively impactful.

4
Vasco Grilo
1y
I accounted for donations going to the area of "creating a better future" which were tagged as "multiple cause areas". GWWC tagged 11 % as going to creating a better future, but I assumed 13.3 % (= 11/(65 + 7 + 11) = "tagged as creating a better future"/("tagged as improving human welfare" + "tagged as improving animal welfare" + "tagged as creating a better future")) went to creating a better future. This may not be accurate if the donations going to "multiple cause areas" are disproportionately going to "creating a better future", so I take the point that it would be better to explicitly analyse where the donations in the bucket of "multiple cause areas" are going.
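As a quick sanity check, the proportional reallocation described above can be sketched as follows (the percentages are the ones quoted in the comment; the variable names are mine):

```python
# Reallocating the "multiple cause areas" bucket proportionally, using the
# GWWC tagged shares quoted above (these figures are taken from the comment,
# not from any dataset of mine).
human_welfare = 65   # % tagged as improving human welfare
animal_welfare = 7   # % tagged as improving animal welfare
better_future = 11   # % tagged as creating a better future

# Share of the tagged total assumed to go to "creating a better future".
better_future_share = better_future / (human_welfare + animal_welfare + better_future)
print(f"{100 * better_future_share:.1f} %")  # 13.3 %
```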

Do you know how they tag the cause area of a given donation?

Is EA community building work considered separately, or included in "creating a better future"?

5
Vasco Grilo
1y
Hi Charles,

Good question! The correspondence is here. It depends: if such work were funded by the Long-term Future Fund, it would be included in "creating a better future"; if it were funded by the Centre for Effective Altruism or the EA Infrastructure Fund, it would be included in "multiple cause areas".

Suggestion: pre-commit to a ranking method for forecasters. Chuck out questions which go to <5%/>95% within a week. Take the pairs (question, time) with 10n+ updates within the last m days for some n,m, and no overlap (for questions with overlap pick the time which maximises number of predictions). Take the n best forecasters per your ranking method in the sample and compare them to the full sample and the "without them" sample.
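A rough sketch of the question-filtering step in this suggestion might look like the following (the `Question`/`Prediction` shapes are hypothetical stand-ins, not any real forecasting-platform API):

```python
# Sketch of "chuck out questions which go to <5%/>95% within a week".
# Data shapes are illustrative only.
from dataclasses import dataclass

@dataclass
class Prediction:
    days_since_open: float
    probability: float

@dataclass
class Question:
    name: str
    predictions: list  # list[Prediction], in time order

def is_near_resolved_early(q: Question) -> bool:
    """True if the community forecast hits <5% or >95% within the first week."""
    return any(
        p.days_since_open <= 7 and (p.probability < 0.05 or p.probability > 0.95)
        for p in q.predictions
    )

questions = [
    Question("settled fast", [Prediction(3, 0.97), Prediction(30, 0.99)]),
    Question("stayed live", [Prediction(3, 0.60), Prediction(30, 0.40)]),
]
kept = [q.name for q in questions if not is_near_resolved_early(q)]
print(kept)  # ['stayed live']
```

The remaining steps (picking (question, time) pairs with enough recent updates, then comparing the top-n forecasters against the full and "without them" samples) would operate on the `kept` list.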

Can you quantify how much work recency weighting is doing here? I could imagine it explaining all (or even more than all) of the effect (e.g. if many "best" forecasters have stale predictions relative to the community prediction often).

3
nikos
1y
Not sure how to quantify that (open to ideas). But intuitively I agree with you and would suspect it's at least a sizable part.

I expect the population of users will have similar propensity to update on most questions. The biggest reason for updating some questions more often is new facts emerging which cause changes of mind. This is a massive confounder here, since questions with ex ante surprising updates seem harder to predict almost by definition.

6
titotal
1y
Yes, it seems like more uncertain and speculative questions with less available evidence would have larger swings in beliefs. So it's possible that updating does help, but not enough to overcome the difficulty of the problems. If this is what happened, the takeaway is that we should be more skeptical of predictions that are more speculative and more uncertain, which makes sense.

I could see a way for updating to make predictions worse: if there were a systematic bias in which pro- or anti-proposition evidence is seen, or a bias in how pro or anti evidence is updated on. To pick an extreme example, if someone was trying to evaluate whether the earth was flat, but only considered evidence from flat earth websites, then more updating would simply drag them further and further away from the truth. This could also explain why Metaculus is doing worse on AI predictions than on other predictions, if there were a bias specifically in this field.

Unfortunately not - the person never followed up and when I asked them a few months later they did not respond.

I don't have many strong opinions on this topic, but one I do have and think should be standard practice is recusing oneself from decisions involving current or former romantic partners.

That means not being involved in hiring processes and grantmaking decisions involving them, and not giving them references without noting the conflict of interest. This is very standard in professional organisations for good reason.

I think the point is well made by Lorenzo, as someone who understands what the linked text is referring to and doesn't need to click on the link. I think it is good that the link is there for those who do not know what he meant or want clarification.

In general I think it is a bad idea to demand more work from people communicating with you - it discourages them from trying to communicate in the first place. This is similar to the trivial inconvenience point itself.

3
Luca Parodi
1y
To be fair, mine regarding the link-to-articles tendency is not a well-formed opinion, just something I've felt during some online and offline conversations - especially with other fellow rationalists, when they quote one of Scott's articles or an obscure post in the Sequences when it's not absolutely needed.

By the way, I think it's also a bad idea to demand more work from people you are communicating with, like informally requesting that they read a full article instead of trying to explain your point in plain terms.

Let's put it this way: we have the privilege of linking to articles/concepts in our bubble because we kinda know what we're talking about and we are people who like to spend time reading, but what if we have to communicate with someone from outside the bubble? We won't have that privilege and will have to explain ourselves in plain terms. It's not a trivial inconvenience: if we don't exercise our ability to reduce the inferential distance between "us" and "others" (yes, I am guilty of the same sin), starting with ourselves, we will always be unable to communicate our ideas properly. But, again, I haven't thought about this issue properly, so I reserve the right to take some time to refine or abandon my arguments.

I think there should be much more focus on the question of whether this is actually a positive intervention than just one paragraph noting that you haven't thought about the benefits.

The claim that most smokers don't seem to want to quit seems really important to me, and could reduce the scale of the problem to the effects of secondhand smoke vs net benefits to smokers, which might be better treated with other policies (like indoor smoking bans for example).

The Gruber paper (linked below in my comment) suggests that reducing smoking actually makes the population of smokers and potential smokers happier.

In any case, it doesn't appear to me true that most smokers don't want to quit - see data on the US. And even in China, where most people don't want to quit, a strong majority (70%) supports the government doing more to control smoking.

Interesting post. I haven't conducted the in-depth research to verify most of the figures, but I do find the idea that you have a 55% chance of success with a $208k one-year advocacy campaign pretty implausible, and suspect there's something dubious going on with the method of estimating P(success) here.

I think an appropriate fact to incorporate which I did not see would be "actual costs of lobbying in the US" and "frequency of novel regulations passing" on which I presume there is quite a bit of data available.

3
Joel Tan
1y
The probability of advocacy success is a fairly critical variable, and I agree that the estimate provided could well be too optimistic. It really depends on (a) what reference class you take, and (b) how you weigh it against subjective inside-view estimates. For example, my estimate of (b), as informed by working in the public sector/politics, is fairly low, but if you do a case study of when sugar taxes were actually advocated (and implemented or not), the success rate is really impressive (~90%), and the real challenge becomes adjusting for selection bias - both with respect to advocacy being tried at all (in countries where political conditions were more favourable in the first place), and successful attempts being noted in the news (while failed ones die inside the government, unreported).

On the other hand, sugary drinks taxes really aren't that uncommon, so it's not that surprising that they wouldn't be too difficult to advocate for (relative to something like sodium tax advocacy, which is probably a quarter as tractable). I would also caution against using US lobbying costs, since those aren't necessarily representative (i.e. the modal campaign wouldn't be hiring K Street lobbyists in the US, so much as an NGO talking to low- and middle-income-country governments, which tend to defer to NGOs more than Western governments do).

In general, I hope to get a better sense of this by talking to experts (even while noting that public health experts may well also be overoptimistic due to halo effects/wishful thinking!)
2
Rina
1y
My general sense is that a lot of policy advocacy projects look really good in terms of CEAs, as the scope tends to be high, but few properly discount for likelihood of success or indeed, as you suggest, for actual lobbying costs over time and the relevancy, frequency, and take-up of regulations.

Just a note on Jane Street in particular - nobody at Jane Street is making a potentially multi year bet on interest rates with Jane Street money. That's simply not in the category of things that Jane Street trades. If someone at Jane Street wanted to make betting on this a significant part of what they do, they'd have to leave and go elsewhere and find someone to give them at least hundreds of millions of dollars to make the bet.

A few thoughts, though I wouldn't give them too much weight:

The considerations I can think of look something like:

(1) Sonnen does work with some positive externalities.

(2) Sonnen makes some profit, which either goes to Shell shareholders, net of taxes, or might be used to finance other Shell activities.

(3) Shell might be able to do other things with negative externalities and suffer fewer consequences due to positive PR effects from Sonnen.

Since Shell will probably evaluate other projects on their own merit, and can easily borrow money in financial markets, (2) ... (read more)

I didn't really think it was rude - more a somewhat aggravating tone, which may or may not be a different thing, depending on who you ask. I just granted the point for the sake of not having to litigate it.

I think banning someone for a pattern of comments like this would be overly heavy handed and reflect badly on the forum, especially when many of Sabs' comments are fairly productive (I just glanced through recent comments and the majority had positive karma and made decent points IMO).

To be concrete about it, I think a somewhat rude person with good points to make, coming here and giving their perspective, mostly constructively, is something we should want more of rather than less at the current margin. It's not like the EA forum is in any short term danger of becoming a haven for trolling and rudeness, and if there are concerns it is heading in that direction at any point it should be possible to course correct.

8
Sabs
1y
Thanks for the support, but can I ask a genuine question: how on earth is this comment rude? It does not personally attack the OP, or indeed anyone at all. Indeed, it doesn't even criticize the OP or their post! It simply gives a warning with a jokey but also sincere reference to the FTX scandal, where I genuinely think it's quite likely that amphetamine abuse played a fairly important role in what went wrong - both from my own personal information that I've received and from what's been written up on e.g. Milkyeggs. I do think the EA cult of productivity is a dangerous thing, or at least it can be! A lot of other people feel the same!

I agree strongly here re: GWWC. I think it is very odd that they endorse a charity without a clear public explanation of why the charity is effective that could satisfy a mildly skeptical outsider - a bar this clearly does not reach, in my opinion. They don't need to have the same evidential requirements as GiveWell, but the list of charities they recommend is sufficiently long that they should prefer a moderately high bar for charities to make that list.

To admit my priors here: I am very skeptical of Strong Minds effectiveness given th... (read more)

Your "best guess" is that the effect of a deworming treatment on happiness is a sudden benefit followed by a slow decline relative to no treatment? Do you have any theory of action that explains why this would be the case?

Trying to draw conclusions from such a dramatically underpowered study (with regard to this question) strikes me as absurd.

8
Ryan Dwyer
1y
Hi Charles,

Our takeaway from this data is that there is no evidence of an effect (positive or negative). We take these data to be our best guess because there are no prior studies of the effect of deworming on SWB, and the evidence of impact on other outcomes is very uncertain. However, all the effects are non-significant. We don't have a theory of action because we think the overall evidence points to there being no effect (or at least only a very small one).

We ran the cost-effectiveness analysis as an exercise to see how deworming would look if we took the data at face value. The point estimate was negative, but the confidence interval was so wide that the results were essentially uninformative, which converges with our conclusion that there is not a substantial effect of deworming on long-term wellbeing.

That being said, we can make assumptions that are favorable to deworming, such as assuming the effect cannot be negative. This, of course, involves overriding the data with prior beliefs - prior beliefs that we lack strong reasons to hold. In any case, we explore the results under these favorable assumptions in Appendix A2. In all plausible cases, deworming is still less cost-effective than StrongMinds, so even these exploratory analyses - which, again, we don't endorse - don't change our conclusion to not recommend deworming over StrongMinds.

Regarding power: it is unclear what evidence you use to claim the study is underpowered. As Joel mentioned in his comment to MichaelStJules (repasted below), we had 98% power to detect effect sizes of 0.08 SDs, the effect size that would make deworming more cost-effective than StrongMinds.

"However, maybe a small minority happy to do it would gradually build momentum over time." This seems possible, but if the goal is to maximise resources, I would be quite surprised if e.g. the number of billionaires willing to give away 99.99%+ of their wealth was even 1/10th as high as the number willing to give away 90%. Clearly nobody truly needs $100m+, but nonetheless I would be very wary of potentially putting off a Bill Gates (who lives in a $150m house) by being too demanding, when 99% of his wealth does approximately 99% as much good as all o... (read more)

I think a compelling reason for not doing this is mostly that it is past what I would guess the optimal level of demandingness would be for growing the movement. I would expect far fewer high earners would be willing to take on a prescription that they keep nothing above that sort of level than that they donate a substantial fraction.

I for one would find it too demanding, and I think it would be very bad if others like me (for context, I will be donating over 50% of my income this year) bounced off the movement because it seemed too demanding.

4
Vasco Grilo
1y
Thanks for answering, Charles! I guess the demandingness can be adjusted (downwards or upwards) by adapting the annual consumption and total savings. The numbers I provided are not supposed to be an iron rule. As I said, I tend to agree with you; however, maybe a small minority happy to do it would gradually build momentum over time. Happy to know you will be donating over 50 %! It would indeed be sad if people bounced off because of that. That being said, I would expect people to continue to see donation norms as non-binary. In the same way that it is fine to donate less than 10 %, it would be fine to have an annual consumption per person greater than 41.3 k$ (or other), or total savings per person greater than 82.7 k$ (or other).

Weak disagree but upvoted - I think that Kelsey has played this game enough to know what's up

"I genuinely thought SBF was comfortable with our interview being published and knew that was going to happen. "

This is not credible, and anyone who thinks this is credible is engaged in motivated reasoning.

I still think you should have published the interview, but you don't need to lie about this.

There are options between credible and lying. It's possible, for one thing, that Kelsey was engaged in some motivated reasoning herself, trying to make these trade-offs between her values while faced with a clear incentive in one direction.

"Typically, this term refers to a rhetorical strategy where the speaker attacks the character, motive, or some other attribute of the person making an argument rather than addressing the substance of the argument itself."

Jonas said that Nathan was making overblown claims here and on Twitter. In particular the inclusion of "and on Twitter" points to Nathan as someone engaged in irresponsible conduct, without addressing his substance, and thus meets the definition of an ad hominem IMO.

My second point addresses your point 2. As I said, there are many people w... (read more)

Thanks for the response. I still do not think the post made it clear what its objective was, and I don't think it's really the venue for this kind of discussion.

5
Evan_Gaensbauer
1y
I meant the initial question literally and sought an answer. I listed some general kinds of answers and clarified that I'm seeking answers about what potential factors may be shaping Musk's approaches that would not be so obvious. I acknowledge I could have written that better, and that the tone makes it ambiguous whether I was trying to slag him under the guise of asking a sincere question.

I think this is an irresponsible ad hominem to be posting without any substance or link to substance whatsoever. There are many EAs who know a lot about crypto and read the forum - if there are substantial criticisms to be made I think you can expect them to make them without this vague insinuation.

It's important that this is not an ad hominem.

I'm torn between:

  1. It is pretty annoying when Nathan has come in with a best-guess doc, being very transparent, to get such a blanket and vague statement argued from authority. An EA community that lost its ability to have open discussion and relied on authority like that would be a worse one indeed. And:
  2. If Jonas has received a tip from someone, but does not want to reveal his source, and his source does not want to post more details, this is the best Jonas can do. Jonas has added information to the commons, and been rewarded by losing karma.

Retracted it, didn't mean to attack Nathan personally. Apologies.

4
Evan_Gaensbauer
1y
Musk has for years identified that one of the major motivators for most of his endeavours is to ensure civilization is preserved. From EA convincing Elon Musk to take existential threats from transformative AI seriously almost a decade ago, to his recent endorsement of longtermism and William MacAskill's What We Owe the Future on Twitter for millions to see, the public will perceive a strong association between him and EA. He also continues to influence the public response to potential existential threats like unaligned AI and the climate crisis, among others. Even if Musk has more hits than misses, his track record is mixed enough that it's worth trying to notice any real patterns across his mistakes so the negative impact can be mitigated. Given Musk's enduring respect for EA, the community may be better placed than most to inspire him to make better decisions in the future as they relate to having a positive social impact, i.e., to become better calibrated.

There are writing issues and I'm not sure the net value of the post is positive.

But your view seems ungenerous, ideas in paragraphs like this seem relevant:

This isn't a snide jab at Will MacAskill. He in fact recognized this problem before most and has made the wise choice of not being the CEO of the CEA for a decade now even though he could have kept the job forever if he wanted. 

This is a general problem in EA of many academics having to repeatedly learn they have little to no comparative advantage, if not a comparative disadvantage, in people and o

... (read more)

I'm pretty sure the answer is no, you can't, for exactly the same reasons as why it would look dodgy to any regulators.

Additionally, the potential reputational harm to EA from this sort of thing probably should be taken into account.

There are enough sufficiently wealthy EAs that you might well be able to get funding from them if you have a good startup plan, without any reputational risk (and if you can't then this would be a weak but maybe valuable signal that your plan is not as good as you think).

That is why I left quite large margins for error, one of which you note, the other being that those 6 were only earning 1m+, not donating.

Demand for plant-based meat having peaked is evidence against meat consumption declining, not in favour of it. And I don't think any serious unbiased analysts have suggested that lab-grown meat would exceed 10% of global supply by 2040, if it ever becomes viable. See https://forum.effectivealtruism.org/posts/2b9HCjTiFnWM8jkRM/forecasts-estimate-limited-cultured-meat-production-through for a much more typical and pessimistic take.

I would guess that you could ballpark the marginal value somewhere around market prices in the US, which a Google search says are $50-75 per visit. Plausibly this is higher in the UK due to shortages brought on by the lack of a market, though this is not clear to me.

Does the NHS ever pay to import blood? If so, that number, times the average cost efficiency of the NHS, which I think is approx £20k per year of healthy life, should not be way off, though of course it is oversimplified in numerous ways.

Given the above, I would be a little surprised if any reasonable version of this calculation got an answer substantially higher than 1 quality-adjusted life day.
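For what it's worth, the back-of-the-envelope arithmetic behind that conclusion can be sketched as follows (a sketch only; both figures are the rough guesses above, not real NHS data):

```python
# Rough ballpark of the QALY value of one blood donation, using the figures
# guessed in the comment above (both are assumptions, taken at the low end).
MARKET_PRICE_PER_VISIT_GBP = 50    # ~US market rate of $50-75 per visit
NHS_COST_PER_QALY_GBP = 20_000     # approximate NHS cost per year of healthy life

qalys_per_donation = MARKET_PRICE_PER_VISIT_GBP / NHS_COST_PER_QALY_GBP
days_per_donation = qalys_per_donation * 365

print(f"{qalys_per_donation:.4f} QALYs per donation, "
      f"i.e. about {days_per_donation:.2f} quality-adjusted life days")
```

This lands at roughly one quality-adjusted life day per donation, consistent with the claim above.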

Well done for doing this! I think attempted replications or re-examinations of existing work are under-done in EA and wish more were conducted.

Can you give an example of a point or points in there you found compelling?

That article looks like the usual "utilitarianism is bad" stuff (an argument which predates EA by a long time and has seen little progress in recent times) combined with some strong mood affiliation and straightforward misunderstandings of economic thinking to me.

I've edited it slightly to work on this, though it is not easy to make this point without appearing slightly callous, I think.

What was this distinct reason? If this was mentioned in the post, I didn't see it.

If it wasn't mentioned in the post, it feels disingenuous of you to not mention it and give the impression that you were left in the dark and had to come up with your own list of hypotheses. It's quite difficult for a third party to come to any conclusions without this piece of information.

This comment feels unnecessarily combative, even though I agree with the practical point that without this piece of information, 3rd party observers can’t really get an accurate picture of the situation. So I agreed with but downvoted the comment.

I'm a bit confused by all the drive by downvotes of someone sharing a quickly sketched out plausible-sounding idea.

I think we'd be better off if we encouraged this sort of thing rather than discouraged it, at least until there actually seems to be a problem with too many half-baked novel ideas being posted - if people disagree I'd like to know why.

Very few. The 2020 EA survey had only 6 people earning $1m+, which doesn't necessarily equate to donating as much. I think it's unlikely that fewer than 5% of such people took the survey, given it had 2,000 responses and I don't think there are more than 10,000 committed EAs, so I think there are likely under 100 such people.

https://forum.effectivealtruism.org/posts/nb6tQ5MRRpXydJQFq/ea-survey-2020-series-donation-data

I think it's reasonably likely that people earning $1m / year are systematically less inclined to bother with the survey, so I would be cautious about using the community response rate to extrapolate.

(On the other hand, 2000 is 5% of 40000, not 10000)
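The extrapolation being debated can be sketched as follows (all numbers are the rough figures from this thread, not actual survey statistics):

```python
# Back-of-the-envelope extrapolation from the thread's numbers (all assumed):
# 6 survey respondents reported earning $1m+, out of ~2,000 responses.
observed_high_earners = 6
survey_responses = 2_000

def implied_total(community_size: int) -> float:
    """Scale the observed count by the implied survey response rate."""
    response_rate = survey_responses / community_size
    return observed_high_earners / response_rate

# With a community of 10,000 the response rate is 20%, implying ~30 people;
# with 40,000 (the corrected figure) it is 5%, implying ~120.
print(implied_total(10_000))  # 30.0
print(implied_total(40_000))  # 120.0
```

This assumes high earners respond at the community-average rate, which is exactly the assumption being questioned above.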

I expect much of the data is out there, because the majority of billionaires either want to give publicly, or they need to disclose when they change their shareholdings in their main source of wealth (in the case of the typical company founder) due to regulations, and donating to charity is seen as a good excuse to do this.

It may be rather difficult to gather though, as I don't expect there to be a nice centralised source.

1
Rob Percival
2y
I guess the harder the data is to gather the more valuable the resource would be! If it is actually something people are interested in that is...

62 pages is quite long - I understand then why you wouldn't put it on the forum.

I really dislike reading PDFs, as I read most non-work things on mobile, and in Chrome-based web browsers they don't open in browser tabs, which is where I store everything else I want to read.

I think I'd prefer some web based presentation, ideally with something like one web page per chapter/ large section. I don't know if this is representative of others though.

7
James Özden
2y
I've made a Google Doc version if this is better - thanks for the feedback, it's been very useful! 

I'm glad you produced this. One thing I found annoying, though, was that you said:

"The evidence related to each outcome, and how we arrived at these values, are explained further in the respective sections below."

But, they weren't? The report was just partially summarised here, with a link to your website. Why did you choose to do this?

3
James Özden
2y
Ah yes, thanks for pointing that out! That sentence is there because we literally copied it from the report's executive summary (so in that context it is below). It's a fair point though, so I will change it to reflect that it's not actually below.

I chose not to put the whole report on the forum just because it's so long (62 pages) and I was worried it would (a) put people off reading anything at all, (b) take me longer than was reasonable to get all the formatting right, and (c) provide a worse reading experience given it was all formatted for PDF/Google Docs (and I wasn't going to spend loads of time formatting it perfectly). Curious though - would you prefer to read / find it easier if it were all on the Forum, or what would be your preferred reading location for longer reports?