All of Jamie_Harris's Comments + Replies

Unfortunately this was quite a while ago, at the last org I worked at; I don't have access to the relevant spreadsheets, email chains, etc. anymore, and my memory is not the best, so I don't expect to be able to add much beyond what I wrote in the comment above.

I tried doing this a while back. Some things I think I worried about at the time:

  1. Disheartening people excessively by sending them scores that seem very low/brutal, especially if you use an unusual scoring methodology
  2. Causing yourself more time costs than it seems like at first, because (a) you find yourself needing to add caveats or manually hide some info to make it less disheartening to people, and (b) people ask you follow-up questions
  3. Exposing yourself to some sort of unknown legal risk by saying something not-legally-defensible about the candi... (read more)

2
Joseph Lemien
11d
Regarding "disheartening people," I once got feedback for a hiring round and the organization shared what scores I got, and even shared scoring info for the other (anonymized) candidates. It was the best and most accurate data I have ever been given as feedback. I scored very low, much lower than I had expected. Of course I felt sad and frustrated. I wish that I knew more details about their scoring methodology, and part of me says that it was  an unfair process because they weren't clear on what I would be evaluated on. But I draw a analogies to getting rejected from anything else (such as a school application or a romantic partner): it sucks, but you get over it eventually. I felt bad for a day or two, and then the feelings of frustration faded away.
1
a guy named josh
11d
Okay, I definitely see those concerns! Unknown legal risk, especially when hiring in a lot of different countries at the same time with potentially different laws, seems like a good reason not to release scores. For me personally, getting a rejection vs getting a rejection and being told I had the lowest score among all applicants probably wouldn't make much of a difference - it might even save me time spent on future applications for similar positions. But on that note, maybe just releasing quartiles would be a better, less brutal alternative? I think a general, short explainer of the scoring methodology used for a hiring round could/should be released to the applicants, if only for transparency's sake. So, explainer + raw scores and no ranking might also be another alternative? Maybe I am misguided in my idea that 'this could be a low-time-cost way of making sure all applicants get a somewhat better sense of how good/bad their applications were.' I have, after all, only ever been on the applicant side of things, and it does seem the current system is working fine at generating good hires.
2
Joseph Lemien
12d
Jamie, I've been contemplating writing up a couple of informal "case study"-type reports of different hiring practices. My intention/thought process would be to allow EA orgs to learn about how several different orgs do hiring, to highlight some best practices, and generally to allow/encourage organizations to improve their methods. How would you feel about writing up a summary or having a call with me to allow me to understand how you tried giving feedback and what specific aspects caused challenges?

Thank you!

I understand the reasons for ranking relative to a given cost-effectiveness bar (or by a given cost-effectiveness metric). That provides more information than constraining the ranking to a numerical list, so I appreciate that.

Btw, if you had 5-10 mins spare I think it'd be really helpful to add explanation notes to the cells in the top row of the spreadsheet. E.g. I don't know what "MEV" stands for, or what the "cost-effectiveness" or "cause no." columns are referring to. (Currently these things mean that I probably won't share the spreadsheet with people because I'd need to do a lot of explaining or caveating to them, whereas I'd be more likely to share it if it was more self-explanatory.)

4
Joel Tan
14d
Hi Jamie, I've updated to clarify that the "MEV" column is just "DALYs per USD 100,000". I have hidden some of the other columns (they're just for internal administrative/labelling purposes).

Thanks! When you say "median in quality" what's the dataset/category that you're referring to? Is it e.g. the 3 ranked lists I referred to, or something like "anyone who gives this a go privately"?

2
calebp
14d
Sorry, it wasn't clear. The reference class I had in mind was cause prio-focussed resources on the EA Forum.

Very helpful comment, thank you for taking the time to write out this reply and sharing useful reflections and resources!

First, I think precise ranking of "cause areas" is nearly impossible, as it's hard to meaningfully calculate the "cost-effectiveness" of a cause; you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful ranking, you at least need an intervention which has probably already been tried and researched to some degree.

There's a lot going o... (read more)

Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1] it seems surprisingly rare to me that people actually do the hard work of:

  1. (Systematically) exploring cause areas
  2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
  3. Sharing their list and reasons publicly.[2]

The lists I can think of that do this best are 80,000 Hours', Open Philanthropy's, and CEARCH's.

Related things I appreciate, but aren't quite what I'm envisioning:

  • Tools and m
... (read more)
4
calebp
15d
I think people/orgs do some amount of this, but it's kind of a pain to share them publicly. I prefer to share this kind of stuff with specific people in Google Docs, in in-person conversations, or on Slack. I also worry somewhat about people deferring to random cause prio posts, and I'd guess that on the current margin, more cause prio posts that are around the current median in quality make the situation worse rather than better (though I could see it going either way).

Thanks Jamie, I think cause prioritisation is super important as you say, but I don't think it's as neglected as you think, at least not within the scope of global health and wellbeing. I agree that the substance of your 3-part list is important, but I wouldn't consider the list the best measure of how much hard cause prioritisation work has been done. It seems a bit strawman-ish, as I think there are good reasons (see below) why those "exact" things aren't being done.

First I think precise ranking of "cause areas" is nearly impossible as it's hard ... (read more)

3
Brad West
16d
I had thought a public list that emphasized the potential impact of different interventions, and the likely costs associated with discovering the actual impact, would be great.

Oh, my suggestion wasn't necessarily that they're alternatives to receiving any donations; they could be supplements. They could be things you experiment with that could help to make the channel more sustainable and secure.

3
Jeroen Willems
17d
As additional sources of funding, I agree they're good ideas!

Sad news for https://pivotalcontest.org/

(I'm shocked that EA now has two "Blue Dot"s and two "Pivotal"s -- neither of which has the words "effective", "institute", or "initiative" anywhere to be seen.)

It seems like video release frequency is a significant bottleneck for you?

I'm not sure what the main time costs are. But some guesses of things that might help:

E.g.

  • freelancers, as you say
  • going for less thoroughly edited videos
  • doing some crowdsourcing or having volunteers/collaborators help write scripts
  • using LLMs more in the writing
  • doing interviews, article readouts, or other formats that enable you to produce long-form content fairly quickly (perhaps mixed in with the existing formats)
  • just setting yourself aggressive targets and working it out as y
... (read more)
2
Jeroen Willems
18d
Hi Jamie! You're right, output is definitely the biggest bottleneck. Right now, I'm focusing on making shorter videos that cover narrower, more specific topics. I'm also trying to incorporate more real-world footage to keep things visually interesting without requiring so much editing time. Unfortunately, my lead poisoning video and the video I'm currently working on turned out to be a lot more ambitious than I expected. I'm already working on your first four suggestions. I'm hesitant about the fifth point. I've tried the last point many times, but it never really worked out well. I think finding a collaborator who's willing to dedicate time to the project could be really helpful with this. I worry the other routes to monetization won't provide enough financial security at the current size of the channel for me to be able to reliably output videos.

Thanks a lot for this! I may reply in more detail later but I wanted to send a quick interim note; this is exactly the sort of useful feedback and info I was hoping to elicit with this post!

I don't disagree with any specific point in this but somewhat disagree with the overall thrust of the recommendation. I suspect most people could learn more (and more quickly) by trying out more specialised roles, especially in high-quality, established organisations with better mentorship and support networks.

(I've never been a uni group organiser so not sure what the mentorship and support networks are actually like; I'm mostly just guessing and extrapolating from my own experience having been a generalist researcher then running a talent search org cove... (read more)

I'm a big fan of these intervention reports. They're not directly relevant to anything I'm working on right now so I'm only skimming them but they seem high quality to me. I especially appreciate how you both draw on relevant social science external to the movement, and more anecdotal evidence and reasoning specific to animal advocacy.

When you summarise the studies, I'd find it more helpful if you summarised the key evidence rather than their all-things-considered views.

E.g. in the cost-effectiveness section you mention that costs are low, seeming to assum... (read more)

2
Ren Ryba
1mo
Great suggestion, I'll adopt for future reports. Thank you :)

Thanks! IIRC, we focused on it substantially because a lot of the sign ups for our programmes (e.g. online course) were coming from LinkedIn even when we hadn't put much effort into it. The number of sign ups and the proportion attributed to LinkedIn grew as we put more effort into it. This was mostly the work of our wonderful Marketing Manager, Ana. I don't have access to recent data or information about how it's gone to make much of a call on whether it was worth it, relative to other possible uses of our/Ana's time.

1
James Herbert
1mo
Very interesting! We have made exactly the same observation so we’ve started investing in it more, but we’re still learning how best to go about this.

Not a criticism of your post or any specific commenter, but I think it's a shame (for epistemics related reasons) when discussions end up more about "how EA is X" as opposed to "how true is X? How useful is X, and for what?".

3
James Herbert
1mo
Yeah I see what you’re saying but I guess if you know the answer to the Q ‘is it EA?’ then you have a data point that informs the probability you give to a bunch of other things, e.g., do they prioritise impartiality, prioritisation, open truth seeking, etc., to an unusual degree? So it’s a heuristic. And given they’re a new org it’s much easier to answer the Q ‘is it EA’ than it is ‘is it valuable’. But I agree, knowing whether it’s actually useful is always far more valuable. Apart from anything else, just because the founders prioritise things EAs often prioritise, it doesn’t mean they’re actually doing anything of value.

Side comment / nitpick: Animal Advocacy Careers has 13k LinkedIn followers (we prioritised it relatively highly when I was working there) https://www.linkedin.com/company/animal-advocacy-careers/

1
James Herbert
1mo
Oh nice! Congrats with that. Do you know if it was a good use of resources?

I was funded with long delays. I wouldn't have said "straightforwardly unprofessional" communication in my case.

It was a fairly stressful experience, but seemed consistent with "overworked people dealing with a tough legal situation", both for EVF in general and my specific grant.

I did suggest on their feedback form that misleading language about timeframes on the application form be removed. It looks like they've done that now, although I have no idea when the change was made. (In my case this was essentially the only issue; the turnaround wasn't necessarily super slow in itself -- a few months doesn't seem unreasonable -- it's just that it was much slower than the form suggested it should be.)

2
Linch
1mo
I believe we changed the text a bunch in August/early September. I think there were a few places we didn't catch the first time, and we made more updates in ~the following month (September). AFAIK we no longer have any (implicit or explicit) commitments for response times anywhere; we only mention predictions and aspirations. E.g. here's the text near the beginning of the application form: 

I do not know of anything like this.

I agree that "Luke Muehlhauser's work on early-movement growth and field-building comes closest." Animal Ethics' case studies are also helpful for academic fields https://www.animal-ethics.org/establishing-new-field-natural-sciences/

My impression of academic social movement studies is that a decent chunk is interested in how movements mobilise their resources, recruit, etc., but often more from a theoretical perspective (e.g. why do people do this, given rational choice theory) rather than a statistical/empirical one. I don... (read more)

3
jackva
2mo
Thanks, Jamie! Indeed quite helpful to know that there's nothing obvious I am missing. Yes, agree on the last point -- I am just surprised this has not been done as EA grant makers frequently face the decision, I think.

From a quick skim, the fellowship seems promising!

(Basing this mostly just off (1) solid application numbers given a launch late last year and (2) positive testimonials.)

Less anecdotal but only indirectly relevant and also hard to distinguish causation from correlation:

Ctrl+f for "Individuals who participate in consumer action are more likely to participate in other forms of activism" here

https://www.sentienceinstitute.org/fair-trade#consumer-action-and-individual-behavioral-change

"It now feels to me like the systematic, weighted-factor-model approach we used for project research wasn't the best choice. I think that something more focused on getting and really understanding the views of central AI x-risk people would have been better."

I'd be interested in a bit more detail about this if you don't mind sharing? Why did you conclude that it wasn't a great approach, and why would better understanding the views of central AI x-risk people help?

4
Ben Snodin
3mo
Like a lot of this post, this is a bit of an intuition-based 'hot take'. But some quick things that come to mind: (i) IIRC it didn't seem like our initial intuitions were very different from the WFM results; (ii) when we filled in the weighted factor model I think we had a pretty limited understanding of what each project involved (so you might not expect super useful results); (iii) I came to believe a bit more strongly that it just matters a lot that central AI x-risk people have a lot of context (and that this more than offsets the risk of bias and groupthink), so understanding their views is very helpful; (iv) having a deep understanding of the project and the space just seems very important for figuring out what, if anything, should be done and what kinds of profiles might be best for the potential founders.

I think this probably just saved me 0.2-2 hours over the course of the next few weeks (plus some stress / 'urch' feelings). Thanks!

This post was from over a year ago (sorry to comment so late), but I came across it via the imposter syndrome tag and just wanted to highlight that a recent 80k After Hours podcast discussed this somewhat, especially the segment from 16:42.

Thank you! That's a kind offer and I may take you up on it some time.

(I had been meaning to edit this post with a note that ChatGPT has a new feature that's better set up for doing this with a little more effort than what I proposed here.)

Hi Geoffrey! I did try a campaign with paid Meta ads for History to shape history, mostly on Instagram, and it went really quite poorly. But (1) this was partly due to technical issues with my account, and (2) I know that Non-Trivial and Atlas have had much more success with paid ads. (My suspicion is that having a financial incentive for programme participation is a big multiplier on the effectiveness of paid ad campaigns, at least for this age group.)

It sounds like you're asking more about broad outreach rather than targeted promotion of specific programmes. I could share miscellaneous thoughts, but I don't think I really have any particular insight or evidence on this based on the work I've done.

2
Geoffrey Miller
5mo
Jamie - yes, I was thinking mostly about general outreach and EA education, rather than paid ads.  I could imagine a series of short videos for TikTok explaining some basic EA concepts and insights, for example. 

Those additional unpublished-but-referenced results are v helpful comparisons, thank you!

I've noticed a fair few times when people (myself included, in this case) are gesturing or guessing about certain factors, and then you notice that and leave a detailed comment adding in relevant empirical data. I'm a big fan of that, so thank you for your contributions here and elsewhere!

I'll tone down the phrasing about Singer and Ted talks and make a couple of other wording tweaks.

Agree with your caveats!

Definitely overlap, although that seems broader and things aren't being listed there in practice. E.g. these posts were examples of the sort of thing I was thinking of, and weren't tagged there.

(Meta thought, not sure who this should be addressed to)

Is it worth making a Forum tag to the effect of "X-risk without longtermism"? There are quite a few posts on the Forum to this effect now, and it'd be handy to be able to find or link to them all in one place!

2
MichaelStJules
5mo
Global catastrophic risks might already do the job.

FYI, I was also confused by the probability metric, even reading it after your edits. I read it multiple times and couldn't get my head round it.

"Probability of event occurring given protests - Probability of event occurring without protests"

The former number should be higher than the latter (assuming you think that the protests increased the chance of it happening) and yet in every case, the first number you present is lower, e.g.:

"De-nuclearization in Kazakhstan in early 1990s (5-15%*)"

(Another reason it's confusing is that they read like ranges or confidence intervals or some such, and it's not until you get to the end of the list that you see a definition meaning something else.)

1
charlieh943
5mo
Sorry that this is still confusing. 5-15 is the confidence interval/range for the counterfactual impact of the protests, i.e. p(event occurs with protests) - p(event occurs without protests) = somewhere between 5 and 15 percentage points. It is not that p(event occurs with protests) = 5 and p(event occurs without protests) = 15, which wouldn't make sense.
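To make the definition concrete, here is a minimal sketch with made-up probabilities; the figures are illustrative assumptions, not numbers from the post or the underlying report:

```python
# Hypothetical numbers purely to illustrate the counterfactual-impact metric above;
# they are not taken from the post or the report it discusses.
p_with_protests = 0.40     # assumed P(event occurs | protests)
p_without_protests = 0.30  # assumed P(event occurs | no protests)

counterfactual_impact = p_with_protests - p_without_protests
print(f"Counterfactual impact: {counterfactual_impact:.0%}")
# -> "Counterfactual impact: 10%", i.e. 10 percentage points,
# which would sit inside a quoted range like "5-15%".
```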

Ah yeah I think I wasn't counting organising costs. 

I meant that if you measure cost-effectiveness in terms of impact per $, then EAGxVirtual looks way better, but if you measure cost-effectiveness in terms of impact per hour of (attendee) time, then it looks similar. So there's a 'regression to the mean' type effect when you consider additional metrics.

But you're right I wasn't considering organiser time. Apologies for the "quick thought" comment ending up being confusing rather than helpful.

2
OllieBase
6mo
No worries at all! I think it's always good to poke at this stuff, and I agree that per attendee hour, EAGxVirtual is less cost-effective than it is per $ spent.

Appendix: EAGxVirtual is unusually cost-effective

 

Quick thought: I expect that if you accounted for non-financial costs, especially the time spent by attendees that would otherwise have been spent on other impact-focused activities, then the cost-effectiveness would go down substantially.

A weekend at a virtual conference probably takes about 50% as much time per attendee as an in-person conference? If that's right, then by a measure of cost-effectiveness that's more like "connections made per hour of work lost", EAGxVirtual and EAGx would be roughly equally cost-effective?

4
OllieBase
6mo
I'm not quite sure I understand. EAGxVirtual is unusually cost-effective because:
  • Organising costs are >>2x lower (no catering, no venue, no AV etc.)
  • The time for attendees is considerably lower (~2x lower seems right, maybe more)
  • But the impact seems to be ~2x lower.
It seems like you're missing the organising costs in your last two questions? Or perhaps we disagree about the difference in the value of organising costs and attendee time?
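As a minimal sketch of the per-dollar vs per-attendee-hour comparison discussed in this thread (all figures are hypothetical assumptions, not actual EAGx or EAGxVirtual data):

```python
# Made-up figures to illustrate how the same two events can rank differently
# depending on the denominator; none of these numbers come from the comments above.
events = {
    # name: (connections_made, cost_usd, attendee_hours)
    "EAGx (in-person)": (2000, 400_000, 10_000),
    "EAGxVirtual": (1000, 50_000, 5_000),
}

for name, (connections, cost, hours) in events.items():
    per_dollar = connections / cost   # impact per $ of organising cost
    per_hour = connections / hours    # impact per attendee-hour
    print(f"{name}: {per_dollar:.4f} connections/$, {per_hour:.2f} connections/hour")

# With these assumptions the virtual event looks ~4x better per dollar spent,
# but identical per attendee-hour -- the 'regression to the mean' effect
# described above when a second metric is added.
```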

Great work! I really like the conditional reasoning test idea; it's something I hadn't really thought about in this context. I haven't reviewed the questions in detail yet, but do you have any thoughts on whether they'd be suitable for an application process?

There's psychological research finding that both "extended contact" interventions and interventions that "encourage participants to rethink group boundaries or to prioritize common identities shared with specific outgroups" can reduce prejudice, so I can imagine the Clubhouse stuff working (and being cheap + scalable).

https://forum.effectivealtruism.org/posts/re6FsKPgbFgZ5QeJj/effective-strategies-for-changing-public-opinion-a#Prejudice_reduction_strategies

You're right that there are big differences. I'm inclined to agree that some asks should be an "easier sell" too. I'm wondering if you think that these differences notably affect the arguments of this post?

3
Greg_Colbourn
8mo
I think potentially they do, in terms of the typical playbook and framing for corporate campaigns being less relevant. As in, it's less moral outrage vs. profit and appealing to corporations to be good, and more reckless endangerment vs. naive optimism and appealing to people at corporations (who presumably care about their own safety) to see sense. Morality/ethics doesn't need to be a factor, assuming people care for their own lives and those of their family and friends.

Agree that people might intuitively underweight turnover costs -- I think I was underweighting them before I did some brief research into the existing soc sci / business literature.

From my post's abstract:

"Google and Google Scholar searches were conducted to identify research on these costs. One key finding was that direct hiring costs are much smaller than the less visible and measurable effects of turnover on an organisation’s productivity; once these costs are accounted for, turnover costs thousands of dollars per lost employee. Given that turnover rates ... (read more)

I think I agree with all of these points, with the tentative exception of the 2nd.

I think adding more 'bad cop' advocacy groups into the mix could help motivate (or enforce?) companies to actually act on their intentions. After all, the behaviour-intention gap is real... and it's hard to know their true intentions.

Besides, it could also be that the advocacy groups start by targeting companies that are maybe less frontier but lagging behind on safety commitments or actions. This could help diffuse safety norms faster, and reduce race dynamics where leading labs feel the push to stay ahead of less safety-conscious orgs.

Cool! Exciting that you're working on this, and thanks for your thoughts.

One persistent concern I have is that this may only be true of industries and movements where the cost of a campaign can plausibly outweigh the costs of giving in to campaigners' asks.

I think the bar for "disrupting supply / business as usual" is lower. A couple of the other social movement examples I cited were just this. I haven't thought much about what that might look like in the context of AI safety, but it might be comparable to forcing a localised 'pause' on (some aspects of) f... (read more)

3
Tyler Johnston
8mo
Thank you for responding and sorry for the delayed reply. I'm not totally sure what the distinction is between disrupting business as usual and encouraging meaningful corporate change — in my mind, corporate campaigns do both, the former in service of the latter. Maybe I'm misunderstanding the distinction there.

That being said, I am much less certain than I was a few weeks ago about the "no costs from disrupted business can be sufficiently high to trigger action on AI safety" take, primarily because of what you pointed out: the corporate race dynamics here might make small disruptions much more costly, rather than less. In fact, the higher the financial upside is, the more costly it could be to lose even a tiny edge on the competition. So even if the costs of meaningful safeguards go up in competitive markets, so too do the costs of PR damage or the other setbacks you mention. I hadn't thought of this when I wrote my comment but it seems pretty obvious to me now, so thanks for pointing it out. I'm hoping to think more rigorously about why corporate campaigns work in the upcoming weeks, and might follow up here with additional thoughts.

Both, I think. I'm still working on this because I'm optimistic that meaningful + robust policies with really granular detail will be developed, but if they aren't, it would make campaigns less promising in my mind. Maybe what's going on is something like the Collingridge dilemma, where it takes time for meaningful safeguards to be identified, but time also makes it harder to implement those safeguards. Curious to hear why you think campaigns are just as promising even if there aren't detailed asks to make of labs, if I'm understanding you correctly.

Yeah, in my mind, the animal welfare to AI safety analogy is something like this, where (???) is the missing entity that I wish existed: G.A.P : Cooks Venture :: (???) : ARC/Apollo. This is to say that ARC and Apollo are developing eval regimes in the same way Cooks Venture develo

"Evaluate applications one question at a time (rather than one applicant at a time)... This would require marking all applications after the deadline, rather than on a rolling basis."

Super minor comment but I thought I'd highlight that you don't need to do this! It's easy enough to pause and restart evaluating a single question; that's what I do in the programme I run, which had 750 applications this year. (It's true it might introduce a tiny bit more inconsistency, but this is likely minor if you have a clear rubric and potentially worth it for various re... (read more)

1
OscarD
9mo
Thanks Jamie! Yes, it makes sense that some time lag between batches of marking a particular question is OK. Hmm, good point, I'm not actually sure (I didn't build the application form itself). Our applications were via Rethink Priorities using Pinpoint. But yes, I meant that we should have done this, not necessarily that we couldn't. OK thanks, I'll let the team know of your nice offer!

Congratulations on getting this published! It seems helpful to have many of these ideas published in an (open-access) peer-reviewed journal.

1
Fırat Akova
9mo
Many thanks!

Yeah, not sure. I expect this won't be a major bottleneck for most participants if they're just using it to bounce a few ideas around.

Yeah, seems fair; asking LLMs to model specific orgs or people might achieve a similar effect without needing the contextual info, if there's much info about those orgs or people in the training data and you don't need it to represent specific ideas or info highlighted in a course's core materials.

Thanks for posting! I'll consider whether it'd be helpful for me to include replications of these questions to https://www.leaf.courses/ participants for comparison. Let me know if it'd be helpful to you somehow!

Just wanted to thank you and NickLaing for this exchange. I'm planning to use an adapted version of the thoughts/considerations as an example of estimating expected value in some resources I'm creating!

 

Working on a new, more effective TB vaccine: Cost per life saved?

  • About 50% of phase 3 trials are successful. So 50% chance of the rollout being possible
  • Being conservative on The Economist’s optimistic estimate of 10 million lives saved, let’s reduce it to [BLANK 1].
  • So 0.5 (probability) x [BLANK 1] (lives saved) = [BLANK 2] lives sav
... (read more)
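As a rough sketch of the expected-value structure of the exercise above, with a hypothetical stand-in figure rather than the value intended for the blanks:

```python
# Illustrative only: the lives-saved figure below is a hypothetical stand-in,
# not the value intended for the blanks in the exercise above.
p_phase3_success = 0.5        # ~50% of phase 3 trials succeed
lives_saved_if_rollout = 5e6  # assumed conservative reduction of the 10M estimate

expected_lives_saved = p_phase3_success * lives_saved_if_rollout
print(f"{expected_lives_saved:,.0f} expected lives saved")  # 2,500,000 with these assumptions
```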

I’ve started to worry that it might be important to get digital sentience work (e.g. legal protection for digital beings) before we get transformative AI, and EAs seem like approximately the only people who could realistically do this in the next ~5 years.

I was interested to see you mention this, as this is something I think is very important.

The phrasing here got me thinking a bit about what it would look like if we were to try to make meaningful changes within 5 years specifically.

But I was wondering why you used the "~5 years" phrase here?

(Do you think ... (read more)

4
Ben_West
10mo
Metaculus currently says 7% (for one definition of "transformative"). But I chose five years more because it's hard for me to predict things further in the future, rather than because most of my probability mass is in the <5 year range.

Hey Joel! Cool list you already have.

Is the 300 USD prize for "(2) Cause areas" and/or "(3) Causes"? You distinguish them at the start of your post but then refer to "potential cause areas", "causes", and "cause ideas" in describing the contest.

Also, it's just one USD 300 prize and one USD 700 prize, right?

Thanks!

1
Joel Tan
10mo
Hi Jamie. For both (causes broadly defined)! Yes, it's just one USD 300 prize (for causes), and one USD 700 prize (for methodologies).

To add in some 'empirical' evidence: Over the past few months, I've read 153 answers to the question "What is your strongest objection to the argument(s) and claim(s) in the video?" in response to "Can we make the future a million years from now go better?" by Rational Animations, and 181 in response to MacAskill's TED talk, “What are the most important moral problems of our time?”.

I don't remember the concern that you highlight coming up very much if at all. I did note "Please focus on the core argument of the video — either 'We can make future lives go b... (read more)

6
OllieBase
10mo
Thanks! That question seems like it might exclude the worry I outlined, but this is still something of an update.

I don't think we need to worry too much about 'crying wolf'. The effects of media coverage and persuasive messaging on (1) attitudes and (2) perceived issue importance both substantially (though not necessarily entirely) wash out in a matter of weeks to months.

So I think we should be somewhat worried about wasted efforts -- not having a sufficiently concrete action plan to capitalise on the attention gained -- but not so much about lasting negative effects. 

(More speculatively: I expect that there are useful professional field-building effects that w... (read more)

This seems really cool. I was really excited just by reading the title of this forum piece. My initial reaction was something like, 'Yeah, I would be willing to sign up immediately and pay a subscription fee to access that if it was an app on my phone.' I could use it like a news app, that way I could read it during breakfast or whenever else I have a spare moment. It could be a replacement or supplement to the BBC News app I read currently.

I took a very quick look at the site on my phone, so these are just quick initial reactions. So take my comments with... (read more)

In the vein of "another good point" made in public reactions to the statement, an article I read in The Telegraph:

"Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer."

This seems obvious with hindsight as one factor at play, but I hadn't considered it... (read more)

I don't have a link to the report itself but Jason Hausenloy started some work on this a few months ago. https://youtu.be/1QY1L61TKx0

I’m interested in standards that are motivated by non-monetized social welfare... [e.g.] Fair Trade

I wrote a case study of the Fair Trade movement. The focus was on the movement rather than the standards themselves, but I think it might be helpful for at least some of what you refer to in "What I’m looking for in case studies". You can easily skim through the bolded headings in the "Strategic implications" section and see if any of the points highlighted seem relevant.

If someone else ends up doing a more standards-focused case study, it could be helpful fo... (read more)
