All of Halffull's Comments + Replies

Effective Altruism Coaching 2020 Annual Review

This is great! Curious what (if anything) you're doing to measure counterfactual impact. Any sort of randomized trial, e.g. following up with clients you didn't have time to take on and measuring their change in productive hours compared to your clients?

1lynettebye9moUnfortunately, I don't have an easy control group to do such a trial. I do my best to take on every client who I think is a great fit for me to help, so there isn't a non-coached group who is otherwise comparable. Additionally, as a for-profit business, there's an understandable limit to how much my clients are willing to humor my desire for unending data.
Halffull's Shortform

Yeah, I'd expect it to be a global catastrophic risk rather than existential risk.

4Denkenberger1ySome of the agricultural catastrophes addressed by the solutions that the Alliance to Feed the Earth in Disasters (ALLFED [http://www.allfed.info]) is working on include super crop disease, a bacterium that outcompetes beneficial bacteria, and super crop pests (animals), all of which could be related to genetic modification.
Halffull's Shortform

Is there much EA work into tail risk from GMOs ruining crops or ecosystems?

If not, why not?

3Thomas Kwa1yIt's not on the 80k list [https://80000hours.org/problem-profiles/#which-problem] of "other global issues", and doesn't come up on a quick search of Google or this forum, so I'd guess not. One reason might be that the scale isn't large enough-- it seems much harder to get existential risk from GMOs than from, say, engineered pandemics.
Delegate a forecast
Yeah, I mostly focused on the Q1 question so didn't have time to do a proper growth analysis across 2021

Yeah, I was talking about the Q1 model when I was trying to puzzle out what your growth model was.

There isn't a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?

A lot of the value of potential growth vectors of a business come in the tails. For this particular forecast it doesn't real... (read more)

1amandango1yI just eyeballed the worst to best case for each revenue source (and based on general intuitions about e.g. how hard it is to start a podcast). Yeah, this makes a lot of sense – we've thought about showing expected value in the past so this is a nice +1 to that.
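As an aside for readers following the median-vs-expected-value exchange above: here is a minimal sketch (assuming an illustrative lognormal revenue distribution; this is not Elicit's actual model) of how sharply the two can diverge when outcomes are heavy-tailed.

```python
# Minimal sketch: median vs expected value for a heavy-tailed forecast.
# The lognormal shape and sigma below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

median = 25_000  # matches the $25,000 median bin mentioned in the thread
sigma = 1.5      # assumed tail heaviness; larger sigma widens the mean/median gap

# For a lognormal, the median is exp(mu), so set mu = log(median).
samples = rng.lognormal(mean=np.log(median), sigma=sigma, size=1_000_000)

print(f"median:         ${np.median(samples):,.0f}")  # ~$25,000
print(f"expected value: ${samples.mean():,.0f}")      # ~$77,000 = median * exp(sigma**2 / 2)
```

The mean is pulled far above the median by low-probability, high-revenue outcomes, which is why a decision that rides on the tails wants the expected value rather than the median.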
Delegate a forecast

Thanks, this was great!

The estimates seem fair. Honestly, they're much better than I would expect given the limited info you had and the assumptions you made (the biggest one that's off is that I don't have any plans to only market to EAs).

Since I know our market is much larger, I use a different forecasting methodology internally which looks at potential marketing channels and growth rates.

I didn't really understand how you were working growth rate into your calculations in the spreadsheet, maybe just eyeballing what made sense based on the ... (read more)

1amandango1yYeah, I mostly focused on the Q1 question so didn't have time to do a proper growth analysis across 2021 – I just did 10% growth each quarter and summed that for 2021, and it looked reasonable given the EA TAM. This was a bit of a 'number out of the air,' and in reality I wouldn't expect it to be the same growth rate across all quarters. Definitely makes sense that you're not just focusing on the EA market – the market for general productivity services in the US is quite large! I looked briefly at the subscriptions for top productivity podcasts on Castbox (e.g. Getting Things Done [https://castbox.fm/channel/Getting-Things-Done-id2891955?country=us], 5am miracle [https://castbox.fm/channel/The-5-AM-Miracle-with-Jeff-Sanders-id1361894?country=us] ), which suggests lots of room for growth (although I imagine podcast success is fairly power law distributed). There isn't a way to get the expected value, just the median currently (I had a bin in my snapshot indicating a median of $25,000). I'm curious what makes the expected value more useful than the median for you?
Delegate a forecast

Hey, I run a business teaching people how to overcome procrastination (procrastinationplaybook.net is our not yet fully fleshed out web presence).

I ran a pilot program that made roughly $8,000 in revenue by charging 10 people for a premium interactive course. Most of these users came from a couple of webinars that my friends hosted; a couple came from finding my website through the CFAR mailing list and webinars I hosted for my Twitter friends.

The course is ending soon, and I'll spend a couple of months working on marketing and updating the co... (read more)

1amandango1yHere’s my Q1 2021 prediction [https://elicit.ought.org/builder/3-WBTzwKf], with more detailed notes in a spreadsheet here [https://docs.google.com/spreadsheets/d/161qqS1x27LHx3pICxALZ0xpUfUUCdCCCamKANqtKE_Y/edit?usp=sharing]. I started out estimating the size of the market, to get reference points. Based on very rough estimates of CEA subscriptions, # of people Effective Altruism Coaching has worked with, and # of people who have gone through a CFAR workshop, I estimated the number of EAs who are interested enough in productivity to pay for a service to be ~8000. The low number of people who have done Effective Altruism Coaching (I estimated 100, but this is an important assumption that could be wrong since I don’t think Lynette has published this number anywhere) suggests a range for your course (which is more expensive) of ~10 to 45 people in Q1. Some other estimates, which are in the spreadsheet linked above, gave me a range of $8,000 to $42,000. I didn’t have enough time to properly look into 2021 as a whole, so I just did a flat 10% growth rate across all the numbers and got this prediction [https://elicit.ought.org/builder/xJ8pYpnDV]. Interestingly, I notice a pressure to err on the side of optimism when publicly evaluating people’s companies/initiatives. Your detailed notes were very helpful in this. I noticed that I wanted more information on: * The feedback you got from the first course. How many of them would do it again or recommend it to a friend? * More detail on your podcast plans. I didn’t fully understand the $10 lessons – I assumed it was optional $10 lessons attached to each podcast, but this may be wrong. * How much you’re focusing on EAs. The total market for productivity services is a lot bigger (here’s an estimate [https://blog.marketresearch.com/us-personal-coaching-industry-tops-1-billion-and-growing#:~:text=Market%20size%20and%20growth%3A%20The,rate%20from%202016%20to%202022.] of $1B market value for life coachi
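For readers who want to reproduce the shape of that estimate, a hedged Fermi sketch follows (the ~$800 price per seat is back-calculated from the pilot's $8,000 across 10 people; the other inputs restate the reply's stated assumptions, not additional data):

```python
# Fermi sketch of the Q1 revenue range estimated above; all inputs are
# assumptions restated from the reply, not new data.
interested_eas = 8_000          # estimated EAs willing to pay for productivity help
price_per_seat = 8_000 / 10     # ~$800, implied by the pilot's revenue / participants
low_seats, high_seats = 10, 45  # Q1 uptake range suggested by comparable services

print(low_seats * price_per_seat)   # $8,000 -- matches the low end of the range
print(high_seats * price_per_seat)  # $36,000 -- near the $42,000 high end
print(high_seats / interested_eas)  # ~0.6% market penetration in a single quarter
```

The residual gap up to $42,000 presumably comes from the spreadsheet's other estimation methods.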
Putting People First in a Culture of Dehumanization

I recommend Made to Stick by Chip and Dan Heath.

What skill-building activities have helped your personal and professional development?

Going through several startup weekends showed me what works and what doesn't when trying to de-risk new projects.

Improving the future by influencing actors' benevolence, intelligence, and power

This is great! Was trying to think through some of my own projects with this framework, and I realized that half of the equation is missing, related to the memetic qualities of the tool.

1. How "symmetric" is the thing I'm trying to spread? How easy is it to use for a benevolent purpose compared to a malevolent one?

2. How memetic is the idea? How likely is it to spread from a benevolent actor to a malevolent one?

3. How contained is the group with which I'm sharing? Outside of the memetic factors of the idea itself, is the person or group I'm sharing it with likely to spread it, or keep it contained?

2MichaelA1y(My opinions, not necessarily Convergence's, as with most of my comments) Glad to hear you liked the post :) One thing your comment makes me think of is that we actually also wrote a post focused on "memetic downside risks [https://www.lesswrong.com/posts/EdAHNdbkGR6ndAPJD/memetic-downside-risks-how-ideas-can-evolve-and-cause-harm] ", which you might find interesting. To more directly address your points: I'd say that the BIP framework outlined in this post is able to capture a very wide range of things, but doesn't highlight them all explicitly, and is not the only framework available for use. For many decisions, it will be more useful to use another framework/heuristic instead or in addition, even if BIP could capture the relevant considerations. As an example, here's a sketch of how I think BIP could capture your points: 1. If the idea you're spreading is easier to use for a benevolent purpose than a malevolent one, this likely means it increases the "intelligence" or "power" of benevolent actors more than of malevolent ones (which would be a good thing). This is because this post defines intelligence in relation to what would "help an actor make and execute plans that are aligned with the actor’s moral beliefs or values", and power in relation to what would "help an actor execute its plans". Thus, the more useful an intervention is for an actor, the more it increases their intelligence and/or power. 2. If an idea increases the intelligence or power of whoever receives it, it's best to target it to relatively benevolent actors. If the idea is likely to spread in hard-to-control ways, then it's harder to target it, and it's more likely you'll also increase the intelligence or power of malevolent actors, which is risky/negative. This could explain why a more "memetically fit" idea could be more risky to spread. 3. Similar to point 2. But with the addition of the observation that, if it'd be harmful to spread the idea, then actors who are more likely to sprea
A Step-by-Step Guide to Running Independent Projects

This is great!

I'd love to be able to provide an alternative model that can work as well, based on Saras Sarasvathy's work on Effectuation.

In the effectuation model (which came from looking at the process of expert entrepreneurs), you don't start with a project idea up front. Instead, you start with your resources, and the project evolves based on demand at any given time. I think this model is especially good for independent projects, where much of the goal is to get credibility, resources, and experience.

Instead of starting with the goal,... (read more)

The Case for Impact Purchase | Part 1

I happen to think that relative utility is very clustered at the tails, whereas expected value is more spread out. This comes from intuitions from the startup world.

However, it's important to note that I also have developed a motivation system that allows me to not find this discouraging! Once I started thinking of opportunities for doing good in expected value terms, and concrete examples of my contributions in absolute rather than relative terms, neither of these facts was upsetting or discouraging.

Some relevant articles:

https://forum.effectivea... (read more)

1Linda Linsefors1yI'm ok with hit based impact. I just disagree about events. I think you are correct about this for some work, but not for others. Things like operations and personal assistance are multipliers, which can consistently increase the productivity of those who are served. Events that are focused on sharing information and networking fall in this category. People in a small field will get to know each other and each other's work eventually, but if there are more events it will happen sooner, which I model as an incremental improvement. But some other events feel much more hits based now that I think of it. Anything focused on getting people started (e.g. helping them choose the right career) or events focused on ideation. But there are other types of event that are more hit based, and I notice that I'm less interested in doing them. This is interesting. Because these events also differ in other ways, there are alternative explanations. But it seems worth looking at. Thanks for providing the links, I should read them. (Of course everything relating to X-risk is all or nothing in terms of impact, but we can't measure and reward that until it does not matter anyway. Therefore in terms of AI Safety I would measure success in terms of research output, which can be shifted incrementally.)
The Case for Impact Purchase | Part 1
But if it took on average 50 000 events for one such key introduction to happen, then we might as well give up on having events. Or find a better way to do it. Otherwise we are just wasting everyone's time.

But all the other events were impactful, just not compared to those one or two events. The goal of having all the events is to hopefully be one of the 1/50,000 that has ridiculously outsized impact – it's high expected value even if, comparatively, all the other events have low impact. And again, that's comparatively. Compared to say, mo... (read more)

The Case for Impact Purchase | Part 1

Nope, 1/50,000 seems like a realistic ratio for very high impact events to normal impact events.

The Case for Impact Purchase | Part 1
Would you say that events are low impact?

I think most events will be comparatively low impact compared to the highest impact events. Let's say you have 100,000 AI safety events. I think most of them will be comparatively low impact, but one in particular ends up creating the seed of a key idea in AI safety, another ends up introducing a key pair of researchers that go on to do great things together.

Now, if I want to pay those two highest impact events in proportion to their impact relative to all the other events, I have a few options:

1. Pay all of the eve... (read more)

1Linda Linsefors1yWait, what? 100 000 AI Safety Events? Like 100 000 individual events? There is a typo here right?
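To make the arithmetic behind this exchange concrete, here is a rough sketch; the 1-in-50,000 hit rate comes from the thread, while the impact values are invented purely for illustration.

```python
# Hit-based event portfolio: expected value per event when one event in
# 50,000 is a "hit". Impact values below are hypothetical illustrations.
p_hit = 1 / 50_000       # ratio of outsized events, per the thread
value_hit = 10_000_000   # assumed impact of a hit event (arbitrary units)
value_normal = 10        # assumed impact of a typical event (same units)

ev_per_event = p_hit * value_hit + (1 - p_hit) * value_normal
print(ev_per_event)  # ~210: the single tail term (200) dominates

# Although 49,999 of every 50,000 events look "low impact" in relative terms,
# the rare hit contributes ~95% of the expected value of running events at all.
```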
The Case for Impact Purchase | Part 1
Since there will be a limited amount of money, what is your motivation for giving the low impact projects anything at all?

I'm not sure. The vibe I got from the original post was that it would be good to have small rewards for small impact projects?

I think the high impact projects are often very risky, and will most likely have low impact. Perhaps it makes sense to compensate people for taking the hit for society so that 1/1,000,000 of the people who start such projects can have high impact?

7Linda Linsefors2yI'm unsure what size you have in mind when you say small. I don't think small monetary rewards (~£10) are very useful for anything (unless lots of people are giving small amounts, or if I do lots that add up to something that matters). I also don't think small impact projects should be encouraged. If we respect people's time and effort, we should encourage them to drop small impact projects and move on to bigger and better things. If you think that the projects with the highest expected impact also typically have a low success rate, then standard impact purchase is probably not a good idea. Under this hypothesis, what you want to do is to reward people for expected success rather than actual success. I talk about success rather than impact, because for most projects, you'll never know the actual impact. By "success" I mean your best estimate of the project's impact, from what you can tell after the project is over. (I really meant success, not impact, from the start; I probably should have clarified that somehow?) I'd say that for most events, success is fairly predictable, and more so with more experience as an organiser. If I keep doing events the randomness will even out. Would you say that events are low impact? Would you say events are worth funding? Can you give an example of the type of high impact project you have in mind? How does your statement about risk change if we are talking about success instead?
The Case for Impact Purchase | Part 1
For an impact purchase, the amount of money is decided based on how good the impact of the project was

I'm curious about how exactly this would work. My prior is that impact is clustered at the tails.

This means that there will frequently be small impact projects, and very occasionally large impact projects – my guess is that if you want to be able to incentivize the frequent small impact projects at all, you won't be able to afford the large impact projects, because they are many orders of magnitude larger in impact. You could just purchase part of the... (read more)

1Linda Linsefors2yLet's assume for now that impact is clustered at the tails. (I don't have a strong prior, but this at least doesn't seem implausible to me.) Then how would you like to spend funding? Since there will be a limited amount of money, what is your motivation for giving the low impact projects anything at all? Is it to support the people involved to keep working, and eventually learn and/or get lucky enough to do something really important?
New Top EA Causes for 2020?

Perhaps Dereke Bruce had the right of it here:

"In order to keep a true perspective of one's importance, everyone should have a dog that will worship him and a cat that will ignore him."

New Top EA Causes for 2020?

I propose that the best thing we can do for the long term future is to create positive flow-through effects now. Specifically, if we increase people's overall sense of well-being and altruistic tendencies, this will lead to more altruistic policies and organizations, which will lead to a better future.

Therefore, I propose a new top EA cause for 2020: Distributing Puppies

  • Puppies decrease individual loneliness, allowing a more global worldview.
  • Puppies model unconditional love and altruism, creating a flowthrough to their owners.
  • Puppies with good owners
... (read more)

I discussed this with my wife, who thinks that the broad idea is reasonable, but that kittens are a better choice than puppies:

  • As puppies are not role models, their unconditional love is less relevant than it first appears.
  • Most EA causes involve helping agents who aren't directly in contact with you.
  • If people learn their altruism from helping puppies, they will learn to expect gratitude; worse still, they will learn to expect gratitude even for relatively minor help!
  • Cats care about you only to the extent that they can receive food and/or pets. This is a much better model.
  • They also have toebeans.
Halffull's Shortform

Something else in the vein of "things EAs and rationalists should be paying attention to in regards to Corona."

There's a common failure mode in large human systems where one outlier causes us to create a rule that is a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a company's "buy any book you want" rule – so the rule is changed and no one can get any free books anymore.

This same pattern has happened before in the US: after 9/11, we created a whole bunch of security theater that c... (read more)

What posts do you want someone to write?
Curious about what you think is weird in the framing?

The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.

Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision making like improving calibration.

Growth and the case against randomista development

No, I actually think the post is ignoring x-risk as a cause area to focus on now. That makes sense under certain assumptions and heuristics (e.g. if you think near-term x-risk is highly unlikely, or you're using absurdity heuristics); I was more giving my argument for how this post could be compatible with Bostrom.

Growth and the case against randomista development
the post focuses on human welfare,

It seems to me that there's a background assumption of many global poverty EAs that human welfare has positive flowthrough effects for basically everything else.

I'm also very interested in how increased economic growth impacts existential risk.

At one point I was focused on accelerating innovation, but have come to be more worried about increasing x-risk (I have a question somewhere else on the post that gets at this).

I've since added a constraint into my innovation acceleration efforts, and now am basically focused on "asymmetric, wisdom-constrained innovation."

2Matthew_Barnett2yIf this is true, is there a post that expands on this argument, or is it something left implicit? I think Bostrom has talked about something similar: namely, differential technological development [https://en.wikipedia.org/wiki/Differential_technological_development] (he talks about technology rather than economic growth, but the two are very related). The idea is that fast innovation in some fields is preferable to fast innovation in others, and we should try to find which areas to speed up the most.
Growth and the case against randomista development

Let's say you believe two things:

1. Growth will have flowthrough effects on existential risk.

2. You have a comparative advantage in effecting growth over reducing x-risk.

You can agree with Bostrom that x-risk is important, and also think that you should be working on growth. This is something very close to my personal view on what I'm working on.

1Matthew_Barnett2yThis makes sense as an assumption, but the post itself didn't argue for this thesis at all. If the argument was that the best way to help the longterm future is to minimize existential risk, and the best way to minimize existential risk is by increasing economic growth, then you'd expect the post to primarily talk about how economic growth decreases existential risk. Instead, the post focuses on human welfare, which is important, but secondary to the argument you are making. Can you go more into detail? I'm also very interested in how increased economic growth impacts existential risk. This is a very important question because it could determine the influence from accelerating economic-growth inducing technologies such as AI and anti-aging.
What posts do you want someone to write?

I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.

1EdoArad2yNo, the analysis does not seem to contain what I was going for. Curious about what you think is weird in the framing?
Halffull's Shortform

It's been pointed out to me on LessWrong that depressions actually save lives, which makes the "two curves" narrative much harder to make.

Halffull's Shortform

This argument has the same problem as recommending that people don't wear masks, though: if you go from "save lives, save lives, don't worry about economic impacts" to "worry about economic impacts, it's as important as quarantine," you lose credibility.

You have to find a way to make nuance emotional and sticky enough to hit, rather than forgoing nuance as an information hazard, otherwise you lose the ability to influence at all.

This was the source of my "two curves" narrative, and I assume it would be the approach that others would take if that was the reason for their reticence to discuss the issue.

1EdoArad2yThis is not quite what I was going for, even though it is relevant. This problem profile focuses on existing institutions and on methods for collective decision making. I was thinking more in the spirit of market design, where the goal is to generate new institutions with new structures and rules so that people are selfishly incentivised to act in a way which maximizes welfare (or something else).
Halffull's Shortform

Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like the coronavirus does.

Was thinking about finding a simple good-enough correlation between economic depression and death, then creating a "flattening the curve" graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.

On the other hand, I think it's quite plausible that this particular problem will ... (read more)

5Halffull2yIt's been pointed out to me on Lesswrong that depressions actually save lives [https://www.nature.com/articles/d41586-019-00210-0]. Which makes the "two curves" narrative much harder to make.
4Greg_Colbourn2yMaybe also that the talk of preventing a depression is an information hazard when we are at the stage of the pandemic where all-out lockdown is the biggest priority for most of the richest countries. In a few weeks when the epidemics in the US and Western Europe are under control, and lockdown can be eased with massive testing, tracing and isolating of cases, then it would make more sense to freely talk about boosting the economy again (in the mean time, we should be calling for governments to take up the slack with stimulus packages. Which they seem to be doing already).
Why not give 90%?

I think this is actually quite a complex question. I think it's clear that there's always a chance of value drift, so you can never put the chance of "giving up" at 0. If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many ... (read more)

such as consistency and justification effects

And selection effects!

2HaydenW2yDefinitely! I simplified it a lot in the post. Good point! I hadn't thought of this. I think it ends up being best to frontload if your annual risk of giving up isn't very sensitive to the amount you donate, it's high, and your income isn't going to increase a whole lot over your lifetime. I think those first two things might be true of a lot of people. And so will the third thing, effectively, if your income doesn't increase by more than 2-3x. My guess is that the main reason for that is that more devoted people tend to pledge higher amounts. I think if you took some of those 10%ers and somehow made them choose to switch to 50%, they'd be far more likely than before to give up. But yeah, it's not entirely clear that P(giving up) increases with amount donated, or that either causally affects the other. I'm just going by intuition on that.
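A minimal sketch of the front-loading logic discussed in this thread, with made-up numbers (flat income, a constant annual probability of giving up, and the simplifying assumption that the donation rate doesn't itself change drift risk):

```python
# Illustrative model of expected lifetime donations under value-drift risk.
# All parameters are assumptions for this sketch, not estimates from the post.
def expected_total_donated(rate: float, p_drift: float, years: int = 40,
                           income: float = 50_000) -> float:
    """Expected donations if you give `rate` of income each year and keep
    your values through each year with probability 1 - p_drift."""
    total, p_still_giving = 0.0, 1.0
    for _ in range(years):
        total += p_still_giving * rate * income
        p_still_giving *= 1 - p_drift
    return total

# A dollar pledged for year t only materializes with probability (1 - p_drift)**t,
# which is the sense in which front-loading beats deferring donations.
print(expected_total_donated(rate=0.10, p_drift=0.05))  # ~$87,000
print(expected_total_donated(rate=0.50, p_drift=0.05))  # ~$436,000
```

If, as the data above hints, a higher rate doesn't raise drift risk, the higher rate dominates; the comparison only flips if p_drift climbs steeply with the amount donated.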
Halffull's Shortform

I've had a sense for a while that EA is too risk averse, and should be focused more on a broader class of projects, most of which it expects to fail. As part of that, I've been trying to collect existing arguments related to either side of this debate (in a broader sense, but especially within the EA community), to both update my own views and make sure I address any important arguments on either side.

I would appreciate it if people could link me to other sources that are important. I'm especially interested in people making arguments fo... (read more)

3Aaron Gertler2yKelsey Piper's "On 'Fringe' Ideas [https://forum.effectivealtruism.org/posts/hRJueS96CMLajeF57/the-unit-of-caring-on-fringe-ideas] " makes a pro-risk argument in a certain sense (that we should be kind and tolerant to people whose ideas seem strange and wasteful). I'm not sure if this is written up anywhere, but one simple argument you can make is that many current EA projects were risky when they were started. GiveWell featured two co-founders with no formal experience in global health evaluating global health charities, and nearly collapsed in scandal [https://www.givewell.org/about/our-mistakes#December_2007_Overaggressive_and_inappropriate_marketing] within its first year. 80,000 Hours took on an impossibly broad task with a small staff (I don't know whether any had formal career advisement experience). And yet, despite various setbacks, both projects wound up prospering, without doing permanent damage to the EA brand (maybe a few scrapes in the case of 80K x Earning to Give, but that seems more about where the media's attention was directed than what 80K really believed).
Growth and the case against randomista development
I think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc.

I'm curious about the intuitions behind this. I think developing countries with fast growth have historically had quite high pollution and carbon output. I also think that more countries joining the "developed" category could quite possibly ... (read more)

Growth and the case against randomista development

I'm quite excited to see an impassioned case for more of a focus on systemic change in EA.

I used to be quite excited about interventions targeting growth or innovation, but I've recently been more worried about accelerating technological risks. Specific things that I expect accelerated growth to effect negatively include:

  • Climate Change
  • AGI Risk
  • Nuclear and Biological Weapons Research
  • Cheaper weapons in general

Curious about your thoughts on the potential harm that could come if the growth interventions are indeed successful.

I do think this is a concern that we need to consider carefully. On the standard FHI/Open Phil view of x-risk, AI and bio account for most of the x-risk we face this century. I find it difficult to see how increasing economic development in LMICs could affect AI risk. China's massive growth is something of a special case on the AI risk front, I think.

I think growth probably reduces biorisk by increasing the capacity of health systems in poor countries. It seems that leading edge bioscience research is most likely to happen in advanced economies.

On cli... (read more)

9Michael_Wiebe2yI think catch-up growth in developing countries, based on adopting existing technologies, would have positive effects on climate change, AI risk, etc. In contrast, 'frontier' growth in developed countries is based on technological innovation, and is potentially more dangerous.
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration

This work is excellent and highly important.

I would love to see this same setup experimented with for grant-giving.

Steelmanning the Case Against Unquantifiable Interventions

I'd be curious about your own view on unquantifiable interventions, rather than just the Steelman of this particular view.

1Davidmanheim2yAs I said in the epistemic status, I'm far less certain than I once was, and on the whole I'm now skeptical. As I said in the post and earlier comments, I still think there are places where unquantifiable interventions are very valuable; I just think that unless it's obvious that they will be (see: Diamond Law of Evaluation), I'd claim that quantifiably effective interventions are better in expectation.
EA Hotel Fundraiser 5: Out of runway!

I think there's a clear issue here with measurability bias. The fact of the matter is that the most promising opportunities will be the hardest to measure (see for instance investing in a startup vs. buying stocks in an established business). The very fact that opportunities are easy to measure and obvious makes them less likely to be neglected.

The proper way to evaluate new and emerging projects is to understand the landscape, and do a systems level analysis of the product, process, and team to see if you think the ROI will be high compared to othe... (read more)

2Open_Thinker2yThis point is reasonable, and I fully acknowledge that the EA Hotel cannot have much measurable data yet in its ~1 year of existence. However, I don't think it is a particularly satisfying counter-response. If the nature of the EA Hotel's work is fundamentally immeasurable, how is one able to objectively quantify that it is in fact being altruistic effectively? If it is not fundamentally immeasurable but is not measured and could have been measured, then that is likely simply incompetence. Is it not? Either way, it would be impossible to evidentially state that the EA Hotel has good yield. Further, the idea that the EA Hotel's work is immeasurable because it is a meta project or has some vague multiplier effects is fundamentally dissatisfying to me. There is a page full of attempted calculations in update 3, so I do not believe the EA Hotel assumes it is immeasurable either, or at least originally did not. The more likely answer, a la Occam's Razor, is that there is simply insufficient effort in resolving the quantification. There are, after all, plenty of other more pressing and practical challenges to be met on a day-to-day basis; and [surprisingly] it does not seem to have been pressed much as a potential issue before (per the other response by Greg_Colbourn). Even if it is difficult to measure, a project (particularly one which aspires to be effective--or greatly effective, or even the most effective) must as a requirement outline some clear goals against which its progress can be benchmarked in my opinion, so that it can determine its performance and broadcast this clearly. It is simply best practice to do so. This has not been done as far as I can tell--if I am mistaken, please point me to it and I will revise my opinions accordingly. There are a couple additional points I would make. Firstly, as an EA Hotel occupant, you are highly likely to be positively biased in its favor. Therefore, you are naturally inclined to calculate more generously in its favor;
Effective Pro Bono Projects

Tobacco taxes are Pigouvian under state-sponsored healthcare.

Funding chains in the x-risk/AI safety ecosystem

Hmm that's odd, I tested both in incognito mode and they seemed to work.

1Ben Pace2yIt is quite odd, as the first link also works for me in incognito, but not in normal. Perhaps has something to do with me once having an evernote account? Or me being logged into google? Who knows.
Funding chains in the x-risk/AI safety ecosystem

You shouldn't – it's an Evernote public sharing link that doesn't require sign-in. Note also that I tried to embed the image directly in my comment, but apparently the markdown for images doesn't work in comments?

1Ben Pace2yFirst link does, second link doesn't, but I gave up when the first link wouldn't let me through. Indeed, I cannot get the markdown to work in comments. Alas.
4Ben Pace2yDo I have to make an evernote account to see that? Pretty sure this trivial inconvenience will prevent most lurkers from seeing it.
Funding chains in the x-risk/AI safety ecosystem

Small suggestion for future projects like this: I used to use Graphviz for diagramming, but have since found yEd and never looked back. Its edge-routing and placement algorithms are much better, and can be tweaked with WYSIWYG editing after the fact.

List of ways in which cost-effectiveness estimates can be misleading

I tend to think this is also true of any analysis which includes only one-way interactions or one-way causal mechanisms, and ignores feedback loops and complex systems analysis. This is true even if each of the parameters is estimated using probability distributions.
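As a toy illustration of the point (numbers entirely invented): a one-way analysis stops at the direct effect, while even a single feedback loop compounds it.

```python
# Toy comparison: one-way causal estimate vs the same effect with one
# feedback loop. Both numbers are invented to illustrate the comment above.
direct_effect = 100.0  # e.g. extra income units attributed to an intervention
feedback_gain = 0.3    # fraction of each gain recycled (income -> health -> income)

one_way_estimate = direct_effect                     # analysis stops here
with_feedback = direct_effect / (1 - feedback_gain)  # 100 + 100*0.3 + 100*0.3**2 + ...

print(one_way_estimate)  # 100.0
print(with_feedback)     # ~142.9 -- a single loop shifts the estimate by ~43%
```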

How do you decide between upvoting and strong upvoting?

I upvote if I think the post is contributing to the current conversation, and strong upvote if I think the post will contribute to future and ongoing conversations (i.e., it's a comment or post that people should see when browsing the site, aka stock vs. flow).

Occasionally, I'll strong upvote/downvote strategically to get a comment more in line with what I think it "deserves", trying to correct a perceived bias of other votes.

EAGxNordics 2019 Postmortem

I'm sad because I really enjoyed EAGxNordics :). In my view the main benefits of conferences are the networks and idea-sex that come out of them, and I think it did a great job at both of those. I'm curious if you think the conference "made back its money" in terms of value to participants, which is separate from the question of counterfactual value you pose here.

What posts you are planning on writing?
Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceability" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon).

Would be highly interested in this, and a case study showing how to rigorously think about systemic change using systems modeling, root cause analysis, and the like.

Why the EA Forum?

Yes, this is more an argument for "don't have downvotes at all," like Hacker News or a traditional forum.

Note: I think your team has made the correct tradeoffs so far; this was more playing devil's advocate.

3Habryka2yHacker News has downvotes, though they are locked behind a karma threshold; overall I see more comments downvoted on HN than on LW or the EA Forum (you can identify them by the text being more greyish and harder to read).
Why the EA Forum?

Of course there's a reverse incentive here, where getting downvoted feelsbadman, and therefore you may be even less likely to want to post up unfinished thoughts, as compared to them simply getting displayed in chronological order.

2Habryka2yThe problem is that if your post got downvoted and displayed in chronological order, this often means you will get even more downvotes (in part because chronological ordering means people vote more harshly, since they want to directly discourage bad content, and also because your visibility doesn't reduce, which means more people have the opportunity to downvote).
Raemon's EA Shortform Feed

I won't be at EAG but I'm in Berkeley for a week or so and would love to chat about this.
