Writing • Distilling the most impactful knowledge • Working on improving epistemics and collective decision-making using technology • 🙃🧬💿🧘♂️🔍
I use immersive reading all the time. I find that it substantially increases reading speed and comprehension. Here are some tools I find helpful:
Are there any such coworking / event spaces in the SF Bay Area or any other city? I love this initiative. It makes me more inclined to move to NYC.
"Think about the population density of places you go to regularly. Ask yourself: 'How many people have been here in the last week?' Avoid places where that number is large, and take extra precautions."
What do you consider small/moderate/large numbers here? E.g., I go to a small exercise studio with ~200 weekly visitors. At what point does this type of place start being high-risk?
Thanks for highlighting these concerns! Here is what I think about these topics:
I focused on giving an overview of HLI and its problem area because, compared to other teams, it seemed like one of the most established and highest-quality orgs within the Clearer Thinking regranting round. I thought this might be missed by some readers, and it is a good predictor of outcomes.
I focused on the big-picture lens because the project they are seeking funding for is pretty open-ended.
I think their prior performance and the quality of the methodology they are using are good predictors of the expected value of this grant.
I didn’t get the impression that the application lacks specific examples, though perhaps it could be improved. They listed three specific projects whose impact they want to investigate:
That said, I wish they had listed a couple more organizations/projects/policies they would like to investigate. Otherwise, they could communicate something along these lines: "We don’t have more specifics this time, as the nature of this project is to have Dr Lily Yu identify potential interventions worth funding. We therefore focus more on describing our methodology, direction, and relevant experience."
I am not sure how much support HLI gets from the whole EA ecosystem. It may be low; their EA Forum profile suggests as much: "As of July 2022, HLI has received $55,000 in funding from Effective Altruism Funds." Because of that, I thought discussing this topic at a higher level might be helpful.
I also think the SWB framework aspect wasn’t highlighted enough in the application. I focused on it because I see very high expected value in supporting this grant application: it will help HLI stress-test the SWB methodology further.
As for Nuño's comment: I don't see a problem with money being passed on through a number of orgs. I sympathize with this fragment of Austin's comment (please read the whole comment, as this fragment alone is a little misleading about what Austin meant):
Initially, FTX decided on the regranting dynamic – perhaps to distribute intelligence and responsibility across more actors. What if adding more steps actually adds quality to the grants? I think the main question here is whether this particular step adds value.