(My personal opinion, not EV's:)

EV is winding down, and being on this board is quite a lot of work. This makes it very hard to recruit for! The positive flip side of the wind-down, though, is that the cultural leadership we are doing is a bit less impactful than it was, say, a year or two ago.

When we faced the decision of whether to keep searching or accept the candidates in front of us, I considered many factors but eventually agreed that it was OK to prioritize allowing the existing board members to leave (which they couldn't do until we found replacements), even if the new folks were not ideal in every conceivable way. I wish we had a better gender balance too, but ultimately, since projects are spinning out, figuring that out will fall much more on the individual projects and less on EV central!

(I'm very excited about our new trustees despite this!)

I would like to point out that this is one of those things where n=1 is enough to improve people's lives (e.g., the placebo effect works in your favor), in the same way that I can improve my life by taking a weird supplement that isn't scientifically known to work but helps me when I take it.

For what it's worth, my life did seem to start going better (I started to feel more in touch with my emotional side) after becoming vegan.

While I broadly agree with Rocky's list, I want to push back a little on your points:

Re your (2): I've found that small entities are in a constant struggle for survival, and must move fast and focus on the problems where they are uniquely able to make a difference in the world. Small-seeming requirements like "new hires have to find their own housing" can easily make the difference between moving quickly vs. slowly on some project that makes or breaks the company. I think for new entities the risks of incurring large costs before you have 'proven yourself' are quite high.

My experience also disagrees with your (1): As my company has grown, many forces have naturally pushed in the direction of "more professional": new hires tend to worry much more about being blamed for doing things too quick-and-dirty than about incurring costs on the business to do things the buttoned-up way. I've stepped in more often to accept a risk than to prevent one, although I certainly do both!

(Side note: as a potential counterpoint to the above, I do note that Alameda/FTX was clearly well below professional standards at >200 employees - my assumption is that Sam/execs were constantly stepping in to keep the culture the way they wanted it. If I learned that somehow most of the 200 employees were pushing in the direction of less professionalism on their own, I would update to agree with you on (1).)

Answer by lincolnq · Sep 03, 2023

You might want to check out some of Phil Trammell's reports, where he analyzes what he calls time preference (time discount rate) with respect to philanthropy: https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit
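To make the time-discounting idea concrete, here is a toy sketch (with made-up rates and amounts, not numbers from Trammell's report) of the basic give-now vs. invest-and-give-later comparison that a philanthropic discount rate feeds into:

```python
def future_value(amount, market_rate, years):
    """Value of a donation invested for `years` before giving."""
    return amount * (1 + market_rate) ** years

def discounted_value(amount, discount_rate, years):
    """Present value of a benefit delivered `years` from now,
    given a philanthropic time discount rate."""
    return amount / (1 + discount_rate) ** years

# Hypothetical scenario: give $1000 today, or invest it for 20 years
# at a 5% market return and give the proceeds, while discounting
# future benefits at 2%/year.
give_now = 1000
give_later = discounted_value(future_value(1000, 0.05, 20), 0.02, 20)
# When the market rate exceeds the discount rate, waiting comes out
# ahead in present-value terms; when it's below, giving now wins.
```

The interesting question in the report is what the right discount rate actually is for philanthropy, which is what makes this comparison non-obvious.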

Congrats on having invented something exciting!

Usually, the best way to get innovative new technology into the hands of beneficiaries quickly is to get a for-profit company to invest with a promise of making money. This can happen via licensing a patent to an existing manufacturer, or creating a whole startup company and raising venture capital, etc.

One of the things such investors want to see is a 'moat': something that this company can do that no other company can easily copy. A patent/exclusive license is a good way to create a moat.

There are some domains like software where simply publishing 'open source' ideas causes those ideas to get used, but for most domains including manufacturing, my default expectation is that new tech is not used unless someone can make money off it. Pharma is a great example - there are tons of vaccines and niche treatments that we don't have manufacturing for, even though we know how, because nobody can make enough money doing it.

I'd be really interested to hear whether you are considering seeing this idea through yourself. It sounds like you're doing a Ph.D., but if you would consider dropping out to work on this as a startup, I think doing so would be one of the best ways to maximize this idea's chances of success. (In large part because your brain probably contains tons of highly relevant info for making this product work at scale!)

You wrote: "most scientists do patent and keep everything secret within companies" -- but I wonder if this indicates confusion: usually patents don't keep things secret; they are published. Patents just give their owner a legal monopoly on the technology for a limited time.

Can you get introduced to any food-manufacturing people (ideally folks at bigger companies, in charge of finding + investing in new food products), who you can talk to about your idea, even just to get advice? Or, founders of similar food tech companies who came up with a good idea and had to decide whether to patent it?

I'm a bit confused about this, because "getting ambitious slowly" seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already "ambitious"; unless you're really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn't work to say "I'm going to get ambitious slowly."

What does work, though, is focusing on achievable goals! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. If you are saying that you need to focus on smaller stuff, then I am 100% in agreement.

I avoid reading, and don't usually respond to, comments on my posts, or replies to my own comments.

The reason is that it's emotionally intense to do so: after posting something on the EA Forum, I avoid checking the forum at all for about 24 hours (for fear of noticing replies in the 'recents' area, or changes in my karma), and after that I mainly skim for people flagging major errors or omissions that need my input to be resolved.

Lizka's "You Don't Have to Respond to Every Comment" talks about this a bit (and was enormously helpful for me). I am not strongly averse to posting stuff and having people read it in the abstract - I just don't like the short-term emotional swings that come with individual replies.

Can you give some evidence/an example for "unable to mentor many of the qualified applicants"?

I think this is a useful question and I'm glad to be discussing this.

I agree with many of your concerns - and would love to see a more culturally unified EA on the axis of how conscious we are of our own impact - but I also think you're failing to acknowledge something crucial: As much as EA is about altruism, it is also about focusing on what's important, and your post doesn't acknowledge this as a potential trade-off for the folks you're discussing.

You'll find a lot of EA folks who perceive climate change as a real problem but also perceive marginal carbon costs as not worth focusing on, given all the other problems in the world and the fact that carbon is offsettable. You are reading this as a "careless attitude," but I don't think that's a fair characterization. There are real tradeoffs to be made here about how to use marginal attention; these folks may be offsetting and just not talking about it, or may have decided it won't make enough difference in the short run, but regardless I think you have insufficient evidence to conclude that their attitude is wrong.

(I personally offset all my CO2 with Wren and think for at least five minutes about each plane flight I consider taking to decide if it is worth it; but I have never written about this till now, and would have no reason to bother writing it down.)

I'm interested in the discussion of whether in fact we are at a hinge of history; maybe this is a good comments section for that. I agree that Will's analysis barely scratches the surface and has some flaws.

Factors under consideration for me:

  • Existence of technologies that can have direct impacts on future society through making the world much better or much worse: computation and AI, the internet & social media, nanotech, biotech, the printing press, energy production / Dyson spheres
  • Do population/economic growth rates matter? i.e., if we are growing fast now vs slow, what would that imply?
  • Institutional attitudes: Do we have institutions that change behavior in controllable ways? What do people believe about the future impact of tech/ideas like money, life extension, social media, systems of government like the UN/democracy/Marxism/fascism, principles like liberalism/economics, strategies for national wealth like expansionism/colonialism/mercantilism, and so on?
  • Attitudes about change: are we able to convince people of things? Do people change their minds quickly or slowly? What systems exist to get information out, and what feedback mechanisms do they have?
  • Moral attitudes: How much do people care about others? To what degree do they care about those distant from them? Do people prioritize suffering, pleasure, satisfaction, etc? Do they believe they can change the world? Do they believe that there are moral errors that they or others are regularly making?
  • Satisfaction & dissatisfaction attitudes: How much do people believe the world should be better than it is, and how motivated are they to "invest" to make things go better? e.g., the Cold War & space exploration, the colonialism era, building bridges, tunnels, and other infrastructure?

I see arguments for the hingiest era being in the past, present, or future:

  • arguments for the past, e.g., 1780 or thereabouts: there were far fewer people, and they could have predicted (based on observing the spread of religion) that the printing press, Industrial Revolution, European colonialism/mercantilism, and/or economic liberalism and democracy would have a huge impact. They also may have been able to predict moral progress, e.g., that slavery is bad. They probably would have been able to see that certain institutions had a ton of influence and were in turn influenceable.

    • My instinct is that they would have failed to predict as much progress in public health as we got, and so would have expected future people to live in greater suffering than they do. Maybe this would have reduced their motivation to imagine a future with far more people.
    • They also could probably not have imagined computing and the internet in any particular detail.
  • arguments for this century (2000 to 2100): computing is going fucking crazy; there has never been a technology like this that has enabled such short feedback loops to society. Social media has shown that attitudes can change really quickly when info-consumption is addictive and anyone can publish widely. But these tech changes can't go on: we will certainly reach the limits of physics this century and change will slow down dramatically, so whatever we settle on soon will greatly impact how the future shakes out.

    • Counter-argument: we haven't seen much popular moral progress, and it seems to me that there is far more to go here; our pace of tech development is outpacing moral development.
    • Also, while institutions have a ton of power, they mostly seem stuck in the past and hard to change; the institution that will impact the next thousand years probably doesn't exist yet, and it is not clear what it will look like.
  • arguments for the future: Essentially, that computing is just the beginning; if we survive this era then we'll reach even more impactful tech, such as bio, nano, space, superluminal, etc.; new impactful institutions will arise that don't depend too heavily on whatever we are doing today, or maybe we'll be multiplanetary or in VR or whatever. Secondly, humans need to 'catch up' in moral development to our technological development, and that just takes time and could easily stretch beyond 2100.

Overall I lean towards the present: tech is moving faster now than at any point in the past, and I see reasons for it to slow down by the end of the century. The slow pace of moral development pushes the hinginess into the future, but I think the risk of not surviving until then outweighs the changes in our morality and societal organization that I expect after that point. If I were certain we would survive another 100 years, I might be convinced that the future will be more hingey than the present.
