I think the excerpt is getting at: suppose all possible universes exist (no claim about likelihood made, but an assumption for the post). Then it is likely that some possible universes -- with way more resources than ours -- are running a simulation of our universe. The behaviour of that simulated universe is the same as ours (it's a good simulation!), and in particular, the behaviour of the simulations of us is the same as our behaviours. If that's true, our behaviours could, through the simulation, influence a much bigger and better-resourced world. ...
How likely do you think it would be for standard ML research to solve the problems you're working on in the course of trying to get good performance? Do such concerns affect your project choices much?
For the contamination sentence: what's wrong with sterilizing the equipment and media? Why wouldn't we just grow meat in sterilized equipment in managed facilities? Also, couldn't we just sterilize after the fact?
For the sensitivity / robustness: why does it need to be robust? Can't it just be grown in a special facility? It's not like you can mimic the Doritos production process at home, but that doesn't stop a lot of Doritos being made. Why would the bioreactor need to be placed outside?
For waste management: This does seem necessary. But months / years of cont...
I'm pretty confused by your paragraph describing the "futuristic bioreactor". It doesn't seem like we want almost any of those features for cultured meat.
The only parts that seem like they would be needed are "[...] assembling those molecules into muscle and fat cells, and forming those cells into the complex tissues we love to eat" and "It has precise environmental controls and gas exchange systems that keep the internal temperature, pH, and oxygen levels in the ideal range for cell and tissue growth".
Some (though not all) of the others seem like they might be useful if we were to try to make cultured meat production as decentralizable as current meat production (and far more decentralized than factory farming).
Do you think that different trajectories of prosaic TAI have big impacts on the usefulness of your current project? (For example, perhaps you think that TAI that is agentic would just be taught to deceive). If so, which? If not, could you say something about why it seems general?
(NB: the above is not supposed to imply criticism of a plan that only works in some worlds).
Does it make sense to think of your work as aimed at reducing a particular theory-practice gap? If so, which one (i.e., which theory, or which needed input to a theoretical alignment scheme)?
If the amount of happiness (or suffering) possible is not linear in the number of elementary particles, what number of elementary particles do you suggest using?