AllAmericanBreakfast

Call to Vigilance

Do you have an opinion on the second-best venue for people interested in these issues to find community?

How to succeed as an early-stage researcher: the “lean startup” approach

I asked Cleve what made him decide that the singular value decomposition, and later MATLAB, were topics worth focusing on. What sources of information did he look to? Was he trying to discern what other people were interested in?

What I took in from his response was that he never picked topics based on the scale of the potential application. For example, he didn't decide to study the mathematics underpinning computer graphics because of the applied importance of computer graphics. He just has a relentless interest in the underlying mathematics, and wants to understand it. What can we learn about the quaternion, the four-dimensional number system that serves as the workhorse of 3D rotation in computer graphics? His understanding of these topics developed bit by bit, through small-scale interactions with other people.
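As an aside, here's a minimal numpy sketch (my own illustration, nothing from Cleve's lecture) of the standard identity by which a unit quaternion (w, x, y, z) acts as a 3D rotation:

```python
import numpy as np

def quat_to_rotation_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# A 90-degree rotation about the z-axis: q = (cos(theta/2), 0, 0, sin(theta/2)).
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
R = quat_to_rotation_matrix(q)
print(R @ np.array([1.0, 0.0, 0.0]))  # ~[0, 1, 0]: the x-axis maps to the y-axis
```

Graphics code often works with the quaternion directly (cheap composition, easy interpolation) and converts to a matrix like this only when the rendering pipeline wants one.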

We should treat this sort of account with skepticism, both because it's a subjective assessment of his own history, and because it's a single and unrepresentative example of the outcomes of academic mathematical research. Cleve might have simply lucked into a billion-dollar topic. The fact that we're all asking him about his background is the result of selecting for outcomes, not necessarily for an unusually effective process.

But I think what he was saying was that to find ideas that are likely to nerd snipe somebody else, it's important to use your judgment and try to identify components of a field in an academic sense that are clearly important, and try to understand them better. Having a sense of judgment for the importance of components of a system seems like an important underlying skill for the "lean startup" approach you're describing here.

How to succeed as an early-stage researcher: the “lean startup” approach

I am sitting in a virtual lecture with Cleve Moler, inventor of MATLAB. He just told us that he produced a 16mm celluloid film to promote the singular value decomposition in 1976. A clip from the film made it into Star Trek: The Motion Picture in 1979; it's on a screen behind Spock. A point of evidence in favor of the idea that promoting ideas matters in academia.

What should we call the other problem of cluelessness?

“Partial” might work instead of “non-absolute,” but I still favor the latter even though it’s bulkier. I like that “non-absolute” points to a challenge that arises when our predictive powers are nonzero, even if they are very slim indeed. By contrast, “partial” feels more aligned with the everyday problem of reasoning under uncertainty.

What should we call the other problem of cluelessness?

One of the challenges is that “absolute cluelessness” is a precise claim: beyond some threshold of impact scale or time, we can never have any ability to predict the overall moral consequences of any action.

By contrast, the practical problem is not a precise claim, except perhaps as a denial of “absolute cluelessness.”
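One way to see why the former is precise and the latter isn't (my own formalization, not anything from the original post): write $V_{>T}(a)$ for the moral value of the consequences of action $a$ beyond some horizon $T$.

```latex
% Absolute cluelessness is a universal claim: for every pair of actions,
% we can never justify any expected long-run difference between them.
\forall a, b:\quad \mathbb{E}\left[V_{>T}(a) - V_{>T}(b)\right] = 0 \ \text{(or undefined)}

% Its denial asserts only an existential: for at least some pair,
% our evidence supports a nonzero, if tiny, expected difference.
\exists a, b:\quad \mathbb{E}\left[V_{>T}(a) - V_{>T}(b)\right] \neq 0
```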

After thinking about it for a while, I suggest “problem of non-absolute cluelessness.” After all, isn’t it the idea that we are not clueless about the long-term future, and therefore that we have a responsibility to predict and shape it for the good, that is the source of the problem? If we were absolutely clueless, then we would not have that responsibility and would not face that problem.

So I might vote for “absolutely clueless” and “non-absolutely clueless” to describe the state of being, and the “problem of absolute cluelessness” and “problem of non-absolute cluelessness” to describe the respective philosophical problems.

Why scientific research is less effective in producing value than it could be: a mapping

This reminds me of a conversation I had with John Wentworth on LessWrong, exploring the idea that establishing a scientific field is a capital investment for efficient knowledge extraction. Also of a piece of writing I just completed there on expected value calculations, outlining some of the challenges in acting strategically to diminish our uncertainty.

One interesting thing to consider is how to control such a capital investment, once it is made. Institutions have a way of defending themselves. Decades ago, people launched the field of AI research. Now, it's questionable whether humanity can ever gain sufficient control over it to steer toward safe AI. It seems that instead, "AI safety" had to be created as a new field, one that seeks to impose itself on the world of AI research partly from the outside.

It's hard enough to create and grow a network of researchers. To become a researcher at all, you have to be unusually smart and independent-minded, and willing to brave the skepticism of people who don't understand what you do even a fraction as well as you do yourself. You have to know how to plow through to an achievement that will clearly stand out to others as an accomplishment, and persuade them to keep sustaining your funding. That's the sort of person who becomes a scientist. Anybody with those characteristics is a hot commodity.

How do you convince a whole lot of people with that sort of mindset to work toward a new goal? That might be one measure of a "good research product" for a nascent field. If it's good enough to convince more scientists, especially more powerful scientists, that your research question is worth additional money and labor relative to whatever else they could fund or work on, you've succeeded. That's an adversarial contest. After all, you have to fight to get and keep their attention, and then to persuade them. And these are some very intelligent, high-status people. They absolutely have better things to do, and they're at least as bright as you are.

Why scientific research is less effective in producing value than it could be: a mapping

All these projects seem beneficial. I hadn't heard of any of them, so thanks for pointing them out. It's useful to frame this as "research on research," in that it's subject to the same challenges with reproducibility, and with aligning empirical data with theoretical predictions to develop a paradigm, as any other field of science. Hence, I support the work while remaining skeptical about whether such interventions will be useful and potent enough to make a positive change.

The reason I brought this up is that the conversation on improving the productivity of science seems to focus almost exclusively on problems with publishing and reproducibility, while neglecting the skill-building and internal-knowledge aspects of scientific research. Scientists seem to develop a feel, through their interactions with their colleagues, for who is trustworthy and capable and who is not. Without taking into account the sociology of science, it's hard to know whether measures aimed at publishing and reproducibility are targeting the mechanisms by which progress can best be accelerated.

Honest, hardworking academic STEM PIs seem to struggle with money and labor shortages. Why isn't there more money flowing into academic scientific research? Why aren't more people becoming scientists?

The lack of money in STEM academia seems to me a consequence of politics. Why is there political reluctance to fund academic science at higher levels? Is academia to blame for part of this reluctance, or is the reason purely external to academia? I don't know the answers to these questions, but they seem important to address.

Why don't more people strive to become academic STEM scientists? Partly, industry draws them away with better pay. Part of the fault lies in our school system, although I really don't know what exactly we should change. And part of the fault is probably in our cultural attitudes toward STEM.

Many of the pro-reproducibility measures seem to assume that the fastest road to better science is to make more efficient use of what we already have. I would also like to see us figure out a way to channel more labor and capital into this industry. To be clear, I mean that I would like to see fewer people going into non-STEM fields - I am personally comfortable with viewing people's decision to go into many non-STEM fields as a form of failure to achieve their potential. That failure isn't necessarily their fault. It might be the fault of how we've set up our school, governance, cultural, or economic systems.

Has anyone found an effective way to scrub indoor CO2?

Indoor CO2 concentrations and cognitive function: A critical review (2020)
 

"In a subset of studies that meet objective criteria for strength and consistency, pure CO2 at a concentration common in indoor environments was only found to affect high-level decision-making measured by the Strategic Management Simulation battery in non-specialized populations, while lower ventilation and accumulation of indoor pollutants, including CO2, could reduce the speed of various functions but leave accuracy unaffected."

I haven't been especially impressed by claims that normal indoor CO2 levels are impairing cognitive function to any extent worth worrying about. Crack a window, I guess?
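To put rough numbers on the window suggestion, here's a back-of-the-envelope sketch using the standard well-mixed steady-state model, C_ss = C_out + G/Q. The occupancy, generation, and ventilation figures are illustrative assumptions on my part, not numbers from the review:

```python
# Steady-state CO2 in a well-mixed room: C_ss = C_out + G/Q,
# where G is the CO2 generation rate and Q the outdoor-air ventilation rate.
# Assumed values: ~0.005 L/s of CO2 per sedentary adult, ~420 ppm outdoors.

def steady_state_co2_ppm(people, ventilation_lps, outdoor_ppm=420.0,
                         gen_lps_per_person=0.005):
    """Approximate steady-state indoor CO2 concentration in ppm."""
    excess_ppm = (people * gen_lps_per_person / ventilation_lps) * 1e6
    return outdoor_ppm + excess_ppm

# Two people in a nearly sealed office (5 L/s) vs. with a cracked window (25 L/s):
print(steady_state_co2_ppm(2, 5.0))   # ~2420 ppm
print(steady_state_co2_ppm(2, 25.0))  # ~820 ppm
```

On these assumptions, a modest increase in ventilation keeps a shared room well below the concentrations most studies test, which is roughly what the window advice amounts to.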

Why scientific research is less effective in producing value than it could be: a mapping

it could be a lot more valuable if reporting were more rigorous and transparent

Rigor and transparency are good things. What would we have to do to get more of them, and what would the tradeoffs be?

Do I understand your comment correctly that you think that, in your field, the purpose of publishing is mainly to communicate to the public, and that publications are not very important for communicating within the field to other researchers or toward end users in industry?

No, the purpose of publishing is not mainly to communicate to the public. After all, very few members of the public read scientific literature. The truth-seeking or engineering achievement the lab is aiming for is one thing. The experiments they run to get closer are another. And the descriptions of those experiments are a third thing. That third thing is what you get from the paper.

I find it useful at this early stage in my career because it helps me find labs doing work that's of interest to me. Grantmakers and universities find them useful to decide who to give money to or who to hire. Publications show your work in a way that a letter of reference or a line on a resume just can't. Fellow researchers find them useful to see who's trying what approach to the phenomena of interest. Sometimes, an experiment and its writeup are so persuasive that they actually persuade somebody that the universe works differently than they'd thought.

As you read more literature and speak with more scientists, you start to develop more of a sense of skepticism and of importance. What is the paper choosing to highlight, and what is it leaving out? Is the justification for this research really compelling, or is this just a hasty grab at a publication? Should I be impressed by this result?

It would be nice for the reader if papers were a crystal-clear guide for a novice to the field. Instead, you need a decent amount of sophistication with the field to know what to make of it all. Conversations with researchers can help a lot. Read their work and then ask if you can have 20 minutes of their time; they'll often be happy to answer your questions.

And yes, fields do seem to go down dead ends from time to time. My guess is it's some sort of self-reinforcing selection for biased, corrupt, gullible scientists who've come to depend on a cycle of hype-building to get the next grant. Homophily attracts more people of the same stripe, and the field gets confused.

Tissue engineering is an example. 20-30 years ago, the scientists in that field hyped up the idea that we were chugging toward tissue-engineered solid organs. Didn't pan out, at least not yet. And when I look at tissue engineering papers today, I fear the same thing might repeat itself. Now we have bioprinters and iPSCs to amuse ourselves with. On the other hand, maybe that'll be enough to do the trick? Hard to know. Keep your skeptical hat on.

Why scientific research is less effective in producing value than it could be: a mapping

My experience talking with scientists and reading science in the regenerative medicine field has shifted my opinion against this critique somewhat. Published papers are not the fundamental unit of science. Most labs are 2 years ahead of whatever they’ve published. There’s a lot of knowledge within the team that is not in the papers they put out.

Developing a field is a process of investment not in creating papers, but in creating skilled workers using a new array of developing technologies and techniques. The paper is a way of stimulating conversation and a loose measure of that productivity. But just because the papers aren’t good doesn’t mean there’s no useful learning going on, or that science is progressing in a wasteful manner. It’s just less legible to the public.

For example, I read, and discussed with the authors, a paper on a bioprinting experiment. They produced a one-centimeter cube of human tissue via extrusion bioprinting. The materials and methods aren’t rigorously controllable enough for reproducibility. They use decellularized pig hearts from the local butcher (what’s it been eating, what were its genetics, how was it raised?), and an involved manual process to prepare and extrude the materials.

Several scientists in the field have cautioned me against assuming that figures in published data are reproducible. Yet does that mean the field is worthless? Not at all. New bioprinting methods continue to be developed. The limits of achievement continue to expand. Humanity is developing a cadre of bioengineers who know how to work with this stuff and sometimes go on to found companies with their refined techniques.

It’s the ability to create skilled workers in new manufacturing and measurement techniques, and skilled thinkers in some line of theory, that is an important product of science. Reproducibility is important, but it’s what you get after a lot of preliminary work figuring out how to work with the materials, equipment, and ideas.
