I understand what you're saying about the tension. As someone trained in psychology, there's a litany of papers that 'solve the problem of not understanding' with little or no 'problem solving' benefit.
Having said that, I think those incentives are changing. In the UK and Australia, universities are now being evaluated and incentivised based on how well they solve problems (e.g. https://www.arc.gov.au/engagement-and-impact-assessment). I think, in general, your motivation and career would not be hurt by doing things that focus on engagement with people who have important problems, and helping to impact their decision-making.
Personally, I think even the best 'solve the problem of not understanding' questions start with a problem in real life. I think medicine is a good example of where academic success is often directly correlated with how well your research either solves important problems or has promise to solve problems.
As a fledgling economist, however, I do see your point. There are strong incentives in some fields to come up with some new theory rather than to solve some existing problem. I guess this is true in medicine too, where there's more money in research on male pattern baldness than malaria (https://www.independent.co.uk/news/world/americas/bill-gates-why-do-we-care-more-about-baldness-malaria-8536988.html).
Still, I never dissuade my students from trying to solve important problems. Even if one of their studies is something more 'theoretical', I try to ensure they're working backwards from an important problem that's worth solving. To use your example above, even if you do come up with a more general theory of philanthropic portfolio management, you'd hope that your intro and discussion could still speak to how it helps someone answer the policy question: "How much money should I allocate to x vs. y?"
One thing I'd point out is that there are many areas where solving problems does also lead to academic success. Systematic reviews are cited to the hilt because they try to solve an important problem ("what works?"). Knowledge translation is a whole field devoted to taking stuff trapped in universities and getting it out into practice to solve problems. Very, very few interventions do economic analyses of their cost-benefit, and those that do often struggle to put a dollar value on the benefit. For example, in this study, we could calculate the cost per bit of childhood cardiovascular health, but couldn't put a dollar value on the bit of cardiovascular health: https://jamanetwork.com/journals/jamapediatrics/article-abstract/2779446
One of my deepest regrets was focusing my PhD on something that was theoretically interesting but practically far from helping the most disadvantaged (i.e., do you need to control or accept your emotions to perform under pressure?). I did this because I thought that was what you were 'supposed' to do, and because I thought it was interesting. If I had my time over, I would have started with a bigger problem, then worked backwards to find something interesting and 'at the frontier of knowledge.'
Critical thinking about research, including what biases are most common, why they apply, and how to know if an intervention works: https://training.cochrane.org/handbook
Great initiative @MichaelA. I'm not sure what a 'sequence' does, but I assume this means there'll be a series of related posts to follow, is that right?
My little dude is only 2 but one of my best mates. Have never had more laughs than as a dad. But, never had more tears either. It's turbulent, but the highs are high.
I'm a university professor (senior lecturer, is what we call it down under) and sport psychologist, so if ever you want me to speak to how involvement in your project can actually increase the quality of athletes' motivation, and therefore their performance, I can hopefully act as a credible source for an interesting angle to sell it.
So there are lots of small studies showing nudges work, but some studies find the same nudges are harmful. Instead, I'd recommend relying more upon evidence syntheses, when they're available. Some things that are 'strongly recommended' by theory and experts just don't stand up to the data (e.g., 'legitimising paltry contributions') because of countervailing forces (e.g., anchoring). A whole bunch of EAs finished this project earlier in the year to summarise all the evidence syntheses: https://psyarxiv.com/yxmva/ You might find it a useful summary.
"Balance autonomy, competence, and relatedness"
These are the three most robust psychological needs. Let me start by outlining what these are, why most don't balance them, and the evidence for involving each.
By autonomy, I mean giving followers the feeling that they're acting out of their own volition. They either have the freedom to act on what is important to them (e.g., choices over projects) or what they're doing is so aligned with their values that they don't need choice (e.g., doctors following evidence-based protocols).
By competence, I mean giving followers a sense of efficacy in achieving their goals. This involves leaning toward self-referenced, improvement goals, rather than goals measured against others (where your sense of efficacy is more fragile). It also means giving feedback about progress and goal achievement, and suggestions for improvement, both of which tend to increase the feeling that "I can do it" (whatever 'it' is).
By relatedness, I mean feeling understood and cared for. Some leaders do this by creating shared understanding between team members, while others do it by building personal connections with followers directly.
The problem is: strategies that build some needs crush the others. A chummy buddy-boss might connect with you, but not build competence, or may use the relationship to pressure you into stuff you don't want to do. A draconian or transactional boss might get good work out of you, and even make you feel competent, but at the expense of autonomy and relatedness. A laissez-faire boss will often give autonomy and choice, but at the expense of improving competence or relatedness.
Why believe me?
Meta-analyses on leadership show that:
(1) transactional leadership is okay (probably because clear targets + feedback = competence), but isn't as good as transformational leadership, where transformational leadership places more emphasis on a range of psychological needs;
(2) servant leadership explains variance even controlling for transformational leadership, because servant leadership is more directly focused on meeting the needs of the followers; and
(3) in studies that directly measure satisfying these needs at work, the causal model explains a lot of the variance: satisfying needs leads to higher motivation, which leads to better engagement, satisfaction, and performance.
Willing to be wrong about these conclusions, but think the psychological needs are incredibly powerful in good leadership, easy to support, but require skill to balance.
(1 and 2) Hoch, J. E., Bommer, W. H., Dulebohn, J. H., & Wu, D. (2018). Do Ethical, Authentic, and Servant Leadership Explain Variance Above and Beyond Transformational Leadership? A Meta-Analysis. Journal of Management, 44(2), 501–529. https://doi.org/10.1177/0149206316665461
(2) Eva, N., Robin, M., Sendjaya, S., van Dierendonck, D., & Liden, R. C. (2019). Servant Leadership: A systematic review and call for future research. The Leadership Quarterly, 30(1), 111–132. https://doi.org/10.1016/j.leaqua.2018.07.004
(3) Slemp, G. R., Kern, M. L., Patrick, K. J., & Ryan, R. M. (2018). Leader autonomy support in the workplace: A meta-analytic review. Motivation and Emotion. https://doi.org/10.1007/s11031-018-9698-y