Metascience
Scientific Theories and Their Psychological Corollaries: The Ecological Crisis as a Case Study in the Need for Synthesis
Author: Arnold Schroder
Date: December, 2023
Areas: Metascience, Scientific Ethics, Social Sciences
Text: PDF, Substack
That policy makers will ever rationally respond to scientific warnings about the ecological crisis should be treated as a falsifiable hypothesis. After more than five decades of such warnings, there is a strong case for skepticism. Climate and other ecological tipping points constitute the quantitative thresholds beyond which current political systems can definitively be said to have failed. This presents a mandate to generate broad consensus on where tipping points lie, and at what proximity to them new strategies should be pursued. Central to any new strategy should be an understanding of why the old one failed: why those in power almost exclusively come from academic backgrounds other than the physical sciences, and how those who issued so many warnings of collapse differ psychologically from those who received them. To that end, a psychological trait syndrome relevant to political power is proposed, based on correlations between academic specialization, psychometric results, and the behavior of powerful people across a wide range of societies. This proposed syndrome consists of four covarying dimensions of individual difference: perceptions of hierarchy vs. egalitarianism, established knowledge vs. open inquiry, physical vs. symbolic action, and schematic vs. particular knowledge.
Visualizing researchers’ scientific contributions with radar plots
Author: Manh-Toan Ho
Date: December, 2023
Area: Metascience
Text: PDF, Substack
The essay advocates for diverse approaches to presenting a researcher's scientific contributions to a project. Taking inspiration from sports journalism and its visualization of football players' data, the essay suggests that a radar plot incorporating CRediT contributor role data lets the multiple authors of a scientific paper illustrate their contributions more specifically. The suggested method, though subject to reporting bias, gives credit to the different aspects of a research project, from conceptualization and analysis to administration, writing, and revising. It not only enables both academics and lay readers to better understand the considerable amount of work required in every project, but also highlights the need for diverse viewpoints in science.
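As a rough illustration of the kind of figure the essay proposes (a sketch, not the author's own implementation), the snippet below plots one hypothetical author's CRediT role profile as a radar chart. The role subset and the 0 to 3 contribution scores are assumptions made for the example.

```python
# A minimal sketch: one author's CRediT contributor-role profile drawn as a
# radar (polar) chart with matplotlib. The roles shown and the 0-3 scores
# are illustrative assumptions, not data from the essay.
import numpy as np
import matplotlib.pyplot as plt

roles = ["Conceptualization", "Formal analysis", "Investigation",
         "Project administration", "Writing - original draft",
         "Writing - review & editing"]
scores = [3, 2, 1, 0, 3, 2]  # hypothetical degree of contribution per role

# Spread the roles evenly around the circle and close the polygon
# by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(roles), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(roles, fontsize=8)
ax.set_yticks([1, 2, 3])
ax.set_title("Illustrative CRediT radar plot for one author")
plt.tight_layout()
plt.show()
```

Plotting one such polygon per author on shared axes would give the side-by-side comparison of contributor profiles that the essay has in mind.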
Principles of Categorization: A Synthesis
Author: Davood Gozli
Date: June, 2023
Areas: Metascience, Psychology
Text: PDF
The present article explores the nature of categorization and its role in shaping our relationship with reality. Drawing on Jens Mammen's distinction between sense categories and choice categories, and on Eleanor Rosch's principles of categorization, I examine how our attitudes toward and modes of engagement with categories can reveal important insights relevant not only to psychology but to other scientific fields as well. Furthermore, I argue that the connection between sense and choice categories can be traced by examining atypical instances and non-basic-level categories, which highlight the role of subjects embedded in particular situations. In general, categorization is an active process, influenced by our interests and commitments, even though it does not always appear as such. By correcting biases in our treatment of concepts and categories, we can ultimately correct biases in our scientific practices, thus revealing the entanglement of categorization with broader epistemological issues.
Is a Qualitative Metric of Falsifiability Possible?
Author: Dan James
Date: March, 2023
Area: Metascience
Text: PDF, Substack
There is an ever-increasing number of quantitative metrics, most of which are intended to act as proxies of quality for either authors or journals in current scholarly publishing. In contrast, this paper presents a more directly qualitative paper-level metric that adds a falsifiability dimension to the existing methods used to assess scholarly research. This new metric, the "F-index", is derived from a "Falsifiability Statement" (FS; examples of both are applied self-referentially in Annex A). An FS is a discrete meta-level statement provided by the author(s) outlining how their research or assumptions could foreseeably be falsified, and the F-index is a numerical estimate of how clear and practical the steps outlined in the FS are for falsifying the research or stated assumptions. Though the F-index is particularly suited to hypothesis- or theory-driven fields, it is also relevant to any empirical inquiry that relies on propositions or assumptions that can potentially be falsified. The F-index is qualitative in that a high F-index provides a good indication of how novel or original a paper is. Four candidate mechanisms for obtaining an F-index from a Falsifiability Statement are evaluated: a peer-reviewer-assessed metric, an author- or self-reported metric, a propositional density metric, and an NLP-derived metric. This evaluation concludes that an FS is currently a practical proposition, and that the derivation of a meaningful F-index is an achievable goal.
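Of the four candidate mechanisms, the propositional density route lends itself most readily to a quick sketch. The snippet below approximates the idea density of a hypothetical Falsifiability Statement as propositions (verbs, adjectives, adverbs, prepositions, conjunctions) per word; how such a density would be scaled into an F-index is left open here and is an assumption about the mechanism, not the paper's specified procedure.

```python
# A rough sketch of a propositional density score for a Falsifiability
# Statement (FS). Idea density is approximated as propositions per word,
# counted via part-of-speech tags; mapping that density onto an F-index
# scale is not attempted here.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
PROPOSITION_POS = {"VERB", "ADJ", "ADV", "ADP", "CCONJ", "SCONJ"}

def propositional_density(fs_text: str) -> float:
    """Return propositions per word for the given FS text."""
    tokens = [t for t in nlp(fs_text) if t.is_alpha]
    if not tokens:
        return 0.0
    propositions = sum(1 for t in tokens if t.pos_ in PROPOSITION_POS)
    return propositions / len(tokens)

# Hypothetical FS used only to exercise the function.
fs = ("Our claim is falsified if the measured effect disappears "
      "when the stimulus duration is halved in a preregistered replication.")
print(f"Propositional density: {propositional_density(fs):.2f}")
```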
Why Proposal Review Should Be More Like Meteorology
Author: Stuart Buck
Date: January, 2023
Area: Metascience
Text: PDF, Substack
The process of evaluating research proposals for funding is often based on subjective assessments of the "goodness" or "badness" of a proposal. This method of evaluation is imprecise and does not give reviewers a common language for communicating with one another. In this paper, we propose that science funding agencies ask reviewers to assign quantitative probabilities to a proposal reaching a particular milestone or achieving its technical goals. This approach would encourage reviewers to be more precise in their evaluations and could improve both agency-wide and individual reviewer calibration over time. It would also allow funding agencies to identify skilled reviewers and let reviewers improve their own performance through consistent feedback. While this method may not be suitable for all types of research, it has the potential to enhance proposal review in a variety of fields. [abstract generated by ChatGPT]
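The meteorology analogy suggests scoring reviewers' probability forecasts against realized outcomes. A minimal sketch, assuming forecasts and milestone outcomes are recorded, is given below using the Brier score, a standard calibration measure from weather forecasting; the abstract does not specify which scoring rule the paper has in mind.

```python
# A minimal sketch: scoring a reviewer's probability forecasts with the
# Brier score (mean squared difference between forecast probabilities and
# realized 0/1 outcomes; lower is better). The data below are hypothetical.
def brier_score(forecasts, outcomes):
    """Return the Brier score for paired forecasts (0-1) and outcomes (0/1)."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record for one reviewer across five funded proposals.
reviewer_forecasts = [0.9, 0.6, 0.3, 0.8, 0.5]   # P(milestone reached)
observed_outcomes  = [1,   1,   0,   0,   1]     # 1 = milestone reached
print(f"Brier score: {brier_score(reviewer_forecasts, observed_outcomes):.3f}")
```

Tracking such a score per reviewer over time is one way an agency could identify well-calibrated reviewers and give the consistent feedback the paper calls for.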
Notes on the Inexact Sciences
Author: Suspended Reason
Date: November, 2022
Areas: Metascience, Psychology, Social Sciences
Text: PDF, Substack
Popular wisdom warns us against premature optimization. And yet, in a quest for public legitimacy and tidy problem domains, many fields discourage vitally necessary descriptive and conceptual work in favor of statistical analysis and laboratory experiments. Topics of unprecedented complexity are tackled using rote, mechanical approaches, by researchers who routinely fail to realize how much linguistic and conceptual clarification is a precondition of headway. Meanwhile, sociological and professional incentives prevent the sorts of synthetic work that might de-provincialize researchers' theories, and initiate exactly those conceptual refactorings which would advance the discipline.
Market Failures in Science
Author: Milan Cvitkovic
Date: April, 2022
Area: Metascience
Text: PDF, Substack
(No abstract — and that’s okay)
On Scaling Academia
Author: Jan Hendrik Kirchner
Date: April, 2022
Area: Metascience
Text: PDF
Overcoming humanity's challenges will require a deep understanding of both the problem and the possible solutions. There are early indications that the scientific apparatus, which has traditionally been the primary tool for gaining deep understanding, might not be able to keep pace. In this essay, I outline a set of interventions that might help the scientific apparatus overcome existing bottlenecks, and I discuss limitations and possible implications. Centrally, I argue for systematization and automation of the research process to allow researchers to benefit from emerging technology like artificial intelligence.
Randomness in Science
Authors: Roger’s Bacon, Sergey Samsonau, Dario Krpan (example article)
Date: May, 2021
Areas: Metascience, Psychology
Text: PDF, Substack
“Could we improve science by exploring new ways to inject randomness into the research process?”