A comment on EU science policy and the reasons for its crisis. (Published on the Euroscience blog)
Basic research is intrinsically risky: the history of science offers many examples of unexpected discoveries, as well as of ideas that were assumed true at a certain point in time and later proven wrong. Among the former we find, in recent years, the discovery of high-temperature superconductivity by Alex Müller and Georg Bednorz, of the quantum Hall effect by Klaus von Klitzing, the invention of the scanning tunneling microscope (STM) by Gerd Binnig and Heinrich Rohrer, and, more recently, the isolation of graphene by Andre Geim and Konstantin Novoselov.
Continue reading European science policy and research risk
Measuring the impact of a country's scientific research is of great interest to policy makers in order to define both the amount of its spending and its priorities. To study the scientific impact of nations systematically, using simple and relevant indicators, data on the number of articles and citations produced by individual countries have been used in the past. Articles and citations are, in fact, an indirect measure of the output of investment in research: the number of scientific articles reflects the activity carried out, while the number of citations those articles receive measures a popularity that can be considered correlated with scientific quality. While these numbers should be treated with great caution when referring to a single researcher, when considering the production of a whole country one can reasonably assume, thanks to the large numbers involved, a proportionality between the total number of articles and citations and the overall significance of the research.
Robert May was among the first to perform this type of analysis, for the years 1981-1994; in particular, he compared the investments and the results of scientific research in various countries. Later, in 2004, David King presented a similar but more refined analysis, for the years 1993-2002. More recently, other studies by several national and international agencies, using a similar methodology, have measured the productivity of nations' scientific research by normalizing the number of scientific articles and citations received to spending on research and universities. Indeed, when comparing very different countries (for example, the United States and Switzerland), it is necessary to take into account that total scientific production depends on the size of the country itself: for example, on the number of researchers or the total investment in research. Since the number of researchers is not easily measurable (in many countries, for example, censusing non-permanent researchers is a nontrivial problem), one can instead use, as an indicator for comparing different countries, spending on scientific research and development or the higher education expenditure on research and development (the so-called HERD) surveyed by the OECD.
Continue reading Diversification of nations research systems
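The normalization described above is, at bottom, a simple ratio: citations (or articles) divided by a size measure such as HERD. A minimal sketch of this indicator, using invented figures purely for illustration (not real OECD or bibliometric data):

```python
# Size-normalized impact indicator, as described in the text:
# citations divided by higher-education R&D expenditure (HERD).
# All figures below are hypothetical, for illustration only.

countries = {
    # name: (citations received, HERD in billions of dollars)
    "Large country": (2_000_000, 60.0),
    "Small country": (250_000, 5.0),
}

def impact_per_spending(citations: int, herd_billions: float) -> float:
    """Citations per billion of higher-education R&D expenditure."""
    return citations / herd_billions

for name, (cites, herd) in countries.items():
    print(f"{name}: {impact_per_spending(cites, herd):,.0f} citations per billion")
```

On these invented numbers the smaller country comes out ahead per unit of spending, which is exactly the kind of comparison the normalization makes possible between countries of very different size.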