In the evaluation of scientific research, 'scientific quality' is obviously one of the most important criteria, and in our case of development research it stands on equal footing with another key dimension: relevance to development. The criteria and procedures for evaluation are largely identical to those applied in the evaluation of research in general: peer reviews, panels, bibliometric indices, etc. The practical application of these tools in the case of development research may sometimes be quite difficult. Yet there seems to be a general consensus that quality must be expressed in the same manner as for any scientific research, with the same problems and the same constraints.
Any double standard in this respect is to be rejected. From the conclusions of the Royal Academy of Overseas Sciences Symposium on the Evaluation of Development Research (2009), we can cite J. Berlamont: "We don't help developing countries by compromising on high standards". In the same report, Yépez made a similar statement, regretting that this principle is not always applied rigorously, as in some instances in the relations between Spain and Latin America. There are, in addition, ethical reasons for not accepting a double standard.
The standard definition of "scientific quality" is conformity with the principles of the "scientific method". This method is described in terms of the formulation, testing, and modification of hypotheses, but these ideas are then mostly illustrated by examples of revolutionary scientific discoveries (such as Darwin's theory of evolution or Einstein's theory of relativity).
A more practical criterion for scientific quality is that the research should lead to some new idea (a theoretical hypothesis, an experimental law, a causal relation between two phenomena, a new technology, etc.) and that the credibility or usefulness of this new idea be corroborated by theoretical or empirical arguments (e.g., a mathematical analysis, an experimental verification, a statistical investigation, etc.). For the aspect "new", we refer to Annex Innovation, originality.
If we take this definition seriously, it implies a set of qualitative characteristics like:
- The research (and its ensuing publications) should start from a well-defined research question (otherwise, there would be no need for a new idea).
- The researchers should be aware of the standard knowledge in the domain of the research performed (both the established facts and the commonly accepted hypotheses).
- All reasoning should follow strict logical rules.
- Experiments should be reproducible.
- There should be openness with respect to a full description of the experimental or theoretical circumstances and details. (No magic tricks!)
This definition of scientific quality may sound rather idealistic and hard to achieve for many groups in developing countries.
- This quality should certainly be considered as the final target to which scientific research should evolve, also in the South.
- Research projects that do not conform to the above description may still be of some value. Purely descriptive research, e.g., can form an excellent preparation for a subsequent project, because the described data can inspire the formulation of a new idea. Nevertheless, some of the above-mentioned aspects should be present, such as originality (the description should contain new elements) and the need for a predefined question (why do you need a new description?).
How can it be used for the evaluation of a research project (ex ante)?
The main criterion for the success of a research project is the quality of the researcher or the research group that proposes the project. However, a few additional considerations should be taken into account, and the specific guide should clearly state who will perform this scientific evaluation: a general evaluation commission or external referees. Questions to be asked are:
- Does the project start from a relevant non-trivial question, and is this question clearly formulated in the framework of the subject?
- Would answering this question (or solving this problem) really be an important step forward in the development or progress of our scientific knowledge?
- Does the project description indicate that the group is sufficiently acquainted with the up-to-date knowledge in this domain? Do they have broad and adequate access to the international scientific literature?
- Is the proposed methodology for obtaining an answer to this problem appropriate?
- Does the group possess the necessary skills for applying this methodology? Or does the project contain an element of upgrading their skills to the required level?
How can it be used for the evaluation of a scientific publication?
Since a research project is expected to end in a written result - a report or a publication - the items below will also form part of the ex post evaluation of a research project.
In order to assess the quality of a scientific publication (or report), we should distinguish between an intrinsic evaluation and an evaluation on the basis of external indicators. An intrinsic evaluation by an independent expert in the domain treated by the publication is generally considered to be the best way for assessing the scientific value of a paper. Questions to be asked:
- Is this paper well written, with a clear structure that underlines the starting problem, the methodology followed, the results obtained and the conclusion?
- Is its description sufficiently comprehensive, and is the logical reasoning sound?
- Does the conclusion constitute a valuable step forward in our knowledge?
For an evaluation on the basis of external indicators that try to measure the impact that a paper has (or potentially may have), see Evaluation of scientific publications.
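External indicators of this kind are typically bibliometric. Purely as an illustration (not as part of the guide itself), one widely used bibliometric measure, the h-index, can be computed from a researcher's citation counts; the function name and the citation data below are hypothetical:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:   # this paper still supports an h of `rank`
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's five papers
print(h_index([6, 5, 3, 1, 0]))  # → 3 (three papers each cited at least 3 times)
```

Such indices are convenient but coarse: they reflect a paper's reception, not its intrinsic quality, which is why the intrinsic expert review described above remains the preferred approach.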
How can it be used for the evaluation of a researcher?
The scientific attitude of researchers can only be judged from their activities, i.e. from the research they have performed and from their publications. For very young researchers, we may have to resort to interviews or to the testimony of their mentors. Questions that can be answered by a general evaluation committee are:
- What is the number of publications relative to the number of years devoted to research, taking into account the person's other duties (teaching and managerial tasks, other services to the community, ...)?
- Taking the financial possibilities into account, does he/she participate in conferences with scientific contributions and publish in the proceedings?
- What is the scope of the subjects investigated: is there a healthy spread, or is the same subject treated over and over again? In either case, is this justified by the nature of the problems investigated?
It may also be useful to ask for evaluation by a senior scientist, who personally knows the researcher to be evaluated (although some may be overly inclined to give a positive evaluation...). Questions here could be:
- Is this person driven by curiosity to understand the phenomena, or is he/she satisfied with a superficial description?
- Is he/she meticulous in working out all the details of the work, eliminating all other possibilities before jumping to a conclusion?
How can it be used for the evaluation of a research group?
A team or research group should be assessed on the basis of the composition of the group, i.e. on the quality of its members and of their publications (assessed as described above). Furthermore, one should consider the following:
- Does the group have a coherent research plan, or is everybody following his/her own favourite programme?
- How good are the prospects for successful execution of the work plan? Are all necessary areas of expertise present in the group? Is the infrastructure appropriate?
- How good are the international contacts of the group? (Being integrated in the international research community usually gives some guarantee for scientific quality.)