
Evaluation in science and technology in Argentina: current situation and proposals

Cátedra Libre Ciencia, Política y Sociedad. Contribuciones a un pensamiento latinoamericano
Universidad Nacional de La Plata, Argentina
Rocío Montes
Universidad Nacional de La Plata, Argentina

Ciencia, Tecnología y Política

Universidad Nacional de La Plata, Argentina

ISSN: 2618-2483

Periodicity: Semiannual

vol. 6, suppl. 1-2, e024, 2021

revista.ctyp@presi.unlp.edu.ar

Received: 20 September 2019

Accepted: 07 October 2019



DOI: https://doi.org/10.24215/26183188e024

Abstract: Evaluation is a fundamental part of a country's scientific and technological policy and, given its performative character, a tool for introducing changes in the sector. In this paper we present a critique of the hegemonic paradigm of evaluation by products, based mainly on quantitative indicators of papers and patents. Within this framework, we discuss the main problems of science and technology evaluation in Argentina, among them: the excessive weight assigned to quantitative parameters; anonymity and lack of transparency; the primacy of ex ante evaluation carried out exclusively by peers; the lack of coherence with policies and plans; and the overlapping of evaluation systems. Finally, we present a series of proposals to improve evaluation processes in Argentina, emphasising the need to achieve consistency with State policies and a National Science and Technology Plan, and to include non-academic actors in the evaluation of scientific-technological activity.

Keywords: Scientific evaluation, scientific careers, S&T projects, social actors, scientific information systems, anonymity, blind peer review, Argentina.

*Article written collectively in the framework of the Cátedra Libre CPS by Gabriel M. Bilmes, Marcela Fushimi and Santiago Liaudat, with contributions by Julián Bilmes, Ignacio F. Ranea Sandoval and Jonatan Sabando.

Evaluation as part of science and technology policy

Evaluation has been considered a central aspect of science and technology (S&T) policies especially since the second half of the 20th century, when the sector's activity grew to a much larger scale (a period Salomon (1997) characterised as the "industrialisation of science"). This role, however, is subject to permanent debates and tensions, fundamentally because it is through evaluation processes that resources are distributed, stable jobs are accessed, careers are advanced, lines of research are consolidated or discarded, and reputations are built or devalued.

All evaluative processes explicitly seek to assess progress, measure results, weigh up effects and attribute scores; but implicitly they act performatively, providing guidelines that orient, organise and privilege one type of activity over another. Various studies have shown how the actors in the scientific and technological complex adapt their practices to what is expected of them (Davyt & Velho, 1999; Fernández Esquinas et al., 2011). For this reason, evaluation is also a fundamental tool for introducing changes in the implicit policies of a country's S&T sector. Returning to Herrera's (1975) categories, it could be said that by modifying evaluative processes, an explicit policy (S&T plans) can be turned into implicit policy (the norms, values and forms of organisation that effectively guide the practices of the actors).

Evaluation is an instrument at the service of those who plan, finance and/or manage S&T activities. Its objects are very diverse: at the macro level, policies, R&D plans and national programmes; at the meso level, institutional evaluations, sub-systems or specific areas; and at the micro level, individuals, research groups and particular projects. Moreover, the evaluation process can be carried out at different times: before the activity begins (ex ante), during its course (intermediate) or at its end (ex post).

In short, evaluation is a key issue of enormous complexity, one that often remains hidden from most of the actors who take part in any phase of the process; this, added to the bureaucratisation inherent in the activity, generates a sense of opacity and lack of transparency among many of those being evaluated (Atrio, 2018). In this article we do not intend to exhaust a topic on which there is abundant specialised literature, but rather to recover the emerging criticisms of the standard model of evaluation by products, to analyse specific problems of evaluation in our country and, finally, to present a series of proposals from the Cátedra Libre Ciencia, Política y Sociedad of the UNLP.

The hegemonic paradigm in question: evaluation by products

The most widespread approach to S&T evaluation at the global level is the one enshrined in the Frascati (1963) and Oslo (1992) Manuals of the Organisation for Economic Co-operation and Development (OECD).1 This is a linear perspective that measures inputs to the system - basically money invested and existing human resources - against the results obtained, expressed as the number of scientific articles published in refereed journals (usually referred to as papers) or the technological development achieved, measured by the number of patents obtained. It is an evaluation paradigm that applies the input-output matrix of economics to S&T production. Over time, both manuals have been expanded and supplemented with annexes, and although they retain the original analysis matrix, they have become more complex and have incorporated complementary variables. Nevertheless, papers and patents continue to be the most valued items in academic evaluations.
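
To make this logic concrete, the sketch below (our own illustration in Python, using hypothetical figures rather than real data) shows the kind of output/input indicator the paradigm rests on and, above all, what it leaves out:

    # Minimal sketch of the input-output logic behind evaluation by products.
    # All figures are hypothetical; the point is what the indicator ignores.

    def products_per_researcher(papers: int, patents: int, researchers: int) -> float:
        """Outputs (papers plus patents) per unit of input (researchers)."""
        return (papers + patents) / researchers

    # Two hypothetical groups of equal size and budget:
    group_a = products_per_researcher(papers=40, patents=0, researchers=10)  # 4.0
    group_b = products_per_researcher(papers=6, patents=2, researchers=10)   # 0.8

    # Group A "wins" on this metric even if its papers are redundant and
    # group B's work addresses a pressing regional problem: relevance,
    # quality and social impact do not enter the calculation.
    print(group_a, group_b)

However refined its successors, any indicator of this family inherits the same blind spot: whatever is not counted as a product does not exist for the evaluation.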

Among the many criticisms and questions that this evaluation paradigm has received, we highlight two: its markedly individualistic character, at odds with the increasingly collective production of knowledge,2 and its role in sustaining the commercial business of scientific publishing.3

Several international statements and manifestos, widely supported by individual scientists as well as by scientific associations, institutions and journals, question this assessment methodology (DORA, 2012; Hicks et al., 2015). Nevertheless, evaluation by products remains hegemonic. This is partly due to its intrinsic merits, such as the simplicity of its application, the ease and low cost of data collection, and a comparability of results that facilitates resource allocation by managers. These aspects tend to prevail over alternative approaches that propose more elaborate and specific strategies but are, for the same reason, more complex to implement. On the other hand, and by way of hypothesis, we suggest that the persistence of evaluation by products also serves the interests of the global powers represented in the OECD, since this evaluative logic tends to strengthen mainstream science in the central countries and scientism in the periphery (Kreimer, 2011).

Main problems of S&T evaluation in Argentina

In our country, as in other parts of the world, the current evaluation paradigm has been increasingly questioned, especially during the expansion of the S&T sector promoted by the governments of Néstor Kirchner and Cristina Fernández (2003-2015). During those years there were debates, reflections and proposals that took shape in various documents and regulations.

The most important of these were the documents drafted by the Ministry of Science, Technology and Productive Innovation between 2011 and 2012,4 the CONICET evaluation regulations (2008) and their successive modifications, and the creation of an inter-institutional commission for the Humanities and Social Sciences (CIECEHCS, 2014). As a result, new projects and financing lines associated with technological, social and productive development were designed and launched during those years: mainly the Technological and Social Development Projects (PDTS) and sectoral funds such as FONTAR, FONSOFT and FONARSEC, which incorporated different evaluation mechanisms.5 More recently, new criteria referring to strategic issues and institutional strengthening were also adopted for entry to research careers and scholarships at CONICET. These initiatives - which we will not analyse in this document - represent experiences to be taken into account when thinking about changes in evaluation mechanisms.

In our opinion, the issues analysed below constitute the central problems of the evaluation of scientific and technological activities in our country. They arise recurrently in the debates, though not always explicitly, and persist despite proposals for change. They are:

Excessive bias towards the quantitative. The uncritical application of evaluation by products has turned the quantification of research results into an almost exclusive indicator of scientific excellence, despite the repeatedly noted need to prioritise qualitative assessments. The negative consequences include: a) the orientation of research towards "fashionable" topics with a greater chance of being published in "mainstream" journals, to the detriment of local or regional issues; b) the strengthening of traditional disciplines and institutions, and of already consolidated groups, undermining those still in formation or located outside the main geographical centres; c) the entrenchment of the logic of "publish or perish", which leads to an unnecessary, productivist multiplication of publications, often superfluous and of little scientific value; d) the bureaucratisation of scientific activity and the superficiality of evaluation work, reduced to counting papers and applying pre-assembled bibliometric indices, all of which generates a growing unease experienced as labour alienation.

Anonymity in the evaluation of resources and people. Not knowing who evaluates is no guarantee of quality; on the contrary, it can be a source of discretionary behaviour. Since public resources are involved, anonymity in the evaluation of R&D projects, research funds and permanent staff violates National Laws 25,200 and 27,275, which guarantee, respectively, the necessary transparency of any evaluation instance in State bodies and the right of access to public information. The main problem with anonymity is that it can lead to irresponsibility in the exercise of the power to judge, resulting in ill-founded opinions, disguised nepotism, arbitrariness and other practices that remain hidden and "protected" from public scrutiny. The resistance that persists in some of the institutions of the S&T complex is inexplicable, all the more so when in other organisations of the sector (for example, national universities) evaluations are already public.6,7

Lack of coherence with policies and plans, and overlapping of evaluation systems. In many organisations, evaluation systems and criteria are inconsistent both with the policies and plans that those same organisations promote and with national and regional S&T plans. This is one of the problems most keenly felt by researchers in their daily work, and one of the most widely acknowledged by the institutions themselves.8 Its origin lies in the disarticulation and lack of inter-institutional and inter-ministerial coordination among the organisations that make up the S&T complex (CONICET, universities, decentralised agencies, ANPCyT, etc.), since each implements its own mechanisms and systems and information is generally not shared.

The most important consequences are: a) the lack of coordination between agencies allows the evaluation criteria of a given instance to be inconsistent with public plans and policies, sometimes even contradicting them, and produces overlapping deadlines and calls for proposals with incongruent demands and objectives; b) since evaluations are not shared, each organisation carries out its own, so the same actor, project or institution is evaluated multiple times, unnecessarily overloading the system - it is common, for example, for one organisation to evaluate a researcher's promotion, another the accreditation of a project in which the same researcher participates, and a third the awarding of funds; c) the use of multiple virtual platforms for uploading background information (CVar, SIGEVA-CONICET, the universities' SIGEVA, the Incentives Programme, etc.) generates excessive bureaucratisation and an avoidable work overload.

Primacy of ex ante evaluation. There is an almost absolute predominance of ex ante evaluation of research proposals, be they work plans or projects. Intermediate and ex post evaluation is generally reduced to formal steps - filling out forms in due time and form - that rarely affect the course of projects, and the absence of in situ evaluations is particularly noticeable.9 The only intermediate, in situ and ex post evaluations actually carried out are those linked to fiscal control of finances, focused on the rendering of expenses and the inventory of assets. As a result, there is a gap between what a project or work plan promised and what is actually done, and the opportunity to use results to reorient activities, save budget and improve the quality and relevance of the S&T carried out in the country is wasted.

Merely declarative use of social utility criteria. Although project formulation grids include items in which the areas of impact, the social usefulness of the results and their relevance must be made explicit, these are rarely taken into account. Impact and social utility are invoked merely declaratively, for the purpose of getting the project approved, and then have no practical effect, since S&T production continues to be evaluated through traditional products (papers and patents). Other forms of communicating results are not valued, and non-traditional forms of dissemination - alternative media, open access publications, web dissemination, outreach activities and other channels that facilitate the social use of knowledge - are sometimes undervalued. This is an expression of the split between what is said and what is done, which is particularly serious when thinking about how to link S&T to the social problems of our country.

Exclusively peer evaluation. Peer evaluation is unanimously considered a guarantee of the quality of S&T work. However, the fact that evaluation is carried out exclusively by specialists has negative consequences: a) evaluation tends to follow current international guidelines rather than national problems, favouring the adoption of mainstream research topics and methodologies to the detriment of national and regional problems and traditions; b) corporate endogamy is generated, since peer-controlled evaluation gives rise to logics of group reproduction (exchange of favours, status and prestige) rather than dynamics of social problem-solving; c) in projects and plans with social impact, non-academic actors (workers, farmers, communities and others, whether beneficiaries or potential victims of S&T development) are generally not included at any stage of the evaluation, and the participation of economic actors (companies) and political actors (state bodies, municipalities) is very rare.

Concepts assumed to be objective and universal. Evaluation processes are often based on criteria that invoke expressions such as quality, excellence, productivity, relevance or impact. Far from having generalised meanings, these are concepts with a strong value load and multiple senses. By not making their definitions explicit (that is, what the evaluator should actually assess), it is assumed that a common, objective and universal idea exists in this regard. This implicitly leads to the adoption of the dominant criteria emanating from the core countries and propagated throughout the world by international organisations (IDB, WB, OECD). The growing effort to link the S&T sector with the business sphere should also alert us to the need to define evaluation criteria clearly, to prevent the private sector from shaping the State's research agenda.

Lack of transparency in evaluation. Despite progress in recent years, there are still evaluation instances, mechanisms and procedures that are not sufficiently public and transparent: ambiguous criteria, non-explicit parameters and scores, ignorance on the part of both evaluators and evaluated of the grids to be used, and inaccessible reports, among others, are frequent expressions of this problem.

Some proposals to improve S&T evaluation processes in Argentina

On the basis of the above diagnosis, the Cátedra Libre CPS proposes the following actions that we consider necessary to guide changes in the evaluation processes:

  1. Revision of the evaluation systems so that they are consistent with State policies and with a National S&T Plan. There is no possibility of discussing new evaluation paradigms outside the framework of a National Project and its corresponding S&T policy. Without going into detail, we define a desirable model of country as one characterised by sovereign, inclusive and environmentally sustainable development. From this we can broadly derive S&T public policies aimed at solving social, environmental and productive needs, based on the interdisciplinary study of national and regional problems. The elaboration of these policies should involve the broadest possible participation of social actors beyond the S&T complex itself. This approach seeks to advance the integration of S&T with government and society (the productive apparatus, social demands) within a framework of scientific and technological sovereignty, constituting not a complex but a National S&T System. The criteria and mechanisms for evaluating S&T activity should be in accordance with the public policies defined within the framework of this national project and expressed in a National S&T Plan.
  2. Inclusion of social, political and economic actors in the evaluation. In the effort to link S&T with national needs, the participation in evaluation processes of the actors that make up our society is essential. We refer to the inclusion, as appropriate, of trade unions, social and popular economy movements, organised communities, SMEs, public companies, representatives of industry and agriculture, state bodies, and others. To this end, it will be necessary to design institutional mechanisms and identify the stages of the process in which these actors could take part, so that their role is productive both for the S&T system and for themselves. And, as stated in the previous point, it is also necessary to include them in the elaboration of S&T policy, so that their subsequent participation in evaluation processes makes sense. In this way, the social actors involved could analyse the fulfilment of the aims proposed in each sectoral policy that affects them, and their participation in defining the S&T agenda would be a safeguard against changes in government direction, while helping to convert a government policy into a State policy.
  3. Transparency and democratisation of the entire evaluation process. Information technologies now make it possible for all stages of the process to be publicly accessible, so there is no reason to keep them anonymous or private. We propose giving the widest publicity to evaluation procedures and criteria, so that they are known prior to their application, are made explicit10 and are adopted through public processes agreed with all the actors involved in the evaluation. In cases where evaluators or the bodies they make up apply ad hoc criteria that are neither public nor previously explicit, these should be clearly stated in the opinion issued. Likewise, transparency must extend to all aspects of the evaluation process, including the selection of evaluators and the instances of challenge and recusal.
  4. Eliminate anonymity in the evaluation of people and resources. We propose that evaluation reports on people and resources be signed by all the evaluators who took part in the task, including the members of evaluation commissions, and that they be publicly accessible, which would undoubtedly be valued throughout the system as a sign of transparency. The resistance that exists in this regard is unfounded, based on fear of change and a certain institutional inertia. Serving as an evaluator should be positively valued as part of S&T activity: it requires not only knowledge and experience but also time and dedication, and therefore deserves proper recognition.
  5. Broadening and diversification of evaluation criteria. It is necessary to move away from the scheme that imposes the predominance of evaluation by traditional products with a marked quantitative bias. To this end, evaluation criteria must be broadened: first, by incorporating qualitative variables - the importance and consequences of the research, its timeliness, its regional relevance, publication and/or dissemination of results in open access, links with social actors, among others - and then by incorporating quantitative criteria that consider the conditions under which research is carried out in each region of the country, its coherence with public policies, its genuine contribution to knowledge, the real funds available to researchers, the relations between the system's institutions and social actors, or the savings for the country from a possible technological development. This is undoubtedly not an easy task, but there are experiences in this direction and immediate steps are conceivable. For example, in the evaluation of people, researchers could present, together with their CV, a document with other assessments of their activity and a limited selection of their publications to be evaluated qualitatively and in depth. In this sense, notions such as the "researcher's overall trajectory" allow for an evaluation that goes beyond bibliometric impact indicators and considers researchers' activity as a whole.
  6. De-bureaucratisation and coordination between evaluation bodies. Coordination between S&T bodies is essential in order to avoid the multiplication of evaluation instances (usually with different institutional grids and logics) and the consequent overload of unnecessary work for researchers, evaluators and managers. The overall coherence that an S&T plan would give the system should help in coordinating these instances. In addition, making evaluations public would allow the opinions of previous evaluations to be reused as grounds for new ones, avoiding a waste of public resources. For the same purpose, it is necessary to move towards a single national CV system, constructed as broadly as possible so that researchers' overall trajectories can be observed, and to coordinate among the different S&T bodies the schedules of calls for grants, career entry and other processes, in order to reduce the evaluation workload and its excessive, unnecessary frequency. Funds for small-scale research, for example, could be awarded automatically with the approval of the budgets of institutes and laboratories, freeing up time for the evaluation of more complex processes and large projects associated with high social impact programmes, awarded through public competitions open to projects, networks and groups rather than individuals.
  7. Conduct in situ and ex post evaluations of large projects. For larger projects, institutions and other duly justified cases, the evaluation should include visits to the sites where the work will be carried out, allowing, among other things, for personal interviews. This mechanism, widely used by institutions in other parts of the world, has proven to be an extremely effective tool for encouraging qualitative evaluation, enabling an active exchange between evaluators and evaluated that has a positive impact on the quality of the process and its results, regardless of the decision taken. In addition, for large projects it is necessary to include ex post evaluations of results and impacts that involve not only the researchers but also the social, political and economic actors concerned.
  8. Evaluation of evaluators. Making the whole process public will make it possible to evaluate the activity of the evaluators themselves. A National Public Bank of Evaluators could be set up, displaying each evaluator's background in the subject and, eventually, a selection of the opinions they have issued, chosen by the evaluators themselves. This would have several positive consequences, such as greater appreciation of this activity as part of a professional record. It would also favour an ethics of evaluation, since evaluators would have to issue well-founded, detailed and consistent judgements, avoiding superficial opinions, euphemisms and other bad practices. There should also be training opportunities for this activity, which could be included in undergraduate and postgraduate studies.

Finally, and as part of an ethic that we wish to promote, we believe it is necessary to move towards evaluation that is collaborative and formative, inclusive and plural, contextual and situated. Although an element of competition is inevitable, insofar as limited resources and positions are in dispute, the purpose of evaluation must not be lost sight of.

Bibliography

Argentina. Ministerio de Ciencia, Tecnología e Innovación Productiva. Comisión Asesora sobre Evaluación del Personal Científico y Tecnológico (2011 and 2012). Documentos 1 y 2: Hacia una redefinición de los criterios de evaluación del personal científico y tecnológico. Proyectos de desarrollo tecnológico y social (PDTS). Retrieved from https://vinculacion.conicet.gov.ar/wp-content/uploads/sites/2/Documento-I-Comision-Asesora-Evaluacion-del-Personal-CYT-version-13-09-121.pdf

Argentina. Ministerio de Ciencia, Tecnología e Innovación Productiva. Secretaría de Planeamiento y Políticas en Ciencia, Tecnología e Innovación Productiva (2013). Argentina Innovadora 2020: Plan Nacional de Ciencia, Tecnología e Innovación. Lineamientos estratégicos 2012-2015. Retrieved from https://www.argentina.gob.ar/sites/default/files/pai2020.pdf

Atrio, J. (2018). ¿Cómo perciben los investigadores del CONICET al sistema institucional de evaluación de la ciencia y la tecnología? Revista Iberoamericana de Ciencia, Tecnología y Sociedad, 13(37), 189-229.

Cátedra Libre Ciencia, Política y Sociedad (CPS) (2018). Publicaciones científicas, ¿comunicación o negocio editorial? Ciencia, Tecnología y Política, 1(1), 005. https://doi.org/10.24215/26183188e005

Interinstitutional Commission for the Development of Evaluation Criteria for the Humanities and Social Sciences (CIECEHCS) (2014). Criteria for evaluating scientific production in the humanities and social sciences. Buenos Aires.

Davyt, A. & Velho, L. (1999). Excelencia científica: la construcción de la ciencia a través de su evaluación. Comisión Sectorial de Investigación Científica (CSIC), Uruguay. Redes, 6(13), 13-48.

DORA (2012). San Francisco Declaration on Research Assessment. Retrieved from https://sfdora.org (last visited 20/09/2019).

Fernández Esquinas, M., Díaz Catalán, C., & Ramos Vielba, I. (2011). Evaluación y política científica en España: el origen y la implantación de las prácticas de evaluación científica en el sistema público de I+D (1975-1994). In T. González de la Fe & López Peláez (Eds.), Innovación, conocimiento científico y cambio social. Ensayos de sociología ibérica de la ciencia y la tecnología (pp. 93-130). Madrid: Centro de Investigaciones Sociológicas (CIS).

Herrera, A. (1975). Los determinantes sociales de la política científica en América Latina. Política científica explícita y política científica implícita. In J. Sábato (Comp.) (1975). El pensamiento latinoamericano en la problemática ciencia-tecnología-desarrollo-dependencia. Buenos Aires: Paidós.

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a

Kreimer, P. (2011). La evaluación de la actividad científica: desde la indagación sociológica a la burocratización. Dilemas actuales. Propuesta educativa, 36(20), 59-77. Recovered from: http://www.propuestaeducativa.flacso.org.ar/archivos/dossier_articulos/60.pdf

Naidorf, J., Vasen, F. & Alonso, M. (2015). Evaluación académica y relevancia socioproductiva: los proyectos de desarrollo tecnológico y social (PDTS) como política científica. Cadernos PROLAM/USP, 14(27), 43-63. https://doi.org/10.11606/issn.1676-6288.prolam.2015.103235

Porta, F. & Lugones, G. (Dirs.) (2011). Investigación científica e innovación tecnológica en Argentina: impacto de los fondos de la Agencia Nacional de Promoción Científica y Tecnológica. Bernal: Universidad Nacional de Quilmes.

Salomon, J.-J. (1997). La ciencia y la tecnología modernas. In J.-J. Salomon, F. Sagasti & C. Sachs (Comps.), La búsqueda incierta: Ciencia, tecnología, desarrollo. México: Fondo de Cultura Económica.

Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036-1039. https://doi.org/10.1126/science.1136099

Notes

1 The official names of the OECD manuals are respectively: Proposed Standard Practice for Surveys of Research and Experimental Development, and Measuring Scientific and Technological Activities. Proposed Guidelines for Collecting and Interpreting Data on Technological Innovation. The dates referred to in the body of the text correspond to their first editions.
2 This individualistic assessment is reinforced by the system of prizes and distinctions in S&T, generally awarded to individuals and not to research groups, networks or institutions. It is worth noting that research articles are increasingly the product of co-authorship in all fields of knowledge (Wuchty, Jones & Uzzi, 2007).
3 We have discussed this point at length in CPS (2018).
4 Document 1 (2011): Towards a redefinition of the criteria for the evaluation of scientific and technological personnel, and Document 2 (2012): Technological and social development projects.
5 For a discussion of the PDTS, see Naidorf et al. (2015). On sectoral funding instruments, see Porta & Lugones (2011).
6 It is worth noting that following a controversy between 2007 and 2009 over the distribution of ANPCyT funds, the National Prosecutor's Office for Administrative Investigations issued a resolution requesting the National Congress to reformulate the decree regulating this body, pointing out that National Law 25,200 was not being applied.
7 There are also serious questions about anonymity in the refereeing of articles for scientific journals. We refer to the system known as "double blind" review, in response to which the open science movement proposes opening up all research processes, including the evaluation stage. We hope to address this issue specifically in a future article.
8 "... the science and technology system lacks an articulated and consistent monitoring and evaluation system that considers all dimensions" (Argentina Innovadora 2020, 2013: 54).
9 A partial exception in this sense is the visits that CONEAU makes to university institutions during external evaluation processes.
10 We refer to cut-off lines or unwritten minimum thresholds that are often used to define access to certain academic posts or positions, such as the number of papers or the number of postgraduate theses supervised, among others.