Eurosurveillance, Volume 16, Issue 18, 05 May 2011
Perspectives
Prioritisation of infectious diseases in public health: feedback on the prioritisation methodology, 15 July 2008 to 15 January 2009
A Gilsdorf1, G Krause1
  1. Department for Infectious Disease Epidemiology, Robert Koch Institute, Berlin, Germany

Citation style for this article: Gilsdorf A, Krause G. Prioritisation of infectious diseases in public health: feedback on the prioritisation methodology, 15 July 2008 to 15 January 2009. Euro Surveill. 2011;16(18):pii=19861. Available online: http://www.eurosurveillance.org/ViewArticle.aspx?ArticleId=19861
Date of submission: 04 November 2010

In 2004, the German public health institute, the Robert Koch Institute (RKI), prioritised pathogens by public health criteria and presented the methodology and findings. In order to further improve the methodology, the RKI invited experts to give feedback on it via a structured web-based questionnaire. The survey was completed by 72 participants between 15 July 2008 and 15 January 2009. Prioritisation of pathogens was considered useful for public health purposes by 68 participants and for both surveillance and epidemiological research by 64 participants. Additional pathogens were suggested, including some that are resistant to antimicrobials. The criteria incidence, severity, outbreak potential, emerging potential and preventability were each considered useful or very useful for the prioritisation (by more than 65 participants for each criterion). Weighting of the criteria was judged as relevant or very relevant by 67 participants, but needs clearer explanation. It was also suggested that the group carrying out the prioritisation comprise a median of 15 experts (range: 5–1,000). The feedback obtained in the survey has been taken into account in the modification of the methodology for the next round of prioritisation, which started in December 2010.


Background

Strengthening communicable disease surveillance and response at national level requires a substantial and long-term commitment of human, financial and material resources. Ideally, this investment begins with a systematic review of the national priorities for surveillance [8,9]; the usefulness of prioritisation as part of this process, irrespective of the methodology used, has been demonstrated by several research groups [1-7]. In 2004, the Department for Infectious Disease Epidemiology of the Robert Koch Institute (RKI), the German national public health institute in the portfolio of the federal Ministry of Health, initiated an exercise on prioritising various pathogens to guide the research and surveillance strategies of the department. After a literature review, we developed a methodology, including a scoring system for 12 criteria applied to selected pathogens. For each criterion, a three-tiered score (–1, 0 and +1) was used. Independently, each criterion was weighted: a group of experts ranked the 12 criteria in terms of perceived importance, and a mean value was calculated for each criterion (its weight), by which the criterion's score was multiplied. The total weighted scores led to a ranked list of 85 pathogens. Initial findings were presented at three international scientific conferences in 2006 and 2007 [10-12] and were covered in a national non-scientific magazine [13]: this generated public interest and feedback from scientists and patient advocacy groups.
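
To make the scoring arithmetic concrete, the following minimal sketch (in Python; the criterion weights, pathogen names and scores are invented for illustration and are not values from the RKI exercise) shows how three-tiered criterion scores are multiplied by ranking-derived weights and summed into a total weighted score per pathogen:

    # Minimal sketch of the weighted-score calculation described in the text.
    # Criterion names, weights and pathogen scores below are invented examples.

    # Each criterion carries a weight derived from the experts' importance ranking.
    weights = {
        "incidence": 9.2,            # hypothetical mean weight
        "severity": 10.5,
        "outbreak potential": 8.7,
    }

    # Three-tiered scores (-1, 0, +1) assigned to each pathogen per criterion.
    scores = {
        "pathogen A": {"incidence": 1, "severity": 0, "outbreak potential": 1},
        "pathogen B": {"incidence": -1, "severity": 1, "outbreak potential": 0},
    }

    def weighted_total(pathogen_scores, weights):
        """Multiply each criterion score by its weight and sum the products."""
        return sum(weights[c] * s for c, s in pathogen_scores.items())

    # Rank pathogens by their total weighted score, highest first.
    ranking = sorted(scores, key=lambda p: weighted_total(scores[p], weights),
                     reverse=True)
    for p in ranking:
        print(p, weighted_total(scores[p], weights))

A real run would of course cover all 12 criteria and the 85 pathogens; the sketch only illustrates the mechanics of multiplying scores by weights and sorting by the weighted sum.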

A review of prioritisation strategies previously used by others and details of our methodological approach (Figure) were subsequently published in 2008 [14-16]. A review and possible revision of our approach is part of the methodology; that review process is described in this paper.

Figure. Prioritisation workflow, Robert Koch Institute, 2008–10

To refine the prioritisation methodology further, develop it into a standard tool and ensure that it is fully understandable, we issued an open call inviting respondents to complete a structured online questionnaire on the prioritisation methodology and the relevance of the prioritisation tool. In addition, we directly targeted representatives of the scientific community as well as health policy stakeholders.

This paper presents the findings of the survey and discusses their potential implications for the planned modification of the methodology of prioritising pathogens.

Survey approach

When we published extensive descriptions of the prioritisation methodology [14-16], we invited readers to give feedback and comments through an online questionnaire. Additionally, we contacted by email all German regional epidemiologists (n=60), all members and alternates of the scientific Advisory Forum (n=64) of the European Centre for Disease Prevention and Control (ECDC), all heads of the German national reference laboratories (n=66) and all members of the Committee for Epidemiology of Infections (n=12) and four relevant German epidemiological societies and associations, asking them to take part in the online survey.

The online survey contained the list of the 85 selected pathogens and the 12 criteria used in the prioritisation, with questions on the usefulness and appropriateness of these criteria. In order to compare the participants’ feedback on the criteria, we gave a numerical value to each possible answer and calculated the mean value for each criterion. To assess the usefulness of the criteria, the possible answers were: very useful (with a value of 3), useful (value of 2) and dispensable (value of 1).
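
As a minimal illustration of this coding (the answer counts below are invented, not survey results), the mean usefulness value of a criterion is simply the count-weighted average of the coded answers:

    # Sketch of the answer coding described above: very useful = 3, useful = 2,
    # dispensable = 1. The answer counts below are invented for illustration.
    answer_values = {"very useful": 3, "useful": 2, "dispensable": 1}
    answers_for_criterion = {"very useful": 40, "useful": 25, "dispensable": 7}

    total = sum(answer_values[a] * n for a, n in answers_for_criterion.items())
    n_respondents = sum(answers_for_criterion.values())
    mean_value = total / n_respondents
    print(round(mean_value, 2))  # mean usefulness value for this criterion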

The survey contained additional questions on the number and profession of experts that participants considered should take part in a prioritisation process and also questions about the participants themselves.

The questionnaire was internally pretested and then posted in both English and German on the RKI home page from 15 July 2008 to 15 January 2009. The data were analysed using Epi Info software.

Survey findings

Participants
In total, 72 participants completed the survey. Most (n=35) found out about it from the national epidemiological bulletin, 18 received the email request sent by the RKI, 11 read about it in Eurosurveillance [15] and eight found it coincidentally on the Internet.

Of the 72 experts, 54 were working in Germany, nine in other European Union (EU) countries and six in non-EU countries. For three respondents, no information on the country in which they worked was available.

The participants had a variety of professions and institutional affiliations, with the majority being medical doctors by training (Table 1).

Table 1. Employment information and education level of prioritisation survey participants, Robert Koch Institute, 15 July 2008 to 15 January 2009 (n=72)

Almost all participants (n=68) provided information on the length of their work experience: the median duration was 18 years (range: 3–40 years).

Feedback and comments
Prioritising pathogens was considered useful for public health purposes by 68 participants, for both surveillance and epidemiological research by 64, and for clinical research by 57. Most respondents considered prioritisation to be beneficial for public health services, at the national (n=58) and international (n=49) level. Additionally, 33 participants believed that the prioritisation will also be useful for regional public health services, universities and ministries of health, to guide surveillance and research agendas. A total of 29 participants considered that it would be beneficial to local public health services.

Most participants (n=40) considered that the list of 85 pathogens [15] was comprehensive and appropriate, while 16 proposed changes to the list; 16 answered that they did not know. The following additional pathogens and topics were suggested:

  • all Brucella spp.
  • all Campylobacter spp.
  • Clostridium difficile
  • Corynebacterium ulcerans and Corynebacterium pseudotuberculosis
  • coxsackieviruses
  • echoviruses
  • enteroviruses
  • fungi:
    • Candida spp.
    • Cryptococcus spp.
    • Aspergillus spp.
    • Fusarium spp.
  • human herpesvirus (HHV)-6 and HHV-8
  • poxviruses
  • Pseudomonas spp.
  • Rickettsia spp.
  • respiratory syncytial virus
  • Staphylococcus epidermidis
  • vectors.

Seven participants suggested including pathogens resistant to antimicrobials as a separate group (e.g. bacteria producing extended-spectrum beta-lactamase, vancomycin-resistant enterococci and oxacillin-resistant S. aureus).

Prioritisation criteria
Definitions of the scores for each criterion are described elsewhere [15]. Table 2 describes the respondents’ rating of the usefulness of the prioritisation criteria, by their profession or institutional affiliation.

Table 2. Usefulness scores of the prioritisation criteria by survey participants’ profession or institutional affiliation, Robert Koch Institute, 15 July 2008 to 15 January 2009 (n=72)

Incidence
Incidence was judged by 68 participants as a very useful or useful criterion for prioritisation. The comments received mainly reflected the difficulty in getting adequate data on incidence, especially for diseases that are not notifiable. One suggestion was to include ‘unknown’ in the highest score of the criterion, to indicate that the level of attention should be high if information is lacking.

Severity
This criterion was considered to be useful or very useful by 68 participants. Comments referred to the difficulty of incorporating different issues such as hospitalisation, work-time lost due to sick leave and persisting disabilities into one single criterion. Furthermore, the issue was raised of how work-time lost due to sick leave can be judged if children, unemployed and retired people are concerned. It was also suggested that cost of medical care be included as an additional aspect.

Mortality
A total of 62 participants thought this a useful or very useful criterion. One respondent suggested that life-years lost be used instead of mortality for diseases that affect children more than adults. The scarcity of reliable data sources to score this criterion was a concern expressed by three of the participants.

Replacing the mortality criterion with the case fatality rate had already been suggested during the prioritisation process in 2004, because mortality is influenced by incidence (a separate criterion). We therefore asked in our survey whether the case fatality rate should be used instead. A total of 33 participants recommended the replacement, 17 preferred mortality, nine could not see a difference and 13 did not have an opinion. Five participants were in favour of including both criteria.
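
The reasoning behind this suggestion can be made explicit with the standard epidemiological relationship between mortality, incidence and case fatality (illustrative numbers only; this identity is general background, not a formula from the methodology):

    # Illustration (invented numbers) of why mortality reflects incidence as well
    # as severity: mortality rate is approximately incidence rate x case fatality rate.
    incidence = 500 / 100_000      # cases per person-year (hypothetical)
    case_fatality = 0.002          # deaths per case (hypothetical)
    mortality = incidence * case_fatality
    print(mortality * 100_000)     # deaths per 100,000 person-years -> 1.0

A common but mild disease can thus have the same mortality as a rare but severe one, which is why the case fatality rate isolates severity from incidence.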

Outbreak potential
This criterion was considered by 69 participants as useful or very useful. Two participants suggested using the basic reproductive rate (R0) of a pathogen, rather than the frequency of outbreaks, to judge outbreak potential. A fixed threshold of five or more cases per outbreak for all pathogens was questioned by three participants.

Trend
A total of 57 participants considered this as a useful or very useful criterion. However, for 12 respondents the definitions used for each score were not clear enough. They suggested that a timescale for the trend should be determined. Questions were also raised on how to score diseases with unclear trends.

Emerging potential
This criterion was judged by 65 of the participants as useful or very useful. Five considered that endemicity and a low probability of the disease being introduced should not be included in the same score. Additionally, inclusion of the emergence of pathogen strains resistant to antimicrobials as a separate aspect of the definition of the highest score was proposed. It was also suggested that this criterion should be combined with the trend criterion.

Evidence for risk factors/groups
A total of 62 respondents judged this criterion as useful or very useful. A clear definition of the kind and quality of ‘scientific evidence’ was requested by some participants. It was also suggested that this criterion be combined with the evidence for pathogenesis criterion, to cover transmission routes and pathogenesis.

Two respondents questioned whether existing scientific evidence should be part of the prioritisation approach, as it leads to conflation of the relevance of a disease for public health and knowledge of the disease. These two aspects are important, but should be judged independently.

Validity of epidemiologic information
This criterion was judged by 62 participants as useful or very useful. Here the definition of the score 0 (‘epidemiologic information exists but is scientifically not very valid’) was considered imprecise. The applicability of this criterion and the lack of reliable data that are needed to score it were raised as concerns.

International duties and public attention
A total of 52 participants thought this a useful or very useful criterion. However, the definitions were not clear and as several aspects are included in each definition, some participants indicated that it is problematic to assign a single score in situations when separate aspects should be scored differently. They also thought it hard for the scoring to take into account rapidly occurring changes in public or political attention.

Evidence for pathogenesis
This criterion was considered by 57 of the participants as useful or very useful. The problem of assessing different aspects of the criterion using a single score was raised again. Combination of this criterion with the evidence on risk factors/groups criterion was suggested.

Preventability
In total, 67 respondents judged this as a very useful or useful criterion. The task of scoring the availability of prevention measures and need for further research in a single criterion was criticised. It was also suggested that availability of an effective vaccine be included as a separate criterion.

Treatability
This criterion was deemed by 61 respondents as useful or very useful. The distinction between the definitions of the three scores was not clear to some participants and might need clarification. The issue of incorporating drug resistance into the prioritisation was raised again. One participant suggested merging preventability, treatability and severity into a single criterion.

Suggestions for additional criteria
Participants suggested that the prioritisation tool include assessment of the economic impact of a disease or its control measures, the concept of life-years saved or lost, emergence of antimicrobial resistance and monitoring of vaccination effects, for example, on incidence or pathogenicity.

Scoring system
A total of 54 participants found the three-tiered scoring system to be adequate; six would have preferred a two-tiered and four a five-tiered system. Five suggested introducing a more continuous scoring (e.g. from low to high, on a scale from 1 to 10), whenever possible.

Weighting process
The weighting process was judged by 49 participants as very relevant and by 18 as relevant. Two thought it irrelevant and three did not know. The weighting method was considered plausible but initially difficult to understand by 31 participants; 19 understood it immediately, and for 13 it remained unclear. Some respondents supported the separation of the weighting from the actual prioritisation.

One participant pointed out that basing the numerical value of the weighting on the ranking of the criteria may result in bias, as it assumes that the difference in importance between adjacent criteria in the ranked list is always equal. We therefore suggest that values between 1 and 10 be used instead for the weighting, without any ranking.
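
The following minimal sketch (criteria and values invented for illustration) shows the difference: weights derived from ranks are always equally spaced, whereas a direct 1 to 10 scale can express unequal gaps in importance:

    # Sketch (hypothetical values): rank-derived weights impose equal steps
    # between criteria, while a direct 1-10 rating can reflect unequal gaps.
    criteria = ["severity", "incidence", "trend"]

    # Rank-based weighting: ranks converted to weights -> always equally spaced.
    rank_weights = {c: len(criteria) - i for i, c in enumerate(criteria)}
    # -> {'severity': 3, 'incidence': 2, 'trend': 1}

    # Direct scale: an expert may consider severity far more important than
    # incidence, and incidence only slightly more important than trend.
    direct_weights = {"severity": 10, "incidence": 4, "trend": 3}

    print(rank_weights)
    print(direct_weights)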

The need for a better description of the weighting process was highlighted by two participants.

Size and composition of an expert group for prioritisation
The median size of the expert group that participants considered necessary to conduct the prioritisation exercise of surveillance and research activities of a national public health agency was 15 (range: 5–1,000). They suggested that experts representing the following professions or institutions should take part in future exercise rounds: national public health service (suggested by 65 participants), university faculty of infectious diseases (by 59), microbiologist (by 57), hospital epidemiologist or hygienist (by 51), international public health organisations (by 48), regional public health service (by 47), hospital physician (by 45) and local public health service (by 37). Two respondents suggested that health economists be involved.

Conclusions

Setting priorities in research can serve as a catalyst for public debate and create networks of stakeholders [4,17]. The opinion of the users of the prioritisation is very important, as exchanging experiences and discussing the topics with the various stakeholders is highly relevant [3,18]. Indeed, Lomas et al. stated, when describing prioritisation efforts, ’The process is more important than the science’ [19]. Our survey was one step in involving various stakeholders and proved very useful in helping to develop our prioritisation methodology further, even though the survey was neither designed nor intended to be representative. As the survey was announced in an open call and as some email requests were sent to generic email addresses, we have no information about the denominator and are therefore unable to calculate the response rate. Given the survey design, it is also impossible to tell whether the opinions of those who responded were representative. It is possible that those who chose to take part in the survey were relatively positive about the prioritisation process. However, even if that were the case, they provided constructive criticism and comments, which have helped us to improve our methodology.

Overall, the participants commented positively on the prioritisation methodology: although there was variation between the responses of participants with different professions and institutional affiliations, the proposed criteria were mostly considered useful. However, it became apparent that the definitions of some criteria were too imprecise for scoring purposes. We will therefore clarify the problematic definitions.

Which pathogens should be included?
The suggested list of 85 pathogens was seen as fairly comprehensive by most participants. However, given the recommendations, we realised that some additional pathogens could be included in future, as their importance has changed since the list was drawn up in 2004.

How should the prioritisation take into account antimicrobial resistance and emerging diseases?
Interestingly, antimicrobial resistance was mentioned at various points in the survey as an essential issue that should be addressed. We believe it can be sufficiently accounted for if it is an integral part of the criterion of treatability and we therefore propose that it be included in its definition.

Participants also questioned how an endemic disease could be scored in the same way as a disease that is unlikely to emerge. We believe this to be justified, as an endemic disease has generally already led to an established infrastructure for prevention, surveillance, diagnosis and treatment. Similarly, diseases that are not endemic and are very unlikely to emerge in a country in the near future should probably not be considered a priority when resources are limited. A disease with the potential to emerge generates new challenges and thus deserves special attention, at least for prevention and surveillance.

How should disease severity be assessed?
One of the issues raised in various ways throughout the survey was the challenge of adequately accounting for the severity of an illness resulting from an infectious disease. Participants suggested that the prioritisation should take into account other aspects of disease severity, such as the economic impact of an illness, life-years lost, how work-time lost due to sick leave should be judged when those affected are children, unemployed or retired people, and the cost of care. However, requiring such detailed information might aggravate the problem of lacking relevant data, making this criterion difficult to score, as discussed above. Our original approach intentionally attempted to keep the score definitions within each criterion as simple as possible. We will, however, take these issues into consideration when revising the definitions.

Detailed instructions concerning the process of assigning a single score to a multicategory criterion will be developed and provided during the next prioritisation.

How should the prioritisation take into account variability of incidence trends and outbreak potential?
Some participants drew attention to the fact that a time frame would be needed for the scoring of some criteria (e.g. trend or emerging potential). We consider that this would depend on how frequently the prioritisation exercise is planned to be repeated and what its main objective is. For example, a disease with a highly variable incidence from one year to another should probably receive a high score for outbreak potential, while the scoring for incidence should probably be based on some sort of average yearly figure for the previous five or 10 years. Furthermore, if recent observations indicate that, despite observed fluctuations, yearly incidence tends to increase, this should be accounted for appropriately in the trend criterion.
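
One way such a multi-year average and trend could be operationalised is sketched below (the yearly figures are invented; this is a sketch, not part of the published methodology):

    # Hypothetical yearly case counts per 100,000 for the previous five years.
    yearly_incidence = [4.0, 9.5, 3.8, 11.2, 12.0]

    # Incidence criterion: base the score on the multi-year mean rather than
    # on the most recent, possibly atypical, year.
    mean_incidence = sum(yearly_incidence) / len(yearly_incidence)

    # Trend criterion: a simple least-squares slope indicates whether incidence
    # tends to increase despite year-to-year fluctuations.
    n = len(yearly_incidence)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = mean_incidence
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, yearly_incidence))
             / sum((x - x_mean) ** 2 for x in xs))

    print(round(mean_incidence, 1), round(slope, 2))  # e.g. 8.1 and a positive slope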

The fixed threshold of five or more cases per outbreak for all pathogens was questioned by some participants. The underlying rationale for the threshold was that in Germany only a few households have five or more members, suggesting that most outbreaks of fewer than five cases are likely to be limited to one household. Such outbreaks have fewer implications for public health services than larger outbreaks. Obviously this distinction may be more appropriate for common gastrointestinal pathogens, which are responsible for the vast majority of all outbreaks. However, for practical reasons we decided to use this threshold for all diseases.

How should criteria be weighted?
To take into account the fact that not all criteria are similarly important for prioritisation, we included a weighting of the criteria, which is independent of the prioritisation. The survey participants commented in general that the weighting of the criteria is relevant, but that it needs to be explained more clearly. Given these comments, we will also consider using a discrete scale for the weighting, rather than basing the weighting on ranking.

How can the prioritisation process deal with lack of reliable data?
The lack of reliable data – data that are needed to score the criteria for each pathogen – was a concern expressed at various points during the survey. It was suggested that the evidence level be specified for each score. We fear, however, that the complexity and effort required would not be in proportion to the expected improvement. Besides, the prioritisation process was designed as a Delphi approach [20,21], drawing on the opinions of senior experts in the field rather than on a meta-analysis. The current prioritisation process already assesses the strength of evidence and information available; however, the scores of those criteria are simply included in the overall sum for each pathogen. One possible amendment of the existing methodology would be separate computation of ‘knowledge’ criteria, such as evidence or validity, and ‘relevance’ criteria, such as incidence, severity or treatability.
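
A minimal sketch of this possible amendment (criterion groupings, weights and scores invented for illustration, not taken from the methodology) would compute two sub-scores instead of one overall sum:

    # Sketch of separate 'knowledge' and 'relevance' sub-scores instead of a
    # single overall sum. Criterion groupings, weights and scores are invented.
    knowledge_criteria = {"evidence for risk factors", "validity of information"}
    relevance_criteria = {"incidence", "severity", "treatability"}

    weights = {"evidence for risk factors": 5, "validity of information": 4,
               "incidence": 9, "severity": 10, "treatability": 7}
    scores = {"evidence for risk factors": -1, "validity of information": 0,
              "incidence": 1, "severity": 1, "treatability": 0}

    def sub_score(criteria):
        """Weighted sum over one group of criteria only."""
        return sum(weights[c] * scores[c] for c in criteria)

    knowledge_score = sub_score(knowledge_criteria)   # how well the disease is understood
    relevance_score = sub_score(relevance_criteria)   # how important it is for public health
    print(knowledge_score, relevance_score)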

A standardised tool for prioritising pathogens will obviously never be completely perfect and will never please every stakeholder [22,23]. However, it helps to improve strategic research planning [5]. We have used the findings of this survey to pragmatically improve the prioritisation methodology, including clarification of the approach, as transparency and understanding are essential components of any prioritisation process. The next round of the prioritisation exercise, which started in December 2010 and follows the same workflow as shown in the Figure, is still ongoing: the revised methodology and the results will be published once the prioritisation is completed.

The reason for involving multiple stakeholders in the improvement process was to ensure a certain level of acceptance and agreement on the pathogen prioritisation list – this list will be a final product of the exercise and will inevitably be a sensitive issue that generates debate. In addition, any part of the findings and methodology may be used by other institutions to conduct their own prioritisation of activities.

Acknowledgements
We cordially want to thank all the participants in our call for comments. The effort spent in filling out the questionnaire is much appreciated. We also want to thank Göran Kirchner, who set up the online questionnaire, and Yanina Lenz, who gave important input to this article.


References
  1. Berra S, Sánchez E, Pons JM, Tebé C, Alonso J, Aymerich M. Setting priorities in clinical and health services research: properties of an adapted and updated method. Int J Technol Assess Health Care. 2010;26(2):217-24.
  2. Fleurence RL, Torgerson DJ. Setting priorities for research. Health Policy. 2004;69(1):1-10.
  3. Nuyens Y. Setting priorities for health research: lessons from low- and middle-income countries. Bull World Health Organ. 2007;85(4):319-21.
  4. World Health Organization (WHO). Priority setting methodologies in health research. Summary Report. Geneva: WHO; 2008. Available from: http://apps.who.int/tdr/stewardship/pdf/Priority_setting_Workshop_Summary10_04_08.pdf
  5. Remme JH, Blas E, Chitsulo L, Desjeux PM, Engers HD, Kanyok TP, et al. Strategic emphases for tropical diseases research: a TDR perspective. Trends Microbiol. 2002 Oct;10(10):435-40.
  6. Doherty JA. Establishing priorities for national communicable disease surveillance. Can J Infect Dis. 2000;11(1):21-4.
  7. Carter AO; Centers for Disease Control (CDC). Setting priorities: the Canadian experience in communicable disease surveillance. MMWR Morb Mortal Wkly Rep. 1992;41 Suppl:79-84.
  8. World Health Organization (WHO). Setting priorities in communicable disease surveillance. Geneva: WHO; 2006. WHO/CDS/EPR/LYO/2006.3. Available from: http://www.who.int/csr/resources/publications/surveillance/WHO_CDS_EPR_LYO_2006_3.pdf
  9. Ghaffar A. Setting research priorities by applying the combined approach matrix. Indian J Med Res. 2009;129(4):368-75.
  10. Krause G, Alpers K, Benzler J, Bremer V, Claus H, Haas W, et al. Prioritising infectious diseases in Germany [Poster]. International Meeting on Emerging Diseases and Surveillance; 23-25 Feb 2007, Vienna, Austria.
  11. Krause G, Alpers K, Benzler J, Bremer V, Claus H, Haas W, et al. Standardised Delphi Method for prioritising foodborne and zoonotic diseases in Germany [Poster]. Priority Setting of Foodborne and Zoonotic Pathogens; 19-21 Jul 2006, Berlin, Germany.
  12. Krause G. Prioritization of infectious diseases by public health criteria. 8th EMBO/EMBL Joint Conference on Science and Society; 2-3 Nov 2007; Heidelberg, Germany.
  13. Mayer K-M. Parade der Keime - Deutschlands Seuchenexperten reihen erstmal Infektionserreger nach deren Gefährlichkeit. Focus. 2007;10:44.
  14. Krause G. Zur Priorisierung von Infektionskrankheiten im ÖGD [The prioritisation of infectious diseases for public health]. Epidemiologisches Bulletin. 2008;40:343-7. German. Available from: http://www.rki.de/nn_1197052/DE/Content/Infekt/EpidBull/Archiv/2008/40__08,templateId=raw,property=publicationFile.pdf/40_08.pdf
  15. Krause G. Prioritisation of infectious diseases in public health – call for comments. Euro Surveill. 2008;13(40):pii=18996. Available from: http://www.eurosurveillance.org/ViewArticle.aspx?ArticleId=18996
  16. Krause G; Working Group on Prioritization at Robert Koch Institute. How can infectious diseases be prioritized in public health? A standardized prioritization scheme for discussion. EMBO Rep. 2008;9 Suppl 1:S22-7.
  17. Hopkins RB, Campbell K, O'Reilly D, Tarride JE, Bowen J, Blackhouse G, et al. Managing multiple projects: a literature review of setting priorities and a pilot survey of healthcare researchers in an academic setting. Perspect Health Inf Manag. 2007;4:4.
  18. Smith N, Mitton C, Peacock S, Cornelissen E, MacLeod S. Identifying research priorities for health care priority setting: a collaborative effort between managers and researchers. BMC Health Serv Res. 2009;9:165.
  19. Lomas J, Fulop N, Gagnon D, Allen P. On being a good listener: setting priorities for applied health services research. Milbank Q. 2003;81(3):363-88.
  20. Adler M, Ziglio E, editors. Gazing Into the oracle: the Delphi method and its application to social policy and public health. London: Jessica Kingsley Publishers; 1996.
  21. Linstone HA, Turoff M, editors. The Delphi method: techniques and applications. Reading: Addison-Wesley; 1975.
  22. Sibbald SL, Singer PA, Upshur R, Martin DK. Priority setting: what constitutes success? A conceptual framework for successful priority setting. BMC Health Serv Res. 2009;9:43.
  23. Mitton C, Donaldson C. Setting priorities in Canadian regional health authorities: a survey of key decision makers. Health Policy. 2002;60(1):39-58.

