Hotspots in research on the measurement of medical students’ clinical competence from 2012-2016 based on co-word analysis

Abstract

Background

This study aimed to identify hotspots in research on the measurement of medical students’ clinical competence from 2012 to 2016.

Methods

The authors retrieved literature published between 2012 and 2016 from PubMed using selected medical subject headings (MeSH) terms. They used BibExcel software to extract high-frequency MeSH terms and identified hotspots through co-word analysis and cluster analysis.

Results

The authors retrieved 588 related articles and identified 31 high-frequency MeSH terms, which cluster analysis divided into 6 groups reflecting the hotspots of the domain.

Conclusions

This study identified 6 research hotspots in the domain: studies on influencing factors and perception evaluation, the improvement and development of measurement tools, feedback on measurement, measurement approaches based on computer simulation, the measurement of students in specific learning phases, and the measurement of students’ communication ability. All of these topics provide useful information for educators and researchers to conduct further in-depth studies.

Background

For medical students, as future doctors, clinical competence is one of the most important capabilities to acquire. However, how to measure this capability remains a key concern, and this study domain is currently a major theme in the field of medical education [1,2,3]. Moreover, research on the measurement of medical students’ clinical competence covers many different topics and themes, including the testing of measurement approaches, the measurement of clerkship students’ clinical ability and students’ perceptions of clinical competence evaluations [4].

Although previous systematic reviews have summarized progress in research on the measurement of clinical competence, each has focused on only one or a few aspects of this study domain [5,6,7]. The published literature therefore lacks a comprehensive summary of research on the measurement of medical students’ clinical competence. We sought to determine which research themes were published most frequently, i.e., the research hotspots, with the aim of facilitating future studies and education.

As a common approach in bibliometrics, co-word analysis was first developed by French researchers [8]. It is a type of quantitative analysis that reflects the content of publications by analyzing the frequencies and relationships of co-occurring words. Its main principles are as follows. Keywords reflect the main content of articles; if two keywords expressing particular research subjects appear in the same article, there may be an intrinsic relationship between them. The more frequently two keywords occur in the same publications, the closer their relationship and the more popular the research subjects they reflect. Based on co-word frequencies, keywords can be classified into groups by statistical methods, such as cluster analysis or factor analysis, and different keyword groups represent particular research hotspots. This method is now widely applied to map the knowledge structure of research fields [9], identify research topics within a domain, and explore the characteristics and evolution of specific subjects. It has been used in many fields, including medical science, environmental science, and biology.

Accordingly, using a co-word analysis of the existing literature, we aimed to identify hot topics in the research on the measurement of medical students’ clinical competence and to determine crucial sub-domains of this field of study to provide useful evidence for medical educators and researchers.

Methods

Data source

As a professional database, PubMed contains the world’s largest body of biomedical literature. In addition, PubMed indexes most of its records with medical subject headings (MeSH), controlled vocabulary terms that reflect the content of the literature more accurately than author keywords. Consequently, the results of a co-word analysis based on MeSH terms are generally more reliable and reasonable than those obtained using keywords. We therefore chose PubMed as the data source for this study.

We performed a MeSH search because of its high accuracy. The retrieval strategy was as follows: ((((((((Clinical skill) OR Clinical capacity) OR Clinical performance) OR Clinical ability)) OR “Clinical Competence”[Mesh])) AND “Students, Medical”[Mesh]) AND ((((education* assess*) OR education* evaluat*)) OR “Educational Measurement”[Mesh:NoExp]). We restricted the publication date to 2012/01/01 through 2016/12/31 and performed the retrieval on January 6, 2017.
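
For readers who want to reproduce the retrieval step in script form, the sketch below uses Biopython’s Entrez module. This is an assumption made for illustration: the authors ran the query through the PubMed web interface and exported the results as a Medline file, and the email address and output filename here are placeholders.

```python
# Sketch: reproducing the PubMed retrieval with Biopython's Entrez module
# (illustrative equivalent of the web-interface search, not the authors' code).
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address (placeholder)

QUERY = (
    '((((((((Clinical skill) OR Clinical capacity) OR Clinical performance) '
    'OR Clinical ability)) OR "Clinical Competence"[Mesh])) '
    'AND "Students, Medical"[Mesh]) AND ((((education* assess*) '
    'OR education* evaluat*)) OR "Educational Measurement"[Mesh:NoExp])'
)

# Search, restricted to publication dates 2012/01/01-2016/12/31.
handle = Entrez.esearch(db="pubmed", term=QUERY, datetype="pdat",
                        mindate="2012/01/01", maxdate="2016/12/31",
                        retmax=2000)
ids = Entrez.read(handle)["IdList"]

# Fetch the matching records in Medline format for later MeSH extraction.
medline_text = Entrez.efetch(db="pubmed", id=",".join(ids),
                             rettype="medline", retmode="text").read()
with open("clinical_competence.medline", "w") as f:
    f.write(medline_text)
```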

Criteria for literature selection

We selected literature that met both of the following criteria: (1) the study was an original article, and (2) the article related to the topic of “measurement of medical students’ clinical competence.” We excluded literature if either of the following was true: (1) the study was not an original article, e.g., it was a review or a letter, or (2) the main content of the study did not concern the measurement of medical students’ clinical competence.

We exported the search results from PubMed as a Medline file. Two researchers then independently screened the literature by title, abstract and, when needed, full text. Finally, we obtained a new Medline file containing the articles that met our study criteria.

Data analysis

We deleted specific terms representing demographic and geographic characteristics, such as “infants” and “Asia”, that were not related to this research, and extracted the remaining MeSH terms from the new Medline file. We then created a TXT file in which the MeSH terms were arranged by article and imported it into the BibExcel software (developed by Olle Persson), which counted the frequency of each MeSH term. We identified high-frequency MeSH terms based on the Donohue formula, \( T=\frac{-1+\sqrt{1+8I_1}}{2} \), where \( I_1 \) represents the number of MeSH terms indexed only once in the articles. If the frequency of a MeSH term was greater than T, that term was considered a high-frequency MeSH term. We used these high-frequency MeSH terms to generate a co-word matrix in BibExcel and transformed it into a correlation matrix by calculating the Ochiai coefficient, \( \mathrm{Ochiai}=\frac{N_{ab}}{\sqrt{N_a\times N_b}} \), where \( N_{ab} \) represents the number of articles in which term A and term B were both indexed, and \( N_a \) and \( N_b \) represent the frequencies of terms A and B, respectively. Finally, we imported the correlation matrix into SPSS version 18.0 for Windows (SPSS Inc., Chicago, Illinois) to perform a cluster analysis. The resulting co-word cluster diagram of the high-frequency MeSH terms indicated the research hotspots.
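
The following Python sketch mirrors the BibExcel steps described above under stated assumptions: `articles` is a list of per-article MeSH term lists already parsed from the Medline file, and all function names are illustrative rather than the authors’ code.

```python
# Sketch: Donohue threshold and Ochiai correlation matrix in Python.
# `articles` is assumed to be a list of per-article MeSH term lists
# parsed from the Medline file (e.g., with Bio.Medline).
import math
from collections import Counter
from itertools import combinations

import numpy as np

def high_frequency_terms(articles):
    """Return term frequencies and the terms exceeding the Donohue cut-off."""
    freq = Counter(t for doc in articles for t in set(doc))
    i1 = sum(1 for c in freq.values() if c == 1)  # terms indexed only once
    t = (-1 + math.sqrt(1 + 8 * i1)) / 2          # Donohue formula
    return freq, sorted(term for term, c in freq.items() if c > t)

def ochiai_matrix(articles, terms):
    """Build the co-word matrix and rescale it with the Ochiai coefficient."""
    freq = Counter(t for doc in articles for t in set(doc) if t in terms)
    co = Counter()
    for doc in articles:
        for a, b in combinations(sorted(set(doc) & set(terms)), 2):
            co[(a, b)] += 1                        # co-occurrence counts
    idx = {t: i for i, t in enumerate(terms)}
    sim = np.eye(len(terms)))if False else np.eye(len(terms))
    for (a, b), n_ab in co.items():
        s = n_ab / math.sqrt(freq[a] * freq[b])    # Ochiai coefficient
        sim[idx[a], idx[b]] = sim[idx[b], idx[a]] = s
    return sim
```

The similarity matrix returned by `ochiai_matrix` plays the role of the correlation matrix the authors imported into SPSS.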

Results

Selected literature

As shown in Fig. 1, 913 studies were retrieved from PubMed, of which 588 met the literature selection criteria and were included in this analysis.

Fig. 1 Flow diagram illustrating the literature search

High-frequency MeSH terms

From the 588 original articles, 833 distinct MeSH terms were indexed, with a total of 5625 occurrences; in other words, each MeSH term appeared an average of 6.75 times in the retrieved articles. According to the Donohue formula, T was approximately 29, so MeSH terms with a frequency greater than 29 were classified as high-frequency terms. Ultimately, 31 high-frequency MeSH terms were extracted, as shown in Table 1. “Students, Medical” (508, 9.03%), “Clinical Competence” (438, 7.79%), and “Educational Measurement” (362, 6.44%) were the top three MeSH terms.
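
The paper does not report \( I_1 \), but the Donohue formula can be inverted to recover it; a cut-off of exactly 29 corresponds to 435 once-indexed MeSH terms, so \( I_1 \) must have been close to that value:

\[
T=\frac{-1+\sqrt{1+8I_1}}{2}
\;\Longleftrightarrow\;
I_1=\frac{T(T+1)}{2},
\qquad
T=29 \;\Rightarrow\; I_1=\frac{29\times 30}{2}=435 .
\]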

Table 1 Descriptions of the high-frequency MeSH terms

As shown in Table 1, the 31 high-frequency MeSH terms mainly covered five layers of clinical competence measurement: measurement subjects (2 terms), measurement objects (5 terms), measurement contents (5 terms), measurement methods or tools (4 terms), and activities of medical education (7 terms). The first four layers were directly associated with clinical competence measurement, as they were key parts of the measurement process. Although the fifth layer was not directly part of the measurement process, medical education activities had an important effect on measurement.

Co-word cluster of high-frequency MeSH terms

Based on the frequency of co-words, the high-frequency MeSH terms were subjected to a cluster analysis; the results are shown in Fig. 2. Cutting the dendrogram at a cluster distance of 22, we divided the 31 MeSH terms into 6 groups, which revealed the hotspots of this study domain.

Fig. 2 Cluster diagram of high-frequency MeSH terms
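
As a rough scriptable analogue of the SPSS step, the sketch below clusters the Ochiai similarity matrix from the Methods sketch. Note that SPSS dendrograms rescale distances to a 0–25 range (hence the cut at 22), so with scipy it is simpler to request 6 clusters directly; average linkage is chosen for illustration, since the paper does not state the linkage method, and all names are illustrative.

```python
# Sketch: hierarchical clustering of the Ochiai similarity matrix
# (SPSS rescales dendrogram distances to 0-25, hence the cut at 22;
# here we request 6 clusters directly instead).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_terms(sim, terms, k=6):
    dist = 1.0 - sim                    # similarity -> dissimilarity
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(z, t=k, criterion="maxclust")
    groups = {}
    for term, g in zip(terms, labels):
        groups.setdefault(g, []).append(term)
    return z, groups
```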

The first group contained the most MeSH terms (11), including terms for measurement objects (“students, medical” and “education, medical, undergraduate”), measurement content (“clinical competence” and “health knowledge, attitudes, practice”), and activities of medical education (“education, medical” and “curriculum”). These terms occurred in the same publications, implying a link among students, clinical competence and curricula. The curriculum that students experience is an essential influencing factor that may lead to different assessment results. Hence, researchers took curriculum-related factors into account [10, 11] when examining the connection between curricula and the results of clinical competence assessments.

The second group included terms for measurement methods (“program evaluation”) and measurement contents (“attitude of health personnel”). These terms appeared in publications together with the term “pilot projects”, revealing that medical educators were exploring and implementing new measurement methods [12] and testing their effects, validity and feasibility [13]. The third co-word cluster contained terms for medical education activities (“teaching”, “learning”, and “feedback”). These three MeSH terms occurred together in articles on clinical competence measurement, indicating that both educators and students were focused on this measurement. The role of feedback was recognized, and its benefits guided subsequent clinical training and evaluation. Thus, in this field, researchers mainly studied teachers’ and students’ perceptions and satisfaction [14, 15] to improve educational measurement.

The fourth co-word cluster contained terms for measurement methods (“physical examination”) and educational activities (“problem-based learning” and “computer simulation”). Computer simulation was gradually being applied to the processes of learning and teaching. Thus, researchers explored whether simulation training would improve clinical skills [16] and whether the effect of training with computer simulation was statistically significant.

The fifth co-word cluster contained terms for measurement subjects (“faculty, medical”) and measurement objects (“internship and residency”, “education, medical, graduate” and “clinical clerkship”). The co-occurrence of these four high-frequency terms reflected that medical educators paid particular attention to the important steps in the learning process and wanted to determine students’ clinical performance in these phases [17, 18].

The sixth co-word cluster contained terms for measurement contents (“communication”, “physician-patient relations”) and methods (“patient simulation”). These three high-frequency terms occurred together in articles, revealing that researchers were interested in the relationship between patients and medical students, especially the assessment of communication skills through patient simulation [19]. From the above results, six hotspots of clinical competence measurement emerged: (1) the impact of curricula on clinical competence, (2) the development of new methods of clinical competence measurement, (3) students’ feedback on clinical competence measurement, (4) the effect of computer simulation training on clinical competence, (5) students’ clinical performance at important steps of learning, and (6) communication skills measured by patient simulation.

Discussion

Potential evidence of findings

According to the co-word analysis, we identified 6 hotspots of medical students’ clinical competence measurement. In light of these 6 hot topics, we found several potential features.

(1) Clinical competence encompasses broad goals: knowledge, skills and attitudes. Because students’ levels of training vary, measurements differ greatly across training stages. Junior students are tested by examinations to determine whether they have acquired knowledge, while the comprehensive capabilities of resident doctors are assessed by patient simulation. Hence, it is important for educators to choose appropriate measurement objectives and methods and to study the feasibility of each method.

(2) Advanced technology can benefit medical education and overcome the deficiencies of traditional measurement tools. The combination of educational and technical assessments is likely to become a future trend. With a myriad of innovations in the measurement process, researchers should not only attend to validity and reliability but also begin creating standards. Ultimately, “pilot projects” could be applied worldwide to promote medical education.

(3) Training is a crucial factor affecting clinical competence. Training takes multiple forms, including curricula, bedside teaching, and case discussions. However, researchers paid much more attention to curricula than to other forms of training, perhaps because curricula are the major form of training and their impact on clinical competence is easy to identify, for example, by examination. In fact, the level of clinical competence results from many factors. Accordingly, researchers need to study further how to measure different impact factors and how to improve students’ clinical competence.

Applications of co-word analysis

In light of the increasing importance of clinical competence measurement over the past decades, we decided to identify hot topics in research in this domain using a co-word analysis. Our study represented the first detailed analysis of global clinical competence measurement research hotspots from 2012 to 2016.

Methodologically, hotspots provide useful information for educators and researchers. For educators, understanding hot research domains allows them to account for clinical competence measurement research when planning budgets and to discuss whether to adopt new measurement tools to assess students’ clinical competence. For researchers, hotspots can guide the selection of research themes, point toward new research areas, and support further studies built on these findings. For instance, they could study the conditions and range of application of new measurement methods.

Scientific outputs, such as publications, contain vast amounts of information and clues about research topics, and this information can be used to study research activity. As representations of publication content, keywords or MeSH terms can be used to explore the characteristics of research. However, several guidelines apply to this method. First, a co-word analysis requires a large number of high-frequency co-words, so a minimum number of published articles, at least a few hundred, is needed. Second, the relationships between co-words must be analyzed with statistical software (such as SPSS) or bibliometric tools (for example, CiteSpace and VOSviewer). In particular, with the rapid development of knowledge visualization, latent information in publications can be demonstrated visually in figures. Third, beyond identifying hotspots, co-word analysis can be used to study the structure and evolution of research in a particular field [20] and to explore new research subjects.
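
As a small illustration of the visualization point, the dendrogram from the clustering sketch above can be drawn with scipy and matplotlib (a sketch only; purpose-built tools such as CiteSpace and VOSviewer offer richer views, and `z` and `terms` are the illustrative names from the earlier sketches):

```python
# Sketch: drawing the co-word dendrogram with scipy/matplotlib.
# `z` and `terms` come from the clustering sketch in the Results section.
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

fig, ax = plt.subplots(figsize=(6, 8))
dendrogram(z, labels=terms, orientation="right", ax=ax)
ax.set_xlabel("cluster distance (1 - Ochiai coefficient)")
fig.tight_layout()
fig.savefig("coword_dendrogram.png", dpi=200)
```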

Several limitations of our study should be noted. MeSH terms are standardized words used to index studies, but not all concepts, especially emerging ones, have corresponding MeSH terms; this discrepancy may have affected the results of the co-word analysis to some extent. We also did not divide the MeSH terms by calendar year; therefore, we could not explore the annual evolution of research hotspots.

Conclusions

Clinical competence measurement is systematic and sophisticated. We found six research hotspots covering different aspects of the measurement process, and these findings should be helpful for educators and researchers.

Abbreviations

MeSH:

Medical subject headings

References

  1. Lee K, Whelan JS, Tannery NH, et al. 50 years of publication in the field of medical education. Med Teach. 2013;35:591–8.

  2. Regehr G. Trends in medical education research. Acad Med. 2004;79:939–47.

  3. Rotgans JI. The themes, institutions, and people of medical education research 1988–2010: content analysis of abstracts from six journals. Adv Health Sci Educ Theory Pract. 2012;17:515–27.

  4. Watling CJ, Lingard L. Toward meaningful evaluation of medical trainees: the influence of participants’ perceptions of the process. Adv Health Sci Educ Theory Pract. 2012;17:183–94.

  5. Hrynchak P, Takahashi SG, Nayer M. Key-feature questions for assessment of clinical reasoning: a literature review. Med Educ. 2014;48:870–83.

  6. Mauksch LB, Dugdale DC, Dodson S, et al. Relationship, communication, and efficiency in the medical encounter: creating a clinical model from a literature review. Arch Intern Med. 2008;168:1387–95.

  7. Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ. 2011;45:1181–9.

  8. He Q. Knowledge discovery through co-word analysis. Library Trends. 1999;48:131–59.

  9. Callon M, Law J, Rip A. Mapping the dynamics of science and technology: sociology of science in the real world. London: The Macmillan Press; 1986.

  10. Wirth K, Malone B, Turner C, et al. A structured teaching curriculum for medical students improves their performance on the National Board of Medical Examiners shelf examination in surgery. Am J Surg. 2015;209:765–70.

  11. Karpa KD, Hom LL, Huffman P, et al. Medication safety curriculum: enhancing skills and changing behaviors. BMC Med Educ. 2015;15:234–42.

  12. Palmer RT, Biagioli FE, Mujcic J, et al. The feasibility and acceptability of administering a telemedicine objective structured clinical exam as a solution for providing equivalent education to remote and rural learners. Rural Remote Health. 2015;15:3399–408.

  13. Kelly M, O'Flynn S, McLachlan J, et al. The clinical conscientiousness index: a valid tool for exploring professionalism in the clinical undergraduate setting. Acad Med. 2012;87:1218–24.

  14. Bösner S, Pickert J, Stibane T. Teaching differential diagnosis in primary care using an inverted classroom approach: student satisfaction and gain in skills and knowledge. BMC Med Educ. 2015;15:63–9.

  15. Thomas I. Student views of stressful simulated ward rounds. Clin Teach. 2015;12:346–52.

  16. Harrop J, Rezai AR, Hoh DJ, et al. Neurosurgical training with a novel cervical spine simulator: posterior foraminotomy and laminectomy. Neurosurgery. 2013;73(Suppl 1):94–9.

  17. Peterson LN, Rusticus SA, Ross LP. Comparability of the National Board of Medical Examiners comprehensive clinical science examination and a set of five clinical science subject examinations. Acad Med. 2015;90:684–90.

  18. Teherani A, Irby DM, Loeser H. Outcomes of different clerkship models: longitudinal integrated, hybrid, and block. Acad Med. 2013;88:35–43.

  19. Uhm S, Lee GH, Jin JK, et al. Impact of tailored feedback in assessment of communication skills for medical students. Med Educ Online. 2015;20:28453–9.

  20. Yao Q, Chen K, Yao L, et al. Scientometric trends and knowledge maps of global health systems research. Health Res Policy Syst. 2014;12:26.

Acknowledgments

Not applicable.

Funding

Not applicable.

Availability of data and materials

All data are available from the first author of the paper on reasonable request.

Author information

Contributions

XC and SYZ conceived the study. XC, XZ, and HP designed the study. LZL and CJY collected and analyzed the data. XC, XZ, LZL and CJY wrote the manuscript. XC, HP and SYZ revised the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Hui Pan or Shuyang Zhang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Chang, X., Zhou, X., Luo, L. et al. Hotspots in research on the measurement of medical students’ clinical competence from 2012-2016 based on co-word analysis. BMC Med Educ 17, 162 (2017). https://doi.org/10.1186/s12909-017-0999-8
