
The use of factor analysis and abductive inference to explore students’ and practitioners’ perspectives of feedback: divergent or congruent understanding?

Abstract

Background

The importance of feedback in workplace-based settings cannot be overstated. Existing approaches that evaluate feedback reflect either the sender’s or the receiver’s viewpoint in isolation from the other. This study investigated prevailing student and practitioner views of feedback resulting from the development and testing of a survey about feedback.

Method

This study used a cross-sectional design, incorporating expert consultation and factor analysis of surveys. Fifty-two items based on attributes of effective feedback identified in current research were developed and reviewed through expert consultation. Surveys developed from the items were completed by students (n = 209) and practitioners (n = 145). The juxtaposition of items based on students’ and practitioners’ responses to the surveys was examined through exploratory factor analysis.

Results

Separate student and practitioner surveys resulted. Each survey contained 23 items that clustered into factors. The item statements differed across the practitioner and student groups; only nine items were shared across the factors identified for both groups. The resulting factors represented different notions of feedback: practitioners had a process-oriented focus, whereas students had an outcome focus.

Conclusion

While students and practitioners view feedback differently, this does not necessarily mean their views are incongruous.


Background

Feedback is a core component of the educational process [1] in both academic and workplace-based settings. The importance of feedback in workplace-based settings cannot be overstated. Workplace-based settings provide learners with the opportunity to acquire discipline-specific skills and knowledge as well as to develop linguistic and discourse patterns particular to their profession [2]. As such, effective feedback on workplace-based performance is a key element in helping the learner to develop capacity, to evaluate their performance, and to change behaviours [3, 4]. However, determining the effectiveness of feedback can be challenging as the quality of feedback is variable [5], with learners commonly reporting that they receive little feedback [1], despite the abundance of literature focused on feedback.

Current conceptualisations describe feedback as dialogic [6, 7]. That is, feedback “…involves relationships in which participants think and reason together” [7] (p.286). An assumption also exists that learners (e.g. students) and learning partners (i.e. someone who supports a learner in the feedback process, for example practitioners) share a common understanding of the term ‘feedback’ [3]. If learners and learning partners do not share the same understanding of feedback, then the commonplace approach of examining one-sided viewpoints of effective feedback must be questioned. Investigating feedback by drawing on empirical findings could help substantiate the conceptual understanding of feedback proffered in the extant literature.

Literature exploring approaches that evaluate the prevailing discourse of feedback is in its infancy. Halman et al. [5] developed and described validity evidence for the Direct Observation of Clinical Skills Feedback Scale (DOCS-FBS). This instrument is specifically intended to rate the quality of verbal feedback provided by assessors in the clinical environment and was tested through participants using the scale to rate videotaped feedback interactions. Bing-You et al. [1] present validity evidence for two Feedback in Medical Education (FEEDME) instruments for use in the clinical setting. The FEEDME-Culture instrument is completed by the learner and was developed to assess medical students’ and residents’ perceptions of the feedback they receive. The FEEDME-Provider instrument is a companion instrument also completed by the learner; it aims to ascertain the medical students’ and residents’ perceptions of how the faculty member provided feedback.

Both the DOCS-FBS and the FEEDME instruments focus on the feedback received in the clinical setting: the DOCS-FBS on a one-off feedback interaction, and the FEEDME on feedback encounters during a clinical rotation. Neither, however, provides insight into the learning partner’s perception of the feedback quality in tandem with the learner’s perception. There is therefore value in developing instruments that explore effective feedback from the views of both the learner and the learning partner, and in exploring the meaning of any eventuating structural analysis.

Aim

This study explored prevailing student and practitioner perspectives of feedback through instrument validation and structural analysis of the Quality Feedback Inventory (QFI) and ‘sense-making’ of the resultant factors.

Ethical considerations

Approval to conduct this study was obtained from the Human Research Ethics Committees of the university (Reference number: 2018/341) and health care service (HREC/18/QPAH/93) where the study was conducted. Participant information outlined the purpose and anticipated benefits of this study. Participation was voluntary, with the return of completed or partially completed surveys taken as an indication of respondents’ consent to participate. Surveys were anonymous and therefore non-identifiable to the research team.

Method

This study used a cross-sectional design, incorporating a focus group technique and factor analysis of surveys, to compare and contrast student and practitioner views of feedback. It draws on empirical data exploring the value of feedback. The study involved two stages: stage one, the generation of items for a list that explores views of feedback; and stage two, data collection and exploratory factor analysis of the list of items (see Fig. 1).

Fig. 1 Flow chart of inventory development process and product

Stage one – item generation

Stage one was conducted throughout June 2018. The use of research findings as a source of items has been identified as an effective approach in item generation [8]. Therefore, a range of simple statements based on recent scoping review findings that identified 11 key attributes of feedback [9] (see Tables 1 and 2) and existing feedback instruments (e.g. FEEDME-Culture, DOCS-FBS) were crafted by the research team. Constructed statements were discussed and verified by the research team at a face-to-face meeting. Inclusion of statements was achieved through majority consensus. Generation of a large item pool relevant to the concept of interest is preferable and strengthens the internal-consistency reliability (and therefore validity) of the emerging scale [10]. An initial 52 items were developed and organised into two lists of items that provided individualised item language for students and practitioners. For example, the item regarding evaluating practice became ‘I was encouraged to evaluate my own practice’ for the student list of items and ‘I encouraged the student to evaluate their practice’ for the practitioner list of items.

Table 1 Initial 46 practitioner items and associated feedback attributes
Table 2 Initial 50 student items and associated feedback attributes

Seeking expert opinion is a useful approach to determine whether ideas or constructs of interest make sense [8] and can be conducted after item development to provide evaluative judgement “regarding the content representativeness (relevance, accuracy, and completeness) of the selected items…” [11] (p.63). A group discussion was conducted with four experienced nurses who support student learning in clinical learning environments and are experienced in giving and receiving feedback. An explanation of the underpinning feedback attribute for each item was provided to the group. Members of the expert consultation group reviewed each item to establish its clarity and relevance to the underlying attribute. This process facilitated a discussion on item interpretation through cognitive probes such as ‘What do you understand by item X?’, ‘What does item X mean to you?’ or ‘How else could you phrase this item?’. Additionally, the expert group participants checked the wording of each item to ensure it reflected common phrasing in the workplace. Changes to the item wording were made immediately and redundant items removed. All changes were verified by the group to ensure consensus was achieved. This resulted in a preliminary pool of 50 items in the student survey and 46 items in the practitioner survey.

Stage two – collection and analysis of surveys

Procedure

Stage two of the study involved distribution of the two surveys and was conducted from July to November 2018 at two teaching healthcare facilities in South East Queensland, Australia. Participants were students and practitioners. The student group were third-year nursing students on a four-week clinical placement. The practitioner group were nurses positioned to assist student learning in the clinical learning environment and therefore expected to be involved in providing feedback. All participants were invited to complete the surveys towards the end of the four-week clinical placement period during which students and practitioners were involved with multiple feedback encounters. Students and practitioners scored items on a five-point Likert scale (1 = never, 2 = rarely, 3 = sometimes, 4 = often, and 5 = always).

Analysis

An independent research assistant entered all data into an electronic spreadsheet prior to analysis, so the data were non-identifiable to the researchers. Data were transferred into the International Business Machines Corporation Statistical Package for the Social Sciences (IBM-SPSS version 25 for Windows); screening for errors and other anomalous data was undertaken prior to analysis. Cases were removed from analysis if ≥50% of responses were incomplete. We examined skewness and kurtosis at the item level to determine whether assumptions of normality were met [12].
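Although the study’s screening was performed in SPSS, the same checks can be reproduced in open tooling. The following Python sketch illustrates the case-exclusion rule and the item-level skewness and kurtosis checks described above; the file name and DataFrame layout (`student_survey.csv`, one column per Likert item) are illustrative assumptions, not artefacts of the study.

```python
import pandas as pd

# Illustrative only: respondents in rows, one column per survey item,
# values 1-5 on the Likert scale (never ... always).
responses = pd.read_csv("student_survey.csv")

# Exclude cases where >= 50% of responses are incomplete.
max_missing = len(responses.columns) * 0.5
screened = responses[responses.isna().sum(axis=1) < max_missing]

# Item-level skewness and kurtosis to check assumptions of normality.
normality = pd.DataFrame({
    "skewness": screened.skew(),
    "kurtosis": screened.kurt(),
})
print(normality.round(2))
```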

An initial principal component analysis (PCA) and a subsequent exploratory factor analysis (EFA) were performed (using principal axis factoring and oblique rotation) to determine which items correlated to form a ‘factor’ [12]. Factorability of the list of items was determined by examining the Kaiser-Meyer-Olkin (KMO) test of sampling adequacy and Bartlett’s test of sphericity. We also used the eigenvalue-greater-than-one criterion [13, 14] and the scree test to determine the number of factors.
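As a rough open-source equivalent of this SPSS workflow, the factorability checks and factor-count heuristics could be run with the Python `factor_analyzer` package (an assumption about tooling; the study itself used IBM-SPSS). Here `screened` is the item-response DataFrame from the screening sketch above.

```python
import matplotlib.pyplot as plt
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = screened.dropna()  # list-wise exclusion of incomplete cases

# Factorability: Bartlett's test of sphericity and the KMO measure.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}; overall KMO = {kmo_total:.2f}")

# Eigenvalues of the unrotated solution for the Kaiser criterion
# (eigenvalues > 1) and the scree test.
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
print("Factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1, linestyle="--")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```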

Results

Participants

Surveys were completed by 425 participants: nursing students (n = 239), and practitioners (n = 186). Females represented 85% of participants, males represented 12.5%, and non-binary 0.5%, with 2% of surveys not having gender recorded. The mean age of nursing students was 24 years (SD = 4.9) and of practitioners 36 years (SD = 11.8). Practitioners reported an average of 6.5 years (SD 6.8) experience supporting students in the workplace.

Construct validity

Where possible, the surveys were designed with comparable items across student and practitioner groups. An initial PCA was undertaken to appraise factorability of the correlation matrix and to establish which components exist within the data [13]. Cases were excluded list-wise to ensure a valid case on every variable for every case [13,14,15,16]. PCA was run separately on the 46-item practitioner (n = 145) and 50-item student (n = 209) surveys using orthogonal (varimax) and oblique (direct oblimin) rotation. There was minimal difference between rotation strategies. Direct oblimin rotation was selected because the items in the lists focused on a common construct (i.e. feedback), and one would logically expect some degree of correlation between factors.

The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was > .9 for items in both student and practitioner surveys. The KMO values for individual items (anti-image matrix) were > .77 – both above the accepted minimum of .6 – also supporting factorability of the items in the surveys [13]. Bartlett’s test of sphericity was significant for the student (χ2(253) = 3714.45, p < .001) and practitioner items (χ2(253) = 1656.93, p < .001). Scatterplots confirmed linearity of data. Interrogation of individual matrices identified differences between the correlation matrix of student data and practitioner data, with item correlations notably lower and more diffuse in the practitioner group. As such, we decided to analyse the groups separately.

Items were removed due to low loadings (i.e. < .30) and where an item ‘cross-loaded’ across two or more factors with a difference of approximately .20 or less, in both the student and practitioner analyses [12,13,14]. This process resulted in a 23-item student inventory (QFI-S) with three components and a 23-item practitioner inventory (QFI-P) with four components for subsequent EFA. Table 1 presents the initial and excluded items for the practitioner inventory, and Table 2 presents the initial and excluded items for the student inventory.
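A minimal sketch of how the retention rules above (loadings below .30 dropped; cross-loading differences of roughly .20 or less flagged) could be applied to a rotated loading matrix. The function and variable names are illustrative; the thresholds are those reported in the text.

```python
import numpy as np
import pandas as pd

def flag_items(loadings: pd.DataFrame,
               min_loading: float = 0.30,
               cross_gap: float = 0.20) -> pd.DataFrame:
    """Flag items with no salient loading or with ambiguous cross-loadings.

    `loadings` is an items x factors matrix of rotated loadings.
    """
    abs_load = np.sort(loadings.abs().values, axis=1)[:, ::-1]  # descending per item
    highest = abs_load[:, 0]
    second = abs_load[:, 1] if loadings.shape[1] > 1 else np.zeros_like(highest)
    return pd.DataFrame({
        "highest_loading": highest.round(2),
        "low_loading": highest < min_loading,              # loads on no factor
        "cross_loading": (highest - second) <= cross_gap,  # ambiguous item
    }, index=loadings.index)

# Usage (illustrative): wrap the loadings of a fitted FactorAnalyzer object
# in a DataFrame indexed by item name, then inspect the flags.
# loadings = pd.DataFrame(fa.loadings_, index=items.columns)
# print(flag_items(loadings))
```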

For the final stage, principal axis factoring was selected as the method for EFA because assumptions of normality were violated [16]. As with the PCA, a direct oblimin rotation strategy was used. Examination of the scree plot for the practitioner inventory suggested between four and six factors. Using Kaiser’s criterion (retain factors with eigenvalues > 1), four factors were retained, explaining 58.8% of the variance. This process was repeated for the student inventory. The scree plot showed inflexions that would support three to five factors. Retaining factors with eigenvalues > 1 extracted three factors explaining 67.5% of the variance. The pattern matrix for each inventory is provided in Table 3 (practitioner) and Table 4 (student). No additional items were removed from either inventory based on this analysis.
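The final extraction could be approximated as follows, again assuming the `factor_analyzer` package and the complete-case item DataFrame `items` from the earlier sketches. The package’s ‘principal’ extraction is used here as a stand-in for SPSS’s principal axis factoring, with the direct oblimin rotation and factor counts reported above.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Four factors for the 23 retained practitioner items (three for the
# student items), extracted with an oblique (oblimin) rotation.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin", method="principal")
fa.fit(items)

pattern = pd.DataFrame(
    fa.loadings_,
    index=items.columns,
    columns=[f"Factor {i + 1}" for i in range(4)],
)
ss_loadings, prop_var, cum_var = fa.get_factor_variance()
print(pattern.round(2))
print("Cumulative variance explained:", cum_var.round(3))
```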

Table 3 Pattern matrix and communalities (h2) for practitioner inventory
Table 4 Pattern matrix and communalities (h2) for student inventory

The four latent factors that emerged from the practitioner inventory were labelled: Collaborative preparation for feedback (eight items); Imparting feedback (three items); Environmental context for feedback (five items); and Learner-focused feedback (seven items). An explanation of each QFI-P factor is outlined in Table 5. In this inventory, item 10.2 (‘the feedback I shared was clear’) cross-loaded on factors three and four (refer to Table 3). Attempts to remove this item destabilised the pattern matrix; therefore, it was retained in factor three, which had the higher loading.

Table 5 Description of factors in the Quality Feedback Inventory for students (QFI-S) and the Quality Feedback Inventory for practitioners (QFI-P)

In contrast, three latent factors surfaced for the student inventory. These were labelled: Individualised growth-oriented feedback (11 items); Environmental context for feedback (eight items); and Goal-oriented feedback (four items). A description of each of the QFI-S factors is presented in Table 5. Items 6.3 (‘I was encouraged to be involved in feedback conversations’) and 6.5 (‘I had the opportunity to ask questions’) in the student inventory cross-loaded on factors one and two and were retained in factor one due to their higher loadings (refer to Table 4). As with the practitioner inventory, item removal destabilised the pattern matrix.

Nine of the 23 items occurred in both the student and practitioner inventories (Tables 3 and 4) and were distributed differently across the factors, demonstrating participants’ congruent and divergent perspectives of feedback. For example, items that reflected the concept of ‘Environmental context for feedback’ occurred in both the student and practitioner inventories, although they were represented by different items. All key attributes of effective feedback were expressed in the items of the practitioner and student inventories except for one attribute, ‘Desired’ (i.e. feedback is welcomed and invited).

The Cronbach’s alpha coefficient (α) was .926 for the 23-item practitioner inventory and .958 for the 23-item student inventory, demonstrating good internal consistency. The Cronbach’s alpha coefficients for the factors in each inventory are provided in Table 3 and Table 4.
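Cronbach’s alpha can be computed directly from the item scores; below is a minimal, self-contained sketch (DataFrame and column names are illustrative), applied per inventory or per factor by passing the relevant subset of item columns.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items DataFrame of scores."""
    complete = items.dropna()
    k = complete.shape[1]
    item_variances = complete.var(axis=0, ddof=1).sum()
    total_variance = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# e.g. alpha for the 23 retained student items (column list is illustrative):
# print(round(cronbach_alpha(student_items[qfi_s_columns]), 3))
```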

Discussion

As a mathematical method, factor analysis not only seeks to reduce the number of variables (in this example from more than 45 items to 23) but also has the capacity to aid data interpretation, with each cluster of items representing a specific latent factor [17, 18]. However, beyond its use for structural analysis of a particular phenomenon (for example, views of feedback) and instrument validation, factor analysis has value in abductive inference [17]. Early work by Shank identifies that “good abductive reasoning [inference] leads neither to the ridiculous, nor to the obvious. Instead, it leads to areas where we need further understanding” [19] (p.7). Therefore, we posit that ‘sense-making’ of the resultant factors is the crucial next step in factor analysis, rather than solely reporting the results of participants’ responses.

Our results provide preliminary empirical evidence of validity for the student and practitioner inventories. Through psychometric analysis, this study identified clustered items regarding how feedback is viewed and understood by students and practitioners in clinical placements in Australia. The results indicate that the factors within the QFI-S and QFI-P (Table 3 and Table 4) fit the data well, providing evidence of feedback constructs aligned with current conceptualisations of feedback [5, 6, 9]. Subsequently, these inventories support exploration of shared feedback encounters between learners and learning partners instead of merely a one-sided determination of satisfaction with feedback [1, 5]. Importantly, while the items clustered differently across the student and practitioner groups, the items retained in both groups were representative of the empirically derived attributes of effective feedback. This verifies the importance of each of the attributes identified in the literature [9] and of the resulting QFI-P and QFI-S factors derived through psychometric analysis.

Furthermore, these findings reflect those of a study by Adcroft [3], who found students and academics have different perceptions of feedback, creating dissonance as each group offers divergent interpretations of feedback events. If we position ourselves with the assumption that we all share the same understanding of the term ‘feedback’, then this finding could be surprising. However, from a socio-constructive lens—where meaning is constructed based on our experiences of life and the world, and dialogue with others [20], coupled with each feedback encounter being unique to that situation and/or person—differences in item importance for each group are not unexpected. This difference in how each item is valued by each group can be seen not only in the small number of overlapping items, but also in the clustering and the collective concepts represented in the identified factors.

When we return to the purpose of feedback, namely to assist learners toward developing evaluative judgement (implicit within this rationale is that, in developing evaluative judgement, learners are better able to reach their required goals), it is arguably appropriate that learners view feedback through a lens focused on outcomes related to individual growth and goal attainment [3, 21]. In contrast, learning partners view feedback as organised around the concept of collaborative processes. As contemporary literature advocates increasing student engagement to examine, reflect, and form an evaluation [22], these parallel views may arguably coalesce well: learning partners assist the processes by which learners attain their goals.

Substantive interpretation of the statistical factors determined three factors in the QFI-S and four factors in the QFI-P. Items included in the QFI-S reveal a strong outcome focus; for example, ‘feedback helped me to know how to improve my practice’ (item 8.4) and ‘feedback focused on my knowledge’ (item 11.4). This outcome focus mirrors student perspectives on assessment, whether academic or workplace-based [23, 24], and further supports the position that students view feedback through an outcome lens compared with practitioners. Divergence between students and practitioners is also observed in the factor ‘Environmental context for feedback’. Considered collectively, the QFI-S items grouped within this factor capture the immediate context in which feedback occurs, whereas the grouping of items in the QFI-P for this factor establishes the broader climate which buttresses the feedback encounter and message. Looking beyond the individual items of each inventory sees the respective factors coalesce to form what could be called a harmonious dissonance. That is to say, despite the differing perspectives of the student and practitioner, when coupled together a cohesive and functional understanding of feedback can result.

While a detailed discussion is outside the scope of this manuscript, differences seen in the correlation matrices of comparable items for each group raise some interesting points for consideration. A greater number of items with very low item correlation coefficient values (≤ .2) was observed in the practitioner group compared to the student group. The difference may be attributed to variations in familiarity and immersion in feedback and to differences in feedback literacy, particularly in the practitioner group. We postulate that students are more accustomed to feedback terminology and place a higher value on seeking feedback to achieve their desired outcome from their program of study, compared with practitioners who support student learning in the workplace alongside a priority to ensure quality patient outcomes and safe practice. This is thought-provoking given that two thirds (n = 122) of the practitioner group held an undergraduate bachelor’s degree in nursing (and so had been exposed to ‘academic’ or ‘critical thinking’ language) and 77 practitioners had been practicing as a nurse for an average of 5 years (and therefore would have undergone very similar education to the student participants).
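The group difference described here can be quantified simply as the share of very low inter-item correlations in each group’s matrix. A sketch, assuming two complete-case DataFrames of the comparable items (names illustrative):

```python
import numpy as np
import pandas as pd

def share_of_low_correlations(items: pd.DataFrame, cutoff: float = 0.2) -> float:
    """Proportion of off-diagonal inter-item correlations with |r| <= cutoff."""
    corr = items.corr().abs().values
    off_diagonal = corr[np.triu_indices_from(corr, k=1)]
    return float((off_diagonal <= cutoff).mean())

# Illustrative comparison of the two groups on the shared items:
# print(share_of_low_correlations(practitioner_items),
#       share_of_low_correlations(student_items))
```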

Access to logical approaches toward preferred feedback can help progress the adoption of feedback into practice. Valid sources can be the impetus to enact change where most needed (either for the learner or learning partner) and provide development opportunities for the learner or learning partner to enhance the feedback ‘culture’ in their specific learning setting. It is important that the clustering of items is not the sole consideration in the development and validation of instruments; the significance and meaning these clustered items have within the context under exploration must also be considered. This is evident in the exclusion of items that represent the attribute ‘desired’; for example, item 4.1 ‘I asked for feedback’/‘I encouraged the student to ask for feedback’. While these items were excluded due to the mathematical methods of factor analysis, they remain a central element of effective feedback [25] and warrant asking in any evaluation of feedback.

Limitations

Despite the factors being underpinned by feedback concepts presented in the wider international literature and multiple disciplines, participants in our study were restricted to practicing nurses and student nurses from just one university and two health care facilities in Australia. This may contribute to a decreased ability to generalise the results beyond these settings. The items were explored in workplace-based settings where verbal feedback is the prevailing approach. It is recommended that future development of items includes engaging learners (e.g. students) in the consultation process to elicit further feedback behaviours important for learners in attaining their goals. Additionally, because completion of the lists of items was anonymous, there is no way of using the data to help individuals improve feedback practices or recognise the performance of feedback encounters.

Conclusions

Future research is needed to explore the differences observed between the student and practitioner groups and the possible impact these differences have on engagement with feedback, feedback literacy, and dissonance between the learner and learning partner. The possible effects organisational culture has on the structure of feedback perceptions also warrant further research. Although development of the QFI started with a list of comparable statements, psychometric testing demonstrated minimal overlap of items between students and practitioners and resulted in two inventories—the QFI-S and the QFI-P. This divergence revealed a goal-oriented outcome focus for students and a process-driven focus for practitioners. While this may appear to ‘fly in the face’ of dialogic feedback, congruent views are demonstrated through practitioners’ consideration of collaborative preparation and environmental context to undergird imparting learner-focused feedback that guides students towards their desired goals and outcomes for subsequent individual growth.

Simultaneous evaluation of both perspectives of feedback is not overtly evident in the literature and is needed to understand this issue further. When used in tandem, the QFI-P and QFI-S identify feedback encounters shared by the learner and learning partner. Information obtained from students’ and practitioners’ concurrent completion of both inventories has the potential to constructively inform feedback processes and thereby optimise the value of routine feedback. Additionally, this information may provide advice to adapt individuals’ feedback practices to optimise feedback relationships, learning outcomes, and life-long learning.

Availability of data and materials

Relevant data are included within the article. The raw data are not available for sharing due to confidentiality agreements approved by the Human Research Ethics Committee.

Abbreviations

QFI:

Quality feedback inventory

QFI-S:

Quality feedback inventory-student

QFI-P:

Quality feedback inventory-practitioner

PCA:

Principal components analysis

EFA:

Exploratory factor analysis

References

  1. Bing-You R, Ramesh S, Hayes V, Varaklis K, Ward D, Blanco M. Trainees' Perceptions of Feedback: Validity Evidence for Two FEEDME (Feedback in Medical Education) Instruments. Teach Learn Med. 2018;30(2):162–72.
  2. Paul A, Gilbert K, Remedios L. Socio-cultural Considerations in Feedback. In: Boud DJ, Molloy EK, editors. Feedback in Higher and Professional Education. London: Routledge; 2013.
  3. Adcroft A. The Mythology of Feedback. High Educ Res Dev. 2011;30(4):405–19.
  4. Bowen L, Marshall M, Murdoch-Eaton D. Medical Student Perceptions of Feedback and Feedback Behaviors Within the Context of the "Educational Alliance". Acad Med. 2017;92(9):1303–12.
  5. Halman S, Dudek N, Wood T, Pugh D, Touchie C, McAleer S, et al. Direct Observation of Clinical Skills Feedback Scale: Development and Validity Evidence. Teach Learn Med. 2016;28(4):385–94.
  6. Boud D, Molloy E. Rethinking Models of Feedback for Learning: The Challenge of Design. Assess Eval High Educ. 2013;38(6):698–712.
  7. Yang M, Carless D. The Feedback Triangle and the Enhancement of Dialogic Feedback Processes. Teach High Educ. 2013;18(3):285–97.
  8. Streiner DL, Norman GR, Cairney J. Health Measurement Scales: A Practical Guide to Their Development and Use. 5th ed. United Kingdom: Oxford University Press; 2015.
  9. Ossenberg C, Henderson A, Mitchell M. What Attributes Guide Best Practice for Effective Feedback? A Scoping Review. Adv Health Sci Educ. 2019;24(2):381–401.
  10. DeVellis RF. Scale Development: Theory and Applications. 4th ed. Los Angeles: Sage; 2017.
  11. Dimitrov DM. Statistical Methods for Validation of Assessment Scale Data in Counseling and Related Fields. Alexandria: American Counseling Association; 2011.
  12. Tabachnick BG, Fidell LS. Using Multivariate Statistics. 6th ed. Harlow: Pearson Education Limited; 2014.
  13. Field A. Discovering Statistics Using IBM SPSS Statistics. 5th ed. London: Sage; 2018.
  14. Polit DF. Statistics and Data Analysis for Nursing Research. 2nd ed. Upper Saddle River, New Jersey: Pearson; 2010.
  15. Fabrigar LR, Wegener DT. Exploratory Factor Analysis. United Kingdom: Oxford University Press; 2011.
  16. Costello AB, Osborne JW. Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most From Your Analysis. Pract Assess Res Eval. 2005;10(7):1.
  17. Haig BD. The Philosophy of Quantitative Methods. The Oxford Handbook of Quantitative Methods. New York: Oxford University Press; 2018.
  18. Pett MA, Lackey NR, Sullivan JJ. Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research. Thousand Oaks: Sage; 2003.
  19. Shank G. Abductive Strategies in Educational Research. Am J Semiotics. 1987;5(2):275–90.
  20. Beck C, Kosnik C. Components of a Good Practicum Placement: Student Teacher Perceptions. Teach Educ Q. 2002;29(2):81–98.
  21. Dawson P, Henderson M, Mahoney P, Phillips M, Ryan T, Boud D, Molloy E. What Makes for Effective Feedback: Staff and Student Perspectives. Assess Eval High Educ. 2019;44(1):25–36.
  22. Nicol DJ, Macfarlane-Dick D. Formative Assessment and Self-regulated Learning: A Model and Seven Principles of Good Feedback Practice. Stud High Educ. 2006;31(2):199–218.
  23. Massie J, Ali JM. Workplace-based Assessment: A Review of User Perceptions and Strategies to Address the Identified Shortcomings. Adv Health Sci Educ. 2016;21(2):455–73.
  24. Nesbitt A, Baird F, Canning B, Griffin A, Sturrock A. Student Perception of Workplace-based Assessment. Clin Teach. 2013;10(6):399–404.
  25. Hattie J, Timperley H. The Power of Feedback. Rev Educ Res. 2007;77(1):81–112.


Acknowledgements

The authors gratefully thank the clinical facilitators and clinical placement coordinators at each site for their support in providing opportunities for data collection.

Funding

This research was undertaken as part of doctoral studies supported by Metro South Health Study, Education and Research Trust Account post graduate scholarship and a Research Training Program Domestic Fee Offset scholarship provided by the Australian Government Department of Education and Training.

Author information

Authors and Affiliations

Authors

Contributions

CO: conceived the paper; collected, analysed, and interpreted the data and prepared the manuscript. AH: critically reviewed the manuscript and suggested revisions to the manuscript. MM: critically reviewed the manuscript and suggested revisions to the manuscript. All authors read and approved the final manuscript.

Authors’ information

CHRISTINE OSSENBERG, RN MAdvancedPrac, is a PhD candidate in the School of Nursing and Midwifery at Griffith University, Australia and nurse researcher at the Princess Alexandra Hospital, Nursing Practice Development Unit, Brisbane, Australia. Her research interests include professional development, workplace-based learning and assessment and sustainable feedback practices.

AMANDA HENDERSON, PhD, is a Nursing Director (Research) at the Princess Alexandra Hospital, Nursing Practice Development Unit, Brisbane, Australia. Her research interests include workplace learning, learning environments and practice change, specifically, developing effective partnerships across university and industry stakeholders to advance learning in practice.

MARION MITCHELL, PhD, is a Professor of Nursing at Griffith University, Australia and holds a joint appointment as Chair of Critical Care with the Princess Alexandra Hospital Intensive Care Unit and Griffith University School of Nursing and Midwifery. Her clinical research focuses on patient and family-centred care and educational research.

Corresponding author

Correspondence to Christine Ossenberg.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Metro South Health Human Research Ethics Committee (HREC/18/QPAH/93) and Griffith University Human Research Ethics Committee (Reference number: 2018/341). Participants were provided with written information about the purpose of the research study. No written consent was required; consent was implied by the return of partially completed or completed surveys.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests. The authors alone are responsible for the content and writing of this article.


Supplementary Information

Additional file 1:

Supplemental material 1. – Descriptive statistics of shared items by participant type

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ossenberg, C., Henderson, A. & Mitchell, M. The use of factor analysis and abductive inference to explore students’ and practitioners’ perspectives of feedback: divergent or congruent understanding?. BMC Med Educ 20, 466 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12909-020-02378-w

