This edition of the HILJ club has been prepared by Alan Fricker, NHS Knowledge and Library Hub Manager. @AlanFricker.bsky.social (Twitter RIP)

Paper for Discussion

Validation of a generic impact survey for use by health library services indicates the reliability of the questionnaire

Christine Urquhart BSc, MSc, PhD and Alison Brettle BA(Hons), MSc, PhD (2022) HILJ v39(4)

First published: 25 March 2022: https://doi.org/10.1111/hir.12427 (Open Access)

Abstract (from HILJ)

Background
A validated generic impact questionnaire can demonstrate how individual and groups of health libraries contribute to continuing education and patient care outcomes.

Objectives
To validate an existing generic questionnaire for Knowledge for Healthcare, England by examining: (1) internal reliability; (2) content validity; and (3) suggest revisions.

Methods
Methods used included Cronbach’s alpha test, simple data mining of patterns among a data set of 187 questionnaire responses and checking respondents’ interpretation of questions.

Results
Cronbach’s alpha was 0.776 (acceptable internal reliability). The patterns of responses indicated that respondents’ interpretations of the questions were highly plausible, and consistent. The meaning of ‘research’ varied among different occupational groups, but overall, respondents could identify relevant personal and service impacts. However, users were confused about the terms that libraries use to describe some services.

Discussion
The analysis indicated that the questionnaire worked well for the two types of personal services (literature/evidence searches and training/e-learning) frequently cited on the responses. Further research may be required for library assessment of the impact of other services such as digital resource services.

Conclusions
The generic questionnaire is a reliable way of assessing the impact of health library and knowledge services, both individually and collectively.

HILJ Club Introduction

Impact is an enduring area of interest for health librarians as we seek to make the case for our services. I picked this article on that basis, and because it offers a follow-on to an earlier HILJ Club (issue 2!) looking at the development of the tool. The generic impact questionnaire has been widely used in the years since it was created, so an assessment of its validity is to be welcomed and worth a read.

Discussion

The combination of quantitative and qualitative approaches is firmly in line with the maxim in impact work of no stories without numbers and no numbers without stories. The quantitative figures seem statistically strong enough, and I like the approach of testing the linked nature of the questions.

In South East London (presented in 2017 with Doug Knock), the generic questions were regularly included as part of a wider biennial user satisfaction survey across several libraries. With hundreds of completions, this showed consistent patterns of responses across different organisations. While my experience therefore chimes with the paper, I was left with doubts at the time about how compelling the figures produced were for advocacy. The responses came out about the same for all of the different services, which makes it harder to talk about the difference a specific service makes rather than a more general picture.

The findings based on the qualitative interviews are interesting in the light of another HILJ paper (Kiely 2020) on library jargon (see the past HILJ Club on this one) and the ongoing issues we have in describing our services in ways that are comprehensible to users. For both the quantitative and qualitative work, the people involved in this research were active users of our services, so likely to have a better understanding than most.

The authors conclude that their research confirms the suitability of the generic impact questionnaire for regular use. In their discussion they note the importance of being able to focus on a single meaningful incident of library use. In our South East London survey we followed the generic questions with a question inviting respondents to share a story of just such an incident. These frequently provided valuable pictures of the difference services were making.

The paper concludes with advice for users of the tool; the need for additional contextual information is the element that stands out for me.

Potential Questions?

What is your experience of applying the generic questionnaire tool? How have you sought to clarify the terms used to describe your services, to aid advocacy and comprehension? If you have used the tool, how have you analysed and written up your data?