• Open access
  • Published: 27 October 2005

A framework to evaluate research capacity building in health care

  • Jo Cooke 1  

BMC Family Practice, volume 6, Article number: 44 (2005)

Building research capacity in health services has been recognised internationally as important in order to produce a sound evidence base for decision-making in policy and practice. Activities to increase research capacity for, within, and by practice include initiatives to support individuals and teams, organisations and networks. Little has been discussed or concluded about how to measure the effectiveness of research capacity building (RCB).

This article attempts to develop the debate on measuring RCB. It highlights that the traditional outcomes of publications in peer reviewed journals and successful grant applications may be important outcomes to measure, but they may not capture all the relevant markers of progress, especially amongst novice researchers. In particular, they do not capture the factors that contribute to developing an environment that supports capacity development, the usefulness or 'social impact' of research, or professional outcomes.

The paper suggests a framework for planning change and measuring progress, based on six principles of RCB, which have been generated through the analysis of the literature, policy documents, empirical studies, and the experience of one Research and Development Support Unit in the UK. These principles are that RCB should: develop skills and confidence, support linkages and partnerships, ensure the research is 'close to practice', develop appropriate dissemination, invest in infrastructure, and build elements of sustainability and continuity. It is suggested that each principle operates at individual, team, organisation and supra-organisational levels. Some criteria for measuring progress are also given.

This paper highlights the need to identify ways of measuring RCB. It points out the limitations of current measurements that exist in the literature, and proposes a framework for measuring progress, which may form the basis of comparison of RCB activities. In this way it could contribute to establishing the effectiveness of these interventions, and establishing a knowledge base to inform the science of RCB.

The need to develop a sound scientific research base to inform service planning and decision-making in health services is strongly supported in the literature [ 1 ], and policy [ 2 ]. However, the level of research activity and the ability to carry out research is limited in some areas of practice, resulting in a low evidence base in these areas. Primary Care, for example, has been identified as having a poor capacity for undertaking research [ 3 – 5 ], and certain professional groups, for example nursing and allied health professionals, lack research experience and skills [ 5 – 7 ]. Much of the literature and the limited research on research capacity building (RCB) has therefore focused on this area of practice, and these professional groups. Policy initiatives to build research capacity include support in developing research for practice, where research is conducted by academics to inform practice decision making, research within or through practice, which encompasses research being conducted in collaboration with academics and practice, and research by practice, where ideas are initiated and research is conducted by practitioners [ 3 , 8 ].

The interventions to increase research capacity for, within, and by practice incorporate initiatives to support individuals and teams, organisations and networks. Examples include fellowships, training schemes and bursaries, and the development of support infrastructures, for example, research practice networks [ 9 – 13 ]. In the UK, the National Coordinating Centre for Research Capacity Development has supported links between universities and practice through funding a number of Research and Development Support Units (RDSUs) [ 14 ], which are based within universities but whose purpose is to support new and established researchers based in the National Health Service (NHS). However, both policy advisers and researchers have highlighted a lack of evaluative frameworks to measure progress and build an understanding of what works [ 15 , 16 ].

This paper argues for the need to establish a framework for planning and measuring progress, and to initiate a debate about identifying appropriate outcomes for RCB, rather than simply relying on what is easy to measure. The suggested framework has been generated through analysis of the literature, using policy documents, position statements, a limited number of empirical studies evaluating RCB, and the experience of one large RDSU based in the UK.

The Department of Health within the UK has adopted the definition of RCB as 'a process of individual and institutional development which leads to higher levels of skills and greater ability to perform useful research' (p. 1321) [ 17 ].

Albert and Mickan [ 18 ] cite the National Information Services in Australia, which defines it as:

'an approach to the development of sustainable skills, organizational structures, resources and commitment to health improvement in health and other sectors to multiply health gains many times over.'

RCB can therefore be seen as a means to an end, the end being 'useful' research that informs practice and leads to health gain, or an end in itself, emphasising developments in skills and structures enabling research to take place.

A framework for measuring capacity building should therefore be inclusive of both process and outcome measures [ 19 ], to capture changes in both the 'ends' and the 'means'; it should measure the ultimate goals, but also the steps and mechanisms to achieve them. The notion of measuring RCB by both process and outcome measures is supported within the research networks literature [ 12 , 20 ], and in capacity building in health more generally [ 19 , 21 ]. Some argue we should acknowledge 'process as outcome', particularly if capacity building is seen as an end in itself [ 21 ]. In this context process measures are 'surrogate' [ 12 ], or 'proxy', outcome measures [ 16 ]. Carter et al [ 16 ] stress caution in using 'proxy' measures in the context of RCB, as there is currently little evidence to link process with outcome. They do not argue against collecting process data, but stress that evaluation work should examine the relationship of process to outcome. The proposed framework discussed in this paper suggests areas to consider for both process and outcome measurement.

The most commonly accepted outcomes for RCB cited in the literature include traditional measures of high quality research: publications, conference presentations, successful grant applications, and qualifications obtained. Many evaluations of RCB have used these as outcomes [ 9 , 10 , 22 , 23 ]. Some argue that publications in peer reviewed journals are a tall order given the low research skills base in some areas of health care practice [ 5 ], and call for an appropriate time frame over which to evaluate progress. Process measures in this context could capture progress more sensitively and quickly.

However, using traditional outcomes may not be the whole story in terms of measuring impact. Position statements suggest that the ultimate goal of research capacity building is one of health improvement [ 17 , 18 , 24 ]. In order for capacity building initiatives to address these issues, outcomes should also explore the direct impact on services and clients: what Smith [ 25 ] defines as the social impact of research.

There is a strong emphasis within the primary care literature that capacity building should enhance the ability of practitioners to build their research skills, supporting the development of research 'by' and 'with' practice [ 3 , 26 ], and that there is 'added value' in developing such close links to practice. A framework to measure RCB should explore and try to unpack this 'added value' in terms of professional outcomes [ 10 ], which include increased professional enthusiasm, the application of critical thinking, and the use of evidence in practice. Whilst doing research alongside practice is not the only way these skills and attitudes can be developed, it does seem to be an important impact of RCB that should be examined.

The notion of developing RCB close to practice does not necessarily mean that it is small scale just because it is close to the coal face. Obviously, in order for individuals and teams to build up a track record of experience, their initial projects may justifiably be small scale, but as individuals progress they may gain the experience needed to conduct large scale studies, still based on practice problems and working in partnership with others. Similarly, networks can support large scale studies as their capacity and infrastructure develops to accommodate them.

The framework

The framework is represented by Figure 1. It has two dimensions:

Figure 1. Research Capacity Building: A Framework for Evaluation.

• Four structural levels of development activity. These include the individual, team, organisational, and supra-organisational support levels (networks and support units). These are represented by the concentric circles within the diagram.

• Six principles of capacity building. These are discussed in more detail below, but include: building skills and confidence, developing linkages and partnerships, ensuring the research is 'close to practice', developing appropriate dissemination, investing in infrastructure, and building elements of sustainability and continuity. Each principle is represented by an arrow within the diagram, which indicates activities and processes that contribute towards capacity building. The arrows cut across the structural levels, suggesting that activities and interventions may occur within, and across, structural levels. The arrowheads point in both directions, suggesting that principles applied at one structural level could have an impact on other levels.

The framework acknowledges that capacity building is conducted within a policy context. Whilst this paper focuses on measurement at different structural levels, it should be acknowledged that progress and impact on RCB can be greatly nurtured or restricted by the prevailing policy. Policy decisions will influence opportunities for developing researchers, can facilitate collaborations in research, support research careers, fund research directed by practice priorities, and can influence the sustainability and the very existence of supportive infrastructures such as research networks.

The paper will explain the rationale for the dimensions of the framework, and then suggest some examples of measurement criteria for each principle at different structural levels to evaluate RCB. It is hoped that as the framework is applied, further criteria will be developed, and then used taking into account time constraints, resources, and the purpose of such evaluations.
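
To make the two dimensions concrete, the sketch below (in Python, and not part of the original paper) shows one way evaluation criteria could be recorded against each principle and structural level; the example criteria are hypothetical placeholders rather than items from Tables 1–6.

```python
# Illustrative sketch only: recording evaluation criteria against the framework's
# two dimensions (six principles x four structural levels).
# The criteria below are hypothetical placeholders, not taken from the paper's tables.

PRINCIPLES = [
    "skills and confidence",
    "close to practice",
    "linkages and partnerships",
    "appropriate dissemination",
    "sustainability and continuity",
    "infrastructure",
]
LEVELS = ["individual", "team", "organisation", "supra-organisation"]

# One cell per (principle, level) combination, each holding the criteria tracked there.
evaluation_grid = {(p, lvl): [] for p in PRINCIPLES for lvl in LEVELS}

# Hypothetical example entries
evaluation_grid[("skills and confidence", "individual")].append(
    "number of practitioners completing research methods training")
evaluation_grid[("linkages and partnerships", "supra-organisation")].append(
    "active collaborations between the research network and university departments")

for (principle, level), criteria in evaluation_grid.items():
    for criterion in criteria:
        print(f"{principle} / {level}: {criterion}")
```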

Structural levels at which capacity building takes place

The literature strongly supports that RCB should take place at individual and organisational levels [ 8 , 15 , 27 , 28 ]. For example, the conceptual model for RCB in primary care put forward by Farmer & Weston [ 15 ] focuses particularly on individual General Practitioners (GPs) and primary care practitioners, who may progress from non-participation through participation to become academic leaders in research. Their model also acknowledges the context and organisational infrastructure needed to support RCB by reducing barriers and accommodating diversity through providing mentorship, collaborations and networking, and by adopting a whole systems approach based on local need and existing levels of capacity. Others have acknowledged that capacity development can be focussed at a team level [ 11 , 29 ]. Jowett et al [ 30 ] found that GPs were more likely to be research active if they were part of a practice where others were involved with research. Guidance from a number of national bodies highlights the need for multiprofessional and inter-professional involvement in conducting useful research for practice [ 3 , 4 , 6 , 31 ], which implies an appropriate mix of skills and practice experience within research teams to enable this [ 32 ]. Additionally, the organisational literature has identified the importance of teams in the production of knowledge [ 18 , 33 , 34 ].

Developing structures between and outside health organisations, including the development of research networks seems important for capacity building [ 12 , 24 , 34 ]. The Department of Health in the UK [ 14 ] categorizes this supra-organisational support infrastructure to include centres of academic activity, Research & Development Support Units, and research networks.

As interventions for RCB are targeted at different levels, the framework for measuring its effectiveness mirrors this. However, these levels should not be measured in isolation. One level can have an impact on capacity development at another level, and could potentially have a synergistic or detrimental effect on the other.

The six principles of research capacity building

Evaluation involves assessing the success of an intervention against a set of indicators or criteria [ 35 , 36 ], which Meyrick and Sinkler [ 37 ] suggest should be based on underlying principles in relation to the initiative. For this reason the framework includes six principles of capacity building. The rationale for each principle is given below, along with a description of some suggested criteria for each principle. The criteria presented are not an exhaustive list. As the framework is developed and used in practice, a body of criteria will be developed and built on further.

Principle 1. Research capacity is built by developing appropriate skills and confidence, through training and creating opportunities to apply skills

The need to develop research skills in practitioners is well established [ 3 , 4 , 6 ], and can be supported through training [ 14 , 26 ], and through mentorship and supervision [ 15 , 24 , 28 ]. There is some empirical evidence that research skill development increases research activity [ 23 , 38 ], and enhances positive attitudes towards conducting and collaborating in research [ 39 ]. Other studies cite lack of training and research skills as a barrier to doing research [ 30 , 31 ]. The need to apply and use research skills in practice is highlighted in order to build confidence [ 40 ] and to consolidate learning.

Some needs assessment studies highlight that research skills development should adopt 'outreach' and flexible learning packages, and acknowledge the skills, background and epistemologies of the professional groups concerned [ 7 , 15 , 39 , 41 , 42 ]. These include doctors, nurses, a range of allied health professionals and social workers. Developing an appropriate mix of professionals to support health services research means that training should be inclusive and appropriate to them, and adopt a range of methodologies and examples to support appropriate learning and experience [ 15 , 31 , 41 ]. How learning and teaching is undertaken, and the extent to which the content of support programmes reflects the backgrounds, tasks and skills of participants, should therefore be measured. For example, the type of research methods teaching offered by networks and support units should reflect the range and balance of skills needed for health service research, including both qualitative and quantitative research methods.

Skills development should also be set in the context of career development, and further opportunities to apply skills to practice should be examined. Policy and position statements [ 14 , 26 ] support the concept of career progression or a 'careers escalator', which also enables the sustainability of skills. Opportunities to apply research skills through applications for funding are also important [ 9 , 10 , 22 , 43 , 44 ].

At team and network level, Fenton et al [ 34 ] suggest that capacity can be increased through building intellectual capacity (sharing knowledge), which enhances the ability to do research. Whilst there is no formal measure for this, an audit of the transfer of knowledge would appear to be beneficial. For example, teams may share expertise within a project to build skills in novice researchers [ 45 ], which can be tracked, and an appropriate division of workload, such as reading research literature and sharing it with the rest of the team or network, could be noted.

The notion of stepping outside a safety zone may also suggest increased confidence and ability to do research. This may be illustrated at an individual level by the practitioner-researcher taking on more of a management role, supervising others, tackling new methodologies or approaches in research, or working with other groups of health and research professionals on research projects. This approach is supported by the model of RCB suggested by Farmer and Weston [ 15 ], which charts progress from participation through to academic leadership.

Some examples of criteria for measuring skills and confidence levels are given in Table 1.

Principle 2. Research capacity building should support research 'close to practice' in order for it to be useful

The underlying philosophy for developing research capacity in health is that it should generate research that is useful for practice. The North American Primary Care Research Group [ 24 ] defined the 'ultimate goal' of research capacity development as the generation and application of new knowledge to improve the health of individuals and families (p. 679). There is strong support that 'useful' research is that which is conducted 'close' to practice, for two reasons. Firstly, it generates research knowledge that is relevant to service user and practice concerns. Many argue that the most relevant and useful research questions are those generated by, or in consultation with, practitioners and services [ 3 , 11 , 24 ], policy makers [ 46 ] and service users [ 47 , 48 ]. The level of 'immediate' usefulness [ 49 ] may also mean that messages are more likely to be taken up in practice [ 50 ]. Empirical evidence suggests that practitioners and policy makers are more likely to engage in research if they see its relevance to their own decision making [ 31 , 39 , 46 ]. The notion of building research that is 'close to practice' does not necessarily mean that it is small scale, but that the research is highly relevant to practice or policy concerns. A large network of practitioners could, for example, facilitate large scale, experimental projects. However, certain methodologies are more favoured by practice because of their potential immediate impact on practice [ 47 ], and this framework acknowledges such approaches and their relevance. These include action research projects and participatory inquiry [ 31 , 42 ]. An example where this more participatory approach has been developed in capacity building is the WeLREN (West London Research Network) cycle [ 51 ]. Here research projects are developed in cycles of action, reflection, and dissemination, and use of findings is integral to the process. This network reports high levels of practitioner involvement.

Secondly, building research capacity 'close to practice' is useful because of the skills of critical thinking it engenders, which can also be applied to practice decision making [ 28 ], and which support quality improvement approaches in organisations [ 8 ]. Practitioners in a local bursary scheme, for example, said they were more able to take an evidence-based approach into their everyday practice [ 9 ].

Developing a 'research culture' within organisations suggests a closeness to practice that impacts on the ability of teams and individuals to do research. Lester et al [ 23 ] touched on measuring this idea through a questionnaire that explored aspects of a supportive culture within primary care academic departments, including opportunities to discuss career progression, supervision, formal appraisal, mentorship, and junior support groups. This may be a fruitful idea to expand further in order to develop a tool applicable to a health care environment.

Some examples of criteria for measuring the close to practice principle are given in Table 2.

Principle 3. Linkages, partnerships and collaborations enhance research capacity building

The notion of building partnerships and collaborations is integral to capacity building [ 19 , 24 ]. It is the mechanism by which research skills and practice knowledge are exchanged, developed and enhanced [ 12 ], and by which research activity is conducted to address complex health problems [ 4 ]. The linkages between the practice world and that of academia may also enhance research use and impact [ 46 ].

The linkages that enhance RCB can exist between:

• Universities and practice [ 4 , 14 , 43 ]

• Novice and experienced researchers [ 22 , 24 , 51 ]

• Different professional groups [ 2 , 4 , 20 , 34 ]

• Different health and care provider sectors [ 4 , 31 , 47 , 52 ]

• Service users, practitioners and researchers [ 47 , 48 ]

• Researchers and policy makers [ 46 ]

• Different countries [ 28 , 52 ]

• Health and industry [ 53 , 54 ]

It is suggested that it is through networking and building partnerships that intellectual capital (knowledge) and social capital (relationships) can be built, which enhances the ability to do research [ 12 , 31 , 34 ]. In particular, there is the notion that the build-up of trust between different groups and individuals can enhance information and knowledge exchange [ 12 ]. This may not only have benefits for the development of appropriate research ideas, but may also have benefits for the whole of the research process, including the impact of research findings.

The notion of building links with industry is becoming progressively more evident within policy in the UK [ 54 ], which may have an impact on economic outcomes for health organisations and society as a whole [ 55 , 56 ].

Some examples of criteria for measuring linkages and collaborations are given in table 3 .

Principle 4. Research capacity building should ensure appropriate dissemination to maximize impact

A widely accepted measure to illustrate the impact of RCB is the dissemination of research in peer reviewed publications, and through conference presentations to academic and practice communities [ 5 , 12 , 26 , 57 ]. However, this principle extends beyond these more traditional methods of dissemination. The litmus test that ultimately determines the success of capacity building is that it should impact on practice, and on the health of patients and communities [ 24 ]; that is, the social impact of research [ 25 ]. Smith [ 25 ] argues that strategies of dissemination should include a range of methods that are 'fit for purpose'. This includes traditional dissemination, but also other methods, for example, instruments and programmes of care implementation, protocols, lay publications, and publicity through factsheets, the media and the Internet.

Dissemination and tracking use of products and technologies arising from RCB should also be considered, which relate to economic outcomes of capacity building [ 55 ]. In the UK, the notion of building health trusts as innovative organisations which can benefit economically through building intellectual property highlights this as an area for potential measurement [ 56 ].

Some examples of criteria for measuring appropriate dissemination are given in Table 4.

Principle 5. Research capacity building should include elements of continuity and sustainability

Definitions of capacity building suggest that it should contain elements of sustainability, which alludes to the maintenance and continuity of newly acquired skills and structures to undertake research [ 18 , 19 ]. However, the literature does not explore this concept well [ 19 ]. This may be partly due to problems in measuring capacity building: it is difficult to know how well an initiative is progressing, and how well progress is consolidated, if there are no benchmarks or outcomes against which to demonstrate this.

Crisp et al [ 19 ] suggest that capacity can be sustained by applying skills to practice. This gives us some insight into where we might look for measures of sustainability. It could include enabling opportunities to extend skills and experience, and may link into the concept of a career escalator. It also involves utilizing the capacity that has already been built: for example, engaging those who gained skills in earlier RCB initiatives, once they have become 'experts', to help more novice researchers, and finding an appropriate place to position that expertise within the organisation. It could also be measured by the number of opportunities for funding for the continued application of skills to research practice.

Some examples of criteria for measuring sustainability and continuity are given in Table 5.

Principle 6. Appropriate infrastructures enhance research capacity building

Infrastructure includes the structures and processes that are set up to enable the smooth and effective running of research projects. For example, project management skills are essential to enable projects to move forward, and as such should be measured in relation to capacity building. Similarly, projects should be suitably supervised with academic and management support. To make research work 'legitimate' it may be beneficial to make research a part of some job descriptions, not only to reinforce research as a core skill and activity, but also so that it can be reviewed in annual appraisals, which can themselves act as a tool for research capacity evaluation. Information flow about calls for funding, fellowships and conferences is also important. Hurst [ 42 ] found that information flow varied between trusts, and that managers were more aware of research information than practitioners.

Protected time and backfill arrangements, as well as funding to support them, are important in enabling capacity building [ 9 , 15 , 24 , 58 ]. Such arrangements may reduce barriers to participation and enable skills and enthusiasm to be developed [ 15 ]. Infrastructure to help direct new practitioners to research support has also been highlighted [ 14 ]. This is particularly true in the light of the new research governance and research ethics framework in the UK [ 59 ]. The reality of implementing systems to deal with the complexities of the research governance regulations has proved problematic, particularly in primary care, where the relative lack of research management expertise and infrastructure has resulted in what are perceived as disproportionately bureaucratic systems. Recent discussion in the literature has focused on the detrimental impact of both ethical review and NHS approval systems, and there is evidence of serious delays in getting research projects started [ 60 ]. Administrative and support staff to help researchers through this process are important in enabling research to take place [ 61 ].

Some examples of criteria for measuring infrastructure are given in Table 6.

This paper suggests a framework which sets out a tentative structure by which to start measuring the impact of capacity building interventions, and invites debate around the application of this framework to plan and measure progress. It highlights that interventions can focus on individuals, teams, organisations, and on support infrastructures such as RDSUs and research networks. However, capacity building may only take place once change has occurred at more than one level: for example, the culture of an organisation in which teams and individuals work may have an influence on their abilities and opportunities to do research work. It is also possible that the interplay between different levels may have an effect on the outcomes at other levels. In measuring progress, it should be possible to develop a greater understanding of the relationship between different levels. The framework proposed in this paper may be a first step towards doing this.

The notion of building capacity at any structural level is dependent on funding and support opportunities, which are influenced by policy and funding bodies. The ability to build capacity across the principles developed in the framework will also be dependent on R&D strategy and policy decisions. For example, if policy fluctuates in its emphasis on building capacity 'by', 'for' or 'with' practice, the ability to build capacity close to practice will be affected.

In terms of developing a science of RCB, there is a need to capture further information on issues of measuring process and outcome data to understand what helps develop 'useful' and 'useable' research. The paper suggests principles whereby a number of indicators could be developed. The list is not exhaustive, and it is hoped that through debate and application of the framework further indicators will be developed.

An important first step to building the science of RCB should be debate about identifying appropriate outcomes. This paper supports the use of traditional outcomes of measurement, including publications in peer reviewed journals and conference presentations. This assures quality, and engages critical review and debate. However, the paper also suggests that we might move on from these outcomes in order to capture the social impact of research, and supports the notion of developing outcomes which measure how research has had an impact on the quality of services, and on the lives of patients and communities. This includes adopting and shaping the type of methodologies that capacity building interventions support, which includes incorporating patient centred outcomes in research designs, highlighting issues such as cost effectiveness of interventions, exploring economic impact of research both in terms of product outputs and health gain, and in developing action oriented, and user involvement methodologies that describe and demonstrate impact. It also may mean that we have to track the types of linkages and collaborations that are built through RCB, as linkages that are close to practice, including those with policy makers and practitioners, may enhance research use and therefore 'usefulness'. If we are to measure progress through impact and change in practice, an appropriate time frame would have to be established alongside these measures.

This paper argues that 'professional outcomes' should also be measured, to recognize how critical thinking developed during research impacts on clinical practice more generally.

Finally, the proposed framework provides the basis by which we can build a body of evidence to link process to the outcomes of capacity building. By gathering process data and linking it to appropriate outcomes, we can more clearly unpack the 'black box' of process, and investigate which processes link to desired outcomes. It is through adopting such a framework, and testing out these measurements, that we can systematically build a body of knowledge that will inform the science and the art of capacity building in health care.

• There is currently little evidence on how to plan and measure progress in research capacity building (RCB), or agreement on what its ultimate outcomes should be.

• Traditional outcomes of publications in peer reviewed journals and successful grant applications may be easy and important outcomes to measure, but they do not necessarily address issues to do with the usefulness of research, professional outcomes, the impact of research activity on practice, or health gain.

• The paper suggests a framework which provides a tentative structure for measuring the impact of RCB, shaped around six principles of research capacity building and four structural levels at which each principle can be applied.

• The framework could be the basis on which RCB interventions are planned and progress measured. It could act as a basis of comparison across interventions, and could contribute to establishing a knowledge base on what is effective in RCB in healthcare.

Muir Gray JA: Evidence-based Healthcare. How to make health policy and management decisions. 1997, Edinburgh, Churchill Livingstone

Department of Health: Research and Development for a First Class Service. 2000, Leeds, DoH

Mant D: National working party on R&D in primary care. Final Report. 1997, London, NHSE South and West.

Department of Health: Strategic review of the NHS R&D Levy (The Clarke Report). 1999, , Central Research Department, Department of Health, 11-

Campbell SM, Roland M, Bentley E, Dowell J, Hassall K, Pooley J, Price H: Research capacity in UK primary care. British Journal of General Practice. 1999, 49: 967-970.

Department of Health: Towards a strategy for nursing research and development. 2000, London, Department of Health

Ross F, Vernon S, Smith E: Mapping research in primary care nursing: Current activity and future priorities. NT Research. 2002, 7: 46-59.

Marks L, Godfrey M: Developing Research Capacity within the NHS: A summary of the evidence. 2000, Leeds, Nuffield Portfolio Programme Report.

Lee M, Saunders K: Oak trees from acorns? An evaluation of local bursaries in primary care. Primary Health Care Research and Development. 2004, 5: 93-95. 10.1191/1463423604pc197xx.

Bateman H, Walter F, Elliott J: What happens next? Evaluation of a scheme to support primary care practitioners with a fledgling interest in research. Family Practice. 2004, 21: 83-86. 10.1093/fampra/cmh118.

Smith LFP: Research general practices: what, who and why?. British Journal of General Practice. 1997, 47: 83-86.

Griffiths F, Wild A, Harvey J, Fenton E: The productivity of primary care research networks. British Journal of General Practice. 2000, 50: 913-915.

Fenton F, Harvey J, Griffiths F, Wild A, Sturt J: Reflections from organization science of primary health care networks. Family Practice. 2001, 18: 540-544. 10.1093/fampra/18.5.540.

Department of Health: Research Capacity Development Strategy. 2004, London, Department of Health

Farmer E, Weston K: A conceptual model for capacity building in Australian primary health care research. Australian Family Physician. 2002, 31: 1139-1142.

Carter YH, Shaw S, Sibbald B: Primary care research networks: an evolving model meriting national evaluation. British Journal of General Practice. 2000, 50: 859-860.

Trostle J: Research Capacity building and international health: Definitions, evaluations and strategies for success. Social Science and Medicine. 1992, 35: 1321-1324. 10.1016/0277-9536(92)90035-O.

Albert E, Mickan S: Closing the gap and widening the scope. New directions for research capacity building in primary health care. Australian Family Physician. 2002, 31: 1038-1041.

Crisp BR, Swerissen H, Duckett SJ: Four approaches to capacity building in health: consequences for measurement and accountability. Health Promotion International. 2000, 15: 99-107. 10.1093/heapro/15.2.99.

Ryan, Wyke S: The evaluation of primary care research networks in Scotland. British Journal of General Practice. 2001, 154-155.

Gillies P: Effectiveness of alliances and partnerships for health promotion. Health Promotion International. 1998, 13: 99-120. 10.1093/heapro/13.2.99.

Pitkethly M, Sullivan F: Four years of TayRen, a primary care research and development network. Primary Care Research and Development. 2003, 4: 279-283. 10.1191/1463423603pc167oa.

Lester H, Carter YH, Dassu D, Hobbs F: Survey of research activity, training needs, departmental support, and career intentions of junior academic general practitioners. British Journal of General Practice. 1998, 48: 1322-1326.

North American Primary Care Research Group: What does it mean to build research capacity?. Family Medicine. 2002, 34: 678-684.

Smith R: Measuring the social impact of research. BMJ. 2001, 323: 528-10.1136/bmj.323.7312.528.

Sarre G: Capacity and activity in research project (CARP): supporting R&D in primary care trusts. 2002

Del Mar C, Askew D: Building family/general practice research capacity. Annals of Family Medicine. 2004, 2: 535-540.

Carter YH, Shaw S, Macfarlane F: Primary Care research team assessment (PCRTA): development and evaluation. Occasional paper (Royal College of General Practitioners). 2002, 81: 1-72.

Jowett S, Macleod J, Wilson S, Hobbs F: Research in Primary Care: extent of involvement and perceived determinants among practitioners for one English region. British Journal of General Practice. 2000, 50: 387-389.

Cooke J, Owen J, Wilson A: Research and development at the health and social care interface in primary care: a scoping exercise in one National Health Service region. Health and Social Care in the Community. 2002, 10: 435-444. 10.1046/j.1365-2524.2002.00395.x.

Raghunath AS, Innes A: The case of multidisciplinary research in primary care. Primary Care Research and Development. 2004, 5: 265-273.

Reagans R, Zuckerman EW: Networks, Diversity and Productivity: The Social Capital of Corporate R&D Teams. Organization Science. 2001, 12: 502-517. 10.1287/orsc.12.4.502.10637.

Ovretveit J: Evaluating Health Interventions. 1998, Buckingham, Open University

Meyrick J, Sinkler P: An evaluation Resource for Healthy Living Centres. 1999, London, Health Education Authority

Hakansson A, Henriksson K, Isacsson A: Research methods courses for GPs: ten years' experience in southern Sweden. British Journal of General Practice. 2000, 50: 811-812.

Bacigalupo B, Cooke J, Hawley M: Research activity, interest and skills in a health and social care setting: a snapshot of a primary care trust in Northern England. Health and Social Care in the Community.

Kernick D: Evaluating primary care research networks - exposing a wider agenda. British Journal of General Practice. 2001, 51: 63-

Owen J, Cooke J: Developing research capacity and collaboration in primary care and social care: is there enough common ground?. Qualitative Social Work. 2004, 3: 398-410. 10.1177/1473325004048022.

Hurst: Building a research conscious workforce. Journal of Health Organization and Management. 2003, 17: 373-384.

Gillibrand WP, Burton C, Watkins GG: Clinical networks for nursing research. International Nursing Review. 2002, 49: 188-193. 10.1046/j.1466-7657.2002.00124.x.

Campbell J, Longo D: Building research capacity in family medicine: Evaluation of the Grant Generating Project. Journal of Family Practice. 2002, 51: 593-

Cooke J, Nancarrow S, Hammersley V, Farndon L, Vernon W: The "Designated Research Team" approach to building research capacity in primary care. Primary Health Care Research and Development.

Innvaer S, Vist G, Trommald M, Oxman A: Health policy- makers' perceptions of their use of evidence: a systematic review. Journal of Health Services Research and Policy. 2002, 7: 239-244. 10.1258/135581902320432778.

NHS Service Delivery and Organisation National R&D Programme: National listening exercise. 2000, London, NHS SDO

Hanley J, Bradburn S, Gorin M, Barnes M, Evans C, Goodare HB: Involving consumers in research and development in the NHS: briefing notes for researchers. 2000, Winchester, Consumers in NHS Research Support Unit,

Frenk J: Balancing relevance and excellence: organisational responses to link research with decision making. Social Science and Medicine. 1992, 35: 1397-1404. 10.1016/0277-9536(92)90043-P.

National Audit Office: An international review on Governments' research procurement strategies. A paper in support of Getting the evidence: Using research in policy making. 2003, London, The Stationery Office.

Thomas P, While A: Increasing research capacity and changing the culture of primary care towards reflective inquiring practice: the experience of West London Research Network (WeLReN). Journal of Interprofessional Care. 2001, 15: 133-139. 10.1080/13561820120039865.

Rowlands G, Crilly T, Ashworth M, Mager J, Johns C, Hilton S: Linking research and development in primary care: primary care trusts, primary care research networks and primary care academics. Primary Care Research and Development. 2004, 5: 255-263. 10.1191/1463423604pc201oa.

Davies S: R&D for the NHS - Delivering the research agenda. 2005, London, National Coordinating Centre for Research Capacity Development

Department of Health.: Best Research for Best Health: A New National Health Research Strategy. The NHS contribution to health research in England: A consultation. 2005, London, Department of Health

Buxton M, Hanney S, Jones T: Estimating the economic value to societies of the impact of health research: a critical review. Bulletin of the World Health Organisation. 2004, 82: 733-739.

Department of Health.: The NHS as an innovative organisation: A framework and guidance on the management of intellectual property in the NHS. 2002, London, Department of Health

Sarre G: Trent Focus Supporting research and development in primary care organisations: report of the capacity and activity in research project (CARP). 2003

Department of Health: Research Governance Framework for Health and Social Care. 2001, London, Department of Health.

Hill J, Foster N, Hughes R, Hay E: Meeting the challenges of research governance. Rheumatology. 2005, 44: 571-572. 10.1093/rheumatology/keh579.

Shaw S: Developing research management and governance capacity in primary care organizations: transferable learning from a qualitative evaluation of UK pilot sites. Family Practice. 2004, 21: 92-98. 10.1093/fampra/cmh120.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2296/6/44/prepub

Acknowledgements

My warm thanks go to my colleagues in the primary care group of the Trent RDSU for reading and commenting on earlier drafts of this paper, and for their continued support in practice.

Author information

Authors and Affiliations

Primary Care and Social Care Lead, Trent Research and Development Unit (formerly Trent Focus Group), ICOSS Building, The University of Sheffield, 219 Portobello, Sheffield, S1 4DP, UK

Corresponding author

Correspondence to Jo Cooke .

Additional information

Competing interests.

The author(s) declare that they have no competing interests.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Cooke, J. A framework to evaluate research capacity building in health care. BMC Fam Pract 6 , 44 (2005). https://doi.org/10.1186/1471-2296-6-44

Received : 12 June 2005

Accepted : 27 October 2005

Published : 27 October 2005

DOI : https://doi.org/10.1186/1471-2296-6-44

  • Capacity Building
  • Research Capacity
  • Research Skill
  • Traditional Outcome
  • Research Capacity Building

Measuring research capacity development in healthcare workers: a systematic review

BMJ Open, Volume 11, Issue 7

  • http://orcid.org/0000-0002-8765-7384 Davide Bilardi 1 , 2 ,
  • http://orcid.org/0000-0001-8818-8148 Elizabeth Rapa 3 ,
  • http://orcid.org/0000-0001-7628-8408 Sarah Bernays 4 , 5 ,
  • http://orcid.org/0000-0003-2273-5975 Trudie Lang 1
  • 1 Nuffield Department of Medicine , University of Oxford Centre for Tropical Medicine and Global Health , Oxford , UK
  • 2 Fondazione Penta Onlus , Padova , Italy
  • 3 Department of Psychiatry , University of Oxford , Oxford , UK
  • 4 School of Public Health , University of Sydney–Sydney Medical School Nepean , Sydney , New South Wales , Australia
  • 5 Public Health and Policy , London School of Hygiene & Tropical Medicine , London , UK
  • Correspondence to Dr Davide Bilardi; davide.bilardi{at}gtc.ox.ac.uk

Objectives A key barrier in supporting health research capacity development (HRCD) is the lack of empirical measurement of competencies to assess skills and identify gaps in research activities. An effective tool to measure HRCD in healthcare workers would help inform teams to undertake more locally led research. The objective of this systematic review is to identify tools measuring healthcare workers’ individual capacities to conduct research.

Design Systematic review and narrative synthesis using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist for reporting systematic reviews and narrative synthesis, and the Critical Appraisal Skills Programme (CASP) checklist for qualitative studies.

Data sources 11 databases were searched from inception to 16 January 2020. The first 10 pages of Google Scholar results were also screened.

Eligibility criteria We included papers describing the use of tools to measure or assess HRCD at an individual level among healthcare workers involved in research. Qualitative, mixed and quantitative methods were all eligible. The search was limited to English language publications only.

Data extraction and synthesis Two authors independently screened and reviewed studies using Covidence software, and performed quality assessments using the extraction log validated against the CASP qualitative checklist. The content method was used to define a narrative synthesis.

Results The titles and abstracts for 7474 unique records were screened and the full texts of 178 references were reviewed. 16 papers were selected: 7 quantitative studies; 1 qualitative study; 5 mixed methods studies; and 3 studies describing the creation of a tool. Tools with different levels of accuracy in measuring HRCD in healthcare workers at the individual level were described. The Research Capacity and Culture tool and the ‘Research Spider’ tool were the most commonly defined. Other tools designed for ad hoc interventions with good generalisability potential were identified. Three papers described health research core competency frameworks. All tools measured HRCD in healthcare workers at an individual level with the majority adding a measurement at the team/organisational level, or data about perceived barriers and motivators for conducting health research.

Conclusions Capacity building is commonly identified with pre/postintervention evaluations without using a specific tool. This shows the need for a clear distinction between measuring the outcomes of training activities in a team/organisation, and effective actions promoting HRCD. This review highlights the lack of globally applicable comprehensive tools to provide comparable, standardised and consistent measurements of research competencies.

PROSPERO registration number CRD42019122310.

  • organisational development
  • organisation of health services
  • medical education & training
  • public health

Data availability statement

Data are available upon reasonable request. All data relevant to the study are included in the article. The complete data set generated by the systematic review and included in the extraction log is available upon request.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjopen-2020-046796

Strengths and limitations of this study

Thoroughly conducted systematic review collecting data from all major existing databases and grey literature.

Topic not previously addressed in other reviews searching for tools to measure health research capacity building at individual level.

Brief overview of the identified tools to measure health research capacity building at individual level, highlighting their strengths and weaknesses.

Complex identification of relevant studies due to the lack of clarity on a common definition and terminology to identify health research capacity development.

None of the studies use the standard reporting procedures for qualitative or quantitative research.

Introduction

In 2004, the Global Forum for Health Research highlighted the challenge for low and middle-income countries to have the capacity to perform effective and locally led health research that addresses the major health problems affecting their own populations. 1–3 Twenty years later, low and middle-income countries still carry 90% of the global disease burden, but only 10% of global funding for health research is devoted to addressing these persistent health challenges. 4 Health research capacity development (HRCD) for healthcare workers has been recognised as a critical element in overcoming global health challenges, especially in low and middle-income countries. 5 For too long, HRCD in low and middle-income countries has been documented through training programmes which enable local teams to participate in externally sponsored trials, creating a false appearance of growth and generating dependence on foreign support. 6 7

The process of progressive empowerment is usually referred to as capacity development. 8 This term has been used in multiple areas and applied in different sectors to develop new or existing competencies, skills and strategies at a macro or individual level. 9 In the field of health, research capacity development should support healthcare workers in generating local evidence-based results to inform policy and improve population health. The three health-related Millennium Development Goals, and more recently the targets ‘B’ and ‘C’ of the Sustainable Development Goals, all support the adoption of new strategies to strengthen the capacity of healthcare workers in all countries in performing their job and engaging in research. 10–12 One of the critical barriers in supporting HRCD is the lack of empirical measurement of competencies in relation to the performance of research activities. Existing frameworks and tools have been developed for a particular purpose in a particular context. 13 14 Others have identified barriers that healthcare workers encounter in engaging in research or have monitored and evaluated targeted training activities. 15 This systematic review aims to identify tools to measure individual healthcare workers’ capacities to conduct research.

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist 16 for reporting systematic reviews and narrative synthesis and the Critical Appraisal Skills Programme (CASP) checklist 17 on critical appraisal for qualitative studies were used to design this systematic review and to refine the extraction log according to recognised guidelines.

Inclusion and exclusion criteria

The aim of the systematic review was to identify existing tools which measure individual capacities in conducting research in healthcare workers. The inclusion and exclusion criteria were defined in advance and documented using an adapted version of a SPIDER table ( table 1 ). The primary population of interest were all health-related professionals or healthcare workers involved in research activities. Healthcare workers delivering health services when research was not considered as the focus of the study were excluded. Occupational health research was excluded. Studies about volunteers, defined as people offering their services to support health activities with no specific training as health professionals, were also excluded. Initially, only healthcare workers working in low and middle-income countries were included, but this limitation was removed to identify any tool measuring HRCD in any setting. The Phenomenon of Interest was defined as: assessing HRCD; or identifying tools, frameworks and templates designed to assess HRCD. A comprehensive range of terms including synonyms for ‘assess’, ‘tool’ or ‘development’ was used. Studies were excluded which mentioned components that could be considered to assess, measure and ‘give evidence to’ research capacity development, but were not presented in any capacity development context. In addition, since the concept of capacity development is widely applied to different settings, studies on areas unrelated to health, such as ‘air pollution’, ‘financial capacity’ or ‘tobacco’, were also excluded. The study design criteria were broad to include qualitative, quantitative and mixed methods papers. Further criteria of eligibility included in the SPIDER table refer to the quality of the study (Evaluation) and the Research type.

SPIDER diagram—inclusion and exclusion criteria
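
Purely as an illustration (not part of the published review protocol), the eligibility criteria described above could be encoded as a simple screening configuration; all field names and the helper function below are hypothetical.

```python
# Hypothetical sketch of the SPIDER-style eligibility criteria described above;
# the structure and names are illustrative only, not the authors' actual protocol.
eligibility = {
    "sample": {
        "include": ["healthcare workers involved in research activities"],
        "exclude": [
            "volunteers without professional health training",
            "healthcare workers delivering services with no research focus",
            "occupational health research",
        ],
    },
    "phenomenon_of_interest": {
        "include": [
            "assessing HRCD",
            "tools, frameworks or templates designed to assess HRCD",
        ],
        "exclude": [
            "capacity development unrelated to health (e.g. air pollution, financial capacity, tobacco)",
        ],
    },
    "design": ["qualitative", "quantitative", "mixed methods"],
    "language": ["English"],
}


def passes_basic_screen(record: dict) -> bool:
    """Toy check: the record must at least match the design and language criteria."""
    return (record.get("design") in eligibility["design"]
            and record.get("language") in eligibility["language"])


print(passes_basic_screen({"design": "mixed methods", "language": "English"}))  # True
```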

Information sources and search strategy

Eleven databases were searched from inception to 16 January 2020: Ovid MEDLINE; Ovid Embase; Ovid PsycINFO; Ovid Global Health; EBSCO CINAHL; ProQuest Applied Social Sciences Index & Abstracts (ASSIA); ProQuest Sociological Abstracts; ProQuest Dissertations & Theses Global; Scopus; Web of Science Core Collection; and the WHO Global Index Medicus Regional Libraries. The first 10 pages of results from Google Scholar were also screened. The search strategies used free text terms and combinations of the relevant thesaurus terms, limited to English language publications only, to combine terms for capacity building, measuring and health research. The ‘NOT’ command was used to exclude papers about students, postgraduate students, tobacco, air pollution and a variety of other concepts to minimise the number of irrelevant results (see box 1 for a full set of search strategies).

Search strategy

Database: MEDLINE (Ovid MEDLINE Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Ovid MEDLINE Daily and Ovid MEDLINE) 1946 to present.

Capacity Building/ (1965)

(capacit* adj2 build*).ti,ab. (5789)

(capacit* adj2 develop*).ti,ab. (3591)

(capacit* adj2 strengthen*).ti,ab. (924)

(competenc* adj2 improv*).ti,ab. (1460)

((professional* adj2 develop*) and (competenc* or capacit*)).ti,ab. (1747)

1 or 2 or 3 or 4 or 5 or 6 (13649)

Mentoring/ (820)

mentor*.ti,ab. (13369)

(assess* or measur* or evaluat* or analys* or tool* or equip*).ti,ab. (9653076)

“giv* evidence”.ti,ab. (3814)

framework*.ti,ab. (231138)

8 or 9 or 10 or 11 or 12 (9763562)

Research/ (196782)

clinical.ti,ab. (3158817)

(health* and research*).ti,ab. (337604)

14 or 15 or 16 (3588891)

7 and 13 and 17 (3433)

limit 19 to English language (3346)

(student* or graduate or graduates or postgraduate* or “post graduate*” or volunteer* or communit* or tobacco or “climate change” or “air pollution” or occupational or “financial capacity” or informatics or “IT system” or “information system” or transport or “cultural competenc*” or disabili* or trauma).ti,ab. (1828113)

20 not 21 (1673)

Google Scholar—screen the first 10 pages of results

Sorted by relevance:

(“capacit* build*”|“build* capacit*”|“capacit* develop*”|“develop* capacit*”|“capacit* strengthen*”|“strengthen* capacit*”|“professional* develop*”|“completenc* improv*”|“improv* competenc*”)(“health* research*”|clinical) https://scholar.google.co.uk/scholar?q= (%22capacit*+build*%22%7C%22build*+capacit*%22%7C%22capacit*+develop*%22%7C%22develop*+capacit*%22%7C%22capacit*+strengthen*%22%7C%22strengthen*+capacit*%22%7C%22professional*+develop*%22%7C%22completenc*+improv*%22%7C%22improv*+competenc*%22)(%22health*+research*%22%7Cclinical)&hl=en&as_sdt=0,5

Study selection

Two researchers, DB and ER, independently screened and reviewed studies using the Covidence systematic review software. 18 In case of disagreement, DB and ER discussed the abstracts in question. After consensus on inclusion was reached, the full texts of all included studies were rechecked for inclusion by DB and confirmed by ER.

Study analysis procedure

Data from the selected papers were extracted, and quality assessments were performed, using an extraction log created and validated against the CASP checklist 17 for the critical appraisal of qualitative studies. The macro areas of interest in the log were: general information on the paper, such as author, title, main focus and study design. The source of funding, conflicts of interest and ethics approval were also recorded. A separate section of the extraction log recorded the characteristics of the tool used or described in each selected paper (figure 1). The extraction log also included specific sections covering the study design, the methodology and the main findings of each paper. Furthermore, a dedicated section of the log collected data on the quality of each study, recording selection biases and a critical appraisal derived from the CASP checklist. If a definition of capacity development was given, the definition was collected. Some of these sections of the extraction log are not shown in figure 1, which focuses on the description of the identified tool. A content analysis approach was used to develop the narrative presented in the Discussion section.


Figure 1: Extraction log.
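As a rough illustration, each included paper can be represented as one record in the extraction log. The sketch below mirrors the macro areas described above (general information, tool characteristics, design and methods, findings, quality appraisal); the field names are hypothetical and do not reproduce the actual log shown in figure 1.

# Hypothetical sketch of one extraction-log record. The field names are
# illustrative; the actual log is shown in figure 1.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    # General information on the paper
    author: str
    title: str
    main_focus: str
    study_design: str
    funding_source: Optional[str] = None
    conflicts_of_interest: Optional[str] = None
    ethics_approval: Optional[str] = None
    # Characteristics of the tool used or described
    tool_name: Optional[str] = None
    number_of_items: Optional[int] = None
    measurement_levels: List[str] = field(default_factory=list)  # eg ["individual", "team"]
    # Methods and findings
    methodology: Optional[str] = None      # quantitative, qualitative or mixed methods
    main_findings: Optional[str] = None
    # Quality appraisal, derived from the CASP checklist
    selection_bias_notes: Optional[str] = None
    casp_appraisal: Optional[str] = None
    capacity_development_definition: Optional[str] = None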

Patient and public involvement

Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.

Database search and results screening

In December 2018, the first round of the search was performed in 11 different databases and in Google Scholar using the search strategy described in box 1. A total of 13 264 records were found. After 6905 duplicates were removed, 6359 unique records remained for title and abstract screening (table 2), which was performed throughout 2019. In January 2020, an additional search for papers published or indexed in 2019 was performed using the same search strategy; this returned 15 775 papers, of which 1118 remained after removal of duplicates. These papers were added to the 6359 papers identified in the first search, giving a total of 7474 unique papers for title and abstract screening (three further duplicate records were removed in the Covidence software).

Search results

The 7474 unique records identified were uploaded to the Covidence systematic review software. Two researchers, DB and ER, independently screened the studies, including or excluding them according to the criteria in the SPIDER table (table 1). A total of 7296 studies were considered irrelevant at this stage. The full-text papers for the remaining 178 references were reviewed. Reasons for exclusion were identified by streamlining the SPIDER table criteria into three main criteria: wrong setting, irrelevant study design and wrong focus of the study. A reason for exclusion was assigned to each paper. All 178 studies described some form of activity to measure the competencies related to performing health research. Thirty were excluded because they were literature reviews on a different aspect of health research or because they described a general perspective on health capacity development without offering any specific measurement or without reference to research. A further 42 studies were excluded because of the wrong setting, since competencies were measured at the level of research institutions or within a specific network. An additional 90 studies were excluded because the study design did not match the inclusion criteria: 38 described the use of a measurement tool tailored to the context (eg, a specific profession, intervention or setting) rather than to the individual level; 34 made no mention of a specific tool to measure HRCD; and the final 18 reported the use of an evaluation tool that was an ad hoc pre/postintervention questionnaire with little potential for applicability in a context other than the one described in the paper. A total of 162 studies were therefore excluded, leaving 16 studies for this review (figure 2).

Figure 2: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) screening diagram.
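The record counts reported above can be reconciled with a few lines of arithmetic. The sketch below simply restates the numbers from the text as a bookkeeping check; it is not part of the review methodology.

# Bookkeeping check on the screening counts reported in the text.
first_search = 13_264
duplicates_first = 6_905
unique_first = first_search - duplicates_first                    # 6359 records from the 2018 search

unique_update = 1_118                                             # 2019 update after de-duplication
covidence_duplicates = 3
screened = unique_first + unique_update - covidence_duplicates    # 7474 records screened

full_text_reviewed = 178
excluded_at_title_abstract = screened - full_text_reviewed        # 7296 records excluded

excluded_at_full_text = 30 + 42 + 90                              # reviews/general, wrong setting, study design
included = full_text_reviewed - excluded_at_full_text             # 16 studies included

assert (screened, excluded_at_title_abstract, included) == (7_474, 7_296, 16)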

Analysis of the findings across the selected papers

A total of 16 studies met the inclusion criteria set for this systematic review. 19–34 The 16 articles were analysed using the extraction log created and validated against the CASP qualitative checklist.

The results are summarised in table 2. None of the papers was published before 2006, and nine were published after 2014. 20 21 23–26 31 33 34 The majority (n=13) applied a tool in high-income settings. 19 20 22–24 26–32 34 Seven papers described the use of tools in Australia, 20 22 24 26 28 29 34 three in low and middle-income countries (one in Ghana, Kenya, Malawi and Sri Lanka, 25 one in the Pacific Islands 21 and one in the Philippines 33 ), one in Europe (Norway), 19 one in the USA 32 and one measured HRCD in a group linked to a specific intervention located in multiple areas of the world. Three papers described the creation of a tool without applying it to any specific context, 23 30 31 but all three tools were designed by research groups in high-income countries (one in the USA and two in the UK).

The selected studies applied quantitative, qualitative and mixed methods analyses. The most common approach (n=7) was to generate quantitative data using an HRCD tool. 20 24 26–28 33 34 One-third of the studies (n=5) used a mixed methods approach 19 21 22 25 29 ; quantitative tools were combined with semistructured interviews, or in some cases qualitative questions were added to the questionnaire. The three studies describing the creation of a tool were not analysed under this methodological category.

Of the 16 selected studies, three used the term ‘capacity development’, 23 25 30 and two included a definition of the concept. 25 30 Seven papers used ‘capacity building’, 20–22 24 26 28 31 of which four also included a definition. 20 22 24 28 In two papers, the capacity building definition was associated with the definition of ‘research culture’. 20 22 Two additional papers used alternative generic terms like ‘research capacity’ 33 or ‘research self-efficacy’. 32 Four papers did not refer to any specific term and therefore no definition was given. 19 27 29 34

Five of the 16 selected papers openly declared no conflicts of interest. 22–24 28 31 Eight stated the source of funding used to carry out the activities described. 19 21 23 27–31 The number of participants varied from 28 enrolled participants in a qualitative study 21 to 3500 users of an online measurement tool. 27

Analysis of the tools from the selected papers

The tools described or used in the 16 selected papers varied in nature, length and applicability. In general, even where there were similarities, each paper described a different perspective on the use of a tool. Four papers applied a questionnaire-type tool to assess research competencies and skills. 19 21 25 33 These questionnaires ranged in length from 19 21 to 59 19 health research capacity-related questions, with the addition of open-ended qualitative questions in two studies 19 21 and a structured interview in another. 25

Three studies 22 24 34 used, with a range of adaptations, the Research Capacity and Culture tool, and one study 20 revised this tool into a Research Capacity and Context tool, referencing the Research Capacity and Culture tool as its primary source. Another recurring tool was the 'Research Spider'. 28 29 Again, the original tool had been adapted to the context, and in one case 29 the tool was used as a base for qualitative research on HRCD. Two further papers described tools designed ad hoc to measure the impact of an intervention (CareerTrac 27 and Cross-Sectional Electronic Survey 26 ). These two papers were not excluded under the pre/postintervention criterion because the intervention operated at a programme level and the tool used to measure HRCD was the main focus of the paper. Another paper described a tool for a specific category of healthcare workers (the Nursing Research Self-Efficacy Scale, NURSES). 32 Three papers 23 30 31 focused on the creation of a new tool and described the process of identifying a set of competencies required to run health research. Two of them defined the outcome as a 'core competency framework'; 23 31 the third defined the outcome of the analysis as a 'set of indicators'. 30

In terms of the target population, the identified tools aimed to measure HRCD in a range of healthcare worker professions. Five of the papers focused on measuring HRCD in allied health professionals (AHPs). 20 22 24 26 34 Nurses were the main focus in two other studies, 19 32 and four studies applied a tool to a range of health professions (from laboratory scientists to data managers). 21 25 28 29 Two other papers focused on groups linked to a specific intervention. 27 33 All 16 papers included, alongside healthcare workers, representatives of technical professions in health such as managers, directors, faculty members and consumer organisation representatives. The three papers describing the creation of a new tool suggest that these tools would be applicable to all research roles. 23 30 31

In line with the inclusion criteria, the main level of measurement of the tools was the individual level. Seven papers measured HRCD at the individual level only. 19 23 28 29 31–33 Three papers supplemented the individual level of measurement with information on the perceived barriers to performing health research 21 26 29 ; of these three, two also explored what motivates healthcare workers to become involved in health research. 26 29 The five studies that used the Research Capacity and Culture tool and its variants measured HRCD at the individual level as well as at the team and organisational levels. 20 22 24 25 34 One paper described the creation of a tool designed to be used at the organisational level, but which also embedded a measurement of HRCD at the individual level. 30

The most common way a selected tool was validated was by referencing the main paper that described the tool and its validation process (n=6). 23 28 29 32–34 This was the case for some of the ad hoc questionnaires, 23 33 34 for the 'Research Spider' tool 28 29 and for the NURSES tool. 32 Papers that described an original process or used modified versions of an existing tool validated the tool through a contextual validation process described in the paper. 21 22 24 25 31 These validation processes included consultation with a panel of experts 22 24 31 or an iterative validation process. 21 25 One paper stated that the tool used was validated without referencing the process or the tool. 20

Overall, only two papers 23 31 focused specifically on tools to measure HRCD at a wider level, without linking the measurement to a specific group or geographical area, as was done in the majority of papers. 19 24 25 28 29 33 In four cases, the tools described were adapted to identify determinants of, or barriers to, HRCD in a defined setting 20 30 34 or to promote HRCD in relation to a specific disease or research topic. 21 In other cases, the papers focused on a tool aiming to assess the impact of specific interventions or programmes on HRCD. 26 27

Summary of evidence

This systematic review aimed to identify tools that measure individual capacities in conducting research in healthcare workers; the 16 included articles 19–34 demonstrate that tools to measure HRCD in healthcare workers are available, even if they are limited in number. In most cases, the identified tools do not originate from the need to measure and foster HRCD as a necessary strategy to promote research capacity. There is, therefore, a need to design more comprehensive tools which are globally applicable and able to provide comparable, standardised and consistent measurements of research competencies.

The importance of measuring HRCD has only been recognised recently. 15 As the date of publication of the identified papers shows, the appreciation of the contribution that health research can offer in capacity development at a personal level only began in the first decade of this new millennium. Almost half of the selected papers (n=7) refer to studies whose data have been collected after 2014. 20 21 24 26 31 33 34 Of note is the high number of new publications which were retrieved from the academic databases (1118 papers) when the search strategy was rerun in 2020.

Questionnaires were the most commonly used method for assessing research skills and competencies. Almost two-thirds of the papers (n=10) 19 20 22 24 26 28 29 32–34 based their measurement of research skills at a personal level on a 5-point Likert scale (n=6) 19 26 28 29 32 33 or a 10-point scale (n=4). 20 22 24 34 This choice highlights the need for a validated quantitative tool based on a set of competency-related questions that can bring standardisation, comparability and consistency across different roles and contexts. However, the extensive use of mixed methods, combining quantitative questionnaires with other qualitative instruments, reflects the fact that HRCD depends on a complex series of components that need to be identified both qualitatively and quantitatively.
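One practical consequence of this mix of 5-point and 10-point scales is that raw scores from different tools are not directly comparable. A simple workaround, sketched below under the assumption of linear rescaling (none of the included papers applies this step), is to map every item onto a common 0–1 range before comparison.

# Illustrative only: linearly rescale Likert-type scores from different tools
# onto a common 0-1 range so that, for example, 4 on a 1-5 scale and 7.75 on a
# 1-10 scale compare as equal. Not a method used by the reviewed papers.
def rescale(score: float, minimum: float, maximum: float) -> float:
    """Map a raw item score from [minimum, maximum] onto [0, 1]."""
    if not minimum <= score <= maximum:
        raise ValueError("score outside the declared scale")
    return (score - minimum) / (maximum - minimum)

print(rescale(4, 1, 5))      # 0.75
print(rescale(7.75, 1, 10))  # 0.75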

By not limiting the selection of articles to tools used in low and middle-income countries, this review has revealed that most of the identified tools were used in high-income settings. It is important to note that excluding pre/postintervention assessments significantly reduced the inclusion of studies performed in low and middle-income countries. This finding highlights that although health systems in low and middle-income countries may benefit from providing evidence for HRCD, 5 they are rarely the focus of the HRCD literature. Most measurements of HRCD in lower income settings appear, in fact, to be narrowly linked to measuring the effectiveness of training offered for a specific study or limited to a particular disease. Even when the perspective is broader than a particular study, it is mostly limited to the evaluation and sustainability of training programmes and is not linked to a plan for career progression and research competency acquisition. More attention should therefore be given to creating tools that are able to measure, support and promote long-lasting research capabilities as part of professional growth for healthcare workers.

Three essential findings of this systematic review support a change in the perception of HRCD and the tools needed to measure it. First, many of the excluded papers (42 out of 162 excluded papers from the last round of analysis) focused exclusively on the institutional level of measuring research capacity. This is mostly because training interventions are designed to prepare a team to run a study and rarely to promote individual HRCD. 1 35 36 In some cases, the measurement via a tool is also an exercise to demonstrate the investment in training activities for reporting purposes. 37 38 It is therefore important to start promoting a more effective research culture which is independent of specific diseases or roles. This progression could be achieved by championing systems which measure the changes in research capacities at a team and personal level using a globally applicable tool. Most of the tools excluded were evaluation tools designed for, or used in, a specific setting and thus not suitable for a comparable, standardised and consistent analysis of long-term research competency acquisition strategies.

Second, papers that focused on measuring HRCD at the individual level confirmed that research is seen as an opportunity to learn the cross-cutting skills needed in healthcare. A defined set of standardised competencies required to conduct research could be used to measure an individual's, a team's and an organisation's abilities. This was the focus of two papers 23 31 which identified a framework of core competencies. Most of the tools (n=7) were designed to be applied to a wide variety of health professions. 21 23 25 28–31 HRCD can be accessed at different entry points depending on the specific job title, but the set of skills acquired is common and shared among the research team. 1 The approach to assessing these inter-related competencies should therefore be global and not role or disease based. 39 Measurement at an individual level is essential to promote consistent and coherent career progression for each person and role. 40 However, the overall capability to run research programmes should be measured at a team level, where all roles and competencies complement each other and skills are made visible and measurable as a whole against an overall competency framework. Individual and institutional/team levels are therefore two aspects of HRCD that grow together, supported by a common comparable, standardised and consistent tool.
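To make the idea of complementary competencies concrete, individual scores against a shared framework can be rolled up into a team profile. The sketch below is purely hypothetical: the competency names, the 0–1 scores and the 'strongest member' aggregation rule are assumptions for illustration, not taken from the reviewed frameworks.

# Hypothetical illustration: roll individual competency scores (0-1) up to a
# team profile so that complementary strengths become visible as a whole.
from statistics import mean

individual_scores = {
    "clinician":     {"protocol writing": 0.4, "quantitative methods": 0.8, "grant writing": 0.2},
    "data manager":  {"protocol writing": 0.3, "quantitative methods": 0.9, "grant writing": 0.1},
    "lab scientist": {"protocol writing": 0.7, "quantitative methods": 0.5, "grant writing": 0.6},
}

competencies = sorted({c for scores in individual_scores.values() for c in scores})

# Team coverage: the strongest member's score for each competency.
team_profile = {c: max(s.get(c, 0.0) for s in individual_scores.values()) for c in competencies}

# Individual career-progression view: each person's mean score across the framework.
personal_means = {person: round(mean(s.values()), 2) for person, s in individual_scores.items()}

print(team_profile)
print(personal_means)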

Third, the lack of a standard definition of HRCD can lead to post-training evaluations being categorised as HRCD activities. Although pre/post-training evaluations are important, it might be helpful to define what constitutes a 'structured action' to promote HRCD. As previously mentioned, the term 'capacity development' is not universally used, with many synonyms such as 'research capacity' or 'capacity strengthening', creating the possibility of different interpretations. Furthermore, inconsistent terminology was found in describing activities in support of HRCD that were in reality very similar (eg, workshop, training, course). Steinert et al 41 suggest that there should be a standard definition in the context of educational capacity development. Such a definition, alongside a common taxonomy to describe health professions, would support the identification of HRCD as a defined process with specific characteristics rather than a general effort at research training.

The most common tool identified in this review was the Research Capacity and Culture tool. 20 22 24 34 It consists of 52 questions that examine participants' self-reported success or skill in a range of areas related to research capacity or culture across three domains: the organisation (18 questions), the team (19 questions) and the individual (15 questions). The tool also includes questions on perceived barriers and motivators for undertaking research. Respondents are asked to rate a series of statements relevant to these three domains on a scale of 1–10, with 1 being the lowest and 10 the highest possible skill or success level. It represents a good example of a comprehensive tool. As confirmed by the review findings, a potential limitation is its application mainly in an Australian context and almost exclusively to measure HRCD in AHPs. 22 24 34 The generalisability of the tool therefore remains to be confirmed. Nevertheless, the Research Capacity and Culture tool is a strong example of how a tool refined around a context and a specific health profession can be an incentive to measure HRCD.
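To make the structure just described concrete, the sketch below encodes the three domains and the 1–10 response scale and summarises each domain with a median. The domain sizes and scale follow the description above; the example responses and the choice of the median as a summary are illustrative assumptions only.

# Sketch of the Research Capacity and Culture tool's structure as described
# above: 52 statements across three domains, each rated 1-10. The responses
# and the median summary are illustrative assumptions, not the published tool.
import random
from statistics import median

DOMAINS = {"organisation": 18, "team": 19, "individual": 15}   # 52 items in total
LOWEST, HIGHEST = 1, 10                                         # 1 = lowest, 10 = highest skill/success

random.seed(0)
responses = {
    domain: [random.randint(LOWEST, HIGHEST) for _ in range(n_items)]
    for domain, n_items in DOMAINS.items()
}

domain_summary = {domain: median(scores) for domain, scores in responses.items()}
print(domain_summary)   # per-domain median of the 1-10 ratings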

Another tool highlighted by this review was the ‘Research Spider’ tool. 28 29 42 This tool collects information on individual research experience and interest in research skill development in 10 core areas. These include ‘writing a research protocol’, ‘using quantitative research methods’, ‘publishing research’, ‘finding relevant literature’ and ‘applying for research funding’. In each area, the level of experience is measured on a 5-point Likert scale, from 1 (no experience) to 5 (high experience). The primary aim of the ‘Research Spider’ is to be a flexible tool. This flexibility is confirmed in two studies 28 29 which used the ‘Research Spider’, with one 28 using it as the main measurement, and the other 29 as a quantitative base for qualitative semistructured interviews. The advantage of this tool is that it provides a visual overview of personal research competencies. However, although the limited number of measurement areas (n=10) makes the tool a good initial evaluation instrument, it does not offer a specification of the subskills of each area.
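Because the 'Research Spider' is designed to give a visual overview, its scores are naturally drawn as a radar (spider) plot. The sketch below assumes matplotlib and uses the five example areas named above with invented scores; it illustrates the plotting idea, not the published instrument.

# Illustrative radar plot of 'Research Spider'-style scores (1-5 per area).
# Area names follow the examples quoted above; the scores are invented.
import numpy as np
import matplotlib.pyplot as plt

areas = ["Writing a research protocol", "Using quantitative research methods",
         "Publishing research", "Finding relevant literature",
         "Applying for research funding"]
scores = [3, 4, 2, 5, 2]                    # 1 = no experience, 5 = high experience

angles = np.linspace(0, 2 * np.pi, len(areas), endpoint=False).tolist()
angles += angles[:1]                        # close the polygon
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(areas, fontsize=8)
ax.set_ylim(0, 5)
ax.set_title("Research Spider profile (illustrative)")
plt.tight_layout()
plt.show()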

A critical mention should be reserved for the two papers which described the creation of a comprehensive research core competency framework. 23 31 Although neither paper describes a specific tool (competency scores are visualised using a spider diagram), these studies present the most accurate overview of the skills required to run health-related research programmes. As mentioned before, a tool that applies a scoring system to the list of competencies identified by these frameworks has the potential to be widely applicable and reliable. This wide applicability, together with the absence of explicit biases in measuring improvement in research skills, could foster a more robust approach to research in health. Measuring HRCD independently of specific interventions would maximise the benefit of research at every level. At a personal level, it would clarify a potential career progression path, highlighting possible gaps; at the team level, it would support a multidisciplinary approach to health challenges; and at an institutional level, it would make the know-how generated by the international scientific community accessible to a broader group of local health workers. Overall, health practice at a global scale would benefit from the incentive to get involved in research that comes from measuring its impact on improving competencies. Positive outcomes of measuring HRCD could thus give the universal transferability and applicability of research methodology and results a higher priority in the design of health research projects.

Limitations of the systematic review

Methodological limitations are recognised for this systematic review. First, the lack of a common definition and terminology for HRCD complicated the search strategy. A lengthy iterative process was necessary when developing the search strategies for the databases in order to include all the possible variants used to define 'tool', 'capacities' and 'development'. Despite this effort, some studies may have been missed. Second, few of the included studies referenced a standard reporting procedure, despite the availability of reporting standards for qualitative and quantitative research 43–45 as well as for mixed methods research. 46 Third, while this review has attempted to be as comprehensive as possible, some sources might not have been detected owing to the challenge of finding all the relevant grey literature and the restriction to English-language sources. Finally, it was not possible to analyse the psychometric properties of each identified tool because of inconsistent reporting. Other limitations typical of reviews may also apply.

Conclusions

Sixteen studies using or describing tools to measure HRCD were identified and analysed in this systematic review. 19–34 It was common for studies to equate capacity development with pre/postintervention evaluations, or to evaluate capacity development generically without using a tool. There is a need for a clear distinction between simply measuring the outcomes of training activities for healthcare workers and effective action to promote HRCD for healthcare workers.

The most frequently described tools were the Research Capacity and Culture tool 20 22 24 34 and the 'Research Spider' tool. 28 29 A variety of other, mostly questionnaire-based, tools were identified, and in most cases they may be applicable more broadly than the specific context described in the paper. Two frameworks systematising research core competencies were identified. 23 31 The potential of tools derived from these frameworks could be significant. The applicability of each tool depends on the context and on the level of accuracy needed. Such tools could be routinely incorporated into standard personal development reviews in order to consistently support capacity development in research studies and organisations.

Future directions for HRCD include the design of a standardised, comparable and consistent tool to measure individual HRCD that is not tied to training evaluation but instead supports a long-term research competency acquisition strategy. In addition, harmonising the definitions and terminology used to identify HRCD actions and processes could facilitate the standardisation and comparability of HRCD strategies.

Ethics statements

Patient consent for publication.

Not required.

Acknowledgments

The authors are sincerely thankful for the immense and competent support of Elinor Harriss, Librarian of the Bodleian Health Care Libraries; Rebekah Burrow, who advised on the steps and tools needed to perform the present systematic review; and Filippo Bianchi, a DPhil colleague who provided foundational knowledge on systematic reviews.

  • Franzen SRP ,
  • Chandler C ,
  • Research GFfH, editor
  • Hoelscher M , et al
  • Laabes EP ,
  • Zawedde SM , et al
  • Lusthaus C ,
  • Adrien M-H ,
  • Perstinger M
  • Meyers DC , et al
  • Lansang MA ,
  • UN General Assembly
  • Smith H , et al
  • Bauer D , et al
  • Liberati A , et al
  • The Critical Appraisal Skills Programme UK
  • Akerjordet K ,
  • Severinsson E
  • Alison JA ,
  • Zafiropoulos B ,
  • Ekeroma AJ ,
  • Kenealy T ,
  • Shulruf B , et al
  • Golenko X , et al
  • Furtado T ,
  • Boggs L , et al
  • Hughes I , et al
  • Njelesani J ,
  • Dacombe R ,
  • Palmer T , et al
  • Petersen M ,
  • Raghavan R ,
  • Farmer EA ,
  • Sonstein SA ,
  • Namenek Brouwer RJ ,
  • Gluck W , et al
  • Swenson-Britt E ,
  • Torres GCS ,
  • Estrada MG ,
  • Sumile EFR , et al
  • Williams C ,
  • Miyazaki K ,
  • Borkowski D , et al
  • Siribaddana S
  • Lyytikainen M
  • Siribaddana S , et al
  • Ijsselmuiden C ,
  • Marais DL ,
  • Becerra-Posada F , et al
  • Reeder JC ,
  • DeGruttola V ,
  • Dixon D , et al
  • Steinert Y ,
  • Centeno A , et al
  • Morgan S , et al
  • Lipsey MW ,
  • Frambach JM ,
  • van der Vleuten CPM ,

Contributors DB and ER designed and conducted the systematic review. DB wrote the draft of the systematic review and revised it according to the commentaries of ER, SB and TL. DB provided the final version of the manuscript. ER critically reviewed the manuscript and substantially contributed to the final version of the manuscript. SB critically reviewed both the design of the systematic review and the manuscript, and was involved in the development of meaningful inclusion criteria. TL critically reviewed the design of the study, made important suggestions for improvement, critically reviewed the manuscript and substantially contributed to the final version of the manuscript. All authors approved the final version of the manuscript.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

