Soft Skills In Research: Effective Communication And Teamwork

Uncover the value of soft skills in scientific research and the immense impact they can have on the world of exploration and discovery.


In research, innovation and discovery cannot rely solely on technical expertise. Soft skills, including communication, teamwork, adaptability, and ethical awareness, guide researchers through scientific inquiry. Beyond the confines of laboratory experiments and data analysis, these interpersonal skills foster collaboration, facilitate meaningful dialogue, and promote the responsible conduct of research. In this article, we explore the critical importance of soft skills in research, examining how they can enhance productivity, improve research outcomes, and shape the future of scientific advancement.

Understanding Soft Skills

What Are Soft Skills?

Soft skills are a set of interpersonal, communication, and behavioral attributes that enable individuals to interact effectively with others. Unlike hard or technical skills, which are typically job-specific and measurable, soft skills are more about how individuals behave. Examples of soft skills include:

Leadership: Leadership involves more than just giving directions; it’s about inspiring and motivating individuals or teams to achieve common objectives. Visionary leaders provide purpose and direction, while decisiveness enables them to make tough decisions swiftly, even in uncertain situations. Delegation plays a crucial role by assigning tasks to team members based on their strengths and abilities, fostering autonomy and accountability. Additionally, effective conflict resolution skills are essential for addressing disputes constructively, maintaining team harmony, and keeping everyone focused on the task at hand.

Creativity: Creativity is the engine of innovation, generating novel ideas and solutions. Curiosity opens the mind to new possibilities, while imagination allows individuals to envision alternatives beyond conventional thinking. Creative individuals are not afraid to take risks, explore new possibilities, and push boundaries to uncover fresh perspectives and insights. Creativity fuels progress and drives positive change by challenging the status quo and inspiring new ways of thinking and doing.

Critical Thinking: Critical thinking enables individuals to analyze information objectively, evaluate evidence, and make informed decisions. It involves questioning assumptions, recognizing biases, and applying logic and reasoning to draw sound conclusions. Critical thinkers are adept at identifying underlying assumptions, evaluating the credibility of sources, and considering alternative perspectives before arriving at conclusions.

Emotional Intelligence: This is the ability to recognize, understand, and manage one’s own emotions, as well as empathize with the emotions of others. Self-awareness enables individuals to recognize their emotional triggers and responses, while self-regulation allows for the effective management of emotions in various situations. Social awareness consists of understanding the emotions and perspectives of others, while relationship management skills facilitate positive interactions and collaboration.

Adaptability: Adaptability is a vital soft skill characterized by the ability to thrive in changing environments and circumstances. Those proficient in adaptability demonstrate flexibility, resilience, and a proactive approach to navigating transitions, whether in the workplace or personal life. They embrace change as an opportunity for growth, quickly adjusting their strategies and mindset to effectively meet new challenges. Valued in both professional and personal contexts, adaptability enables individuals to remain productive, engaged, and resilient in uncertainty, contributing to their success and well-being.

Conflict Resolution: Conflict resolution skills are essential for addressing disagreements and disputes in a constructive and mutually beneficial manner. Active listening is crucial for fully understanding the perspectives of all parties involved, while empathy fosters compassion and understanding. Negotiation facilitates finding mutually agreeable solutions, while compromise entails finding common ground and reaching a consensus.

Time Management: Time management skills enable individuals to prioritize tasks, allocate resources effectively, and meet deadlines efficiently. Setting realistic goals provides clarity and direction, while creating schedules helps organize activities and manage time effectively. Identifying and eliminating time-wasting activities is essential for minimizing distractions and maximizing productivity.

Related article: Time Management for Researchers: A Comprehensive Toolkit

Networking: Networking builds and maintains professional relationships to leverage connections for career advancement and opportunities. Effective communication is key to establishing rapport and fostering meaningful connections, while actively seeking out opportunities to connect with others helps expand one’s professional network. Building and nurturing relationships over time allows individuals to tap into resources, expertise, and opportunities for growth and development.

Presentation Skills: Presentation skills are essential for effectively communicating ideas, proposals, or findings to an audience. Public speaking allows information to be delivered clearly, confidently, and engagingly, while visual design enhances the clarity and impact of presentations. Storytelling captivates audiences and makes information more memorable, while audience engagement techniques encourage participation and interaction.

Resilience: This is the ability to bounce back from setbacks, adapt to change, and maintain a positive outlook in the face of challenges. Perseverance means staying committed to goals despite obstacles or setbacks, while optimism fosters a hopeful and positive attitude. A growth mindset embraces challenges as opportunities for learning and growth, leading to greater resilience and personal development.

The Importance Of Soft Skills In Research

Soft skills play a crucial role in research, contributing to the success and effectiveness of scientific endeavors in several ways. While technical expertise is undoubtedly essential for conducting research, soft skills, which encompass a wide range of interpersonal and communication abilities, play a complementary and equally vital role in ensuring success in this field.

By developing and honing soft skills, researchers can enhance their effectiveness, productivity, and overall success in their work. Possessing strong soft skills can open up new opportunities for collaboration, funding, and career advancement, ultimately contributing to the advancement of knowledge and innovation in their respective fields. Therefore, investing in the acquisition of soft skills alongside technical expertise is essential for researchers seeking to make meaningful contributions to their fields and address complex challenges facing society.

Related article: Funding for Research — Why, Types of Funding, When, and How?

Soft Skills For Researchers

Communication Skills

Communication skills are indispensable throughout the research process, serving as the foundation for effective collaboration, knowledge dissemination, and project advancement. Researchers rely on clear and concise communication to articulate hypotheses, methodologies, and results to peers, advisors, and stakeholders, ensuring alignment and understanding among team members. Particularly crucial in interdisciplinary collaborations, effective communication bridges disciplinary gaps, facilitates idea exchange, and integrates diverse perspectives into cohesive solutions. Researchers adept at communication can navigate these interactions smoothly, translating technical jargon into accessible language and fostering mutual understanding among team members.

Moreover, the impact of good communication extends beyond internal collaborations to external interactions with the broader scientific community and society. Clear and compelling communication of research findings enhances visibility, credibility, and impact. Whether through academic publications, conference presentations, or public outreach efforts, researchers proficient in communication can inspire interest, provoke discussion, and catalyze further inquiry. Ultimately, investing in the development of communication skills not only benefits individual researchers but also advances scientific knowledge and addresses global challenges collaboratively, emphasizing the critical role of communication in the research landscape.

Problem-Solving Skills

Problem-solving skills are integral to the research process, guiding researchers through the complexities and uncertainties inherent in scientific inquiry. In research, challenges are inevitable, ranging from methodological dilemmas to unexpected experimental outcomes. Proficient problem-solving abilities empower researchers to navigate these obstacles effectively, identify viable solutions, and make informed decisions to propel their projects forward. Moreover, problem-solving skills foster a proactive and adaptive mindset, enabling researchers to approach problems with resilience, creativity, and resourcefulness. By cultivating these skills, researchers not only enhance their ability to overcome hurdles but also strengthen their capacity to innovate, explore new avenues of inquiry, and generate impactful discoveries.

Examples of Problem-Solving Scenarios in Research

Experimental troubleshooting: Researchers encounter unexpected issues or inconsistencies in their experiments, requiring them to identify the underlying problems and devise solutions to ensure the reliability and validity of their results.

Data analysis challenges: Researchers face complex datasets with missing or conflicting information, necessitating the development of innovative analytical approaches and algorithms to extract meaningful insights and draw valid conclusions.

Interdisciplinary collaboration: In collaborative research projects involving experts from diverse fields, researchers must navigate differences in terminology, methodologies, and perspectives to integrate findings and address complex research questions effectively.

Literature review discrepancies: Researchers encounter conflicting findings or inconsistencies in the existing literature, prompting them to critically evaluate the evidence, reconcile discrepancies, and identify gaps for further investigation.

Funding constraints: Researchers encounter budget limitations or funding cuts, requiring them to explore alternative sources of funding, optimize resource allocation, and develop cost-effective strategies to continue their research projects.

Ethical dilemmas: Researchers face ethical considerations or dilemmas in their research, such as conflicts of interest, privacy concerns, or potential harm to participants, necessitating careful ethical deliberation and decision-making to ensure research integrity and compliance with ethical standards.

Technology limitations: Researchers encounter limitations or challenges with existing technologies or tools, prompting them to innovate and develop new methodologies, techniques, or instrumentation to overcome technical obstacles and advance their research goals.

Fieldwork complications: Researchers conducting fieldwork face logistical challenges, environmental constraints, or unforeseen circumstances, requiring them to adapt their research plans, problem-solve on the spot, and implement contingency measures to ensure the success of their fieldwork activities.

Teamwork

Teamwork stands as a cornerstone in research, offering a collaborative framework that fosters synergy, innovation, and collective problem-solving. Research endeavors often entail complex challenges that require diverse perspectives, skills, and expertise to address effectively. In this context, teamwork enables researchers to pool their strengths, leverage complementary talents, and navigate interdisciplinary boundaries to achieve common objectives.

Teamwork also cultivates an environment of mutual support and shared accountability, where individuals collaborate seamlessly, communicate openly, and respect each other’s contributions. By harnessing the collective intelligence and effort of a cohesive team, researchers can amplify their impact, tackle ambitious projects, and push the boundaries of knowledge beyond what could be achieved individually.

Examples of How Efficient Teamwork Accelerates Research

Division of Labor: In a research team, members can divide tasks according to their expertise, allowing for simultaneous progress on multiple aspects of the project. For instance, while one team member conducts experiments, another can analyze data, and another can draft reports. This division of labor ensures efficiency and accelerates the overall research process.

Pooling Resources: Through teamwork, researchers can pool their resources, including equipment, funding, and intellectual capacity. By sharing resources, teams can access specialized tools and expertise that may not be available to individual researchers, thereby speeding up the completion of experiments and analyses.

Brainstorming and Problem-Solving: Collaborative brainstorming sessions allow team members to generate innovative ideas and solutions to research challenges. Through open discussion and exchange of perspectives, teams can quickly identify potential obstacles and develop strategies to overcome them, leading to faster progress in the research process.

Feedback and Peer Review: Efficient teamwork involves providing constructive feedback and engaging in peer review processes. By soliciting input from team members, researchers can identify areas for improvement and refine their methodologies or interpretations more quickly. This iterative process of feedback accelerates the refinement of research findings and ensures their accuracy and validity.

Networking and Collaboration: Research teams often collaborate with external partners, such as other research institutions, industry partners, or community organizations. Through these collaborative efforts, teams can leverage additional expertise, resources, and data, facilitating faster progress in achieving research goals and objectives.

Critical Thinking

Critical thinking guides researchers in evaluating evidence, analyzing data, and drawing well-reasoned conclusions. In research, where the pursuit of knowledge often navigates complex and ambiguous situations, critical thinking helps researchers approach problems with skepticism, intellectual rigor, and a willingness to challenge assumptions. 

By applying logical reasoning, sound judgment, and systematic inquiry, researchers can assess the validity of hypotheses, identify gaps in existing knowledge, and formulate novel research questions that push the boundaries of inquiry. Critical thinking also enables researchers to navigate ethical considerations, recognize biases, and uphold the integrity and credibility of their work. Ultimately, it underpins the entire research endeavor, driving the quest for truth, innovation, and intellectual advancement.

Examples of Critical Thinking in Action

Reviewing Literature: When conducting a literature review, researchers critically evaluate existing studies to identify gaps, inconsistencies, or areas requiring further investigation. They assess the validity, reliability, and relevance of previous research findings, considering factors such as sample size, methodology, and potential biases. Through this critical analysis, researchers inform their study design and contribute to the advancement of knowledge in their field.

Designing Experiments: Before conducting experiments, researchers engage in critical thinking to develop robust study designs that address research questions effectively and minimize biases. They carefully consider factors such as control variables, randomization procedures, and sample selection criteria to ensure the validity and reliability of their findings. By anticipating potential confounding factors and addressing them proactively, researchers enhance the rigor and credibility of their experiments.

Analyzing Data: In the data analysis phase, researchers apply critical thinking skills to interpret research findings accurately and draw meaningful conclusions. They scrutinize statistical analyses, examining factors such as effect sizes, significance levels, and confidence intervals to assess the strength of evidence supporting their hypotheses. Additionally, researchers critically evaluate outliers, anomalies, and potential sources of error, ensuring the integrity and validity of their data interpretations.

Identifying Bias: Researchers critically examine their assumptions, biases, and preconceptions throughout the research process to minimize their influence on study outcomes. They strive for objectivity and impartiality in data collection, analysis, and interpretation, employing strategies such as blind or double-blind procedures to reduce bias. By acknowledging and addressing potential sources of bias, researchers enhance the credibility and reliability of their research findings.

Also read: How To Avoid Bias In Research: Navigating Scientific Objectivity

Drawing Conclusions: When concluding research findings, researchers engage in critical thinking to assess the strength of evidence and the validity of their interpretations. They consider alternative explanations, potential confounding variables, and limitations of the study design, weighing the evidence carefully before making definitive claims. By exercising skepticism and intellectual rigor, researchers ensure that their conclusions are grounded in sound reasoning and supported by empirical evidence.

Improving Your Soft Skills

Training And Courses

Enhancing soft skills requires a proactive approach to learning and development, incorporating a combination of self-directed practice, feedback, and structured training opportunities. One practical way to improve soft skills is through experiential learning, where individuals actively engage in real-world scenarios that require the application of specific skills. This can involve volunteering for leadership roles in group projects, participating in networking events to hone communication skills, or seeking opportunities to collaborate with diverse teams to cultivate teamwork and adaptability. Additionally, soliciting feedback from peers, mentors, or supervisors can provide valuable insights into areas for improvement and guide targeted skill development efforts.

Organizations may provide in-house training programs or workshops focused on specific soft skills relevant to their industry or organizational culture. Professional associations, community colleges, and continuing education programs often offer seminars or certificate programs tailored to develop soft skills for specific career paths or industries. By leveraging these training opportunities, individuals can systematically enhance their soft skills, augment their professional capabilities and open up new opportunities for personal and career growth.

Regular Practice And Consistent Learning

Habitual practice plays a fundamental role in improving soft skills, as consistent repetition allows individuals to reinforce desired behaviors and cultivate proficiency over time. Just as athletes train regularly to hone their physical abilities, individuals aspiring to develop soft skills must engage in deliberate practice to refine their interpersonal, communication, and problem-solving capabilities. By incorporating soft skill development into daily routines and activities, individuals can gradually build competence and confidence in applying these skills across various contexts. Consistent practice not only enhances skill proficiency but also fosters a growth mindset, where setbacks are viewed as opportunities for learning and improvement rather than obstacles to progress.

Tips for Cultivating a Habit of Consistent Learning

To cultivate a habit of consistent learning and skill development, individuals can adopt several strategies to integrate learning activities seamlessly into their daily lives. Setting specific, achievable goals related to soft skill improvement can provide motivation and focus for learning efforts. Breaking down large goals into smaller, manageable tasks can make progress more tangible and sustainable. 

Establishing a regular schedule or routine for learning activities, such as dedicating a specific time each day for skill practice or scheduling regular check-ins to track progress, can help maintain consistency and accountability. Embracing a growth mindset and viewing challenges as opportunities for growth can also foster resilience and perseverance in the face of setbacks. Leveraging available resources such as books, online courses, workshops, or mentorship opportunities can provide valuable guidance and support for ongoing learning and skill development.

Ultimately, the significance of soft skills in enhancing research competency cannot be overstated. As researchers strive to address increasingly complex and interdisciplinary challenges, the ability to effectively communicate ideas, collaborate with diverse teams, and think critically becomes indispensable. Soft skills not only complement technical expertise but also enable researchers to navigate uncertainties, innovate, and drive scientific progress forward. By recognizing and investing in the development of soft skills, individuals and organizations can foster a culture of excellence, collaboration, and continuous learning, thereby advancing the frontiers of knowledge and addressing society’s most pressing challenges with ingenuity and impact.

Discover insights on how to make learning a habit you enjoy: “How To Make Learning A Habit You Enjoy”.

Science Figures, Graphical Abstracts, And Infographics For Your Research

The Mind the Graph platform offers invaluable support to scientists by providing a comprehensive suite of tools for creating visually compelling science figures, graphical abstracts, and infographics tailored to their research needs. The platform offers a diverse range of customizable templates and design elements, empowering researchers to create professional-quality visuals with ease. Whether crafting graphical abstracts to summarize key findings, designing infographics to illustrate research processes, or creating science figures to visualize experimental results, Mind the Graph enables scientists to effectively communicate their research in a visually impactful manner. By harnessing the power of visual storytelling, Mind the Graph helps scientists captivate audiences, disseminate knowledge, and elevate the impact of their research endeavors.




Definitions and Concepts of Communication

By Robert T. Craig. Last reviewed: 15 April 2024. Last modified: 21 June 2024. DOI: 10.1093/obo/9780199756841-0172

What is communication? The question is deceptively simple, not because there is no straightforward answer but because there are so many answers, many of which may seem perfectly straightforward in themselves. Communication is human interaction . . . the transfer of information . . . effect or influence . . . mutual understanding . . . community . . . culture . . . and so on. Any effort to reconcile these straightforward definitions quickly runs into contradictions and puzzles. Human interaction involves the transfer of information, but machines also exchange information, and so do animals and chemical molecules. Is human communication essentially different in some way? Effect or influence is not the same as mutual understanding and is sometimes quite the opposite. Is mutual understanding ever really possible? Is communication an intentional act or a process that goes on regardless of our intentions? If communication is culture, is it necessarily also community? Doesn’t the concept of communication vary, depending on how it is understood and practiced in each culture? Is it all relative, then, or are there good reasons to be critical of some cultural concepts? Obviously, communication can be defined in many different ways, and at least some of those differences seem potentially consequential. Whether we think of communication as essentially information transfer, or mutual understanding, or culture can make a difference, not only for how we understand the process intellectually but also for how we communicate in practice. Of course, we need not all agree on a single definition or choose a single definition for ourselves, but we can learn a lot by contemplating and debating the theoretical and practical implications of different concepts and theories of communication. This is what communication theorists do, and the academic subject of communication theory is a rich and varied resource for learning how to think about communication. The field of communication theory encompasses several distinct intellectual traditions, some thousands of years old, others very new. Some theories lend themselves to scientific empirical studies of communication, others to philosophical reflection or cultural criticism. This article is intended to represent the diversity of communication theory, hopefully in ways that are useful and inviting of further study rather than merely confusing. Included are introductory overview essays, textbooks, and other general sources such as encyclopedias, anthologies, and journals. Other sections cover historical studies on the idea of communication, ethnographic studies on culturally based concepts of communication, and theoretical models of the communication process. The section titled Conceptual Issues is divided into eleven subsections, each focusing on a key conceptual issue or controversy in communication theory.

For readers wanting to dip a toe in communication theory before diving in, the articles in this section provide overviews of the concept of communication while introducing important issues and conceptual approaches. Eadie and Goret 2013 surveys key concepts of communication that have influenced the academic field of communication studies. Cobley 2008 sketches the origins and historical development of the concept of communication. Steinfatt 2009 discusses the problem of defining communication and some characteristics of communication that affect the usefulness of definitions. Craig 1999 presents a conceptual model of communication theory as a field that integrates seven distinct intellectual traditions.

Cobley, Paul. 2008. Communication: Definitions and concepts. In International encyclopedia of communication. Edited by Wolfgang Donsbach. Oxford and Malden, MA: Blackwell.

Sketches the ancient origins of the concept of communication, the distinction between communication as process and product, the social uses of communication, and 20th-century concepts that contributed to communication theory. Also notes the importance of understanding miscommunication.

Craig, Robert T. 1999. Communication theory as a field. Communication Theory 9.2:119–161.

DOI: 10.1111/j.1468-2885.1999.tb00355.x

Conceptualizes communication theory as a field of “metadiscursive practice” in which diverse theoretical concepts of communication are engaged with each other and with ordinary (nontheoretical) concepts in ongoing debates about practical communication problems. Identifies seven interdisciplinary “traditions” of communication theory, each grounded in a distinct, practically oriented definition of communication.

Eadie, William F., and Robin Goret. 2013. Theories and models of communication: Foundations and heritage. In Theories and models of communication. Edited by Paul Cobley and Peter J. Schulz, 17–36. Handbooks of Communication Science, HOCS 1. Berlin and Boston: De Gruyter Mouton.

With a focus on concepts of communication within the academic field of communication studies, this chapter organizes conceptions of communication under five broad categories: shaper of public opinion; language use; information transmission; developer of relationships; and definer, interpreter, and critic of culture.

Steinfatt, Thomas M. 2009. Definitions of communication. In Encyclopedia of communication theory. Edited by Stephen W. Littlejohn and Karen A. Foss. Thousand Oaks, CA: SAGE.

Argues that the problem of defining communication is not to discover the correct meaning of the term but rather to construct a definition that is useful for studying communication. Distinguishes several characteristics of communication that affect the usefulness of definitions. A critique of this piece is that it presupposes a transmission (speaker to listener) model of communication and fails to address alternative models that highlight constitutive, systemic, and other characteristics of communication (see under Conceptual Issues).



Undergraduate Research Opportunities Center

What Is Research Communication?

"The ability to interpret or translate complex research findings into language, format, and context that non experts understand" (IDS 2011).

Research Communication

Research: Discovering new knowledge

Communication: The exchange of information

Research Dissemination vs. Research Communication: What is the difference?

Research Dissemination: making information accessible; sharing research products via the internet, journals, and presentations.

Research Communication: including all aspects of research dissemination; tailoring the message for a variety of audiences.

Research communication incorporates the dissemination process but doesn't stop there! The process of tailoring your message for your audience is the hallmark of effective research communication.

Why Should Research Communication Matter?

To the researcher:

Professional development

Improved communication skills

Experiencing the real world of researching as well as presenting

To the general public:

Improve the quality of life

Help with miscommunication and misconceptions

Increase interest and participation in research, especially among underrepresented social groups

To the research community:

Increase knowledge and further implement research in the future

Strong impacts in the research and science fields

Lead to new collaborations

The 7 C's of Communication

A checklist for communication:

Clear: Making the message easy to perceive

Concise: Keeping it brief

Concrete: Solidifying the depiction of what you are communicating

Correct: Presenting error-free information

Coherent: Making information logical

Complete: Presenting all the information

Courteous: Keeping an open, honest, and friendly communication pattern


Campbell Systematic Reviews, 19(1), March 2023

Communication skills training for improving the communicative abilities of student social workers

Emma Reith‐Hall

1 Health and Life Sciences, De Montfort University, Leicester, UK

2 Department of Social Policy and Social Work, University of Birmingham, Birmingham, UK

Paul Montgomery

Background

Good communication is central to effective social work practice, helping to develop constructive working relationships and improve the outcomes of people in receipt of social work services. There is strong consensus that the teaching and learning of communication skills for social work students is an essential component of social work qualifying courses. However, the variation in communication skills training and its components is significant. There is a sizeable body of evidence relating to communication skills training; therefore, a review of the findings helps to clarify what we know about this important topic in social work education. We conducted this systematic review to determine whether communication skills training for social work students works and which types of communication skills training, if any, were more effective and led to the most positive outcomes.

Objectives

This systematic review aimed to critically evaluate all studies which have investigated the effectiveness of communication skills training programmes for social work students. The research question which the review posed is: ‘What is the effectiveness of communication skills training for improving the communicative abilities of social work students?’ It was intended that the review would provide a robust evaluation of communication skills training for social work students and help explain variations in practice, supporting educators and policy‐makers in making evidence‐based decisions in social work education, practice and policy.

Search Methods

We conducted a search for published and unpublished studies using a comprehensive search strategy that included multiple electronic databases, research registers, grey literature sources, and reference lists of prior reviews and relevant studies.

Selection Criteria

Study selection was based on the following characteristics: participants were social work students on generic (as opposed to client‐specific) qualifying courses; interventions included any form of communication skills training; eligible studies were required to have an appropriate comparator such as no intervention or an alternative intervention; and outcomes included changes in knowledge, attitudes, skills and behaviours. Study selection was not restricted by geography, language, publication date or publication type.

Data Collection and Analysis

The search strategy was developed using the terms featured in existing knowledge and practice reviews and in consultation with social work researchers, academics and the review advisory panel, to ensure that a broad range of terminology was included. One reviewer conducted the database searches, removing duplicates and irrelevant records, after which each record was screened by title and abstract by both reviewers to ensure robustness. Any studies deemed to be potentially eligible were retrieved in full text and screened by both reviewers.

Main Results

Fifteen studies met the inclusion criteria. Overall, findings indicate that communication skills training including empathy can be learnt, and that the systematic training of social work students results in some identifiable improvements in their communication skills. However, the evidence is dated, methodological rigour is weak, risk of bias is moderate to high/serious or incomplete, and extreme heterogeneity exists between the primary studies and the interventions they evaluated. As a result, data from the included studies were incomplete, inconsistent, and lacked validity, limiting the findings of this review, whilst identifying that further research is required.

Authors’ Conclusions

This review aimed to examine effects of communication skills training on a range of outcomes in social work education. With the exception of skill acquisition, there was insufficient evidence available to offer firm conclusions on other outcomes. For social work educators, our understanding of how communication skills and empathy are taught and learnt remain limited, due to a lack of empirical research and comprehensive discussion. Despite the limitations and variations in educational culture, the findings are still useful, and suggest that communication skills training is likely to be beneficial. One important implication for practice appears to be that the teaching and learning of communication skills in social work education should provide opportunities for students to practice skills in a simulated (or real) environment. For researchers, it is clear that further rigorous research is required. This should include using validated research measures, using research designs which include appropriate counterfactuals, alongside more careful and consistent reporting. The development of the theoretical underpinnings of the interventions used for the teaching and learning of communication skills in social work education is another area that researchers should address.

1. PLAIN LANGUAGE SUMMARY

1.1. Communication skills training helps improve how social work students interact with the people they safeguard and support.

Communication skills training, including empathy training, can help social work students to develop their communication skills. Opportunities to practise communication skills in a safe and supportive environment through role‐play and/or simulation, with feedback and reflection, helps students to improve their skills. The effect of doing this training face‐to‐face, online or via blended learning is largely unknown.

1.2. What is this review about?

Good communication skills are important for social work practice and are commonly taught on social work qualifying courses. There is a range of different types of educational interventions, with wide variations in theoretical basis, approach, duration and mode of delivery. This systematic review looks at whether different interventions are effective in producing the following outcomes: social work students’ knowledge, attitudes, skills and behaviours.

What is the aim of this review?

This Campbell systematic review assesses whether communication skills training for social work students works, and which types of communication skills training, if any, were more effective and led to the most positive outcomes.

1.3. What studies are included?

This review summarises quantitative data from randomised and non‐randomised studies. The 15 studies included in this review were undertaken in Canada, Australia and North America. The research is very limited in terms of scope and quality, and there are important weaknesses in the evidence base.

1.4. Does communication skills training improve the communicative abilities of social work students?

Systematic communication skills training shows some promising effects in the development of social work students’ communicative abilities, especially in terms of their ability to demonstrate empathy and interviewing skills.

1.5. What do the findings of this review mean?

Communication is very important for social work practice, so we need to ensure that student social workers have opportunities to develop their communication skills.

Too few studies fully assessed student characteristics such as age, sex and ethnicity or took account of how previous experience, commitments and motivation affected students’ learning.

Consideration of stakeholder involvement and collaboration (such as by people with lived experience) was also lacking. Only the role of the educator was considered.

The studies were largely of poor quality and investigated many different implementation features, which made it difficult to draw any firm conclusions about what makes the teaching and learning of communication skills in social work education effective.

Researchers conducting studies into communication skills training should seek to carry out robust and rigorous outcomes‐focused studies. They should also consider trying to see how and where these interventions might work, as well as understanding for whom they may be effective.

1.6. How up‐to‐date is this review?

The review authors searched for studies that had been published until 15 June 2021.

2. BACKGROUND

2.1. Description of the condition

Good communication is central to social work practice (Koprowska,  2020 ; Lishman,  2009 ), underpinning the success of a wide range of social work activities. People in receipt of social work services value social workers who are warm, empathic, respectful, good at listening and demonstrate understanding and compassion (Beresford et al.,  2008 ; Department of Health,  2002 ; Ingram,  2013 ; Kam,  2020 ; Munford & Sanders,  2015 ; Social Care Institute for Excellence,  2000 ; Tanner,  2019 ). Even in diverse and challenging circumstances, effective communication is thought to build constructive working relationships and enhance social work outcomes (Healy,  2018 ).

Communication, sometimes referred to as interpersonal communication, ‘involves two (or more) people interacting to exchange information and views’ (Beesley et al.,  2018 ). It is driven and directed by the desire to achieve particular goals and is underpinned by perceptual, cognitive, affective and behavioural operations (Hargie,  2017 ). In social work practice and education, the values of the profession and the specific social, cultural, political and ideological contexts in which social workers operate, influence the nature of interpersonal communication (Harms,  2015 ; Koprowska,  2020 ; Thompson,  2003 ).

Research has tended to focus on particular aspects of communication, or the impact of communication in specific contexts. For example, a study examining how social workers communicate with parents in child protection scenarios identified that social workers who demonstrated empathy towards simulated clients encountered less resistance and more disclosure (Forrester et al., 2008). Social workers who use creative and play‐based approaches confidently facilitate engagement and communication with children (Ferguson, 2016; Handley & Doyle, 2014). Adapting skills and strategies to address specific communication difficulties is equally important in social work with adults. Offering choices through pictographs can help people with aphasia to answer open questions (Rowland & McDonald, 2009), whilst Talking Mats can facilitate conversation with people who have dementia (Murphy et al., 2007). Research into the experiences and preferences of palliative care patients found that small, impactful supererogatory acts demonstrated compassion which allowed them to ‘feel heard, understood, and validated’ (Sinclair et al., 2017, p. 446). Communicating effectively with adults in receipt of health and social care services also enables them to better participate in important decisions about their care.

The impact of failing to communicate effectively has been well documented, particularly through reports into incidents of child deaths (Laming,  2003 ,  2009 ; Munro,  2011 ). Consequently, the importance of teaching communication skills to social work students as a means of enabling them to communicate effectively has long been recognised (Smith,  2002 ). More recently, there have been calls for the expansion and/or improvement of this training (Luckock et al.,  2006 ; Narey,  2014 ). Considerable time, effort and money has been spent on achieving this aim, leading to a wide range of communication skills training courses becoming embedded in social work programmes across the globe. Communication generally, and some communication skills specifically, feature in the educational standards of different countries including the Australian Social Work Education and Accreditation standards, the Educational Policy Accreditation Standards in the US and the Professional Capabilities Framework in the UK (Australian Association of Social Workers,  2020 ; British Association of Social Workers,  2018 ; Council on Social Work Education,  2015 ). One of the consequences of the coronavirus pandemic was increasing diversification of the delivery of teaching and learning in Higher Education. However, the impact of online or blended learning on the development of student social workers’ communication skills remains to be seen.

2.2. Description of the intervention

Communication skills training (CST) can be defined as ‘any form of structured didactic, e‐learning, and experiential (e.g., using simulation and role‐play) training used to develop communicative abilities’ (Papageorgiou et al., 2017, p. 6). Although ‘communication skills training’ (CST) is the name given to the intervention on a wide range of professional and vocational courses, in social work education the intervention is more commonly referred to as the ‘teaching and learning of communication skills’; a trend reflected in the titles of various knowledge and practice reviews. Given that purpose, role and context have a significant impact on communication in social work practice, conceptualisations which integrate knowledge, values and skills, for example the knowing, being and doing domains developed by Lefevre and colleagues (Lefevre et al., 2008), have become increasingly popular (Ayling, 2012; Woodcock Ross, 2016). In social work education, the intervention includes not only communication processes, but also an understanding of the broader contextual issues in which those interactions in social work practice occur. This views communication in social work as both an art and a science (Healy, 2018), which, alongside a move away from purely instructional methods, helps explain the preference for the term ‘teaching’ or ‘education’ rather than ‘training’ among social work academics and researchers. There is a tendency within this discipline for significant variation in terminology due to the wide knowledge base from which social work draws. The term ‘communication skills’ is not applied uniformly in the social work literature: microskills, interpersonal skills and interviewing skills are frequently used alternatives.

In spite of a lack of consensus about what the intervention is called, ‘the inclusion of a dedicated communication skills module early in the course, or a strong communication component within an early module about methods, skills and practice’ is commonplace (Dinham, 2006, p. 841). A consensus also appears to be emerging in the wider social work literature regarding what the basic communication skills for social work practice actually entail. These microskills comprise non‐verbal communication, such as making eye contact and nodding, alongside a range of verbal techniques including clarifying, reflecting, paraphrasing, summarising and asking open questions. Described in detail in a number of social work textbooks (Beesley et al., 2018; Cournoyer, 2016; Healy, 2018; Sidell & Smiley, 2008), and featuring in the educational standards, competency and capability frameworks of various countries (Australian Association of Social Workers, 2020; British Association of Social Workers, 2018; Council on Social Work Education, 2015), these skills form part of the content of a number of communication skills courses and preparation for practice modules. Microskills help social workers and social work students to ‘establish and maintain empathy, communicate non‐verbally and verbally in effective ways, establish the context and purpose of the work, open an interview, actively listen, establish the story or the nature of the problem, ask questions, intervene and respond appropriately’ (Harms, 2015, p. 22). Microskills are considered to be transferable across client groups and settings.

Using case study scenarios that students might encounter in practice, microskills are rehearsed in different social work contexts or circumstances. Typically, students practise the microskills by undertaking simulated social work tasks such as assessments and care planning. When applied to social work tasks and contexts, communication skills are sometimes referred to within the social work literature as interviewing skills. An interview is a ‘person‐to‐person interaction that has a definite and deliberate purpose’ (Kadushin & Kadushin, 2013, p. 6). It is through social work interviews that ‘important connections and relationships are developed, and where important concepts such as partnership and empowerment are taken forward’ (Trevithick, 2012, p. 185).

The pedagogic practices used to teach communication skills to social work students include a wide range of affective, cognitive and behavioural components, whereby students participate in a variety of activities. Following face‐to‐face taught input including theory, communication skills are generally rehearsed using role‐play with peers (e.g., Koprowska, 2003), simulated practice with service users (e.g., Moss et al., 2007) or actors (e.g., Petracchi & Collins, 2006). Tutors and peers may also model communication skills to demonstrate different techniques. Critical reflection, which facilitates students’ self‐awareness, is encouraged. Feedback is an important component in helping learners develop an understanding of their strengths and areas for development, and a range of feedback mechanisms are welcomed by students (Tompsett, Henderson, Mathew Byrne, et al., 2017). Video recording and playback are often used to support the learning that occurs through feedback and reflective processes. Some universities have purpose‐built recording suites or provide students with equipment such as tablets to facilitate the recording of communication skills practice. The rationale for video and playback is that ‘each student's adult ability to be their own best assessor’ is ‘utilised to the full’ (Moss et al., 2007, p. 715), the value of which has been recognised by students elsewhere (Bolger, 2014; Cartney, 2006). A learning environment characterised by trust, safety and security appears to be an important mechanism for students to make use of experiential activities. Opportunities for observing skills in practice, through shadowing a social worker or allied practitioner, feature in some communication skills or preparation for practice modules. Attention may also be devoted to specific areas of communication: communicating with children, communicating with people who have hearing impairments, and inter‐professional communication are some examples.

No specific blueprint for CST in social work exists, so the nature of the training sessions and course length vary from one educational institution to another. Typically, in the UK, CST is delivered to first year undergraduate and postgraduate students before they commence their first practice placement: in England, this may comprise some of the 30 days of skills training which universities typically provide. Content and teaching activities tend to be designed and delivered on an individual basis by social work academics, often with involvement from people with lived experience (service users and carers), practitioners and local employers. Examples of gap‐mending strategies for user involvement are beginning to find their way into the literature (Askheim et al., 2017) and have been applied to the teaching and learning of communication skills (Reith‐Hall, 2020); however, such activities are far from mainstream. Minimum requirements, dosage and delivery methods are not prescribed, leading to considerable heterogeneity of the intervention in practice.

2.3. How the intervention might work

Training or education‐based interventions aimed at improving the communicative abilities of student social workers seek to bring about changes in learners’ knowledge, values and skills in terms of how to communicate effectively in social work practice.

Psychological perspectives and counselling theories, including the work of humanistic and client‐centred theorists such as Rogers, Carkhuff and Egan, tend to underpin microskills training. Other communication theories, including the model of interpersonal communication developed by Hargie (Hargie, 2006), also provide a theoretical basis for the skills taught on some of these courses. Concerns have been raised that psychological and counselling theories have been applied to social work uncritically (Trevithick et al., 2004), without due consideration of the challenges this may present. A number of social work academics have pulled together theory and research on communication skills in recent years (e.g., Beesley et al., 2018; Harms, 2015; Healy, 2018; Koprowska, 2020; Lishman, 2009; Woodcock Ross, 2016) in an attempt to address this issue. Nonetheless, it remains ‘difficult to identify a coherent theoretical framework that informs the learning and teaching of communication skills in social work’ (Trevithick et al., 2004, p. 18).

The theoretical underpinnings of the pedagogic practices used to teach communication skills are not always clear (Dinham, 2006; Trevithick et al., 2004). The conception of reflection in and on action (Schön, 1983) and the importance of ‘learning by doing’ (Schön, 1987, p. 17) are often cited as underpinning the teaching of communication skills modules in social work education. Experiential learning, ‘the process whereby knowledge is created through the transformation of experience’ (Kolb, 1984, p. 38), is another of the prevailing philosophies, although Trevithick et al. (2004, p. 24) suggest there is an uncritical assumption that ‘experiential is best’. Reference is sometimes made to theories of adult learning, whereby students are expected to draw on their own experiences, take responsibility for their own learning, and engage in peer learning. This mode of learning ‘is understood to encourage the sustained internalisation of skills’ (Dinham, 2006, p. 847). Such ideas build on the concept of andragogy (Knowles, 1972, 1998), whereby mutual processes of learning and growth are encouraged.

The knowledge review conducted by Trevithick et al. (2004) identified articles in which the theoretical foundations for teaching skills in social work were made explicit. The communication skills module at the University of York in the UK, based on Agazarian's theory, is located within a systems framework (Koprowska, 2003). Edwards and Richards (2002) argue that relational teaching, based on relational/cultural theory, should underpin teaching in social work education, whereby mutual engagement, mutual empathy and mutual empowerment foster growth in relationships between tutors and students. These examples are the exception to the rule; few articles theorise the teaching and learning process (Eraut, 1994). Generally speaking, ‘communication skills have been taught, but not reflected upon; experienced, but not theorised’ (Moss et al., 2007, p. 711).

A wide variety of approaches for teaching communication skills to social work students exist in practice. Given that there is more expertise in the teaching and learning of communication skills than the literature denotes, academics should continue theorising and researching this aspect of the curriculum (Dinham, 2006). Although rigorous, high‐quality evaluation of outcomes in social work education is still in the early stages of development (Carpenter, 2011), teaching communication skills to social work students is an aspect of the curriculum which has attracted considerable attention; a review of the findings can therefore help to clarify what we know about this important topic.

2.4. Why it is important to do this review

A variety of communication skills courses have been proposed and are in use in social work education. It is nearly twenty years since a number of practice and knowledge reviews highlighted the lack of evaluation of communication skills courses, an issue which warranted further research (Dinham, 2006; Trevithick et al., 2004). To support this endeavour, methodological guidance for evaluating outcomes in social work education (Carpenter, 2005, 2011) has been produced. Consequently, a number of empirical studies (Koprowska, 2010; Lefevre, 2010; Tompsett, Henderson, Gaskell Mew, et al., 2017) have sought to evaluate the teaching of communication skills to social work students, or to investigate the impact of particular components of the intervention. The existing literature suggests that teaching social work students communication skills increases their self‐efficacy in terms of communicative abilities (Koprowska, 2010; Lefevre, 2010; Tompsett et al., 2017). Good communication is fundamental to effective social work practice.

No comprehensive systematic review or meta‐analysis of this aspect of social work education has been undertaken; questions concerning whether the teaching of communication skills to social work students is effective and produces positive outcomes remain unanswered. It is therefore time to identify, summarise and synthesise the empirical research in a systematic review. Doing so will form a reliable, scientifically rigorous and accessible account that can be used by educators and policy‐makers to guide decisions about which approaches are effective in teaching communication skills to social work students. In this time of political uncertainty and financial constraint, ‘it is important to accumulate evidence of the outcomes of social work education so that policy‐makers and the public can be confident that it is producing high‐quality social workers’ (Carpenter, 2016, p. 192), who are suitably equipped to deal with the demands of social work practice. We conducted this systematic review to determine whether CST for social work students works, and which types of CST, if any, are most effective and lead to the most positive outcomes. To improve uptake and relevance, the systematic review was developed in consultation with stakeholders (including academics, students, practitioners and people with lived experience), and advice was sought from leading social work organisations. The review also sheds light on areas where more research is required.

3. OBJECTIVES

This systematic review aimed to critically evaluate all studies which have investigated the effectiveness of CST programmes for social work students. The PICO (Population, Intervention, Comparator, Outcomes) framework and stakeholder collaboration informed the development of the research question. Student social workers constituted the population, CST was the intervention under investigation, the absence of CST or a course unrelated to communication were the comparators, and attitudes, knowledge, confidence and behavioural changes were the outcomes of interest. Stakeholders agreed that neither the comparator nor the outcomes should be specified within the research question itself, on the grounds that researchers and academics were unlikely to have specified these elements in the primary studies. The review built on an existing knowledge review (conducted by Trevithick et al., 2004) but was not restricted by year of publication or language. The research question the review posed is: ‘What is the effectiveness of CST for improving the communicative abilities of social work students?’ It was intended that the review would provide a robust evaluation of CST for social work students and explain variations in practice. To test the effectiveness of interventions, hierarchies of evidence point to systematic reviews of (preferably randomised) controlled trials. We therefore sought to conduct a rigorous and systematic review of such studies of CST, supporting educators and policy‐makers to make evidence‐based decisions in social work education, practice and policy.

The protocol for this review was published in the Campbell Collaboration Library (Reith‐Hall & Montgomery,  2019 ).

4.1. Criteria for considering studies for this review

4.1.1. Types of studies

To be eligible for inclusion in the review, studies were required to include an appropriate comparator, irrespective of whether outcome data were reported in a useable way. Permitted study designs included randomised trials, non‐randomised trials, controlled before‐after studies, repeated measures studies and interrupted time series studies. To be included, interrupted time series studies needed a clearly defined point in time when the intervention occurred and at least three data points before and three after the intervention. The justification for this wider range of study types was to identify any potential risk of harm, which we hoped to assess through a broader evidence base. Potential risk of harm included any negative effects of CST on students’ communicative abilities; for example, service users and carers might have indicated that students’ poor communication left them feeling more confused, agitated, misunderstood or distressed (i.e., worse) than they did before the interaction.

To ensure the quality of the evaluation, all studies were critically appraised and an analysis of the results by study design was considered. Comparison groups were composed of those who received no educational intervention or those receiving educational interventions other than CST. Trials comparing the effects of two different educational interventions to improve communication skills were also included in this review. In accordance with Campbell policies and guidelines (The Campbell Collaboration, 2014), studies without comparison groups or appropriate counterfactual conditions were excluded.

4.1.2. Types of participants

All social work students who were taught communication skills on a generic qualifying social work course in a university setting were included; hence both undergraduate and postgraduate students were among the types of participants. Students on post‐qualifying courses were excluded.

4.1.3. Types of interventions

Only studies in which the intervention group received CST and the control group received either nothing or an alternative training were included. For the intervention, any underpinning theoretical model and any mode of teaching (taught input, videotape recording, role‐play with peers, simulated interviews with service users and carers or actors) were considered acceptable. Interventions that took place either entirely or predominantly in a university setting were included.

4.1.4. Types of outcome measures

Outcomes included changes in (1) knowledge, (2) attitudes, (3) confidence/self‐efficacy and (4) behaviours measured using objective and subjective scales. It was anticipated that these measures might be study‐specific rating scales, developed for use in evaluating communication skills. Stakeholder involvement indicated that behavioural change was an important outcome for all stakeholders. In addition, students and educators deemed confidence/self‐efficacy to be a relevant outcome. In keeping with the literature on outcomes in social work education (Carpenter,  2005 ,  2011 ), student satisfaction alone was not considered as an outcome measure in this review.

4.2. Search methods for identification of studies

We conducted a search for published and unpublished studies using a comprehensive search strategy informed by the guide to information retrieval for Campbell systematic reviews (Kugley et al.,  2017 ). We also sought advice from information specialists. Our search strategy included searching multiple electronic databases, research registers, grey literature sources, and reference lists of prior reviews and relevant studies. Study selection was not restricted by geography, language, publication date or publication status. The original search took place in September 2019 and an updated search took place in June 2021.

4.2.1. Electronic searches

To identify eligible studies the following data sources were searched using the search strings set out in Supporting Information: Appendix  A :

  • (a) Education Abstracts (EBSCO)
  • (b) ERIC (EBSCO)
  • (c) MEDLINE (OVID)
  • (d) PsycINFO (OVID)
  • (e) Web of Science/Knowledge Database Social Science Citation Index
  • (f) Social Services Abstracts (Proquest)
  • (g) ASSIA—Applied Social Sciences Index and Abstracts (Proquest)

Relevant reviews were searched for in the following databases:

  • (i) Database of Abstracts of Reviews of Effectiveness
  • (j) The Campbell Library
  • (k) Cochrane Collaboration Library

We also searched grey literature, using the following databases and websites:

  • (m) Google Scholar—using a series of searches, the first 2 pages of results for each search were screened
  • (n) ProQuest Dissertations and Theses

4.2.2. Searching other resources

We searched for conference proceedings and abstracts through Web of Science and ERIC, followed by a Scopus search which did not unearth any new sources of information. We also looked at generic websites, including Google and Bing, as well as government and professional websites such as gov.uk and the Department for Education, the Higher Education Academy, the British, Australian and American Councils/Associations of Social Work and Social Work Education, Community Care and the Social Care Institute for Excellence website, which includes Social Care Online. Several searches containing the key words used in the database searches were replicated for these additional sources.

We also searched the reference lists of the included studies and relevant reviews to identify additional studies. Prominent authors were contacted for further information about their studies and asked if they were aware of any other published or ongoing studies meeting our inclusion criteria. In addition, as a final step towards the end of the analysis, we manually searched the most recent issue(s) of the five key journals that had provided relevant studies. These were the Journal of Social Work Education, Social Work Education, the British Journal of Social Work, Children and Youth Services Review and Research on Social Work Practice.

4.3. Data collection and analysis

We collected and analysed data according to our protocol (Reith‐Hall & Montgomery,  2019 ).

One reviewer (ERH) conducted the database searches, removing duplicates and irrelevant records. Because we had anticipated that the searches would result in relatively few records to screen, each record was screened by title and abstract by both reviewers (ERH and PM) to ensure robustness. Any studies deemed potentially eligible were retrieved in full text and screened by both reviewers. There were no disagreements, hence discussion with an arbitrator was not required, and consensus was reached in all cases.

The search strategy was developed using the terms featuring in existing knowledge and practice reviews and in consultation with social work researchers and academics, to ensure that the broad range of terminology was included. Search strings included terms relating to the intervention and population but not study design. A sample search strategy for Medline can be found in Supporting Information: Appendix  A . Search strings and search limits were modified for each database. Proximity searching was not required.
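To illustrate the shape of these search strings, a hypothetical string combining intervention and population terms, drawing on the alternative terminology noted earlier (microskills, interpersonal skills, interviewing skills), might read as follows; the exact strings used for each database are set out in Supporting Information: Appendix A:

("communication skill*" OR microskill* OR "interpersonal skill*" OR "interviewing skill*") AND ("social work" AND (student* OR undergraduate* OR postgraduate* OR education OR training))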

4.3.1. Selection of studies

Studies of any design that satisfied appropriate counterfactual conditions were eligible for inclusion, in accordance with the Cochrane Effective Practice and Organisation of Care guidelines for the inclusion of non‐randomised studies (Cochrane EPOC, 2017).

To ensure that the effects of an individual intervention were only counted once, we anticipated applying the following conventions: (1) Where there were multiple measures reported for the same outcome, an average effect size for each outcome would be calculated within each study. (2) Where the same outcome construct was measured across multiple time domains, the main analysis would focus on synthesising the evidence relating to effect sizes at immediate post‐test. Any subsequent measures of outcomes beyond immediate post‐test would be analysed and reported separately.
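As a minimal sketch of convention (1), the snippet below averages multiple measures of the same outcome construct into a single effect size per study; the dictionary keys and values are hypothetical, not data from the included studies:

```python
from statistics import mean

# Convention (1): collapse multiple measures of the same outcome construct
# into a single average effect size within each study (values hypothetical).
study_effect_sizes = {
    "empathy": [0.42, 0.31, 0.55],   # three measures of one construct
    "self_efficacy": [0.20],         # a single measure needs no averaging
}
per_outcome = {outcome: mean(es) for outcome, es in study_effect_sizes.items()}
print(per_outcome)  # empathy ~= 0.43, self_efficacy = 0.2
```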

4.3.2. Data extraction and management

Once eligible studies were found, an initial analysis of intervention descriptions was undertaken for each. The Campbell data collection template form was used to identify the core components of programmes and to develop an overarching typology and coding frame.

Details of study coding categories

Components included:

  • Duration and intensity of the programme
  • Whether programme delivery included people with lived experience (e.g., service users and carers)
  • Whether programmes used audio and video recording
  • Whether communication skills were practised with peers, service users or actors
  • Whether programmes included observation of social workers in practice
  • The theoretical frameworks underpinning the intervention

Alongside extracting data on programme components, descriptive information for each of the studies was extracted and coded to allow for potential sensitivity and subgroup analysis. This included information regarding:

  • Study characteristics in relation to design, sample sizes, measures and attrition rates.
  • Whether the study was conducted by a research team associated with the programme or an independent team.
  • Stage of programme development, for example whether it was a new programme being piloted or an established programme being replicated.
  • Participants’ characteristics in relation to age, sex, ethnicity, geo‐political region and socio‐economic background.

We considered subgrouping the different types of intervention and population, based on factors such as length of course and teaching methods, age and sex; however, the small number of included studies did not warrant subgroup analysis.

Coding was carried out by the review team independently; discrepancies were discussed, and a consensus reached.

Quantitative data were extracted to allow for the calculation of effect sizes (using mean change scores and post‐test means and standard deviations). Data were extracted for the intervention and control groups on the relevant outcomes measured, in order to assess the intervention effects.

4.3.3. Assessment of risk of bias in included studies

Assessment of methodological quality and potential for bias was conducted using the Cochrane Risk of Bias tool for randomised studies (Higgins et al.,  2019 ) and the ROBINS‐I tool for non‐randomised studies (Sterne, Higgins, et al.,  2016 ; Sterne, Hernán, et al.,  2016 ).

4.3.4. Measures of treatment effect

Continuous outcomes were reported by the included studies, so we used the standardised mean difference (SMD) as our effect size metric where means and standard deviations were provided by study authors. Where means and standard deviations were not available, we calculated SMDs from t‐values, and we calculated standard deviations from standard errors where these were provided, using recommended methods (Higgins et al., 2022). Hedges’ g was used for estimating SMDs to correct for the bias associated with small sample sizes. In studies with more than two groups, we calculated effect sizes using the experimental and control groups that were most relevant to answering our research question, or used data from the groups with the largest numbers in them.
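As a minimal sketch of these standard conversions (function and variable names are illustrative; the formulas follow the methods recommended in Higgins et al., 2022, and the worked values are hypothetical):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """SMD between two groups with Hedges' small-sample correction."""
    # Pooled standard deviation across intervention and control groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                     # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)    # small-sample correction factor
    return j * d

def smd_from_t(t, n1, n2):
    """Recover an SMD from an independent-samples t value."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def sd_from_se(se, n):
    """Recover a group standard deviation from its standard error."""
    return se * math.sqrt(n)

# Hypothetical example: intervention mean 3.8 (SD 0.9, n 25)
# versus control mean 3.2 (SD 1.0, n 23)
print(round(hedges_g(3.8, 0.9, 25, 3.2, 1.0, 23), 2))  # ~0.62
```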

Treatment of qualitative research

This systematic review was limited to synthesising the available evidence on the effectiveness of CST for social work students. It was beyond the remit of the present review to synthesise the associated evidence relating to process evaluations of such programmes; hence, we did not include qualitative research.

4.3.5. Unit of analysis issues

The unit of analysis for this review was social work students. No unit of analysis issues were identified for the included studies.

4.3.6. Dealing with missing data

Study authors were contacted and accompanying or linked papers were sought in an effort to retrieve missing data.

4.3.7. Assessment of heterogeneity

Widespread clinical heterogeneity was present within the included studies, rendering other anticipated measures of treatment effect non‐viable. For example, the included populations consisted of undergraduate, postgraduate, mixed and unreported student types, whilst the interventions differed according to duration, uptake, mode and key features. Widespread methodological diversity was also present in terms of designs, methodologies and outcome measures across studies.

4.3.8. Assessment of reporting biases

Reporting was generally poor among the included studies, as evidenced by limited use of reporting instruments such as CONSORT and the absence of references to pre‐published protocols. A more detailed discussion of this issue can be found in the Risk of Bias section. Use of a funnel plot, which helps to identify potential reporting bias in included studies, was not feasible given the small number of studies included in this review.

We sought to counteract potential bias in our reporting of this review through a highly sensitive and inclusive systematic search of bibliographic databases and grey literature sources, reference list searching, correspondence with study authors and hand searching.

4.3.9. Data synthesis

As a result of this heterogeneity, meta‐analysis was not feasible, nor was it possible to implement methods outlined in the protocol, such as sensitivity and subgroup analyses. I² and Tau² were not measured or reported in this review. Similarly, we were unable to use the new GRADE Guidance for Complex Interventions (unpublished) to summarise the overall quality of evidence relating to the primary outcomes.

4.3.10. Subgroup analysis and investigation of heterogeneity

Not applicable, in view of there being no meta‐analysis.

4.3.11. Sensitivity analysis

Not applicable; no sensitivity analysis was possible in the absence of a meta‐analysis.

Summary of findings and assessment of the certainty of the evidence

As noted above, heterogeneity prevented us from applying the GRADE approach to summarise the overall certainty of the evidence relating to the primary outcomes.

5.1. Description of studies

Fifteen studies are included in this review. An overview of the key characteristics of the included studies (study design, participants, interventions, comparators, outcomes, outcome measures, geographical location, publication status and implementation factors) is provided in Table 1.

Table 1: Included studies. Each entry lists first author and date; study design and sample size; population; intervention; comparator; outcomes; measures; location; publication status; and implementation factors (e.g., amount, duration).

Barber (1988), experiment 1. Design: case control (N = 32). Population: undergraduate social work students; male n = 8 (25%); age range 19–46; mean age 25.7. Intervention: microskills training (n = 16 final year students). Comparator: no training (n = 16 first year students). Outcomes: counsellor trustworthiness, attractiveness and expertness; non‐verbal communication. Measures: Counselor Rating Form (Barak & LaCrosse); non‐verbal rating skills. Location: La Trobe, Victoria, Australia. Publication status: published journal article. Implementation: amount ‘extensive’; duration: 4‐year programme.

Barber (1988), experiment 2. Design: case control (N = 50). Population: undergraduate social work students; population characteristics not stated. Intervention: microskills training (n = 25 final year students). Comparator: no training (n = 25 first year students). Outcomes: trustworthiness, attractiveness and expertness. Measures: Counselor Rating Form (Barak & LaCrosse); non‐verbal rating skills. Location: La Trobe, Victoria, Australia. Publication status: published journal article. Implementation: amount ‘extensive’; duration: 4‐year programme.

Collins (1984). Design: case control (N = 67). Population: Masters level social work students. Intervention: skills lab training course (n = 54; age range 21–43; mean age 26.78; male n = 9, 17%). Comparator: lecture‐based training course (n = 13; male n = 6, 46%). Outcomes: interviewing skills; empathy, warmth, genuineness. Measures: skills acquisition measure (SAM); Carkhuff's communication index; analogue interview; client interview. Location: University of Toronto, Canada. Publication status: dissertation thesis. Implementation: amount not stated; duration: 2 months.

Greeno (2017). Design: RCT (N = 54). Population: undergraduate and master's level social work students; male n = 8 (15%); 51% (n = 28) Caucasian; 45% (n = 24) Black; 4% (n = 2) Hispanic; age range 20–55; mean age 29.7. Intervention: live supervision with standardised clients. Comparator: TAU (online self‐study). Outcomes: perceived empathy; empathic behaviour. Measures: Motivational Interviewing Treatment Integrity; Toronto Empathy Questionnaire. Location: University of Maryland, Baltimore, MD, USA. Publication status: published journal article. Implementation: amount: 3 days (6 h of didactic teaching, followed by 2 days of live supervision or 2 days of online learning); duration: unstated (the study took place over 7 months, including a 5‐month follow‐up).

Pecukonis (2016). Reports the same study as Greeno (2017), with the following additional outcomes: MI skills, adherent behaviours and proficiency level; self‐efficacy. Measures: Motivational Interviewing Treatment Integrity coding system; general self‐efficacy scale. Publication status: published journal article.

Hettinga (1978). Design: RCT (N = 38). Population: masters social work students (n = 34) and undergraduate students (n = 1); male n = 7 (18%); age range 22–45; mean age 31.3. Intervention: communication skills training using videotaped interview playback with instructional feedback (n = 23; 3 did not complete measures). Comparator: communication skills training (face to face) with group feedback (n = 15). Outcomes: self‐esteem; self‐perceived interviewing competence. Measures: Rosenberg Self‐Esteem Scale; Self‐Perceived Interviewing Competence Questionnaire. Location: University of Minnesota, USA. Publication status: dissertation thesis. Implementation: amount: 3 h per week; duration: one quarter of an academic year.

Keefe (1979). Design: case control (N = 56). Population: second year master's social work students; population characteristics not stated. Interventions: (1) a course of instruction with both experiential and didactic content (n = 19); (2) a structured meditation series (n = 20). Comparator: TAU (n = 17). Outcome: empathic skill. Measure: Kagan's Affective Sensitivity Scale. Location: The University of Utah. Publication status: published journal article. Implementation: instruction: 2.5 h per week for one quarter of an academic year; Zen meditation: 30 min per day for 3 weeks.

Larsen (1978). Design: RCT (N = 94). Population: first year master's social work students; population characteristics not stated. Intervention: communication laboratories consisting of didactic and experiential learning (n = 59). Comparator: traditional didactic instruction (n = 35). Outcomes: facilitative conditions (empathy, non‐possessive warmth, genuineness). Measure: the Index of Therapeutic Communication. Location: The University of Utah. Publication status: published journal article. Implementation: amount: 10 h; duration not stated.

Laughlin (1978). Design: RCT (N = 78). Population: undergraduate social work students; male n = 11 (14.1%); age range 20–59; mean age 23.4; median age 21. Interventions: (1) experimental group I: self‐instruction manual plus audio practice tapes with supervisor evaluation, feedback and reinforcement; (2) experimental group II: self‐instruction manual plus audio practice tapes with self‐evaluation and self‐reinforcement. Comparators: (3) control group I: introductory section of the self‐instruction manual, expectation set and instructions to practise; (4) control group II: no instructional materials. Measures: revised version of the Carkhuff Communication Index (Carkhuff); Carkhuff's empathic understanding scale (Carkhuff). Location: University of California at Berkeley. Publication status: dissertation thesis. Implementation: amount not stated, but experimental groups participated in 3 lab sessions; duration: 2 weeks.

Ouellette (2006). Design: case control (N = 30). Population: undergraduate social work students; male n = 2 (6.7%); age range 20–40+; mean age not specified (age 20–29: 53.3%; age 30–39: 30%; age 40+: 16.6%); 60% (n = 18) Caucasian; 33.3% (n = 10) African American; 3.3% (n = 1) Hispanic; 3.3% (n = 1) ‘Other’. Intervention: online delivery (n = 16). Comparator: classroom delivery (n = 14). Outcome: basic interviewing skills. Measure: basic practice interviewing skills scale. Location: Indiana University. Publication status: published journal article. Implementation: amount: 1 × 3‐h session per week; duration: 15 weeks.

Rawlings (2008). Design: case control (N = 32). Population: undergraduate social work students; male n = 2 (6.3%); age range not specified; mean age 20.81; 78% (n = 25) Caucasian; 10% (n = 3) Hispanic; 6% (n = 2) biracial; 3% (n = 1) African American; 3% (n = 1) ‘Other’. Intervention: exiting social work students (n = 16). Comparator: starting social work students (n = 16). Outcomes: self‐efficacy; skill performance. Measures: Social Work Direct Practice Self‐Efficacy Scale (Chang & Scott); basic practice skill performance assessed through a three‐item direct practice skill sub‐scale reflecting core conditions for each student. Location: Case Western Reserve University. Publication status: dissertation. Implementation: amount not stated; duration: BSW degree.

Schinke (1978). Design: RCT (N = 23). Population: graduate social work students; male n = 7 (30.4%); age range not stated; mean age 29.87. Intervention: interviewing skills training (n = 12). Comparator: delayed start control group (n = 11). Outcomes: attitudes towards their own role‐played interviewing behaviour; videotaped interview ratings. Measure: counselor effectiveness scale developed by Ivey (1971). Location: University of Washington. Publication status: published journal article. Implementation: amount: 4 h; duration: one‐off session.

Toseland (1982). Design: case control (N = 68). Population: undergraduate social work students (n = 55) and undergraduate social welfare students (n = 13); population characteristics not stated. Intervention: interpersonal skills training (n = 55). Comparator: no skills training (n = 13 social welfare students). Outcomes: ten interpersonal helping skills. Measures: the Carkhuff Indices of Communication and Discrimination and the Counseling Skills Evaluation Parts 1 and 2. Location: not stated. Publication status: published journal article. Implementation: amount: 15 × 2 sessions in the lab plus lectures (45 h in total); duration: one semester.

VanCleave (2007). Design: case control (N = 45). Population: Masters level social work students; age range early twenties to mid fifties; mean age not stated (age 20–25: 35%, n = 16; age 26–30: 27%, n = 12; age 31–35: 11%, n = 5; age 35+: 27%, n = 12); male n = 3 (6.6%); 95% (n = 43) Caucasian; 2% (n = 1) African American; 2% (n = 1) Japanese. Intervention: additional empathy training (n = 22). Comparator: TAU (n = 23). Outcomes: empathic response; perspective taking and empathic concern. Measures: Carkhuff's Index for Communication scripts (CIC); a 14‐question self‐survey for the Empathic Concern (EC) and Perspective Taking (PT) subscales of the Davis Interpersonal Reactivity Index (IRI). Location: University of Southern Indiana. Publication status: dissertation thesis. Implementation: amount: 10 h; duration: within a 3‐month cycle.

Vinton (1994). Design: case control (N = 62). Population: undergraduate social work students; age range 19–54; mean age 25.9; male n = 7 (11.3%). Interventions: videotape (other); videotape (other + self). Comparator: delayed start control group. Outcomes: perceived empathy; empathy. Measures: Questionnaire Measure of Emotional Empathy (QMEE); Carkhuff's level of empathy scale. Location: Florida State University. Publication status: published journal article. Implementation: amount not stated, but includes a 100‐min standardised lecture; duration not stated.

Wells (1976). Design: RCT (N = 14). Population: social work students (type not specified); population characteristics not stated. Intervention: role‐play. Comparator: students’ own problems. Outcome: empathy. Measure: Carkhuff's empathic understanding scale. Location: University of Pittsburgh. Publication status: published journal article. Implementation: amount: 1 day of didactic training plus 6 × 2‐h sessions of experiential learning; duration not stated.

5.2. Results of the search

The main bibliographic database and registers search, completed in September 2019, returned 1998 records with an additional 12 added after the search was updated in June 2021. After 882 duplicate records were removed, 1128 were subjected to initial screening by title, and abstract if necessary, following which a further 1021 records were removed because they were not relevant to the topic. Of the 107 remaining records, 2 could not be retrieved despite endeavours to locate them through different libraries and searches, therefore 105 records were fully screened for eligibility, 9 of which met the inclusion criteria.

A further 650 records were identified through hand searching recent editions of the five key journals identified through the database search, and another 19 studies were identified through other methods, including citation searching within the included studies. Of the 669 records subjected to initial screening, 627 were removed because they were not relevant to the topic. One record could not be retrieved, leaving 41 records which were fully screened for eligibility; of these, 34 records were excluded and 7 records (reporting 6 studies) were included.
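For ease of checking, the screening numbers reported above reconcile as follows:

Database and register searches: 1998 + 12 = 2010 records retrieved; 2010 - 882 duplicates = 1128 screened; 1128 - 1021 irrelevant = 107 sought; 107 - 2 unretrieved = 105 assessed in full text; 9 included.
Other methods: 650 + 19 = 669 records; 669 - 627 irrelevant = 42 sought; 42 - 1 unretrieved = 41 assessed in full text; 41 - 34 excluded = 7 records (reporting 6 studies) included.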

Of the fifteen studies which met the inclusion criteria for this systematic review, two experiments are reported in a single paper (Barber, 1988); one study is reported in two papers (Greeno et al., 2017; Pecukonis et al., 2016), with both authors contributing to the write‐up of each; and another study (Larsen & Hepworth, 1978) is also written up as the first author's PhD thesis (Larsen, 1975).

The search results are shown in the PRISMA diagram (adapted from Page et al.,  2021 ) in Figure  1 .

Figure 1: PRISMA diagram.

5.2.1. Included studies

Study design characteristics

Despite the varied terminology used by the study authors to describe their research designs, eight reports, addressing nine studies (Barber, 1988; Collins, 1984; Keefe, 1979; Ouellette et al., 2006; Rawlings, 2008; Toseland & Spielberg, 1982; VanCleave, 2007; Vinton & Harrington, 1994), employed a case‐controlled design, some of which conform to the parameters of a pre‐experimental static group comparison design (Campbell & Stanley, 1963). This means that participants were divided between two groups, but in a non‐randomised way. Given that students were not randomised to the different groups, these studies suffer from weak internal validity, with confounders such as maturation, the Hawthorne effect, testing effects and pre‐existing differences between the intervention and control groups. Such issues are common in educational research.

Six of the studies, reported in seven papers, were randomised controlled trials (RCTs), five of which were conducted in the mid to late 1970s. The increase in research activity surrounding this topic during that decade likely results from the development of teaching models such as Ivey and Authier's micro‐counselling model (Ivey & Authier, 1971; Ivey et al., 1968) and the Truax and Carkhuff Human Relations training model (Carkhuff, 1969c; Truax & Carkhuff, 1967), alongside the development of research measures, including the Carkhuff scales (Carkhuff, 1969a, 1969b), which are the most cited research instrument in this review.

Wells (1976) is the earliest of the included studies to use an RCT design, comparing role‐play and students’ ‘own problem’ procedures, but the sample contained just 14 students. Hettinga (1978) had a somewhat larger sample of 38 students, in which immediate feedback from an instructor was compared with group feedback provided later. Although quasi‐randomisation took place, it is unlikely that the allocation method affected the results. In the same year, Larsen and Hepworth (1978) investigated the role of experiential learning; controls received traditional didactic instruction. Schinke et al. (1978) randomly allocated a group of 23 students to either an intervention group or a waiting‐list control. Laughlin (1978) used a more complex design consisting of two experimental groups and two control groups. Despite using pre‐tests, a strategy which can help overcome methodological challenges associated with small sample sizes (social work cohorts are typically small), the study was severely underpowered. The most recent of the included studies, reported in the two papers by Pecukonis et al. (2016) and Greeno et al. (2017), offers the most robust research design of the included studies: not only did the researchers exceed the minimum sample size calculated in an a priori power analysis, but the overall risk of bias was lower than in the other studies included in this review.

In terms of comparators, in four of the studies the control group received no intervention (Barber, 1988; Rawlings, 2008; Toseland & Spielberg, 1982); three studies reported controls receiving treatment as usual (TAU) (Greeno et al., 2017; Keefe, 1979; Pecukonis et al., 2016; VanCleave, 2007), although the TAU in Greeno and Pecukonis’ study was an online intervention rather than the absence of an intervention; and a further five studies compared two different interventions. These comparisons included an experiential approach versus traditional didactic learning (Larsen & Hepworth, 1978); lab‐based versus lecture‐based training (Collins, 1984); online versus classroom‐based teaching (Ouellette et al., 2006); videotaped interview playback with instructional feedback versus peer group feedback (Hettinga, 1978); and role‐play versus students’ ‘own problems’ procedures (Wells, 1976). In a rather complex design, Laughlin's (1978) study included two treatment arms and two control groups, one of which received no treatment. In two further studies (Schinke et al., 1978; Vinton & Harrington, 1994), the controls had a delayed start (operating as a waiting list procedure).

Significant issues with measurement are evident within the included studies and are acknowledged by several of the researchers (Collins,  1984 ; Greeno et al.,  2017 ; Laughlin,  1978 ; Vinton & Harrington,  1994 ). Methodological challenges will be considered in Section  6 .

Publication status

Five of the studies were dissertation theses (Collins,  1984 ; Hettinga,  1978 ; Laughlin,  1978 ; Rawlings,  2008 ; VanCleave,  2007 ), with the remainder being reported in peer reviewed journals (Barber,  1988 ; Greeno et al.,  2017 ; Keefe,  1979 ; Larsen & Hepworth,  1978 ; Ouellette et al.,  2006 ; Pecukonis et al.,  2016 ; Schinke et al.,  1978 ; Toseland & Spielberg,  1982 ; Vinton & Harrington,  1994 ; Wells,  1976 ).

Population characteristics

A total of 743 research participants took part across the 15 included studies. Seven studies (reported in six papers: Barber, 1988; Laughlin, 1978; Ouellette et al., 2006; Rawlings, 2008; Toseland & Spielberg, 1982; Vinton & Harrington, 1994) contained undergraduate students (N = 352) and five studies (Collins, 1984; Keefe, 1979; Larsen & Hepworth, 1978; Schinke et al., 1978; VanCleave, 2007) comprised Master's social work students (N = 285). One study (Wells, 1976) did not specify student type (N = 14), whilst two studies (Greeno et al., 2017; Hettinga, 1978; Pecukonis et al., 2016) used a combination of undergraduate and Master's students (N = 92).

Ten of the included studies report the number and percentage of men and women in the student samples. In Collins' (1984) study, of the 54 students in the lab group, 17% (N = 9) were men; however, of the 13 students in the lecture group sample, 46% (N = 6) were men, an unusually high proportion. Collins (1984, p. 74) acknowledges that this is not explained by the admissions procedures at either of the universities involved in the study. However, it must be remembered that the 13 students from the lecture group, who volunteered to be part of the study, are not necessarily representative of the cohort demographic.

A more consistent picture is evident amongst the other studies, in which men make up less than a third of the social work students in the samples, reflecting a demographic pattern found among qualified social workers. The number and percentage of men in the student samples (arranged in ascending order by percentage) were as follows: 6% (N = 2) for Rawlings (2008); almost 7% for both Ouellette et al. (2006) and VanCleave (2007) (N = 2 and N = 3, respectively); 11% (N = 7) for Vinton and Harrington (1994); 14% (N = 11) for Laughlin (1978); 15% (N = 8) in the study reported by Pecukonis et al. (2016) and Greeno et al. (2017); 18% (N = 7) for Hettinga (1978); 25% (N = 8) in Barber (1988, experiment 1); and just over 30% (N = 7) in the study conducted by Schinke et al. (1978). The sex of students was not reported in five of the studies (Barber, 1988, experiment 2; Keefe, 1979; Larsen & Hepworth, 1978; Toseland & Spielberg, 1982; Wells, 1976).

Due to differences in reporting practices, the age characteristics of the students in the included studies are harder to compare. In the same five studies identified above (Barber, 1988, experiment 2; Keefe, 1979; Larsen & Hepworth, 1978; Toseland & Spielberg, 1982; Wells, 1976), age characteristics were not reported.

The age range was not specified in Rawlings' (2008) study, although its students had the lowest mean age, 20.8 (18.8 for entering students and 22.9 for exiting students). The mean age of students in Laughlin's (1978) study was 23.4, with the broadest age range, 20–59. In Barber's (1988) paper, the ages for experiment 1 ranged from 19 to 46 years, with a mean age of 25.7 years. With a slightly broader age range of 19–54, students in Vinton and Harrington's (1994) study had a mean age of 25.9. In Collins' (1984) study, the ages of the lab‐trained students ranged from 21 to 43 years, with a mean age of 26.7; the lecture‐trained students are described as being ‘slightly older’ (p. 74). The age range for the students in the study reported by Pecukonis et al. (2016) and Greeno et al. (2017) was 20–55, with a mean age of 29.7. An age range was not specified in Schinke et al.'s (1978) study, although the mean age was 29.87. Of the studies where data about mean age were available, students in the study undertaken by Hettinga (1978) had the oldest mean age, 31.3, with an age range of 22–45. In Ouellette et al.'s (2006) study, an age range of 20–40+ is reported. A mean age is not provided; however, 53.3% of students were between the ages of 20 and 29, 30% were between the ages of 30 and 39, and 16.6% were older than 40. In keeping with the age ranges of the other studies, the age range in VanCleave's (2007) study was described as early twenties to mid‐fifties. No mean age was provided; however, 35% (N = 16) of students were between the ages of 20 and 25, almost 27% (N = 12) were between 26 and 30, 11% (N = 5) were between the ages of 31 and 35, and almost 27% (N = 12) were over 35 years.

Only the four studies conducted since 2000 reported information on ethnicity, in the following ways. In the study conducted by Ouellette et al. (2006), 60% (N = 18) of students were Caucasian, 33.3% (N = 10) were African American, 3.3% (N = 1) were Hispanic, and 3.3% (N = 1) identified as ‘Other’. Rawlings (2008) identified that 78% of students (N = 25) were Caucasian, almost 10% (N = 3) were Hispanic, just over 6% (N = 2) were biracial, 3% (N = 1) were African American and 3% (N = 1) were defined as ‘Other’. In the study reported by Pecukonis et al. (2016) and Greeno et al. (2017), just over 51% (N = 28) of students were Caucasian, 45% (N = 24) were Black and almost 4% (N = 2) were Hispanic. In VanCleave's (2007) study, over 95% (N = 43) of students were Caucasian, one student was African American and one was Japanese, each accounting for just over 2%. The earlier studies did not report on the ethnicities of their participants, reflecting changing trends in the collection of demographic data.

Data are absent for other demographic characteristics within the included studies.

Location characteristics

There is little variation in the geo‐political contexts in which the included studies were conducted. This is important because it reflects particular priorities, such as the primacy placed on experimental design, at the expense of others, including stakeholder involvement. One study, Collins (1984) (N = 67), was undertaken in Toronto, Canada, whilst Barber (1988) reports on two experiments conducted in Victoria, Australia (N = 82). One study, Toseland and Spielberg (1982), did not provide a location (N = 68). The remaining 11 studies were carried out in different US states, where the focus on evidence‐based teaching and learning in social work education is firmly established. Involvement and participation from people with lived experience was noticeably absent, the second of the Barber (1988) experiments and the client interviews in Collins' (1984) study being the exceptions. None of the included studies were conducted in the UK, where a strong tradition of service user and carer involvement in social work education prevails, which arguably explains, but does not justify, the omission of contributions from people with lived experience within the body of research identified in this review.

Intervention characteristics

Theoretical orientation

Experiential learning is referred to in the majority of the studies (Collins, 1984; Greeno et al., 2017; Keefe, 1979; Larsen & Hepworth, 1978; Laughlin, 1978; Pecukonis et al., 2016; Rawlings, 2008; Schinke et al., 1978; Toseland & Spielberg, 1982) as the underpinning theoretical orientation of the intervention under investigation. However, the term is not applied consistently. With its wide range of different meanings, ideologies, methods and practices, experiential learning is conceptually complex and difficult to define (Moon, 1999). Conceptualisations arising from two different traditions are evident within the included studies: first, the work of Carkhuff and Truax (1965) and Ivey and Authier (1971), which derives from psychotherapy, and second, the work of Kolb (1984) and Schön (1987), which is grounded in a constructivist view of education and has been particularly instrumental within professional courses.

Although deriving from psychotherapy, the microskills counselling approach developed by Ivey et al. (1968) and Ivey and Authier (1971) has informed the teaching of interviewing skills in social work education. Its content comprises well‐defined counselling skills including attending behaviour, minimal activity responses, and verbal following behaviour. Six of the included studies made reference to the work of Ivey and colleagues; however, five of them (Collins, 1984; Hettinga, 1978; Laughlin, 1978; Rawlings, 2008; VanCleave, 2007) did so simply within a discussion of the wider literature. It is only in Schinke et al.'s (1978) study that Ivey's work had a direct impact on the empirical evaluation itself: an adapted version of the Counsellor Effectiveness Scale developed by Ivey and Authier (1971) was used as one of the study's measuring instruments.

Referred to as the Human Relations training model, the work of Carkhuff and Truax (1965) and Carkhuff (1969c) has been more influential than Ivey's approach. A brief exploration of empathy as a theoretical construct helps to explain why Carkhuff and Truax's work has influenced social work education and practice. Whilst linguistic relevance can be seen in the Greek word ‘empatheia’, which means appreciation of another's pain, the philosophical underpinnings of the term empathy derive from the German word Einfühlung. Theodor Lipps expanded the conceptualisation of empathy to include the notion of minded creatures, of which inner resonance and imitation are a part. Lipps’ ideas influenced how empathy came to be understood in psychotherapy and are evident in the work of Sigmund Freud and Carl Rogers. Empathy was identified by Rogers (1957) as one of ‘the necessary and sufficient conditions’ for therapeutic personality change; his ideas about person‐centred practice remain central to social work education and practice today. Charles Truax, a protégé of Rogers, worked closely with Robert Carkhuff to explore how conceptual orientations such as empathy could be observed, repeated, measured and taught. Carkhuff and Truax (1965) developed and evaluated an integrated didactic and experiential approach in a counselling and psychotherapy context, which ‘focuses upon therapist development and growth’ (p. 333). Their work, and the ideas that influenced them, are evident throughout the earlier studies in this review where empathy was the focus. Barber (1988) cited Carkhuff's work on empathy in his discussion of the literature, whilst Keefe (1979) referred to it for teaching purposes only. Seven studies (Collins, 1984; Larsen & Hepworth, 1978; Laughlin, 1978; Toseland & Spielberg, 1982; VanCleave, 2007; Vinton & Harrington, 1994; Wells, 1976) used the Carkhuff scales (Carkhuff, 1969a, 1969b) as an outcome measure in their empirical research. As identified by Elliott et al. (2018), the Carkhuff scales were some of the earliest observer measures, which may well explain the popularity of this instrument. The focus the researchers of the included studies placed on empathy is striking and will be considered further in subsequent sections.

Also apparent in the literature is the experiential learning approach deriving from the experiential learning cycle developed by Kolb (1984) and the concept of reflective practice articulated by Schön (1987). Rawlings (2008), who provides the most comprehensive overview of experiential learning among the included studies, draws on the work of both. Huerta‐Wong and Schoech (2010) suggest that experiential learning has been a teaching technique used extensively to teach social workers skills in the United Kingdom and the United States since the 1990s. They explain that ‘experiential learning proposes that effective learning is influenced by a cycle of experimentation, reflection, research, and exercising’ (Huerta‐Wong & Schoech, 2010, p. 86), elements of which feature in the body of work comprising this review. The experimentation component is well defined and clearly identifiable. Keefe (1979) describes highly structured role‐play situations occurring within an experiential learning component. Similarly, in the live supervision intervention reported by Pecukonis et al. (2016) and Greeno et al. (2017), experiential learning opportunities are described as occurring within a small group format, using a one‐way mirror in a classroom setting to practise with standardised clients. VanCleave (2007) appears to draw on both concepts of experiential learning outlined above: the ‘homework experientials’ featuring in the training intervention comprise a series of practical tasks based on a range of different learning styles, which students complete between sessions to augment the development of empathy. In Ouellette et al.'s (2006) study, reference is made to the importance of adult learning principles and effective active learning paradigms in technology‐supported instructional environments.

Bandura's propositions are also evident within the included studies. VanCleave (2007) draws on social learning theory (Bandura, 1971), recognising that the modelling of skills is important for learning. Ideas about self‐reinforcement (Bandura, 1976) influenced Laughlin (1978) in his consideration of the impact of internal and external motivation. Rawlings' (2008) exploration of the role of self‐efficacy in skill development was informed by self‐efficacy theory and social cognitive theory (Bandura, 1997). Behaviour, according to social cognitive theory, is influenced by goals, outcome expectations, self‐efficacy expectations and socio‐structural determinants (Bandura, 1982). Much of the literature indicates the potential impact of students’ self‐efficacy beliefs on the teaching and learning of communication skills in social work education.

Irrespective of which conceptualisation is used, the value of experiential learning has withstood the test of time, and it is the front runner in terms of the theoretical orientation underpinning CST as a whole or specific components of it, both of which are addressed in this review. Toseland and Spielberg (1982) consider experiential learning fundamental to the systematic training that the teaching of communication skills requires. In a review of the practice of teaching and learning of communication skills in social work education in England, Dinham (2006) identified a strong emphasis on experiential and participative teaching and learning methods.

Other theories, for example ego psychology in Hettinga (1978), are discussed, particularly in the dissertation theses; however, the theoretical orientations underpinning the pedagogical approaches are largely ill‐defined or absent from the outcome studies in this review.

Delivery and approach

The included studies do provide some insight into the delivery format and teaching methods under investigation, especially where studies compare teaching modalities or approaches. A key question in the earlier studies is whether practising skills in communication and empathy (utilising an experiential component) is more effective than a purely didactic, traditional lecture‐based approach. Larsen and Hepworth (1978) compared the efficacy of a traditional didactic intervention with an experiential intervention used within communication laboratories. Collins (1984) also compared a lecture‐based training course with a skills lab training course. The results of these studies supported practice‐based experiential learning. By contrast, when Keefe (1979) compared an experiential‐didactic course and a structured meditation experience with a control group, the experiential group did not make the expected gains, whereas those receiving meditation did. In an extension of the basic design, Keefe (1979) found a combination of experiential training and structured meditation proved most effective.

Some of the more recent studies focussed on classroom‐based teaching versus online delivery, an issue particularly relevant in the current global pandemic, which in many instances has seen teaching move to purely online or blended delivery. Ouellette et al. (2006) compared a classroom‐based instructional approach with an online web‐based instructional approach and found no significant differences between the two. In the study reported by Greeno et al. (2017) and Pecukonis et al. (2016), however, live supervision with standardised clients compared favourably with the treatment as usual (TAU), which they describe as online self‐study.

Other studies compared more specific components within the intervention. Active learning was important for students, whether that included participation in role‐play with peers or with simulated clients. Wells (1976), in comparing the use of role‐play with the use of participants’ own problems, found neither proved preferable but identified the active experimentation of students as the key factor in their interpersonal skills development.

The role of the instructor was also an issue of interest. Hettinga (1978) examined the benefits of 1:1 instructor feedback compared with small group feedback; Laughlin (1978) focused on the role of instructor feedback versus self‐evaluation; and Greeno et al. (2017) and Pecukonis et al. (2016) expressed optimism for the use of live supervision. Again, whilst no claim can be made about who the feedback provider (self, peers or instructor) should be, active engagement with the evaluation and feedback process seems to be the underlying mechanism which facilitates change. Opportunities for playback were another area for investigation. Reflecting the rapid development of technology in recent years, Laughlin (1978) investigated the use of audiotapes, whereas Vinton and Harrington's (1994) instructional package consisted of students watching videotapes of themselves or others engaging in communicative interactions. Opportunities to observe practice have a facilitative quality, a point recognised by the study authors who drew on Bandura's work.

Although there are not enough studies comparing like for like to draw any firm conclusions, the current body of research indicates that the rehearsal of skills through role‐play or simulation, accompanied by opportunities for observation, feedback and reflection, offers benefits for systematic CST, facilitating small gains, on skill‐based outcome measures at least. Some of the authors included in this review are confident in recommending specific teaching methods. Toseland and Spielberg (1982) suggest practice, feedback and modelling are necessary; Schinke et al. (1978) add role‐playing, cueing, and positive reinforcement to this list. Greeno et al.'s (2017) advice to educators is similar, with the added recommendation of supervision. Pecukonis et al. (2016) highlighted the modelling of techniques to students as key. In a review of empathy training in which meta‐analysis was feasible, Teding van Berkhout and Malouff (2015) suggest that studies in which behavioural skills were developed through instruction, modelling, practice and feedback had higher, but not significantly higher, effect sizes than those in which some or all of these components were missing. Findings from qualitative research indicate that students learn communication and interviewing skills through the practice, observation, feedback and reflection that accompany simulation and role‐play activities, which Banach et al. (2020) found mapped onto Kolb's (1984) model of experiential learning. Further exploration of these issues is required.

Implementation factors: Amount, duration and uptake

Considerable variation in terms of amount and duration is evident across the included studies. The briefest intervention was a single 4‐h training session (Schinke et al., 1978), whilst the longest, described as ‘extensive’, appears to be interspersed throughout a 4‐year degree course (Barber, 1988). The literature has documented the ability to teach empathy at a minimally facilitative level in as little as 10 h (Carkhuff, 1969c; Carkhuff & Berenson, 1976; Truax & Carkhuff, 1967). Indeed, Larsen and Hepworth (1978) found positive change occurred from a 10‐h intervention, but ‘estimated that 20 h, preferably 2 h per week for 10 weeks, would be ample’ (p. 79). However, Toseland and Spielberg (1982) suggested that the course under investigation in their study, which lasted approximately 45 h (30 h of which were experiential learning in a laboratory), may not be sufficient to increase students’ skill to the level of competence expected of a professional worker. In the study undertaken by VanCleave (2007), implementation of the intervention appeared to vary between students, because ‘when assignment by cohort could not be achieved, training was subdivided into smaller groups. Given the flexibility of the researcher, individual training was accommodated’ (p. 119). It is likely this variation occurred to enhance student participation in the study, maximising data collection opportunities for research purposes.

A number of studies did not report details regarding the amount and duration of the intervention, and some provided rather vague or imprecise details, rendering comparative aims regarding amount and duration of training futile.

The studies focus on what was taught, but data on uptake are sorely lacking. Some of the included studies (Collins, 1984; Larsen & Hepworth, 1978; Ouellette et al., 2006) compared students’ personal and demographic characteristics alongside their pre‐course training and/or experience. The roles of sex, age and pre‐course experience were key considerations. Social work courses attract few men compared to women, and often have small cohorts, making judgements on demographic characteristics difficult. Vinton and Harrington (1994), who examined the impact of sex on students’ empathy levels, found women had higher QMEE scores than men at both pre‐ and post‐test. This is consistent with a study undertaken by Zaleski (2016), which found that female students in medicine, dentistry, nursing, pharmacy, veterinary science, and law had higher levels of empathy than their male peers.

Counterintuitively, age was not found to be significantly correlated with communication skills. Ouellette et al. (2006) queried whether age was a factor in learning, yet ‘summary statements’ was the only item on their interview rating scale found to be significantly correlated with age. Collins (1984) found that the amount of prior training had no impact on students’ ability to demonstrate interpersonal skills. Similarly, in a comparison of the mean levels achieved by groups dichotomised on the basis of age, sex, previous social work experience, and undergraduate social welfare or other major, Larsen and Hepworth (1978) found such attributes yielded no significant differences on either pre‐ or post‐test scores. Both studies challenge the assumption that students with more social care experience before training possess more or better communication skills than those without. In terms of uptake, Larsen and Hepworth (1978, p. 78) suggested that ‘a mix with contrasting skill levels appears advantageous’, because ‘students with higher‐level skills modelled facilitative responses in the practice sessions for students with lower skills, thus encouraging and assisting the latter to achieve higher levels of responding’. In the study conducted by Laughlin (1978), self‐instruction students exhibited significantly higher mean scores for enjoyment and number of optional practice items completed than students in an instructor‐led group. Self‐instruction ‘creates a sense of self‐reliance, confidence, and personal responsibility for learning which promotes enjoyment and devotion to task not present under circumstances of external control’ (Laughlin, 1978, p. 67). Self‐instruction appears to facilitate uptake. Other issues affecting student learning, such as concentration or care‐giving responsibilities, and their impact on uptake were not addressed in any of the studies included in this review.

5.2.2. Excluded studies

There were 33 papers covering 30 studies which narrowly missed the inclusion criteria, or which content experts might expect to see in the review. There were two main reasons for exclusion, both of which are outlined in the review protocol (Reith‐Hall & Montgomery, 2019). First, the study design did not meet the minimum standards of methodological rigour, predominantly because an appropriate comparator was lacking. Second, the population was too specific, drawn from social work courses focusing purely on child welfare or working with children, or too general, including students drawn from a variety of different courses. A full list of excluded studies and reasons for exclusion is presented in Table 2.

Table 2: Excluded studies.

Author (first) | Date | Reason for exclusion
Andrews | 2017 | No comparator
Bakx | 2006 | No comparator
Barclay | 2012 | No comparator
Bogo | 2017 | No intervention
Bolger | 2014 | No comparator
Carrillo | 1993 | No comparator
Carrillo | 1994 | Unsuitable comparator
Carter | 2018 | No comparator
Cartney | 2006 | No comparator
Cetingok | 1988 | Insufficient time points
Collins | 1987 | Unsuitable intervention and comparator
Corcoran | 2019 | No comparator
Domakin | 2013 | No comparator
Gockel | 2014 | No comparator
Hansen | 2002 | No comparator
Hodorowicz | 2018 | Population too specific (child welfare training)
Hodorowicz | 2020 | Population too specific (child welfare training)
Hohman | 2015 | No comparator
Kopp | 1982 | No comparator
Kopp | 1985 | No comparator
Kopp | 1990 | No comparator
Koprowska | 2010 | Unsuitable comparator
Lefevre | 2010 | No comparator
Magill | 1985 | No comparator
Mishna | 2013 | Unsuitable comparator
Nerdrum | 1995 | Population too specific (child care pedagogues)
Nerdrum | 1997 | Population too specific (child care pedagogues)
Nerdrum | 2003 | Population too specific (child care pedagogues)
Patton | 2020 | Population too general (psychology & social justice)
Rogers | 2009 | No comparator
Scannapieco | 2000 | Population too specific (child welfare training)
Tompsett | 2017 | Instrument development
Wodarski | 1988 | No clear intervention

5.3. Risk of bias in included studies

Both review authors assessed the risk of bias of the included studies, independently applying the risk of bias tools: RoB 2 (Sterne et al., 2019) for the randomised trials and ROBINS‐I for the non‐randomised studies of interventions (Sterne, Hernán, et al., 2016). Both tools comprise a set of bias domains, intended to cover all issues that might lead to a risk of bias (Boutron et al., 2021). We used the Methodological Expectations of Cochrane Intervention Reviews (MECIR) guidance (Higgins et al., 2021), the revised Cochrane risk‐of‐bias tool for randomised trials (RoB 2) (Higgins et al., 2019) and the Risk of Bias in Non‐randomised Studies of Interventions (ROBINS‐I) detailed guidance (Sterne, Higgins, et al., 2016) to inform our judgements. To answer the review's research question, we were interested in assessing the effect of assignment to the intervention, as opposed to adherence to the intervention. Discrepancies between review author judgements were resolved through discussion.

Both reviewers judged there to be a moderate or high/serious risk of bias in all but three of the 15 included studies: only one study received a low risk of bias rating overall, and a further two studies received a low rating overall for one outcome measure but not the other. The lack of information for certain domains was a problem in all of the studies, highlighting that, in future, researchers should report a greater level of detail to enable the risk of bias to be fully assessed. Using a tool such as CONSORT‐SPI (Grant et al., 2018) would facilitate this.

5.3.1. Risk of bias in randomised trials

As shown in Table 3, there was considerable variation within the risk of bias domains of the randomised studies. Only one study was rated as low risk of bias, one was rated as having ‘some concerns’, three were rated as being at high risk of bias and one study (reported in two papers) received a mix of overall bias ratings, according to the outcomes measured. Limitations were evident in all of the studies, including the lack of information reported in domains 2 and 5.

Table 3: Risk of bias summary for randomised studies, based on RoB 2.

Study | Domain 1: Randomisation process | Domain 2: Deviations from intended interventions | Domain 3: Missing outcome data | Domain 4: Measurement of the outcome | Domain 5: Selection of the reported result | Overall risk of bias
Hettinga (1978) | LOW | Not reported | HIGH | Self‐perceived skills: HIGH; Self‐esteem: SOME | Not reported | HIGH
Larsen and Hepworth (1978) | LOW | Not reported | LOW | LOW | Not reported | LOW
Laughlin (1978) | LOW | Not reported | HIGH | HIGH | Not reported | HIGH
Greeno et al. (2017) and Pecukonis et al. (2016) (same study) | LOW | Not reported | LOW | Perceived empathy: SOME; Self‐efficacy: SOME; Behaviour change: LOW | Not reported | Perceived empathy: SOME; Self‐efficacy: SOME; Behaviour change: LOW
Schinke et al. (1978) | SOME | Not reported | LOW | Self‐perceived skills: SOME; Behaviour change: LOW | Not reported | SOME
Wells (1976) | SOME | HIGH | HIGH | LOW | Not reported | HIGH

Domain 1—Bias arising from the randomisation process

Randomisation aims to avoid an influence of either known or unknown prognostic factors. The amount of information provided by the study authors regarding the randomisation process varied considerably. Where there was sufficient information about the method of recruitment and allocation to suggest the groups were comparable with respect to prognostic factors (Hettinga, 1978; Larsen & Hepworth, 1978; Laughlin, 1978), the risk of bias was considered low. This level of detail is provided by Laughlin (1978): a table of random numbers ensured allocation sequence generation; manila envelopes were used for allocation sequence concealment; and potential prognostic factors such as age, prior job and training experience were measured as equivalent for all groups at the outset.

Conversely, information required for RoB 2 was missing from the other studies, some of which was gleaned by directly contacting study authors. Elizabeth Greeno provided additional details about the randomisation process, enabling the risk of bias in the study reported by Greeno et al. (2017) and Pecukonis et al. (2016) to be rated as low. Schinke et al. (1978) and Wells (1976) stated that students were randomly assigned to groups; however, they did not provide any details about how students were recruited or allocated. Both authors have passed away, so further information could not be ascertained. Although there were no obvious baseline differences between groups to indicate a problem with the randomisation process, the absence of detailed information led to a judgement of some concerns for both studies in this domain.

Domain 2—Risk of bias due to deviations from the intended interventions (effect of assignment to intervention)

Given placebos and sham interventions are generally not feasible in educational interventions, students and staff tended to be aware of which intervention the students were assigned to, particularly since students were largely drawn from cohorts known to each other. Control group scores were markedly different from intervention scores, suggesting contamination between groups did not occur. In reviewing the papers, there were no reports of control groups receiving the active intervention, nor did trialists report that they had changed the intervention. However, a lack of information about deviations from the intended interventions is reflected in our use of the term ‘not reported’.

Similarly, there was no information as to whether an appropriate analysis had been used to estimate the effect of assignment to intervention. Higgins et al. (2019, p. 26) acknowledge that ‘exclusions are often poorly reported, particularly in the pre‐CONSORT era before 1996’. Apart from the study reported by Pecukonis et al. (2016) and Greeno et al. (2017), the randomised trials included in this review were conducted in the 1970s, which helps to explain why making interpretations of the risk of bias for these empirical studies was particularly difficult. For most of the randomised trials, there was nothing to suggest a potential for a substantial impact on the result of the failure to analyse participants in the group to which they were randomised. However, again, a lack of information led the reviewers to replace a bias rating with ‘not reported’. Wells' (1976) study provides an exception to this rule. Noting that two students from each group swapped due to placement clashes, Wells did not perceive this as an issue. However, the data of these students were analysed in terms of the interventions they received rather than the interventions to which they were initially assigned. As a result, both review authors deemed the risk of bias rating to be high for this domain.

Domain 3: Risk of bias due to missing outcome data

Some studies (Greeno et al., 2017; Larsen & Hepworth, 1978; Pecukonis et al., 2016; Schinke et al., 1978) retained almost all of their participants; hence little or no data were missing, warranting a low risk of bias rating for the missing outcome data domain. Pecukonis et al. (2016), for example, identify low attrition as a strength in their study, highlighting that retention at T3 and T4 was 96% and 94%, respectively (p. 501).

Three studies were judged to be at high risk of bias due to missing data and a lack of any accompanying information. Laughlin ( 1978 ) identified that out of 68 students in her study, ‘seven subjects failed to complete either the pre‐ or post‐test because of absence from class on the day these tests were administered’ (p. 40). Information about the group for which data were missing was not provided. In Wells' ( 1976 ) study, the four students who were not present at post‐testing were excluded from the analysis, and whilst the number may seem small, they represent a significant proportion of the original study sample, which comprised only 14 students. Hettinga ( 1978 , p. 57) ‘assumes that no interaction of selection and mortality occurred’, yet researcher assumptions do not constitute evidence. In all three of these studies, the reasons for the absences were unclear and there was no evidence to indicate that the result was not biased by missing outcome data. The authors did not discuss whether missingness depended on, or was likely to depend on, its true value. Yet it is possible, likely even, that missingness in the outcome data could be related to the outcome's true value if, for example, students who perceived their communication skills to be poor decided not to attend the post‐test measurements. As a result of this, and the study authors’ lack of attention to these issues, we judged there to be a high risk of bias due to missing outcome data in the trials undertaken by Hettinga ( 1978 ), Laughlin ( 1978 ), and Wells ( 1976 ).

Domain 4: Risk of bias in measurement of the outcome

Randomised trials are judged as low risk of bias in measurement of the outcome if the methods are deemed appropriate, do not differ between intervention groups, and ensure that independent assessors are blinded to intervention assignment. Wells' (1976) study explicitly met these criteria. Based on Larsen and Hepworth's (1978) article alone, the risk of bias would have been rated conservatively high, because the article does not say whether the outcome assessors knew to which group the students belonged. However, in her PhD thesis, on which Larsen and Hepworth's (1978) article is based, Larsen (1975) clearly states that three social work raters were blind to the identification of the student and to their intervention/control group status. The additional information enabled the reviewers to judge this domain as being at low risk of bias.

In studies where two different outcome measures were used, bias ratings were judged separately, indicated by the split outcomes in domain 4 in Table 3. For Greeno et al. (2017), Pecukonis et al. (2016) and Schinke et al. (1978), low bias ratings were given for measures of behaviour change due to evidence of independent raters, blind to the intervention status of participants. However, the self‐report measures used by each warrant a higher risk of bias rating. According to the RoB 2 guidance, for self‐reported outcomes, the assessment of outcome is potentially influenced by knowledge of the intervention received, leading to a judgement of at least some concerns (Higgins et al., 2019, p. 51). If review authors judge it likely that participants’ reporting of the outcome was influenced by knowledge of the intervention received, then a high risk of bias is justified. The adapted Counselor Effectiveness Scale used by Schinke et al. (1978) required participants to rate their attitudes towards their own performance. In this study, students were aware of which intervention group they belonged to, yet the waiting list control procedure reduced potential issues such as social desirability, hence a rating of some concerns was considered appropriate. In the study reported by Greeno et al. (2017) and Pecukonis et al. (2016), whose subjective measures included perceived empathy and self‐efficacy respectively, it seems probable that students were aware of the intervention group they belonged to. Given there were no differences between groups on either outcome measure, it seems unlikely that participants’ reporting of the outcome(s) was influenced by knowledge of the intervention received. The ‘some concerns’ rating was applied to both.

Hettinga (1978) reports that the researcher had no knowledge of the treatment groups to which the participants were randomly assigned. However, the outcome assessors were the students themselves, who were completing two subjective measures: the Rosenberg Self‐Esteem Scale and the self‐perceived interviewing competence (SPIC) questionnaire. It is likely that the students were aware of which intervention they received. The lack of change for self‐esteem meant this outcome measure was given the ‘some concerns’ rating. However, we took a more cautious approach to students’ self‐perceived interviewing competence, as the results were significant. Knowledge of the intervention could have had an impact, for example, if those students in the self‐instruction group had tried harder. There was no information to determine the likelihood that assessment of the outcome was influenced by knowledge of the intervention received, which led to a conservative judgement from the reviewers of a high risk of bias for this outcome measure.

In the study conducted by Laughlin ( 1978 ), the high risk of bias is due to known differences in the measurement of the outcome between the intervention groups. Students in the self‐reinforcement group rated their own empathic responses, whereas the supervisor rated the responses of students receiving the other experimental condition. Higgins et al. ( 2019 , p. 50) point out that, ‘outcomes should be measured or ascertained using a method that is comparable across intervention groups’, which is clearly not the case in this study.

Domain 5: Risk of bias in selection of the reported result

Bias due to selective reporting can occur when all the planned results are not completely reported. Whilst there were no unusual reporting practices identified within the randomised studies, none of them had stated their intentions in a published protocol or in additional sources of information in the public domain, making the risk of bias in selection of the reported result very difficult to ascertain. Greeno et al. (2017) and Pecukonis et al. (2016) report on the same study, hence these papers were compared for consistency; however, they report on different outcomes, limiting the usefulness of this approach. Email contact with Elizabeth Greeno suggests that whilst the authors had a formal plan to follow, this was not published. Consequently, verifying how reported results were selected was not possible. Due to a lack of information in all the included randomised trials, we could not make a risk of bias judgement for this domain.

Overall risk of bias

Only one included study (Larsen & Hepworth, 1978) received a low risk of bias rating overall; one study (Schinke et al., 1978) was considered to have some concerns; three studies (Hettinga, 1978; Laughlin, 1978; Wells, 1976) received high risk of bias ratings overall; and one study (reported by Greeno et al. (2017) and Pecukonis et al. (2016)) varied between low risk and some concerns depending on the outcome measure reviewed. The lack of information, evident in all of the domains, is problematic and may have elevated the risk of bias for some studies and in some domains. The absence of protocols or accompanying documentation for the studies has compounded this issue. Boutron et al. (2021) state that the completeness of reporting of published articles is generally poor, and that information fundamental for assessing the risk of bias is commonly missing. Whilst reporting is seen to be improving over time, the majority of the included trials were conducted in the 1970s and are evidently a product of their time. Where study authors have not provided sufficient information, we have indicated that information was not reported. We also acknowledge that we adopted a conservative approach, therefore we might have judged the risk of bias harshly, potentially elevating the risk of bias either at the domain level or in the overall bias judgement for some studies. Frequent discussions supported our endeavours to be consistent.

5.3.2. Risk of bias in non‐randomised studies

As shown in Table 4, there are clear similarities across some domains as well as some marked differences in the risk of bias ratings of the non‐randomised studies, which were judged in accordance with ROBINS‐I. For the overall bias ratings, the review authors either judged there to be a ‘moderate’ or ‘serious’ risk of bias in each study outcome reviewed or, in one instance, issued a ‘no information’ rating, because assessing the risk of bias was not feasible.

Table 4: Risk of bias for non‐randomised studies, based on ROBINS‐I.

Study | Domain 1: Confounding | Domain 2: Selection of participants | Domain 3: Classification of interventions | Domain 4: Deviations from intended interventions | Domain 5: Missing data | Domain 6: Measurement of outcomes | Domain 7: Selection of the reported result | Overall risk of bias
Barber (1988), Experiment 1 | SERIOUS | LOW | LOW | No information | No information | No information | No information | SERIOUS
Barber (1988), Experiment 2 | SERIOUS | LOW | LOW | No information | No information | No information | No information | SERIOUS
Collins (1984) | MODERATE | LOW | SERIOUS | No information | LOW | Analogue measure: MODERATE; Other measures: LOW | No information | SERIOUS
Keefe (1979) | No information | LOW | LOW | No information | LOW | SERIOUS | No information | SERIOUS
Ouellette et al. (2006) | MODERATE | LOW | LOW | No information | LOW | LOW | No information | MODERATE
Rawlings (2008) | MODERATE | LOW | LOW | No information | SERIOUS | Direct practice: LOW; Self‐efficacy: MODERATE | No information | SERIOUS
Toseland and Spielberg (1982) | MODERATE | LOW | LOW | No information | LOW | No information | No information | MODERATE
VanCleave (2007) | MODERATE | LOW | LOW | No information | LOW | Empathic response: LOW; Empathic concern: SERIOUS | No information | Empathic response: MODERATE; Empathic concern: SERIOUS
Vinton and Harrington (1994) | No information | LOW | LOW | No information | Emotional empathy: LOW; Expressed empathy: No information | Emotional empathy: SERIOUS; Expressed empathy: No information | No information | Emotional empathy: SERIOUS; Expressed empathy: No information

Domain 1: Bias due to confounding

Sterne, Higgins, et al. (2016, p. 20) suggest ‘baseline confounding is likely to be an issue in most or all NRSI’, which was reflected in the included studies of this review. The lack of information in two of the studies (Keefe, 1979; Vinton & Harrington, 1994) meant that an assessment of bias due to confounding could not be provided. The other non‐randomised studies were rated as having at least moderate risks of confounding, since, by the nature of their designs, causal attribution was not possible. As one study author comments, ‘selection of a nonrandom design subjected the research to confounds and threats to validity’ (VanCleave, 2007, p. 105). Indeed, VanCleave (2007) discusses optimal group equivalency, and suggests that the distributions of some key confounders ‘fell pretty evenly’ (p. 135) between the intervention and control groups, hence a moderate risk of bias rating was appropriate.

Whilst it is clearly not possible to control for all confounders, attempts were made by some study authors to use an analysis method that controlled for some of the most obvious ones, resulting in judgements of moderate risk of bias. Collins (1984) measured pre‐existing group differences, analysed them using a χ² test and found them to be unproblematic. Toseland and Spielberg (1982) used χ² and Kendall's τ to measure a wide range of confounding variables, such as age, educational experiences and previous human services experience, from which they determined that students in the intervention and control groups were similar to one another regarding key characteristics. Ouellette et al. (2006) performed similar analyses on a wider range of confounding variables, which included age, credit hours, hours per week of paid employment undertaken during the semester, previous interviewing experience and grade point average. Age was the only variable to be statistically significant; the online group were a little older than the classroom group.
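To make the logic of these baseline‐equivalence checks concrete, a minimal sketch follows, written in Python using the scipy library and entirely hypothetical counts; it illustrates the general technique rather than reproducing any included study's analysis.

```python
# Minimal sketch of a chi-squared baseline-equivalence check, of the kind
# Collins (1984) and Toseland and Spielberg (1982) report.
# All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: intervention vs. control group.
# Columns: students with vs. without previous human services experience.
counts = [[14, 11],
          [12, 13]]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# A non-significant result (e.g., p > 0.05) is read as no evidence of a
# baseline difference between the groups on this characteristic.
```

Under this logic, a non‐significant test is taken as reassurance that the groups are broadly comparable on a measured characteristic, although, as noted above, unmeasured confounders remain untouched by such checks.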

In a design comparing first and final year students, Rawlings (2008) sought to establish comparability between groups based on sex, ethnicity, grade point average, and age. Again, it appeared only age was significant, reflecting the fact that the final year students were further into their studies than those entering their first year. Barber (1988) employed a similar design; however, both experiments were rated as having a serious risk of bias due to confounding factors. Student characteristics were not measured in either experiment, so it is impossible to be sure that the group receiving the microskills training did not differ in some way (other than the dependent variable) from the comparator student cohort.

Domain 2: Bias in selection of participants into the studies

This domain is only concerned with ‘selection into the study based on participant characteristics observed after the start of intervention… the result is at risk of selection bias if selection into the study is related to both the intervention and the outcome’ (Sterne, Higgins, et al., 2016, p. 30). There was nothing to suggest that any students were selected based on participant characteristics after the intervention had commenced in any of the studies, therefore a low risk of bias rating was given to all of the studies for this domain.

Domain 3: Bias in classification of interventions

All of the non‐randomised studies used population‐level interventions; therefore, the population is likely to be clearly defined and the collection of the information is likely to have occurred at the time of the intervention (Sterne, Higgins, et al., 2016, p. 33). As a result, the bias ratings for this domain were low in almost all of the studies. We could have issued ‘no information’ ratings but decided a low rating was probably a better reflection of the non‐randomised studies in this domain. One study provides an exception to the rule. Collins (1984, p. 67) stated, ‘it was not possible to establish a control group where no laboratory training took place’. This suggests the lecture‐trained and lab‐trained groups were not as distinctly different as was necessary, hence the serious risk of bias rating was applied for this domain.

Domain 4: Bias due to deviations from intended interventions

None of the studies reported on whether deviation from the intended intervention took place, hence the no information rating was issued for this domain across all of the studies.

Domain 5: Bias due to missing data

For some of the non‐randomised studies (Collins,  1984 ; Keefe,  1979 ; Ouellette et al.,  2006 ; Toseland & Spielberg,  1982 ), data sets appeared complete or almost complete. In VanCleave's ( 2007 ) study, where attrition was slightly higher, the number of missing participants was similar across the intervention group ( N  = 3) and control group ( N  = 2); reasons for drop‐out were also provided. A low bias rating was given for the missing data domain in these studies.

In Vinton and Harrington's ( 1994 ) study, a complete data set was provided for the QMEE scores, hence a low bias rating judgement was warranted, but the absence of student numbers for the Carkhuff scores meant a bias rating for this outcome measure could not be issued. An absence of information, on which to base a judgement, was also reflected in the results of Barber's ( 1988 ) experiments.

In Rawlings' (2008) study, results were reported as if all student data were present; however, data were missing for some of the entering students. It is concerning that the results tables do not acknowledge the missing data. An imputation approach, such as last observation carried forward or the use of group means, would have enabled the missing data to be dealt with, but instead the researcher simply analysed the data available. Given that the missingness is not explained, both reviewers agreed that a serious risk of bias rating was justified.
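As an illustration of the two simple approaches mentioned above, the following sketch uses the pandas library with entirely hypothetical scores; it shows the mechanics only, and neither method substitutes for an explanation of why the data are missing.

```python
# Hypothetical pre/post scores with missing post-test values, illustrating
# last observation carried forward (LOCF) and group-mean imputation.
import pandas as pd

df = pd.DataFrame({
    "group": ["intervention", "intervention", "control", "control"],
    "pre":   [2.1, 2.4, 2.0, 2.3],
    "post":  [3.0, None, 2.2, None],  # two students missing at post-test
})

# LOCF: carry the last observed score (here, the pre-test) forward.
df["post_locf"] = df["post"].fillna(df["pre"])

# Group means: replace missing post-tests with the mean of the student's group.
group_means = df.groupby("group")["post"].transform("mean")
df["post_group_mean"] = df["post"].fillna(group_means)

print(df)
```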

Domain 6: Bias in measurements of outcomes

The timing of outcome measurements was problematic in three of the studies. In Collins' (1984) study, a delay of approximately 3 weeks occurred for students completing the analogue measures, which reduced the time gap between pre‐ and post‐test training scores. A bias rating of moderate concern was justified, given this could have led to an under‐estimation of the positive gains made by students on this outcome measure.

In Keefe's (1979) study, although students were tested after their respective interventions, the interventions were of different durations, hence the data collection time points varied. These are not comparable assessment methods. The meditation group was also tested three times, thus familiarity with the test, rather than a genuine improvement, may have produced the higher scores on the Affective Sensitivity Scale. Keefe (1979) states that levels of meditation attainment were blind rated (p. 36); however, students in the experiential intervention group only self‐assessed, and this subjectivity increased the risk of bias in the measurement of outcomes. These issues elevated the risk of bias in this domain to serious.

VanCleave (2007) reports, ‘the Davis self‐inventory was completed by the participant before, or following, each 8 excerpt role played situation’ (p. 118). Inconsistency surrounding the timing of when the instrument was completed led to a serious bias rating for the outcome measure of empathic concern and perspective taking. However, a low rating was given for empathic response, where timing issues were not a cause for concern and independent raters were not aware of students’ intervention group status. The different ratings applied to each outcome are represented by the split ratings for this domain in Table 4.

The same approach of splitting the outcome measures domain was taken in Rawlings' (2008) study. The direct practice outcome was judged to have a low risk of bias rating because assessors were blinded to intervention status, whereas the self‐efficacy outcome received a moderate risk of bias rating, as the students themselves were the outcome assessors. Given the students comprised discrete cohorts, knowledge of the intervention group was not considered problematic by the reviewers. Conversely, the self‐assessment measure in Vinton and Harrington's (1994) study warranted a serious risk of bias rating. The potential for study participants to be influenced by knowledge of the intervention they received was considerable. The emotional empathy scores of the control group dropped considerably at post‐test, which could be an indication that the students had become aware that their peers were receiving beneficial interventions aimed at developing empathy while they were not. Discussions between students were more likely in this study given they were all in the same cohort. Contamination effects could have impacted students’ self‐assessment scores.

Independent outcome assessors and appropriate blinding were employed for all of the outcome measures in Collins' (1984) study and for the videotaped interviews in Ouellette et al.'s (2006) study, which, with the exception of the timing issues associated with Collins' (1984) analogue measure, resulted in low bias ratings for the outcome measures in these two studies.

Key information was lacking in some studies. Notably in Barber's ( 1988 ) experiments, a judgement about the methods of outcome assessment could not be made at all due to the absence of information. Toseland and Spielberg ( 1982 ) described their judges as being independent but did not state whether or not they were aware of which intervention the student had received. For the outcome relating to empathic response, Vinton and Harrington ( 1994 ) provided no information about blinding or the independence of the outcome assessors. Potentially then, this study is also at risk of researcher allegiance bias. If, for example, the outcome assessors were part of the same institution as the instructors and the students, or of even more concern, if the assessors were the instructors, then this could pose a serious risk of bias, because potentially they have a vested interest in the findings. It was not possible to establish assessor independence, so the reviewers opted for a ‘no information’ rating for the Carkhuff scales outcome measurement in Vinton and Harrington's ( 1994 ) study.

Research suggests that if study authors play a direct role, studies are more likely to be biased in favour of the treatment intervention (Eisner,  2009 ; Maynard et al.,  2017 ; Montgomery & Belle Weisman,  2021 ). There is a distinct possibility that researchers of the included studies delivered the interventions themselves, leading to a further source of bias. VanCleave, for example, who had 19 years of teaching experience as an adjunct in the university where her research was conducted, acknowledged that ‘the researcher acted as teacher and facilitator in the intervention, which is typically not a recommended research strategy’ (VanCleave,  2007 , p. 117). The same issue is likely present in at least some of the other non‐randomised studies, although there was a lack of information from which to establish its presence or impact.

Domain 7: Bias in selection of reported results

There was no obvious bias in the reporting of results for any of the reported outcomes in the non‐randomised studies; however, there were no protocols or a priori analysis plans against which to compare the reported outcomes with the intended outcomes. The studies were not reported elsewhere, hence external consistency could not be established. The ‘no information’ category was deemed most appropriate by both reviewers.

Overall risk of bias judgement

Only two studies (Ouellette et al.,  2006 ; Toseland & Spielberg,  1982 ) received an overall bias rating of moderate, reflecting a moderate rating in the confounding domain. Other studies (Barber,  1988 ; Collins,  1984 ; Keefe,  1979 ; Rawlings,  2008 ) were considered to be at serious risk of bias overall, due to receiving a serious risk of bias rating in at least one domain. For one study (Vinton & Harrington,  1994 ), the absence of information in several domains led to a ‘No information’ rating in the overall risk of bias judgement for one outcome measure but a serious risk of bias in another. Similarly, another study (VanCleave,  2007 ) also received a split rating for the overall risk of bias domain, with a moderate risk of bias for one outcome measure and a serious risk of bias for the other.

5.4. Effects of interventions

The results, as shown in Table 5, are reported for the available data relevant to answering the research question, using either the mean post‐test differences between intervention and control groups or the mean change scores between the two groups. As outlined in Section 5.2.1, extreme clinical heterogeneity exists between the included studies of this review, in terms of study designs, population characteristics, intervention types and features, comparators, outcomes and outcome measures. For example, in what appears to be the most promising example of a comparable situation, empathic understanding, the heterogeneity of the intervention is too broad to meta‐analyse data in a meaningful way. Of the four studies measuring empathic understanding (Greeno et al., 2017; Keefe, 1979; VanCleave, 2007; Vinton & Harrington, 1994), the intervention types and characteristics, as shown in the included studies table, are vastly different. They range from 2 days of a motivational interviewing intervention consisting of live supervision with standardised clients (Greeno et al., 2017), to 3 months of role‐play and 3 weeks of meditation (Keefe, 1979), to a multitude of components including art and music (VanCleave, 2007), to the use of videotapes of an unspecified amount and time period (Vinton & Harrington, 1994). Meta‐analysing such disparate interventions would therefore not be meaningful.
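As an aid to reading Table 5, the sketch below shows how a standardised mean difference (Cohen's d) and an approximate large‐sample 95% confidence interval of the kind tabulated can be derived from post‐test summary statistics; the inputs are hypothetical, and this is not the review's own calculation code.

```python
# Standardised mean difference between intervention and control post-test
# scores, with a large-sample 95% confidence interval. Inputs are hypothetical.
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups, with an approximate 95% CI."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Large-sample approximation to the standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - 1.96 * se, d + 1.96 * se

d, lo, hi = smd_with_ci(m1=3.4, sd1=0.8, n1=25, m2=2.6, sd2=0.9, n2=25)
print(f"d = {d:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval that crosses zero, as many in Table 5 do, indicates that the direction of the effect cannot be distinguished from chance at the 5% level.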

Table 5: Results of outcomes.

First author, date | Outcome measure | Outcome type | Effect size (95% confidence interval)
Barber, 1988 (Experiment 1) | Counselor Rating Form (non‐verbal communication only) | Level 2b—Acquisition of Knowledge | Responsive interviews: Expertness −0.82 (−1.5421 to −0.099); Attractiveness −0.80 (−1.5223 to −0.0818); Trustworthiness −0.82 (−1.5402 to −0.0974). Unresponsive interviews: Expertness −0.84 (−1.5656 to −0.1195); Attractiveness −1.25 (−2.0066 to −0.4916); Trustworthiness −0.87 (−1.5897 to −0.1404)
Barber, 1988 (Experiment 2) | Counselor Rating Form | Level 2b—Acquisition of Knowledge | Responsive interviews: Expertness −2.80 (−3.589 to −2.027); Attractiveness −1.49 (−2.114 to −0.861); Trustworthiness −1.45 (−2.074 to −0.828). Unresponsive interviews: Expertness −1.50 (−2.132 to −0.877); Attractiveness −1.81 (−2.466 to −1.150); Trustworthiness −1.88 (−2.5404 to −1.2102)
Collins, 1984 | Skills Acquisition Measure | Level 2b—Acquisition of Skills | Empathy 1.21 (0.566 to 1.844); Warmth 1.37 (0.726 to 2.023); Genuineness 1.77 (1.090 to 2.441)
Collins, 1984 | Carkhuff stems | Level 2b—Acquisition of Skills | Empathy 0.60 (−0.069 to 1.265); Warmth 0.78 (0.102 to 1.448); Genuineness 1.13 (0.444 to 1.824)
Collins, 1984 | Analogue | Level 2b—Acquisition of Skills | Empathy 1.74 (1.027 to 2.455); Warmth 1.80 (1.078 to 2.514); Genuineness 1.88 (1.156 to 2.605)
Greeno, 2017 | Toronto Empathy Questionnaire (TEQ) | Level 2a—Modification in attitudes and perceptions (Perceived empathy) | −0.26 (−0.798 to 0.274)
Greeno, 2017 | Motivational Interviewing Treatment Integrity (MITI) questionnaire | Level 2b—Acquisition of Skills | 0.24 (−0.317 to 0.797)
Pecukonis, 2016 | Self‐efficacy scale | Level 2a—Modification in attitudes and perceptions (Self‐efficacy) | Insufficient data to report effect size and confidence intervals
Pecukonis, 2016 | Motivational Interviewing Treatment Integrity (MITI) questionnaire | Level 2b—Acquisition of Skills | Empathy 0.24 (−0.319 to 0.797); MI spirit 0.12 (−0.434 to 0.680); % MI adherent behaviours 0.34 (−0.225 to 0.896); % Open questions 0.15 (−0.407 to 0.707); % Complex reflections −0.25 (−0.808 to 0.308); Reflection: Question ratio 0.04 (−0.519 to 0.594)
Hettinga, 1978 | Rosenberg Self‐Esteem Scale (RSE) | Level 2a—Modification in attitudes and perceptions (Self‐esteem) | Section 1: 0.43 (−0.481 to 1.340); Section 2: −0.86 (−2.001 to 0.2782)
Hettinga, 1978 | Self‐Perceived Interviewing Competence (SPIC) Questionnaire | Level 2b—Acquisition of Skills | Section 1: 1.10 (0.131 to 2.062); Section 2: 0.64 (−0.285 to 1.561)
Keefe, 1979 | Kagan affective sensitivity scale | Level 2a—Modification in attitudes and perceptions | Experiential training 0.02 (−0.638 to 0.671); Experiential plus meditation 0.32 (−0.3267 to 0.9748)
Larsen, 1978 | Index of Therapeutic Communication (Carkhuff) | Level 2b—Acquisition of Skills | 1.51 (1.0366 to 1.9774)
Laughlin, 1978 | Carkhuff's Empathy scale | Level 2b—Acquisition of Skills | 1.22 (0.4499 to 1.9894)
Laughlin, 1978 | Enjoyment question ranked 1 to 5 | Level 1—Learner Reactions | Effect size and confidence intervals cannot be calculated from data available
Ouellette, 2006 | Basic practice interviewing scale | Level 2b—Acquisition of Skills | Total: 0.24 (−0.661 to 1.147); Attentiveness: 0.73 (0.029 to 1.482); Relaxed: 0.93 (0.147 to 1.710)
Ouellette, 2006 | Satisfaction with instruction scale | Level 1—Learner Reactions | Learning exercises well organised −0.21 (−0.961 to 0.540); Learning exercises sparked my interest −0.05 (−1.224 to 0.292); I enjoyed participating in learning exercises −0.23 (−0.982 to 0.520); Instructions were clear 0.46 (−2.94 to 1.223)
Rawlings, 2008 | Self‐efficacy scale | Level 2a—Modification in attitudes and perceptions (Self‐efficacy) | Beginning 2.50 (1.5753 to 3.425); Exploring 1.30 (0.535 to 2.060); Contracting 2.04 (1.1898 to 2.8999); Case Management 2.16 (1.2896 to 3.0339); Core conditions 1.27 (0.5147 to 2.0348); Total 2.04 (1.1881 to 2.8977)
Rawlings, 2008 | Direct practice skills | Level 2b—Acquisition of Skills | Beginning 1.78 (0.9627 to 2.6006); Exploring 1.52 (0.7298 to 2.3022); Contracting 1.69 (0.8862 to 2.5017); Case Management 1.67 (0.8622 to 2.4708); Core conditions 1.28 (0.5177 to 2.0385); Total 1.85 (1.019 to 2.6741)
Schinke, 1978 | Counselor effectiveness scale | Level 2a—Modification in attitudes and perceptions | 0.93 (0.0682 to 1.7903)
Schinke, 1978 | Videotaped interview ratings | Level 2b—Acquisition of Skills | Eye contact 0.75 (−0.0984 to 1.594); Smiles 0.34 (−0.4834 to 1.1647); Nods 0.93 (0.0684 to 1.7906); Forward trunk lean 1.36 (0.4554 to 2.2715); Open‐ended questions 1.01 (0.1391 to 1.876); Closed‐ended questions −0.24 (−1.0601 to 0.582); Content summarisations 0.98 (0.1124 to 1.8436); Affect summarisations 0.82 (−0.0317 to 1.6719); Incongruent response −0.68 (−1.5221 to 0.1608)
Toseland, 1982 | Carkhuff Communication Index | Level 2b—Acquisition of Skills | 1.40 (0.7506 to 2.0477)
Toseland, 1982 | Carkhuff Discrimination Index | Level 2b—Acquisition of Knowledge | −1.31 (−1.9563 to −0.6694)
Toseland, 1982 | Counselling Skills Evaluation Part 1 (Communication) | Level 2b—Acquisition of Skills | 1.20 (0.5588 to 1.8327)
Toseland, 1982 | Counselling Skills Evaluation Part 2 (Discrimination) | Level 2b—Acquisition of Knowledge | −0.53 (−1.1421 to 0.0799)
VanCleave, 2007 | Davis’ Interpersonal Reactivity Index (IRI) | Level 2a—Modification in attitudes and perceptions | 0.22 (−0.3684 to 0.8041)
VanCleave, 2007 | Carkhuff's Index for Communication scripts (CIC) | Level 2b—Acquisition of Skills | 1.79 (1.0969 to 2.4799)
Vinton, 1994 | Questionnaire Measure of Emotional Empathy (QMEE) | Level 2a—Modification in attitudes and perceptions | 0.21 (−0.4536 to 0.8751)
Vinton, 1994 | Carkhuff's empathy scale | Level 2b—Acquisition of Skills | 0.88 (0.1823 to 1.5677)
Wells, 1976 | A variant of the Carkhuff communication test (Carkhuff's empathy scale) | Level 2b—Acquisition of Skills | 0.84 (−0.4499 to 2.1372)

Gagnier et al. (2013) identified twelve recommendations for investigating clinical heterogeneity in systematic reviews. In terms of the review team, one of us (PM) is a methodologist and the other (ERH) has significant relevant clinical expertise. ERH regularly discussed issues relating to population, intervention and measurement characteristics with the stakeholder group, which included educators, students and people with lived experience. This provided a range of different perspectives, encouraging us to be reflective and reflexive in our approach, including recognising our own biases. In relation to planning, the rationale for the selection of the clinical variables we hoped to consider was described a priori in the protocol. Other methods require statistical calculations for which we did not have sufficient data. For example, we had hoped to perform a subgroup analysis relating to the intensity of the interventions, but such data were not sufficiently available: absent in four of the studies and described in non‐numerical terms (e.g., as ‘extensive’ or ‘one day’) in a further three. Gagnier et al. (2013) acknowledge the challenge posed by the incomplete reporting of data.

Given the extreme clinical heterogeneity, meta‐analysis was neither feasible nor meaningful. Instead, the findings are synthesised narratively and are organised according to a refined version of a classification of educational outcomes developed by Kirkpatrick (1967), which is well known and widely used. It was refined by Kraiger et al. (1993) to distinguish between cognitive, affective and skill‐based outcomes, and adapted by Barr et al. (2000), followed by Carpenter (2005), for use in social work education. The refined classification comprises: Level 1—Learners’ Reaction; Level 2a—Modification in Attitudes and Perceptions; Level 2b—Acquisition of Knowledge and Skills; Level 3—Changes in Behaviour; Level 4a—Changes in Organisational Practice; and Level 4b—Benefits to Users and Carers. Most of the studies reported more than one outcome, but none included level 4 outcomes. Therefore, the findings are synthesised according to an expanded version of levels 1 to 3: learner reactions; attitudes, perceptions and self‐efficacy; knowledge; and skills and behaviours.

5.4.1. The importance of empathy

Reported in 9 of the 15 included studies (Collins, 1984; Greeno et al., 2017; Keefe, 1979; Larsen & Hepworth, 1978; Laughlin, 1978; Pecukonis et al., 2016; Toseland & Spielberg, 1982; VanCleave, 2007; Vinton & Harrington, 1994; Wells, 1976), empathy is a common topic of interest within this review. The pivotal role of empathy in social work practice is widely acknowledged (Forrester et al., 2008; Gerdes & Segal, 2009; Lynch et al., 2019), hence the need for students to develop empathic abilities is deemed critical for preparing them for social work practice (Greeno et al., 2017; Zaleski, 2016). As a skill which can be ‘taught, increased, refined, and mediated’ (Gerdes & Segal, 2011, p. 143), it is hardly surprising that empathy features so frequently within the empirical literature. Truax and Carkhuff (1967, p. 46) describe empathy as ‘the ability to perceive accurately and sensitively the feelings, aspirations, values, beliefs and perceptions of the client, and to communicate fully this understanding to the client’. As study authors Vinton and Harrington (1994, p. 71) point out, ‘these are separate but related phenomenon’. Empathy is a multifaceted phenomenon (Lietz et al., 2011), often conceptualised as empathic understanding and empathic behaviour or response. Empathic understanding consists of cognitive empathy (understanding another person's thoughts or feelings) and emotional empathy (the affect invoked by another person's expression of an emotion). Empathic behaviour or response is action‐based: the communicated empathic response, including verbal and non‐verbal communication, to another person's distress (based on accurate cognitive and/or emotional empathy). There is a lack of consensus regarding how empathy should be conceptualised and measured, some of which is reflected within the included studies.

5.4.2. Level 1—Learner reaction outcomes

Learner reactions include students’ satisfaction with the training and their views about the learning experience. As stated in the protocol (Reith‐Hall & Montgomery, 2019), learner satisfaction alone was not sufficient to be regarded as an outcome in this review, and qualitative findings were excluded. Two of the included studies gathered quantitative data on learner reactions, in addition to other outcomes. Laughlin (1978) found self‐instruction students exhibited significantly higher mean scores for enjoyment and number of optional practice items completed than students in an instructor‐led group. Laughlin (1978, p. 67) suggests self‐instruction ‘creates a sense of self‐reliance, confidence, and personal responsibility for learning which promotes enjoyment and devotion to task not present under circumstances of external control’. However, there was no significant correlation between the variables of enjoyment and commitment and students’ gain scores.

Ouellette et al. ( 2006 ) issued a semester survey questionnaire, including a four‐item subscale which measured students’ perception of their satisfaction with the instruction they received—traditional classroom based versus online. Most students agreed or strongly agreed that learning exercises were clear and effective, irrespective of the type of instruction they received. There were no significant differences in their satisfaction scores. Again, there was no statistically significant correlation between students’ perceived satisfaction, perceived acquisition of interviewing skills and the independent ratings of students’ acquisition of interviewing skills, in either group.

5.4.3. Level 2a—Modification in attitudes and perceptions

Carpenter ( 2005 ,  2011 ) suggests that Level 2a outcomes relate to changes in attitudes or perceptions towards service users and carers/care‐givers, their problems and needs, circumstances, care and treatment. Motivational outcomes and self‐efficacy also comprise this level (Kraiger et al.,  1993 ).

Attitudes and perceptions towards clients

Students’ perceptions of clients were an outcome of interest in a number of studies included in this review. Affective sensitivity (Keefe, 1979), emotional empathy (Vinton & Harrington, 1994), empathic concern and perspective taking (VanCleave, 2007) and perceived empathy (Greeno et al., 2017) all fit under the umbrella term of empathic understanding. Within the literature, empathic understanding has been further defined as an affective process and a cognitive process. These different ways of conceptualising empathy are evident within the included studies, and in the choice of measuring instruments the researchers employed.

Affective and cognitive outcomes

To ascertain students’ abilities to detect and describe the immediate affective state of clients, Keefe ( 1979 ) employed Kagan's scale of affective sensitivity (Campbell et al.,  1971 ), which consists of multiple‐choice items used with a series of short, videotaped excerpts from actual counselling sessions. In Keefe's study, a positive and significant effect size of 0.32 was only found once the intervention group had been taught meditation in addition to the experiential training they received, correlating with blind ranked levels of meditation attainment. Keefe ( 1979 ) reported that the combined effects of both conditions produced mean empathy levels beyond those attained by master's and doctoral students. Segal et al. ( 2017 , p. 98) suggest that using meditation can promote emotional regulation, which can be considered fundamental to empathy. Dupper ( 2017 , p. 31) suggests that mindfulness is an effective strategy for ‘reducing implicit bias and fostering empathy towards members of stigmatised outgroups’. Both propositions could explain why the combined interventions in Keefe's ( 1979 ) study proved most effective.

Also viewing empathy as an affective state, Vinton and Harrington (1994) sought to assess students' 'emotional empathy', which they describe as 'the ability to be affected by the client's emotional state' (p. 71). Vinton and Harrington (1994) employed a different outcome measure, the Questionnaire Measure of Emotional Empathy (QMEE) (Mehrabian & Epstein, 1972), which emphasises the affective component of empathy, including emotional arousal to others' distress. Two intervention groups received an instruction package utilising videotapes, one relying on self-instruction, the other also receiving input from an instructor and peer group, whilst the control group received no intervention. At post-test, we found a small effect size of 0.21 between the 'video other and self' group and the controls; however, the QMEE scores of both groups had actually declined. Despite these results, Vinton and Harrington (1994) suggested that further investigation into the use of videotape or film is warranted.

Building on the suggestion by Vinton and Harrington (1994) that film can assist the development of empathic understanding, the students in VanCleave's (2007) study watched a 2-h commercial film, with 30 min of reflection and discussion. The self-report measure used comprised two subscales from the Interpersonal Reactivity Index (IRI) (Davis, 1980): the first, empathic concern, addresses the affective component of empathy; the second, perspective taking, focusses on the cognitive component. Despite using a broader conceptualisation of empathy and a more inclusive measure, which produced an effect size of 0.22, changes were not statistically significant.

Utilising yet another instrument, Greeno et al. (2017) sought to measure students' perceived empathy using the Toronto Empathy Questionnaire (TEQ) (Spreng et al., 2009), which views empathy as an emotional process but is based on items from the QMEE and the IRI. The effect size at post-test was −0.26, with the study authors reporting no statistically significant difference between groups. Given that a behavioural measure of empathy used by Greeno et al. (2017) demonstrated a statistically significant small effect size for the intervention group, 'the lack of change across time and groups' on the self-reported TEQ scores was 'unexpected' (p. 803).

No statistically significant changes in students’ empathic understanding were identified in the studies above, irrespective of the type of self‐report measure used. The challenges of measuring empathy through self‐reports (Lietz et al.,  2011 ) are clearly evident in this review and will be discussed further in Section 6.

Perceptions of the treatment/intervention

Based on the same study reported by Greeno et al. (2017), Pecukonis et al. (2016) issued a 17-item self-report measure to garner students' perceptions of Motivational Interviewing. Training for the intervention group included real-time feedback from clinical supervisors, whereas the control group received online TAU. No between-group difference was identified; however, perceptions of Motivational Interviewing increased (by an average of 7 points) for both groups over time.

Self‐esteem and self‐efficacy

Self-esteem, which reflects how people perceive themselves and includes a sense of goodness or worthiness, was an outcome measure in just one of the included studies. Hettinga (1978) argued that self-esteem, as a critical dimension of professional self-dependence, directly relates to the attainment of skills. However, the instrument he used, the Rosenberg Self-Esteem Scale (RSE) (Rosenberg, 1965), measures global self-esteem. For students in the intervention group, who experienced videotaped interview playback with instructional feedback, the self-esteem score dropped very slightly. For the control condition, which received feedback delivered in a small-group format, the self-esteem score remained unchanged. Although we found a small effect size for Section 1, Hettinga suggested the findings were not significant, indicating the intervention had no impact on students' self-esteem scores.

Parker ( 2006 ) differentiates between the global nature of self‐esteem and the context specific nature of self‐efficacy. Perceived self‐efficacy beliefs ‘influence whether people think pessimistically or optimistically and in ways that are self‐enhancing or self‐hindering’ (Bandura,  2001 , p. 10), which has implications for students’ skill development. Self‐efficacy is ‘an individual's assessment of his or her confidence in their ability to execute specific skills in a particular set of circumstances and thereby achieve a successful outcome’ (Bandura,  1986 , as quoted in Holden et al.,  2002 ). Literature in the counselling field indicates that self‐efficacy may predict performance (Larson & Daniels,  1998 ), and can thus serve as a proxy measure. The idea that self‐efficacy is a means to assess outcomes in social work education has gained traction in recent years (Holden et al.,  2002 ,  2017 ; Quinney & Parker,  2010 ).

Two of the included studies measured self-efficacy. Pecukonis et al. (2016) found no change in students' self-efficacy scores, either between the brief motivational interviewing intervention group and the TAU control group, or over time. Rawlings (2008), who evaluated the impact of an entire university degree, found students exiting Bachelor of Social Work (BSW) education had significantly higher self-efficacy scores (mean score of 6.78) than those entering it (mean score of 4.40). Multiple regression analysis showed that BSW education positively predicted self-efficacy. However, students' self-efficacy ratings did not correlate with their practice skill ratings. Surprisingly, after controlling for BSW education, self-efficacy was found to be a negative predictor of direct practice skill. Rawlings (2008, p. xi) explains that 'self-efficacy acted as a suppressor variable in mediating the relationship between education and skill'. This unexpected finding reflects the controversy surrounding the use of self-efficacy as an outcome measure, which will be revisited in Section 6.3.

Schinke et al. ( 1978 ) asked students to rate their attitudes towards their own role‐played interviewing performance. A large effect size of 0.93 indicates that CST positively affected the attitudes students had about their performance.

5.4.4. Level 2b—Acquisition of knowledge and skills

The acquisition of knowledge relates to the concepts, procedures and principles of working with service users and carers. Carpenter (2005), after Kraiger et al. (1993), separated knowledge outcomes into declarative knowledge, procedural knowledge and strategic knowledge. Only procedural knowledge, 'that used in the performance of a task' (Carpenter, 2011, p. 126), featured as an outcome in this review, reported in three studies (two publications).

Procedural knowledge

Barber (1988, p. 4) anticipated that students beginning their training would have 'little knowledge of correct interviewing behaviour'. Conversely, he expected students approaching the end of their training to be better able to judge responsive and unresponsive non-verbal communication, displayed by actors towards simulated clients (in experiment 1) and by practitioners towards real clients (in experiment 2). The enhanced judgement that microskills training was expected to elicit can be identified as what Kraiger et al. (1993) referred to as procedural knowledge. The experiments used case studies to which students were asked to respond; Carpenter (2011) suggests these are appropriate measures for assessing procedural knowledge in social work education.

Contrary to his expectations, and to the findings of the other studies in this review, the two experiments conducted by Barber (1988) found that the reactions of students who had received microskills training were less accurate than those of untrained students. In the first experiment, the untrained comparator group rated counsellor responsiveness higher than the trained intervention group, with large effect sizes between the groups for expertness (−0.82), attractiveness (−0.80) and trustworthiness (−0.82). The same pattern emerged when rating counsellor unresponsiveness, with large effect sizes for expertness (−0.84), attractiveness (−1.25) and trustworthiness (−0.87). One flaw of the first experiment is that the video segments students assessed were just 2 min long and contained non-verbal communication only, which goes some way towards explaining the surprising results. Whilst non-verbal communication is extremely important, the absence of verbal accompaniment and of speech tone, emphasis and pacing does not reflect how most people communicate, either in their personal lives or in social work practice, nor does it provide students with an opportunity to identify mirroring or mimicry. Barber (1988) acknowledges that this artificiality might have led trained students to be more critical than their non-trained counterparts.

In the second of Barber's experiments, the untrained comparator group again rated counsellor responsiveness higher than the trained intervention group, with very large effect sizes between the groups for expertness (−2.80), attractiveness (−1.49) and trustworthiness (−1.45). A similar trend occurred when rating counsellor unresponsiveness, with large effect sizes for expertness (−1.50), attractiveness (−1.81) and trustworthiness (−1.88). Barber (1988) found that untrained students' ratings were similar to clients' ratings, which he perceived as evidence that the trained students were underperforming. However, it is possible that the trained students were looking for different responses than the untrained students and clients. Barber speculated that training reduced students' capacity to empathise with the client; however, the outcomes students were asked to rate, trustworthiness, attractiveness and expertness, do not measure empathy, hence the face validity of this measurement is questionable. After completing a factor analysis of a shortened version of the Counsellor Rating Form used in Barber's experiments, Tryon (1987, p. 126) concluded that 'further information about what it measures, and how, is needed'. It is hard to fathom how the conclusions Barber drew were borne out by the measures he employed and the results those measures produced.

Design limitations are also apparent, with Barber acknowledging that the first-year and final-year student groups may have differed on variables other than the training. The experiments are important because the finding that social work students appeared less able to judge responsive and unresponsive interviewing behaviour after training in microskills than counterparts who had yet to receive the training would suggest this teaching intervention could have an adverse, undesirable or harmful effect. However, other studies which ensured that students were matched on factors such as demographic variables and pre-course experience (e.g., Toseland & Spielberg, 1982) produced more positive results. Barber's paper is thus an exception to the rule, and his findings should be interpreted cautiously, with due consideration of the measurement and design issues evident within both experiments and the serious risk of bias due to confounding.

In Toseland and Spielberg's (1982) study, two of the four measures employed also tap into the procedural knowledge outcome, because students judged the ability of others to respond in a helpful way. First, a film of client vignettes was shown to students, who had to select from five different responses, rating them from 'destructive' to 'most helpful' using the second part of a Counselling Skills Evaluation. Second, through Carkhuff's Discrimination Index (Carkhuff, 1969a), students rated the helpfulness of four counsellor responses to a set of client statements. Difference scores were generated by comparing students' ratings with those produced by trained judges, so lower scores indicate closer agreement with the judges. Discrimination scores indicated that students who had received the training were better able to discriminate between effective and ineffective responses to clients' problems, and their ratings closely matched those of trained judges. With effect sizes of −1.31 for the Carkhuff Discrimination Index and −0.53 for part 2 of the Counselling Skills Evaluation, the findings were significant at the 0.001 level.
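To make the direction of these scores concrete, a discrimination index of this kind is typically computed as the mean absolute deviation of a trainee's ratings from those of trained judges; the formula below is a sketch of that general approach and may not match the exact scoring rule used in the study.

$$D = \frac{1}{n}\sum_{i=1}^{n}\left|r_i^{\text{student}} - r_i^{\text{judge}}\right|$$

Here $r_i$ denotes the helpfulness rating assigned to counsellor response $i$ and $n$ the number of responses rated; a lower $D$ means closer agreement with the judges, which is why the negative effect sizes reported above favour the trained group.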

Skills have been organised hierarchically within the literature on social work education outcomes to include initial skill acquisition, skill compilation and skill automaticity (Carpenter, 2005, 2011; Kraiger et al., 1993). Skill automaticity did not feature as an outcome in this review, which possibly reflects the point made by Carpenter (2005) that 'the measurement of the highest level of skill development, automaticity, poses significant problems' (p. 14). To our knowledge, no valid measure of automaticity for communication skills currently exists.

Initial skills

Initial skills, which are often practised individually, in response to short statements or vignettes, were the most popular outcome reported in this review. ‘Trainee behaviour at the initial skill acquisition stage of development may be characterised as rudimentary in nature’ (Kraiger et al.,  1993 , p. 316).

The initial skills considered fundamental for demonstrating empathy were evidently of interest to the researchers of the included studies. Variations of the Carkhuff scales (Carkhuff, 1969a, 1969b), which are widely used in social work education (Hepworth et al., 2010), were employed in seven of the included studies (Collins, 1984; Larsen & Hepworth, 1978; Laughlin, 1978; Toseland & Spielberg, 1982; VanCleave, 2007; Vinton & Harrington, 1994; Wells, 1976). The Carkhuff scales comprise two subsets: empathy discrimination (being able to accurately identify the level of an empathic response) and empathy communication (putting that discriminated empathy into a congruent action response) (Carkhuff, 1969a, 1969b). The Carkhuff scales can require either a written or verbal response to a written statement or audio/video vignette, although instruction was originally mediated through audio recordings (Toukmanian & Rennie, 1975). Independent raters evaluate the level of empathy shown, selecting from five levels, whereby level one represents low empathy and level five indicates high empathy. Level three is considered a minimally facilitative empathic response.

Using a slightly adapted version of the written-statements format of the Carkhuff (1969b) scale, Larsen and Hepworth (1978) assessed students' skill levels in providing empathic responses to 'written messages', and report that the difference between groups was highly significant (p < 0.001). We calculated a large effect size (1.51), demonstrating, as predicted, that the experimental groups surpassed the control groups on achieved levels of performance.
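For readers unfamiliar with the convention, the effect sizes reported throughout this review are standardised mean differences; a minimal sketch follows, assuming the common Cohen's d formulation with a pooled standard deviation (the reviewers' exact computation may differ, for example where gain scores rather than post-test scores were compared).

$$d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}}$$

where $\bar{x}_1$ and $\bar{x}_2$ are the intervention and control group means, $s_1$ and $s_2$ their standard deviations, and $n_1$ and $n_2$ the group sizes. By the usual rule of thumb, $d \approx 0.2$ is small, $0.5$ moderate and $0.8$ or above large, which is the scale on which values such as 1.51 are described as 'large'.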

Toseland and Spielberg ( 1982 ) sought to replicate and expand on Larsen and Hepworth's ( 1978 ) study by developing and evaluating a training programme comprising core helping skills, including genuineness, warmth and empathy. Two of the measures they used capture the initial skills outcome. First, through Carkhuff's Communication Index, as described above, students were asked to act as though they were the worker and respond by writing what they would say to a set of statements. Second, through part 1 of a Counselling Skills Evaluation (CSE), students watched a film of client vignettes, and wrote what they would say if they were the worker. Student responses to both measures were rated by trained judges. Students in the control group saw a slight reduction in their skills on both measures whereas the intervention group demonstrated gains on both measures with large effect sizes of 1.40 on the Carkhuff Communication Index and 1.20 on part 1 of the Counselling Skills Evaluation. Students in receipt of the training increased their ability to communicate effectively using the ten helping skills.

Nerdrum and Lundquist (1995) suggest that because Larsen and Hepworth (1978) and Toseland and Spielberg (1982) reported ratings for the total communication index rather than for empathy specifically, lower empathy scores may have been concealed. Certainly, the instructors in the study reported by Nerdrum and colleagues (Nerdrum, 1997; Nerdrum & Høglend, 2003; Nerdrum & Lundquist, 1995), which narrowly missed the inclusion criteria for this review, found that empathy was the most difficult of the facilitative conditions for students to grasp. In addition, methods of training and methods of measurement have been confounded in earlier studies, potentially leading to over-inflated treatment effects (Nerdrum & Høglend, 2003).

To evaluate an interviewing skills course, Laughlin (1978), also using the Carkhuff instrument, sought to test self-instructional methods: one experimental condition relied on self-reinforcement, whilst the other received external reinforcement and feedback from an instructor. Both experimental groups produced greater learning gains after training than either of the two control groups. Interestingly, there was no significant difference between the gain scores of the two experimental groups. Laughlin (1978, p. 65) suggests that 'self-managed behavior change can, under certain circumstances, prove to be as efficacious as externally controlled systems of behavior change'. However, students in the self-reinforcement group rated their own empathic responses, whereas the supervisor rated the responses of students receiving the other experimental condition. As Laughlin (1978) acknowledged, 'the self-instruction group may be considered a product of inaccuracy in the self-evaluation process' (p. 68). Other studies have identified that students often over- or underestimate their abilities (Kruger & Dunning, 1999). Based on their mean gain scores, we calculated a large effect size of 1.22 between the experimental condition who received external reinforcement and feedback and the control group who received no instruction.

Vinton and Harrington ( 1994 ) also appear interested in the role of the self in student learning, and they too used the Carkhuff scales to investigate this issue. At post‐test, a large effect size (0.88) was observed between the ‘videotape self and other’ group and the controls. At one month follow‐up, Vinton and Harrington ( 1994 ) found the majority of students in the intervention groups reached the level Carkhuff deemed to be facilitative.

To compare the effects of role-play and of using participants' own problems for developing empathic communication skills through facilitative training, Wells (1976) used a variant of Carkhuff's (1969a) communication test, in which students were asked to respond empathically in writing to four tape-recorded helpee statements before training and to a different set of four statements after training. Wells asserted that no differential effect between the role-play and 'own problems' procedures was identified, and suggested that the active experimentation of students in both groups explained their modest outcome gains; contrary to this, we found a large effect size of 0.84 at post-test. This finding should be interpreted cautiously given it is based on just five students per group.

Collins (1984) used two written skills measures: the Carkhuff stems, which use written client statements as stimuli, and a Skills Acquisition Measure (SAM), which uses an audio-video client stimulus. Both measures seek to capture outcomes that can be categorised as initial skills. The mean scores on the Carkhuff stems at post-test were slightly higher for lab-trained students than for lecture-trained students. Effect sizes were 0.60, 0.78 and 1.13 for empathy, warmth and genuineness respectively. However, Collins (1984) reports that statistical significance was reached only for empathy, which he suggests might be because both lecture and lab training prepare students adequately for the relatively straightforward task of producing written statements in response to short client vignettes. Warmth and genuineness might be easier to demonstrate than empathy, hence lecture-based students could manage them satisfactorily.

Similar, but slightly stronger, findings were demonstrated through the Skills Acquisition Measure (SAM), wherein students were asked to respond in writing to a series of vignettes. They were advised that their responses should be based on what they would say if they were conducting the interview. Student responses to the SAM were scored by trained raters using the Carkhuff scales. The post-test scores of lab-trained students compared favourably with those of the lecture-trained students. Large effect sizes of 1.21, 1.37 and 1.77 were found for empathy, warmth and genuineness respectively. Collins (1984) concluded that findings from the Carkhuff stems and the Skills Acquisition Measure provide evidence that lab-based training is more effective than lecture-based training for teaching interpersonal interviewing skills to social work students.

Carkhuff (1969a) suggested that responses to the stimulus expressions in written and verbal form are similar to responses offered in an actual interview with a client. However, this alleged equivalency of measures has been questioned throughout the literature. VanCleave (2007) noted that making an advanced verbal empathic response is arguably more challenging than producing written statements. In her study, expert raters used Carkhuff's Index for Communication (CIC) to evaluate the videotaped responses of students to actors who verbally delivered excerpts based on the Carkhuff stems. Tapes contained vignette responses, rather than role-played sessions in their entirety. With a large effect size of 1.79, students in the intervention group demonstrated more empathy than the students who did not receive the empathy response training.

In summary, multiple studies demonstrated an increase in social work students’ communication skills, including empathy, following training. The results for actual skill demonstration are modest yet promising.

Compilation

The compilation of skills is the term coined by Kraiger et al. (1993) to refer 'to the grouping of skills into fluid behaviour' (Carpenter, 2005, p. 12). Methods for measuring the compilation of skills include students' self-rating of competencies and observer ratings of students' communication skills in simulated interviews (Carpenter, 2011). Wilt (2012) argued that simulation fosters more in-depth learning than discussions, case studies and role-plays, because it places the student in the role of the worker and demands real-time decision-making that includes ethical considerations.

In the study by Collins (1984), analogue interviews, which consisted of a 10-min role-play with one student in the worker role and another in the client role, showed modest gains: 23% of students in the lab group improved by 0.5, to a level which Carkhuff and Berenson (1976) suggested was the sign of an effective intervention. This was significantly lower than the 52% who showed a 0.5 improvement on the Skills Acquisition Measure. However, Collins (1984) suggests that direct comparison of these findings is problematic given the delay (of approximately 3 weeks) in students completing the analogue measures, which reduced the time gap between pre- and post-training scores. Despite this, the improvements shown in the analogue interviews were still significant. When comparing the two interventions, lab versus lecture, the lab-trained students demonstrated more skill than the lecture-trained group, as demonstrated by very large effect sizes of 1.74 for empathy, 1.80 for warmth and 1.88 for genuineness.

Hettinga (1978) sought to measure the impact of videotaped interview playback with instructional feedback on student social workers' interviewing skills. A tailor-made instrument was used to measure self-perceived interviewing competence (SPIC). At post-test, the mean score for the combined intervention groups was 62.60, whereas for the control groups the mean score was 57.47. This finding was supported by moderate to large effect sizes of 1.10 for Section 1 and 0.64 for Section 2, albeit with small sample sizes. The significantly higher scores for the intervention group suggest that students' self-perceived interviewing competence was positively impacted by videotaped interview playback with instructional feedback. Hettinga (1978) acknowledged the problem of using self-reports as a measure of skill accomplishment. This is considered further in Section 6.3.

Both methods (self‐ratings and observer ratings) were used in the study conducted by Schinke et al. ( 1978 ). Through 10‐min videotaped role‐play simulations at pre‐ and post‐test, expert raters assessed a range of verbal and non‐verbal communication skills demonstrated by students. The largest effect sizes were for forward trunk lean (1.36) and open‐ended questions (1.01). After completing the videoed role‐plays, students rated their own interviewing skills according to an adapted version of the Counselor Effectiveness Scale developed by Ivey and Authier ( 1971 ). The intervention group's mean change score of 37.083 was significantly higher than the control group's mean change score of 13.182, producing an effect size of 0.93.

Ouellette et al. (2006) employed similar methods, a 10-min videotaped role-play simulation and a student self-rating scale, to evaluate the acquisition of interviewing skills by students taught in a traditional face-to-face class and students using a Web-based instructional format with no face-to-face contact with the instructor. Rated according to a Basic Practice Interviewing Skills scale, very few statistically significant differences were found between the traditional class and the online class. Significant differences were identified for only 2 of 21 specific interviewing skills ratings, with an effect size of 0.73 for attentiveness and 0.93 for being relaxed. The findings indicate that for two of the interviewing skills measured, the online students were slightly more proficient than their peers in the traditional class. In a semester survey questionnaire, including a four-item subscale measuring students' perception of their acquisition of beginning interviewing skills, Ouellette et al. (2006) found few statistical differences between the groups, apart from the classroom group responding more favourably in terms of their perception of learning a lot from the pedagogical activities used to teach interviewing skills. The interviewing skills of the online class and those of students taught in a traditional face-to-face classroom setting were 'approximately equal' on completion of an interviewing skills course (Ouellette et al., 2006, p. 68).

In the study reported by Greeno et al. ( 2017 ) and Pecukonis et al. ( 2016 ), which investigated motivational interviewing, students’ empathic skills were observed and rated from low (score of 1) to high (score of 5) using the Motivational Interviewing Treatment Integrity (MITI) questionnaire. This measure, specific to the treatment modality of the intervention, provides a global empathy score, which aims to capture all of the efforts the student/practitioner makes to understand the client's perspective and convey this understanding to the client. Greeno et al. ( 2017 ) found improvements were evident for the intervention group, who received live supervision with simulated clients. At post‐test, the authors observed a small effect size of 0.24. The intervention group maintained gains at follow up, hence Greeno et al. ( 2017 ) conclude, ‘results from the study cautiously lend evidence that suggests live supervision as a promising practice for teaching MI to social work students’ (p. 803). These findings are particularly important given this is one of only two outcomes across all of the included studies to receive a low risk of bias rating.

Referring to the same study, Pecukonis et al.'s (2016) trained MITI coders produced summary scores derived from behaviour counts. They found that the change scores between the start of the intervention and follow-up were 1.39 for the live supervision group and −0.85 for the TAU group, providing support that live supervision was effective in teaching the early stages of MI skills. For empathy, at post-test, a small effect size of 0.24 was observed. For the percentage of Motivational Interviewing adherent behaviours, an effect size of 0.34 was identified. Differences were less pronounced for MI-specific skills. The authors observed that the intervention group displayed trends of attaining higher levels of proficiency on MI-specific skills compared with the TAU group. An exception to this trend was observed at post-test for the percentage of complex reflections (effect size −0.25), although this difference had disappeared by follow-up. Pecukonis et al. (2016) identify that statistical significance was seen only for the MI area of reflection-to-question ratio, acknowledging that the study may be underpowered.

Rawlings ( 2008 ) compared the performance of direct practice skills of students entering an undergraduate social work course with students exiting the same course. Students completed a 15‐min video‐taped interview with a standardised client. Students’ performance was evaluated by independent raters using an adapted version of a 14‐item instrument, developed by Chang and Scott ( 1999 ) to rate basic practice skills including beginning, exploring, contracting, case management skills, and the core conditions of genuineness, warmth, and empathy. Exiting students scored higher than entering students on each practice skill set, with a large effect size of 1.85 for the overall total score.

Studies measuring the compilation of skills demonstrated modest gains in students’ communicative abilities, including general social work interviewing skills and the demonstration of expressed empathy.

5.4.5. Level 3: Behaviour and the implementation of learning into practice

Collins (1984) reported the only study in this review to include a behavioural outcome. Scores from client interviews, which consisted of tape-recorded interviews with clients at the start of students' field practicums, were compared with scores from the analogue role-play interviews at the end of the training to investigate the transfer of skills into practice. There was a drop for lab-trained students from their analogue role-play scores to their client interviews: from 2.72 to 2.22 (T = 7.59) for empathy, 2.79 to 2.35 (T = 6.82) for warmth and 2.63 to 2.28 (T = 6.65) for genuineness. These findings suggest students did not transfer their learning from the laboratory into practice, which Collins (1984) attributes to measurement anxiety, problems with the measures and the fundamental differences between lab and fieldwork settings.

5.4.6. Level 4a: Changes in organisational practice

None of the included studies addressed this outcome.

5.4.7. Level 4b: Benefits to users and carers

None of the included studies addressed this outcome.

6. Discussion

6.1. Summary of main results

The purpose of this systematic review was to identify, summarise, evaluate and synthesise the current body of evidence to establish whether CST programmes for social work students are effective. Fifteen studies were included. Most of the included studies are dated; methodological rigour was weak, quality was poor, and the risk of bias was moderate to high/serious, or had to be rated as incomplete due to limitations in reporting. Extreme heterogeneity exists between the primary studies and the interventions they evaluated, precluding the meaningful synthesis of effect sizes through meta-analysis. The findings of this review are therefore limited and must be interpreted with caution.

The anticipated outcome of a positive change in the modification of students' perceptions and attitudes (including cognitive and affective changes) following training was not borne out in the data. This may in part be a result of how these outcomes are conceptualised and measured, with self-reports being particularly problematic. Of the 15 included studies, two studies reported in one paper (Barber, 1988; N = 82) identified a negative outcome for the acquisition of knowledge, whereby trained students placed less value on responsive and unresponsive interviewing behaviour and were less accurate in their ability to predict clients' reactions than their untrained counterparts. However, there was no convincing evidence to suggest that the teaching and learning of communication skills in social work education causes adverse or harmful effects.

For the outcome of skills acquisition, which featured in 12 of the included studies reported in 13 papers, only one study (Ouellette et al., 2006; N = 30), which compared face-to-face and online instruction, did not find a significant difference between the groups. Effect sizes in the other 11 studies measuring skills acquisition (Collins, 1984; Greeno et al., 2017; Hettinga, 1978; Larsen & Hepworth, 1978; Laughlin, 1978; Pecukonis et al., 2016; Rawlings, 2008; Schinke et al., 1978; Toseland & Spielberg, 1982; VanCleave, 2007; Vinton & Harrington, 1994; Wells, 1976) (N = 575) indicated some identifiable improvements in communication skills, including empathy, among students who received training. This finding is in keeping with reviews of CST (Aspegren, 1999) and empathy training (Batt-Rawden et al., 2013) for medical students, and of similar training for nursing students (Brunero et al., 2010).

The review identified considerable gaps within the evidence; further research is required. This is discussed in Section 7.

6.1.1. Level 1: Learner reactions

The evidence was inconclusive, as only two studies (N = 108) contributed data. However, the findings, whilst limited, reflect a criticism of the growing trend, in the UK at least, to rely on quality assurance templates, which collect end-of-course satisfaction ratings only and fail to measure outcomes (Carpenter, 2011).

6.1.2. Level 2a: Modification in attitudes and perceptions

One study (Schinke et al., 1978; N = 23) found that positive attitudes towards their own skills were almost three times higher among students who had received CST than among those who had not. Whilst promising, the evidence was inconclusive because too few studies contributed data. The review also highlights the challenges of using self-reports to measure empathic understanding: no statistically significant changes were identified in three of the four studies investigating empathic understanding, despite the same studies demonstrating positive gains when utilising other outcome measures. The challenges of measuring empathy through self-reports (Lietz et al., 2011) are well documented and discussed further in Section 6.3.

6.1.3. Level 2b: Modification in knowledge

The evidence was inconclusive, because only three studies (reported in two publications) (N = 150) contributed data. In a review of empathy training evaluation research, Lam et al. (2011) found that regardless of the training method used, individuals were able to learn about the concept of empathy. Whilst the modification of knowledge is relatively straightforward to achieve, it was evidently not an outcome widely reported in the studies in this review.

6.1.4. Level 2b: Modification of skills

The evidence does suggest that modest gains can be made in the interviewing skills and the demonstration of empathic abilities of student social workers following systematic CST. This was the strongest finding of this review with 12 out of the 15 studies ( N  = 605) contributing data, 11 of which reported improvements for students in the intervention groups.

6.1.5. Level 3: Changes in behaviour

The evidence was inconclusive because only one study (N = 67) reported this outcome.

6.1.6. Level 4: Changes in organisational practice and benefits to users and carers

These outcomes were not addressed in any of the studies included in this review.

6.1.7. Adverse effects

The evidence was inconclusive as only one paper ( N  = 82) contributed data.

6.2. Overall completeness and applicability of evidence

The included studies indicate, albeit tentatively, that interventions for teaching communication skills in social work education seem to have a positive impact, at least on demonstrable skills outcomes in the short term. Only Barber (1988), based on his own empirical research, questioned whether microskills were worth teaching. Perhaps the starkest finding of the review is the paucity of high-quality and rigorously designed studies intended to present evidence for the outcomes of teaching communication skills to social work students, particularly given that pedagogic practices in the teaching and learning of communication skills are well established in social work education across the globe. Many of the included studies are quite dated and the majority were conducted in the United States. The picture provided by the existing body of evidence is incomplete: it does not reflect the involvement of people with lived experience, or the newer innovations and technological advances used in social work education today, which limits the applicability of the evidence.

In terms of publication bias, we recognise that there will be some PhD theses and trials containing negative results which we have not located in this review, and we acknowledge that publication bias could potentially be an issue. We took steps to minimise the risks, including a wide-reaching and extensive search (not restricted by outcomes) and contacting subject experts to identify any publications we might have missed through our search strategy. Strategies typically used to assess publication bias, such as funnel plots, were not feasible due to the small size and number of the included studies and their lack of power.

Extreme levels of heterogeneity and moderate to high/serious risk of bias ratings in the studies included in the review meant meta-analysis was not feasible, and consequently a narrative review was undertaken. Outcomes were analysed and structured according to the outcomes framework for social work education developed by Carpenter (2005), after Kirkpatrick (1967), Kraiger et al. (1993) and Barr et al. (2000). Although data exist for some outcomes at levels 1–3, none of the included studies addressed outcomes at level 4a, changes in organisational practice, or level 4b, benefits to users and carers; significant gaps in the evidence base therefore remain.

6.3. Quality of the evidence

Whilst there was overall consistency in the direction of mean change for the development of communication skills of social work students following training, we must acknowledge that the body of evidence is small in terms of eligible studies and that rigour across this body of evidence is low. Methodological quality and risk of bias, examined using the ROB 2 tool for randomised trials and the ROBINS-I tool for non-randomised studies, were judged to be moderate to high/serious, or incomplete, in all but one of the included studies. Differences at baseline, missing data and the failure to address missingness appropriately, and the knowledge outcome assessors had about the intervention and its recipients were the most significant detractors from the internal validity of the studies reviewed.

Empathy has featured in skills training for more than 50 years; however, as the studies in this review indicate, 'evidence of empathy training in the social work curriculum, remains scarce and sketchy' (Gerdes & Segal, 2011, p. 142). As Gair (2011, p. 791) maintains, 'comprehensive discussion about how to specifically cultivate, teach and learn empathy is not common in the social work literature', and the evidence that does exist is fairly limited. The same criticisms have been levelled against research into the teaching and learning of communication skills in social work education more generally (Dinham, 2006; Trevithick et al., 2004). Given the range and extent of bias identified within this body of evidence, caution should be exercised in judging the efficacy of the interventions for improving the communicative abilities of social work students.

6.3.1. Concerns about definitions and conceptualisations

One of the challenges evident in this review is the considerable variation in the way the study authors define key constructs, particularly in relation to empathy. Defining empathy remains problematic (Batt-Rawden et al., 2013) because the construct of empathy lacks clarity and consensus (Gerdes et al., 2010) and conceptualisations have changed over time. Whilst cognitive, neurobiological, behavioural and emotional components are now recognised (Lietz et al., 2011), earlier conceptualisations were more unidimensional, depicting empathy as a trait, emotion or skill. As a result, there is no consistency in the operational definitions of empathy used across the studies in this review, which has further implications for how outcomes are measured and restricts what the body of evidence can confidently tell us. The issue is not unique to social work; referring to a health context, Robieux et al. (2018, p. 59) suggest that 'research faces a challenge to find a shared, adequate and scientific definition of empathy'.

6.3.2. Concerns about measures

Communication skills, including empathy, can be measured from different perspectives including self‐rating (first person assessment), service user/patient‐rating (second person assessment) and observer rating (third person assessment) (Hemmerdinger et al.,  2007 ). Ratings from service users were absent from the included studies, possibly because of geographical factors. Most of the included studies were conducted in North America where the inclusion of service users and carers in social work education is less prominent than in the UK, for example. Many of the included studies used validated scales whereas others developed their own measures. However, even with validated scales, measurement problems were encountered by the study authors.

Self‐rating

Much of the outcome data in social work education has relied on self-report, a trend reflected in this review. Self-reports appeared appropriate for measuring satisfaction with teaching and practice interventions in Laughlin (1978) and Ouellette et al.'s (2006) studies, although these outcomes did not correlate with students' improvement in skills. Self-efficacy scales are another type of self-report, one which has been adapted for research into the teaching and learning of communication skills of social work students specifically (e.g., Koprowska, 2010; Lefevre, 2010; Tompsett, Henderson, Gaskell Mew, et al., 2017b). They are inexpensive and easy to administer and analyse. However, the limitations of using self-efficacy as an outcome measure are widely acknowledged (Drisko, 2014). Response-shift bias is one limitation of self-efficacy scales discussed in the literature, whereby some individuals may change their understanding of the concept being measured during the intervention. Such 'contamination' of self-efficacy scores (Howard & Dailey, 1979) can mask the positive effects of the intervention. This may explain why no change was identified by Pecukonis et al. (2016); however, since a retrospective pre-test was not issued to the students in their study, neither the presence nor the impact of response-shift bias can be established. Alternatively, the scales themselves may have contributed to the surprising results found by Rawlings (2008) and Pecukonis et al. (2016), since neither was properly validated. The subjectivity of self-efficacy scales has been identified as another area of concern. Previous research has found that students' self-ratings do not necessarily correlate with those of field instructors/practice educators (Fortune et al., 2005; Vitali, 2011), lecturers or service user-actors (Koprowska, 2010). In this review, self-efficacy scores and externally rated direct practice scores did not correlate in Rawlings' (2008) study.

Self-report instruments are still the most common way to measure empathy (Ilgunaite et al., 2017; Segal et al., 2017). However, the challenges associated with measuring perceived empathy through self-reports (Lietz et al., 2011; Robieux et al., 2018) were clearly demonstrated in this review. Study authors anticipated that students' perceived empathy levels would increase following training, but this expectation did not come to fruition in at least three studies, despite the study authors using different self-report measures (including the IRI, QMEE and the TEQ), and even where other measures in the same studies did indicate skill gains. High, and perhaps inflated, ratings at pre-test may have masked the improvements researchers anticipated. Greeno et al. (2017) acknowledged that training may impact more on behaviours and skills than on self-perception, and identified that students' TEQ scores were affected by high levels of perceived empathy at pre-test. They suggested that social desirability, meaning social work students want to be regarded as empathic, could compound this further, resulting in high rating scores at pre-test. This 'ceiling and testing effect' (Greeno et al., 2017, p. 803) has been identified elsewhere (Gockel & Burton, 2014) and might result in a lack of significant changes in students' level of reported empathy over time. Ilgunaite et al. (2017, p. 14) also warn of social desirability, highlighting the controversy associated with asking people with poor empathic skills to self-evaluate their own empathic abilities.

Concerns have been raised about what self‐reports actually measure, reflecting one type of conceptualisation at the expense of others. For example, the Toronto Empathy Questionnaire used in Greeno et al.'s ( 2017 ) study views empathy primarily as an emotional process but leaves the cognitive components of perspective taking and self/other awareness unaccounted for. This reflects wider concerns regarding the validity of self‐report questionnaires as an accurate measure of outcomes.

The finding that self‐report scores did not significantly correlate with other measures that were used alongside them lends support to the claim that empathic attitudes are not ‘a proxy for actions’ (Lietz et al.,  2011 , p. 104). It is possible that skills training has more impact on students’ behaviours than their attitudes, a point that was made by Barber ( 1988 ). Regardless of the varying explanations, self‐report measures of empathy tell us very little about empathic accuracy (Gerdes et al.,  2010 , p. 2334). The problems are not specific to the studies in this review or social work education in general. In an evaluation of empathy measurement tools used in nursing research, Yu and Kirk ( 2009 ) suggested that of the 12 measures they reviewed, none of them were ‘psychometrically and conceptually satisfactory’ (p. 1790).

Schinke et al.'s ( 1978 ) study bucked the trend, finding students’ positive attitudes towards their skills were almost three times higher among those who had received CST compared to those who did not. Interestingly, the self‐report instrument used in this study measured clearly specified counselling skills, and thus did not suffer from the conceptual confusion faced by those seeking to measure empathy.

Observer ratings

Observer ratings, conducted by independent raters, are often considered to be more valid and reliable measures of communication skills than the aforementioned subjective self‐report measures. Observation measures enable third party assessment of non‐verbal and verbal behaviours to be undertaken. As Keefe ( 1979 , p. 31) suggests, ‘accurate’ empathy when measured against a set of observer rating scales has been the basis for much valuable research and training in social work, particularly when combined with other variables. Observation measures were the primary instrument employed by the researchers of the included studies and produced the clearest demonstration of the effects of CST.

Studies using objective measures showed positive change, suggesting empathy training is effective. Studies using both self-report and objective measures reported no significant changes in empathy on self-report but found higher levels of behavioural empathy when using objective measures. The same pattern was identified in a review of empathy training by Teding van Berkhout and Malouff (2015). As Greeno et al. (2017, p. 804) explain, perceived empathy is not correlated with actual empathic behaviours as scored by observers. Observation measures also posed some challenges for the studies included in this review; for example, the repeated use of scales in training and assessment creates the problem of test-retest artefacts (Nerdrum & Lundquist, 1995).

The Carkhuff (1969a, 1969b) scales have been frequently used in social work education (Hepworth et al., 2010). The Carkhuff communication index is a written skills test used to assess the level of facilitative communication, or core condition responses, in relation to client statements from standardised vignettes. Carkhuff (1969a) reported a close relation between responses to the stimulus expressions in written and verbal form and responses offered in an actual interview with a client. Thus, Carkhuff concludes that 'both written and verbal responses to helpee stimulus expressions are valid indexes of assessments of the counselor in the actual helping role' (Carkhuff, 1969a, p. 108). However, mastery of accurate discrimination has not been sufficient to guarantee congruent empathic responding within a given verbal interaction. Providing verbal empathic responses is arguably more challenging than producing written statements, hence in VanCleave's (2007) study, trained raters used Carkhuff's Index for Communication to score the empathic responses of students to the Carkhuff stems, which were delivered by trained actors. Comparing the findings produced by different methods of measurement, Collins (1984) found that 'students were significantly better at writing minimally facilitative skill responses than demonstrating them orally as measured in a role-play interview' (p. 124). Noting 'a lack of equivalence between written and oral modes of responding' (Collins, 1984, p. 148), Collins' study challenges the validity of the Carkhuff stems. Schinke et al. (1978) acknowledge similar concerns. Written skills tests are not generalisable to, or indicative of, students' behavioural responses in real-life settings, threatening the ecological validity of such measures.

Vinton and Harrington (1994) also used the Carkhuff scale to measure expressed empathy and encountered measurement issues, which they suggest could reflect problems with the validity of the measure, the additional statement they included in the questionnaire, or other variables such as personality characteristics or background experiences.

The challenge of measuring empathy is apparent both within and across the included studies. Studies of empathy within social work have adopted a range of disparate methods to measure empathy depending on how it has been conceptualised (Lynch et al.,  2019 ; Pedersen,  2009 ), often focusing on one component of empathy at the expense of another. As Gerdes and Segal ( 2009 , p. 115) explain, ‘semantic fuzziness, conceptualizations and measurement techniques for empathy vary so much that it has been difficult to engage in meaningful comparisons or make significant conclusions about how we define empathy, measure it, and effectively cultivate it’.

6.3.3. Concerns about outcomes

The paucity of evidence‐supported outcome measures in social work education has been apparent for some time (Holden et al.,  2017 ), an issue we see reflected in this review.

Self‐efficacy

Self‐efficacy has been introduced as one means of assessing outcomes in social work education (Bell et al.,  2005 ; Holden et al.,  1997 ,  2002 ,  2005 ; Unrau & Grinnell,  2005 ). Self‐efficacy is deemed to be an important component of learning because ‘unless people believe they can produce desired effects by their actions, they have little incentive to act’ (Bandura,  1986 , p. 3). However, the use of self‐efficacy as an outcome measure in social work education is not without controversy, with some people recommending that ‘change in actual behaviours should be assessed where possible’ (Doyle et al.,  2011 , p. 105). Rawlings ( 2008 ) cautions against the use of self‐efficacy as a proxy measure for skill; ‘measures of social work self‐efficacy are limited to student beliefs or perception regarding skill and do not measure actual performance’ (pp. 7–8).

6.3.4. Concerns about research designs

The research designs used to investigate the effectiveness of interventions in social work education lack rigour, with few adhering to all the key features constituting a true experimental design. As Carpenter (2005, p. 4) suggests, 'the poor quality of research design of many studies, together with the limited information provided in the published accounts are major problems in establishing an evidence base for social work education'. Identifying a dearth of writing which addressed the challenging issues of evaluating the learning and teaching of communication skills in social work education, Trevithick et al. (2004, p. 28), in a UK-based review, point out that 'without robust evaluative strategies and studies the risks of fragmented and context restricted learning are heightened'. Similar issues arise in educational research more generally.

6.3.5. Concerns about researcher allegiance, positionality and confirmation bias

The study authors are predominantly social work academics conducting research within their own institutions. It is highly likely that they have a vested interest in wanting the teaching of communication skills to be successful, particularly if they have been involved in the development of the intervention(s) under investigation. Researcher allegiance bias, and the challenges it presents, are increasingly being recognised (Grundy et al., 2020; Montgomery & Belle Weisman, 2021; Uttley & Montgomery, 2017; Yoder et al., 2019). Whilst some risks of bias have been reduced within the included studies, they have not been eliminated. The relationships between students, academics and researchers, and the impact these dynamics may have on study findings, are largely under-explored.

The studies included in this review are not large multi-team trials; rather, the study authors worked in small groups or alone, which limits the resources available to them for mitigating bias in data collection and analysis procedures. Using an independent statistician to facilitate the blinding of outcome measures would have enabled study authors to offset the inability to blind the participants or the experimenters.

Reviewers are no more immune from conflicts of interest or unconscious bias than the triallists and researchers of the included studies. Both reviewers are social work academics, and the first author (ERH) teaches communication skills to social work students, which is why it became the topic of her PhD. Whilst neither reviewer has a vested interest in any of the authors, institutions or interventions under investigation in the included studies, the first author acknowledges that she believes, or at least hopes, that students' communication skills and their development of empathy will be enhanced through taught interventions. ERH has had to be very mindful throughout the review of the potential for unconscious confirmation bias, and of the need to remain as objective and impartial as possible. She also recognises that her own positionality, influenced by pedagogic experiences and social work values, has led her to believe in the importance of the educator's teaching style, the positive contribution of service user and carer involvement, and the added value of involving students in curriculum delivery and design, especially for developing social work skills (Reith-Hall, 2020). These components were largely absent from the included studies, a source of frustration to the first author, who frequently had to remind herself that constructs of teaching and learning have changed considerably since the majority of the included studies were undertaken, and that her views on such matters might be partly cultural and highly personal. Whilst unlikely to have affected the conduct or findings of the review itself, ERH recognises that her beliefs have a bearing on the gaps identified in the research and on potential policy and practice implications.

6.4. Potential biases in the review process

We performed a comprehensive search of a wide range of electronic databases and grey literature, followed by hand searching of key journals and reference searching of relevant studies. Both members of the review team screened all records and assessed all included studies against the inclusion criteria set out in the protocol, increasing consistency and rigour and minimising potential biases in the review process.

We sought to locate all publicly available studies on the effect of teaching and learning of communication skills in social work education; however, it is difficult to establish whether our endeavours were successful. It surprised the first author that one of the included studies, which very clearly met the inclusion criteria, was obtained through reference searching rather than through the electronic database search. As the second author had predicted, the age and style of the publication meant that no keywords were used, a search function upon which the electronic databases rely. Whilst this study came to light through reference searching, we cannot be certain that all other such studies were found in this way. Therefore, publication bias cannot be entirely ruled out.

Our search was not limited to records written in English; indeed, one of the two unobtainable studies was written in Afrikaans. However, the rest of the studies were written in English. Rather than indicating a limitation in the way the review was conducted, the language bias most likely reflects the location of the studies: all of the included studies were conducted in English‐speaking countries, with the majority from the United States. Evidence‐based practice is well established in the United States, contributing to the use of study designs that increase the likelihood of studies being included in systematic reviews.

Both reviewers independently screened and assessed the studies. Uncertainties and differences of opinion were resolved by contacting study authors for further information and through further reading and discussion, without recourse to a third‐party adjudicator. We are not aware of other potential biases or limitations inherent in the review process.

6.5. Agreements and disagreements with other studies or reviews

Findings from the included studies indicate that communication skills, including empathy, can be learned, and that the systematic training of student social workers produces improvements in their communication skills (Greeno et al.,  2017 ; Larsen & Hepworth,  1978 ; Laughlin,  1978 ; Pecukonis et al.,  2016 ; Schinke et al.,  1978 ; VanCleave,  2007 ), at least in the short term.

The findings of this systematic review broadly agree with the knowledge reviews about communication skills produced for the Social Care Institute for Excellence (Luckock et al.,  2006 ; Trevithick et al.,  2004 ). The knowledge reviews highlight that despite a lack of evidence, weak study designs, and a low level of rigour, study findings for the teaching and learning of communication skills in social work education are promising. Reviews of communication skills and empathy training in medical education (Aspegren,  1999 ; Batt‐Rawden et al.,  2013 ), where RCTs and validated outcome measures prevail, also suggest that CST leads to demonstrable improvements for students.

Our review identified the same gaps as those found in the UK‐based knowledge and practice reviews for social work education. Trevithick et al. ( 2004 ) suggest that interventions are under‐theorised and that it is unclear whether students transfer their skills from the classroom to the workplace; our findings concur with these observations. Diggins ( 2004 ) and Dinham ( 2006 ) identified far greater expertise, and more examples of good practice, than is reflected in the literature. Regrettably, our review suggests little has changed in almost 20 years.

7. AUTHORS’ CONCLUSIONS

7.1. Implications for practice

This review aimed to examine the effects of communication skills training on a range of outcomes in social work education. With the exception of skill acquisition, there was insufficient evidence available to offer firm conclusions on other outcomes. It is unclear whether this uncertainty stems from an issue with measurement, from how students learn, or from a combination of the two. Our understanding of how communication skills and empathy are learnt and taught remains limited, due to a lack of empirical research and comprehensive discussion. Discussing pedagogical explorations of empathy, Zaleski ( 2016 , p. 48) points out, ‘there lacks a sufficient exploration of specific teaching strategies’. Our review echoes and amplifies this view within the context of social work education specifically. Disagreement remains within social work academia as to what empathy consists of. Segal et al. ( 2017 ) draw on cognitive neuroscience, and the role of mirror neurones, to underpin the teaching of empathy in social work education and practice. Eriksson and Englander ( 2017 , p. 607) take ‘a critical, phenomenological stance towards Gerdes and Segal's work’, exploring how empathy is conveyed in a context where practitioners are unlikely to be able to relate personally to the experiences of their client group. Given the continuing debate about the role of walking in someone else's shoes, it is hardly surprising that the studies in this review conceptualise and measure different aspects of empathy in a variety of ways, producing incomplete and inconsistent results. Due to the clinical heterogeneity of populations and interventions, low methodological rigour and high risk of bias within the included studies, caution should be exercised when interpreting the findings for practice and policy.

Despite the limitations and variations in educational culture, the findings are still useful, and indicate that CST is likely to be beneficial. One important implication for practice is that the teaching and learning of communication skills in social work education should provide opportunities for students to practice skills in a simulated (or real) environment. Toseland and Spielberg ( 1982 ) suggest that skills diminish gradually if not reinforced. They recommend that students be exposed to the effective application of interpersonal helping skills in several different courses and be encouraged to practice these skills in a variety of case situations role‐played in classroom and laboratory settings, as well as in field settings. Larsen and Hepworth ( 1978 ) and Pecukonis et al. ( 2016 ) also suggest that CST must be better integrated with practice settings, so that students can demonstrate communicative and interviewing abilities with actual clients in the real world, ‘the ultimate test of any social work practice skill’ (Schinke et al.,  1978 , p. 400).

Technology is widely used in the teaching and learning of communication skills in social work education, and whilst technological advances have been considerable in recent years, current practice is not captured in the studies featured in this review. The further sharing of good practice between students and educators continues to be necessary. The Australian Association of Social Workers identifies that face‐to‐face teaching remains the standard approach for teaching professional practice skills, whilst acknowledging that online technologies and blended learning are also encouraged (Australian Association of Social Workers,  2020 ). Barriers preventing the further uptake of technology throughout social work education have been identified. In a review of the literature on key issues in web‐based learning in human services, Moore ( 2005 ) discovered that some social work educators believe traditional instruction to be superior to web‐based instruction, especially for courses focused on micro practice and clinical skills. Similar findings have been reproduced more recently, especially for practice‐oriented competencies (Levin et al.,  2018 ). Despite such reservations, reviews of technology‐based pedagogical methods in social work education have indicated that students’ competencies were largely equivalent between online and face‐to‐face modalities (Afrouz & Crisp,  2021 ; Wretman & Macy,  2016 ). The extent to which this applies to outcomes of communication skills and empathy remains unknown. In this review, the studies that compared face‐to‐face with online interventions did not reach a consensus: Ouellette et al. ( 2006 ) found no difference in outcomes between online and face‐to‐face teaching, whilst Greeno et al. ( 2017 ) and Pecukonis et al. ( 2016 ) found that the outcomes of students who received live supervision were greater than those of students who engaged in self‐directed study online. However, we do not know whether student outcomes were affected by the presence or absence of an educator, so differences might not be attributable to the interventions themselves; as Levin et al. ( 2018 , p. 777) remark, ‘the role of an instructor in online learning cannot be underestimated’.

Certainly, the proliferation of online social work courses is evident across Australia (Australian Association of Social Workers,  2018 ) and the USA (Council on Social Work Education,  2020 ). The global Covid‐19 pandemic has led to exponential growth of online teaching and learning in social work education, hence ‘we can be nearly certain that the “new normal” will include the use of information technology’ (Tedam & Tedam,  2020 , p. 3). Therefore, it is imperative that we investigate the impact of online learning and web‐based instruction, and the role of the educator in different contexts, on the development of social work students’ communicative and empathic abilities.

7.2. Implications for research

There is much to be done to improve outcome studies in social work education generally, and for the teaching and learning of communication skills in social work education specifically. Robust study designs that support causal inferences through random allocation to intervention and control groups are a necessity. Steps should also be taken to reduce threats to the internal validity of case‐controlled studies, such as the test–retest artefacts that Nerdrum and Lundquist ( 1995 ) identified in some of the other studies. More work is needed on defining and measuring outcomes (Diggins,  2004 ). Validated measures that can be used consistently across future studies would make comparisons easier and enable future synthesis to be more meaningful.

The review found that relying solely on self‐report measures was problematic, particularly given that the findings from these did not correlate with the findings produced by other measures. Vinton and Harrington ( 1994 ) found no statistically significant correlation between students’ perceptions of their learning experience and self‐assessments of their skill acquisition, on the one hand, and the independent evaluator's rating of the students’ acquisition of interviewing skills, on the other. Methodological triangulation should be considered in future studies.

Other study authors advise researchers to use objective measures of communication skills, including behavioural measures of empathy (Greeno et al.,  2017 ; Pecukonis et al.,  2016 ), a recommendation also made by Teding van Berkhout and Malouff ( 2016 ) in a review of empathy training. Collins ( 1984 ) recommended that more research is required on the equivalency of measures, given the different results the measures in his study produced. Carpenter ( 2005 ,  2011 ) provides guidance on how research designs and outcome measures can be further developed in social work education. This review highlights the need for follow‐up studies, which would help determine the extent to which training benefits endure after the end of training (Schinke et al.,  1978 ; VanCleave,  2007 ). Rawlings ( 2008 ) advises that a longitudinal design, testing the same students over time, is required. The need to investigate whether or not students are able to transfer their skills into practice has also been firmly stated (Carpenter,  2005 ).

In addition to outcome studies, VanCleave ( 2007 ) recommends the inclusion of qualitative data in researching the teaching and learning of communication skills in social work education. Building a qualitative strand into the research design would facilitate exploration and explanation of the quantitative outcomes. It would also enable the voices of the intended beneficiaries of the interventions under investigation to be heard and acted upon. Given that social work is a values‐based profession, stakeholder participation and contribution should be at the forefront of research in social work education. The benefits of involving service users and carers in social work education are well rehearsed, and examples of their input into the teaching and learning of communication skills are plentiful within the wider literature. However, the value of service users and carers is not evident within the included studies; thus, gap‐mending strategies need to be established across the realms of social work education, practice and research, to prevent certain types of social work knowledge receiving more preferential status than others. As Carpenter ( 2005 , p. 7) points out, ‘since the purpose of the whole exercise is to benefit service users and/or carers, a comprehensive evaluation should ask whether training has made any difference to their lives’.

Finally, the theory of change appears to be assumed rather than clearly defined. Research that identifies the relevant substantive theories on which the teaching and learning of communication skills is based would provide a good starting point. Moreover, whilst the studies in the review indicated that CST encourages some improvement, particularly in terms of the skills outcomes measured, clarity on the mechanisms involved in positive effects requires additional research. The role of reflection, whilst briefly mentioned in some of the included studies, has been largely overlooked, and the role of context is almost completely absent from the existing body of literature. Zaleski ( 2016 ) suggests the teaching style of the educator can influence students’ ability to learn empathy, yet acknowledges that literature on the educational environment is lacking. A realist synthesis, an interpretive, theory‐driven approach to reviewing quantitative, qualitative and mixed methods research evidence about complex social interventions in order to explain how and why they work in particular contexts or settings, would support the theoretical development of the teaching and learning of communication skills in social work education, complementing this systematic review (Reith‐Hall,  2022 ).

CONTRIBUTIONS OF AUTHORS

  • Content: Emma Reith‐Hall
  • Systematic review methods: Emma Reith‐Hall and Paul Montgomery
  • Statistical analysis: Paul Montgomery
  • Information retrieval: Emma Reith‐Hall
  • Write up: Emma Reith‐Hall and Paul Montgomery

DECLARATIONS OF INTEREST

Emma Reith‐Hall is a social work academic who has been involved in the teaching and learning of communication skills in social work education at a number of higher education institutions. She acknowledges that she holds the position that communication skills can, and should, be taught, learnt and refined. Paul Montgomery is primarily a methodologist and systematic reviewer who considers his position on the issue of communication skills to be equivocal. Neither author has a financial conflict of interest.

DIFFERENCES BETWEEN PROTOCOL AND REVIEW

SOURCES OF SUPPORT

Internal sources

No internal sources of support.

External sources

  • Emma Reith‐Hall, UK: ESRC DTP funding (Grant number: ES/P000711/1). ERH is undertaking the systematic review as part of her PhD research.
  • Paul Montgomery, UK: No sources of support.


ACKNOWLEDGEMENTS

We are particularly grateful to our stakeholders: the students, practitioners, people with lived experience, social work academics and social work organisations who gave their input into the development of this systematic review. The contribution of two research‐minded social work students, Ryan Barber and Fee Steane, is particularly appreciated.

Thank you to the editorial team at Campbell.

Emma Reith‐Hall is in receipt of an ESRC‐funded studentship.

Reith‐Hall, E., & Montgomery, P. (2023). Communication skills training for improving the communicative abilities of student social workers. Campbell Systematic Reviews, 19, e1309. 10.1002/cl2.1309

INCLUDED STUDIES

  • *Barber, J. (1988). Are microskills worth teaching? Journal of Social Work Education, 24(1), 3–12.
  • *Collins, D. (1984). A study of the transfer of interviewing skills from the classroom to the field [Doctoral dissertation, University of Toronto].
  • Greeno, E. J., Ting, L., Pecukonis, E., Hodorowicz, M., & Wade, K. (2017). The role of empathy in training social work students in motivational interviewing. Social Work Education, 36(7), 794–808.
  • Hettinga, P. (1978). The impact of videotaped interview playback with instructional feedback on social work student self‐perceived interviewing competence and self‐esteem [Doctoral dissertation, University of Minnesota].
  • Keefe, T. (1979). The development of empathic skill: A study. Journal of Education for Social Work, 15(2), 30–37.
  • Larsen, J. (1975). A comparative study of traditional and competency‐based methods of teaching interpersonal skills in social work education [Doctoral dissertation, The University of Utah].
  • *Larsen, J., & Hepworth, D. H. (1978). Skill development through competency‐based education. Journal of Education for Social Work, 14(1), 73–81.
  • *Laughlin, S. G. (1978). Use of self‐instruction in teaching empathic responding to social work students [Doctoral dissertation, University of California].
  • *Ouellette, P. M., Westhuis, D., Marshall, E., & Chang, V. (2006). The acquisition of social work interviewing skills in a web‐based and classroom instructional environment: Results of a study. Journal of Technology in Human Services, 24(4), 53–75.
  • Pecukonis, E., Greeno, E., Hodorowicz, M., Park, H., Ting, L., Moyers, T., Burry, C., Linsenmeyer, D., Strieder, F., Wade, K., & Wirt, C. (2016). Teaching motivational interviewing to child welfare social work students using live supervision and standardized clients: A randomized controlled trial. Journal of the Society for Social Work and Research, 7(3), 479–505.
  • *Rawlings, M. A. (2008). Assessing direct practice skill performance in undergraduate social work education using standardized clients and self‐reported self‐efficacy [Doctoral dissertation, Case Western Reserve University].
  • *Schinke, S. P., Smith, T. E., Gilchrist, L. D., & Wong, S. E. (1978). Interviewing‐skills training: An empirical evaluation. Journal of Social Service Research, 1(4), 391–401.
  • *Toseland, R., & Spielberg, G. (1982). The development of helping skills in undergraduate social work education: Model and evaluation. Journal of Education for Social Work, 18(1), 66–73.
  • *VanCleave, D. (2007). Empathy training for master's level social work students facilitating advanced empathy responding [Doctoral dissertation, Capella University].
  • *Vinton, L., & Harrington, P. (1994). An evaluation of the use of videotape in teaching empathy. Journal of Teaching in Social Work, 9(1–2), 71–84.
  • *Wells, R. A. (1976). A comparison of role‐play and “own‐problem” procedures in systematic facilitative training. Psychotherapy: Theory, Research & Practice, 13(3), 280–281.

EXCLUDED STUDIES

  • *Andrews, P., & Harris, S. (2017). Using live supervision to teach counselling skills to social work students. Social Work Education, 36(3), 299–311.
  • *Bakx, A. W. E. A., Van Der Sanden, J. M. M., Sijtsma, K., Croon, M. A., & Vermetten, Y. J. M. (2006). The role of students’ personality characteristics, self‐perceived competence and learning conceptions in the acquisition and development of social communicative competence: A longitudinal study. Higher Education, 51(1), 71–104.
  • *Barclay, B. (2012). Undergraduate social work students: Learning interviewing skills in a hybrid practice class [Doctoral dissertation, Colorado State University].
  • *Bogo, M., Regehr, C., Baird, S., Paterson, J., & LeBlanc, V. R. (2017). Cognitive and affective elements of practice confidence in social work students and practitioners. British Journal of Social Work, 47(3), 701–718.
  • *Bolger, J. (2014). Video self‐modelling and its impact on the development of communication skills within social work education. Journal of Social Work, 14(2), 196–212.
  • *Carrillo, D. F., & Thyer, B. A. (1994). Advanced standing and two‐year program MSW students: An empirical investigation of foundation interviewing skills. Journal of Social Work Education, 30(3), 377–387.
  • *Carrillo, D., Gallart, J., & Thyer, B. (1993). Training MSW students in interviewing skills: An empirical assessment. Arete, 18(1), 12–19.
  • *Carter, K., Swanke, J., Stonich, J., Taylor, S., Witzke, M., & Binetsch, M. (2018). Student assessment of self‐efficacy and practice readiness following simulated instruction in an undergraduate social work program. Journal of Teaching in Social Work, 38(1), 28–42.
  • *Cartney, P. (2006). Using video interviewing in the assessment of social work communication skills. British Journal of Social Work, 36(5), 827–844.
  • *Cetingok, M. (1988). Simulation group exercises and development of interpersonal skills: Social work administration students’ assessment in a simple time‐series design framework. Small Group Behavior, 19(3), 395–404.
  • *Collins, D., Gabor, P., & Ing, C. (1987). Communication skill training in child‐care: The effects of preservice and inservice training. Child & Youth Care Quarterly, 16(2), 106–115.
  • *Corcoran, J., Stuart, S., & Schultz, J. (2019). Teaching interpersonal psychotherapy (IPT) in an MSW clinical course. Journal of Teaching in Social Work, 39(3), 226–236.
  • *Domakin, A. (2013). Can online discussions help student social workers learn when studying communication? Social Work Education, 32(1), 81–99.
  • *Gockel, A., & Burton, D. L. (2014). An evaluation of prepracticum helping skills training for graduate social work students. Journal of Social Work Education, 50(1), 101–119.
  • *Hansen, F. C. B., Resnick, H., & Galea, J. (2002). Better listening: Paraphrasing and perception checking—A study of the effectiveness of a multimedia skills training program. Journal of Technology in Human Services, 20(3–4), 317–331.
  • *Hodorowicz, M. (2018). Teaching and learning motivational interviewing: Examining the efficacy of two training methods for social work students [Doctoral dissertation, University of Maryland, Baltimore].
  • *Hodorowicz, M. T., Barth, R., Moyers, T., & Strieder, F. (2020). A randomized controlled trial of two methods to improve motivational interviewing training. Research on Social Work Practice, 30(4), 382–391.
  • *Hohman, M., Pierce, P., & Barnett, E. (2015). Motivational interviewing: An evidence‐based practice for improving student practice skills. Journal of Social Work Education, 51(2), 287–297.
  • *Kopp, J. (1982). Changes in graduate social work students’ use of interviewing skills from training to practicum [Doctoral dissertation, Washington University in St. Louis].
  • *Kopp, J., & Butterfield, W. (1985). Changes in graduate students’ use of interviewing skills from the classroom to the field. Journal of Social Service Research, 9(1), 65–88.
  • *Kopp, J. (1990). The transfer of interviewing skills to practicum by students with high and low pre‐training skill levels. Journal of Teaching in Social Work, 4(1), 31–52.
  • *Koprowska, J. (2010). Are student social workers’ communication skills improved by university‐based learning? In Burgess, H., & Carpenter, J. (Eds.), The outcomes of social work education: Developing evaluation methods (pp. 73–97). The Higher Education Academy; Social Policy and Social Work.
  • *Lefevre, M. (2010). Evaluating the teaching and learning of communication skills for use with children and young people. In Burgess, H., & Carpenter, J. (Eds.), The outcomes of social work education: Developing evaluation methods (pp. 96–110). The Higher Education Academy; Social Policy and Social Work.
  • *Magill, J., & Werk, A. (1985). Classroom training as preparation for the social work practicum: An evaluation of a skills laboratory training program. The Clinical Supervisor, 3(3), 69–76.
  • *Mishna, F., Tufford, L., Cook, C., & Bogo, M. (2013). Research note—A pilot cyber counseling course in a graduate social work program. Journal of Social Work Education, 49(3), 515–524.
  • *Nerdrum, P. (1997). Maintenance of the effect of training in communication skills: A controlled follow‐up study of level of communicated empathy. British Journal of Social Work, 27(5), 705–722.
  • *Nerdrum, P., & Høglend, P. (2003). Short and long‐term effects of training in empathic communication: Trainee personality makes a difference. The Clinical Supervisor, 21(2), 1–19.
  • *Nerdrum, P., & Lundquist, K. (1995). Does participation in communication skills training increase student levels of communicated empathy? A controlled outcome study. Journal of Teaching in Social Work, 11(1–2), 139–157.
  • *Patton, T. (2019). Engaging methods to teach empathy: A successful journey to transformation [Doctoral dissertation, Union University].
  • *Rogers, A., & Welch, B. (2009). Using standardized clients in the classroom: An evaluation of a training module to teach active listening skills to social work students. Journal of Teaching in Social Work, 29(2), 153–168.
  • *Scannapieco, M., Bolen, R. M., & Connell, K. K. (2000). Professional social work education in child welfare: Assessing practice knowledge and skills. The International Journal of Continuing Social Work Education, 3(1), 44–56.
  • *Tompsett, H., Henderson, K., Mathew Byrne, J., Gaskell Mew, E., & Tompsett, C. (2017). Self‐efficacy and outcomes: Validating a measure comparing social work students’ perceived and assessed ability in core pre‐placement skills. The British Journal of Social Work, 47(8), 2384–2405.
  • *Wodarski, J. S., Pippin, J. A., & Daniels, M. (1988). The effects of graduate social work education on personality, values and interpersonal skills. Journal of Social Work Education, 24(3), 266–277.
ADDITIONAL REFERENCES

  • Afrouz, R., & Crisp, B. R. (2021). Online education in social work, effectiveness, benefits, and challenges: A scoping review. Australian Social Work, 74(1), 55–67.
  • Askheim, O. P., Beresford, P., & Heule, C. (2017). Mend the gap—Strategies for user involvement in social work education. Social Work Education, 36(2), 128–140.
  • Aspegren, K. (1999). BEME Guide No. 2: Teaching and learning communication skills in medicine—A review with quality grading of articles. Medical Teacher, 21(6), 563–570.
  • Australian Association of Social Workers. (2018). AASW accredited courses. http://www.aasw.asn.au/careers-study/accredited-courses
  • Australian Association of Social Workers. (2020). Australian social work education and accreditation standards. https://www.aasw.asn.au/document/item/12845
  • Ayling, P. (2012). Learning through playing in higher education: Promoting play as a skill for social work students. Social Work Education, 31(6), 764–777.
  • Banach, M., Rataj, A., Ralph, M., & Allosso, L. (2020). Learning social work through role play: Developing more confident and capable social workers. The Journal of Practice Teaching and Learning, 17(1), 42–60.
  • Bandura, A. (1971). Social learning theory. General Learning Press.
  • Bandura, A. (1976). Self‐reinforcement: Theoretical and methodological considerations. Behaviorism, 4, 135–155.
  • Bandura, A. (1982). Self‐efficacy mechanism in human agency. American Psychologist, 37, 122–147.
  • Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice‐Hall.
  • Bandura, A. (1997). Self‐efficacy: Thought control of action. W.H. Freeman and Company.
  • Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52, 1–26.
  • Barak, A., & LaCrosse, M. B. (1975). Multidimensional perception of counselor behavior. Journal of Counseling Psychology, 22(6), 471–476.
  • Barr, H., Freeth, D., Hammick, M., Koppel, I., & Reeves, S. (2000). Evaluating interprofessional education: A United Kingdom review for health and social care. Centre for the Advancement of Interprofessional Education. https://www.caipe.org
  • Batt‐Rawden, S. A., Chisolm, M. S., Anton, B., & Flickinger, T. E. (2013). Teaching empathy to medical students: An updated, systematic review. Academic Medicine, 88(8), 1171–1177.
  • Beesley, P., Watts, M., & Harrison, M. (2018). Developing your communication skills in social work. Sage.
  • Bell, S. A., Rawlings, M., & Johnson, B. (2005). Assessing skills, attitudes, and knowledge in gerontology: The results of an infused curriculum project. Journal of Baccalaureate Social Work, 11(sp1), 26–37.
  • Beresford, P., Croft, S., & Adshead, L. (2008). ‘We don't see her as a social worker’: A service user case study of the importance of the social worker's relationship and humanity. British Journal of Social Work, 38(7), 1388–1407.
  • Boutron, I., Page, M. J., Higgins, J. P. T., Altman, D. G., Lundh, A., & Hróbjartsson, A. (2021). Chapter 7: Considering bias and conflicts of interest among the included studies. In Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.), Cochrane handbook for systematic reviews of interventions version 6.2 (updated February 2021). Cochrane.
  • British Association of Social Workers. (2018). Professional capabilities framework for social work in England: The 2018 refreshed PCF. https://www.basw.co.uk/system/files/resources/BASW%20PCF.%20Detailed%20level%20descriptors%20for%20all%20domains.25.6.18%20final.pdf
  • Brunero, S., Lamont, S., & Coates, M. (2010). A review of empathy education in nursing. Nursing Inquiry, 17(1), 65–74.
  • Campbell, D. T., & Stanley, J. (1963). Experimental and quasi‐experimental designs for research on teaching. In Gage, N. L. (Ed.), Handbook of research on teaching (Vol. 5, pp. 171–246). Rand McNally.
  • Campbell, R. J., Kagan, N., & Krathwohl, D. R. (1971). The development and validation of a scale to measure affective sensitivity (empathy). Journal of Counseling Psychology, 18(5), 407–412.
  • Carkhuff, R. R., & Truax, C. B. (1965). Training in counseling and psychotherapy: An evaluation of an integrated didactic and experiential approach. Journal of Consulting Psychology, 29(4), 333–336.
  • Carkhuff, R. R. (1969a). Helping and human relations. Vol. I: Selection and training. Holt, Rinehart and Winston.
  • Carkhuff, R. R. (1969b). Helping and human relations. Vol. II: Practice and research. Holt, Rinehart and Winston.
  • Carkhuff, R. R. (1969c). Helping and human relations: A primer for lay and professional helpers. Holt, Rinehart and Winston.
  • Carkhuff, R. R., & Berenson, B. G. (1976). Teaching as treatment: An introduction to counseling & psychotherapy. Human Resource Development Press.
  • Carpenter, J. (2005). Evaluating outcomes in social work education: Evaluation and evidence (Discussion Paper 1). SCIE.
  • Carpenter, J. (2011). Evaluating social work education: A review of outcomes, measures, research designs and practicalities. Social Work Education, 30(2), 122–140.
  • Carpenter, J. (2016). Evaluating the outcomes of social work education. In Taylor, I., Bogo, M., Lefevre, M., & Teater, B. (Eds.), Routledge international handbook of social work education. Routledge.
  • Chang, V., & Scott, S. T. (1999). Basic interviewing skills: A workbook for practitioners. Nelson‐Hall Publishers.
  • Cochrane Effective Practice and Organisation of Care. (2017). What study designs can be considered for inclusion in an EPOC review and what should they be called? EPOC resources for review authors. http://epoc.cochrane.org/resources/epoc-resources-review-authors
  • Council on Social Work Education. (2015). Education policy and accreditation standards. https://www.cswe.org/getattachment/Accreditation/Accreditation-Process/2015-EPAS/2015EPAS_Web_FINAL.pdf.aspx
  • Council on Social Work Education. (2020). Statistics on social work education in the United States: Summary of the CSWE annual survey of social work programs. https://www.cswe.org/getattachment/Research-Statistics/2019-Annual-Statisticson-Social-Work-Education-in-the-United-States-Final-(1).pdf.aspx
  • Cournoyer, B. (2016). The social work skills workbook (8th ed.). Cengage.
  • Davis, M. H. (1980). A multidimensional approach to individual differences in empathy. JSAS Catalog of Selected Documents of Psychology, 10, 85.
  • Department of Health. (2002). Focus on the future: Key messages from focus groups about the future of social work education. Department of Health.
  • Diggins, M. (2004). Teaching and learning communication skills in social work education. SCIE Guide, 5, 1–77.
  • Dinham, A. (2006). A review of practice of teaching and learning of communication skills in social work education in England. Social Work Education, 25(8), 838–850.
  • Doyle, D., Copeland, H. L., Bush, D., Stein, L., & Thompson, S. (2011). A course for nurses to handle difficult communication situations. A randomized controlled trial of impact on self‐efficacy and performance. Patient Education and Counseling, 82(1), 100–109.
  • Drisko, J. W. (2014). Competencies and their assessment. Journal of Social Work Education, 50, 414–426.
  • Dupper, D. (2017). Strengthening empathy training programs for undergraduate social work students. Journal of Baccalaureate Social Work, 22(1), 31–41.
  • Edwards, J. B., & Richards, A. (2002). Relational teaching: A view of relational teaching in social work education. Journal of Teaching in Social Work, 22, 33–48.
  • Eisner, M. (2009). No effects in independent prevention trials: Can we reject the cynical view? Journal of Experimental Criminology, 5(2), 163–183.
  • Elliott, R., Bohart, A. C., Watson, J. C., & Murphy, D. (2018). Therapist empathy and client outcome: An updated meta‐analysis. Psychotherapy, 55(4), 399–410.
  • Eraut, M. (1994). Developing professional knowledge and competence. Falmer.
  • Eriksson, K., & Englander, M. (2017). Empathy in social work. Journal of Social Work Education, 53(4), 607–621.
  • Ferguson, H. (2016). What social workers do in performing child protection work: Evidence from research into face‐to‐face practice. Child & Family Social Work, 21(3), 283–294.
  • Forrester, D., Kershaw, S., Moss, H., & Hughes, L. (2008). Communication skills in child protection: How do social workers talk to parents? Child and Family Social Work, 38, 1302–1319.
  • Fortune, A. E., Lee, M., & Cavazos, A. (2005). Achievement motivation and outcome in social work field education. Journal of Social Work Education, 41(1), 115–129.
  • Gagnier, J. J., Morgenstern, H., Altman, D. G., Berlin, J., Chang, S., McCulloch, P., Sun, X., & Moher, D. (2013). Consensus‐based recommendations for investigating clinical heterogeneity in systematic reviews. BMC Medical Research Methodology, 13(1), 106.
  • Gair, S. (2011). Creating spaces for critical reflection in social work education: Learning from a classroom‐based empathy project. Reflective Practice, 12(6), 791–802.
  • Gerdes, K. E., & Segal, E. A. (2009). A social work model of empathy. Advances in Social Work, 10(2), 114–127.
  • Gerdes, K. E., & Segal, E. (2011). Importance of empathy for social work practice: Integrating new science. Social Work, 56(2), 141–148.
  • Gerdes, K. E., Segal, E. A., & Lietz, C. A. (2010). Conceptualising and measuring empathy. British Journal of Social Work, 40(7), 2326–2343.
  • Grant, S., Mayo‐Wilson, E., Montgomery, P., Macdonald, G., Michie, S., Hopewell, S., & Moher, D. (2018). CONSORT‐SPI 2018 explanation and elaboration: Guidance for reporting social and psychological intervention trials. Trials, 19(1), 406.
  • Grundy, Q., Mayes, C., Holloway, K., Mazzarello, S., Thombs, B. D., & Bero, L. (2020). Conflict of interest as ethical shorthand: Understanding the range and nature of “non‐financial conflict of interest” in biomedicine. Journal of Clinical Epidemiology, 120, 1–7.
  • Handley, G., & Doyle, C. (2014). Ascertaining the wishes and feelings of young children: Social workers' perspectives on skills and training. Child & Family Social Work, 19(4), 443–454.
  • Hargie, O. (2006). The handbook of communication skills. Routledge.
  • Hargie, O. (2017). Skilled interpersonal communication: Research, theory and practice (6th ed.). Routledge.
  • Harms, L. (2015). Working with people: Communication skills for reflective practice (2nd ed.). Oxford University Press.
  • Healy, K. (2018). The skilled communicator in social work: The art and science of communication in practice. Palgrave.
  • Hemmerdinger, J. M., Stoddart, S. D., & Lilford, R. J. (2007). A systematic review of tests of empathy in medicine. BMC Medical Education, 7(1), 24.
  • Hepworth, D. H., Rooney, R. H., Rooney, G. D., Strom‐Gottfried, K., & Larsen, J. (2010). Direct social work practice: Theory and skills (8th ed.). Brooks/Cole.
  • Higgins, J. P. T., Savović, J., Page, M. J., & Sterne, J. A. C. (2019). The revised Cochrane risk‐of‐bias tool for randomized trials (RoB 2). https://drive.google.com/open?id=19R9savfPdCHC8XLz2iiMvL_71lPJERWK
  • Higgins, J. P. T., Li, T., & Deeks, J. J. (Eds.). (2022). Chapter 6: Choosing effect measures and computing estimates of effect. In Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.), Cochrane handbook for systematic reviews of interventions version 6.3. Cochrane. Retrieved June 16, 2022, from www.training.cochrane.org/handbook
  • Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (2021). Cochrane handbook for systematic reviews of interventions version 6.2. www.training.cochrane.org/handbook
  • Holden, G., Cuzzi, L., Spitzer, W., Rutter, S., Chernack, P., & Rosenberg, G. (1997). The hospital social work self‐efficacy scale: A partial replication and extension. Health & Social Work, 22(4), 256–263.
  • Holden, G., Meenaghan, T., Anastas, J., & Metrey, G. (2002). Outcomes of social work education: The case for social work self‐efficacy. Journal of Social Work Education, 38(1), 115–133.
  • Holden, G., Anastas, J., & Meenaghan, T. (2005). Research notes: EPAS objectives and foundation practice self‐efficacy: A replication. Journal of Social Work Education, 41(3), 559–570.
  • Holden, G., Barker, K., Kuppens, S., & Rosenberg, G. (2017). Self‐efficacy regarding social work competencies. Research on Social Work Practice, 27(5), 594–606.
  • Howard, G. S., & Dailey, P. R. (1979). Response‐shift bias: A source of contamination of self‐report measures. Journal of Applied Psychology, 64(2), 144–150.
  • Huerta‐Wong, J. E., & Schoech, R. (2010). Experiential learning and learning environments: The case of active listening skills. Journal of Social Work Education, 46(1), 85–101.
  • Ilgunaite, G., Giromini, L., & Di Girolamo, M. (2017). Measuring empathy: A literature review of available tools. BPA—Applied Psychology Bulletin, 65, 280.
  • Ingram, R. (2013). Locating emotional intelligence at the heart of social work practice. British Journal of Social Work, 43(5), 987–1004.
  • Ivey, A. E., & Authier, J. (1971). Microcounseling: Innovation in interviewing training. Charles C. Thomas.
  • Ivey, A. E., Normington, C. J., Miller, C. D., Morrill, W. H., & Haase, R. F. (1968). Microcounseling and attending behavior: An approach to prepracticum counselor training. Journal of Counseling Psychology, 15, 1–12.
  • Kadushin, A., & Kadushin, G. (2013). The social work interview (5th ed.). Columbia University Press.
  • Kam, P. K. (2020). ‘Social work is not just a job’: The qualities of social workers from the perspective of service users. Journal of Social Work, 20(6), 775–796.
  • Kirkpatrick, D. L. (1967). Evaluation of training. In Craig, R. L., & Bittel, L. R. (Eds.), Training and development handbook (pp. 87–112). McGraw‐Hill.
  • Knowles, M. S. (1972). Innovations in teaching styles and approaches based upon adult learning. Journal of Education for Social Work, 8, 32–39.
  • Knowles, M. (1998). The adult learner. Gulf Publishing Company.
  • Kolb, D. (1984). Experiential learning: Experience as the source of learning and development. Prentice‐Hall.
  • Koprowska, J. (2003). The right kind of telling? Locating the teaching of interviewing skills within a systems framework. British Journal of Social Work, 33, 291–308.
  • Koprowska, J. (2010). The outcomes of social work education: Developing evaluation methods. Higher Education Academy, SWAP.
  • Koprowska, J. (2020). Communication and interpersonal skills in social work (5th ed.). Learning Matters: Sage.
  • Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill‐based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78(2), 311–328.
  • Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self‐assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.
  • Kugley, S., Wade, A., Thomas, J., Mahood, Q., Jørgensen, A. M. K., Hammerstrøm, K., & Sathe, N. (2017). Searching for studies: A guide to information retrieval for Campbell. Campbell Systematic Reviews, 13(1), 1–73.
  • Lam, T. C. M., Kolomitro, K., & Alamparambil, F. C. (2011). Empathy training: Methods, evaluation practices, and validity. Journal of Multidisciplinary Evaluation, 7(16), 162–200.
  • Laming, H. (2003). The Victoria Climbié Inquiry: Report of an inquiry by Lord Laming. https://www.gov.uk/government/publications/the-victoria-climbie-inquiry-report-of-an-inquiry-by-lord-laming
  • Laming, H. (2009). The protection of children in England: A progress report. https://www.gov.uk/government/publications/the-protection-of-children-in-england-a-progress-report
  • Larson, L. M., & Daniels, J. A. (1998). Review of the counseling self‐efficacy literature. The Counseling Psychologist, 26(2), 179–218.
  • Lefevre, M., Tanner, K., & Luckock, B. (2008). Developing social work students’ communication skills with children and young people: A model for the qualifying level curriculum. Child & Family Social Work, 13, 166–176.
  • Lefevre, M. (2010). The outcomes of social work education: Developing evaluation methods. Higher Education Academy, SWAP.
  • Levin, S., Fulginiti, A., & Moore, B. (2018). The perceived effectiveness of online social work education: Insights from a national survey of social work educators. Social Work Education, 37(6), 775–789.
  • Lietz, C. A., Gerdes, K. E., Sun, F., Geiger, J. M., Wagaman, M. A., & Segal, E. A. (2011). The Empathy Assessment Index (EAI): A confirmatory factor analysis of a multidimensional model of empathy. Journal of the Society for Social Work and Research, 2(2), 1–202.
  • Lishman, J. (2009). Communication in social work (2nd ed.). Palgrave Macmillan.
  • Luckock, B., Lefevre, M., Orr, D., Jones, M., Marchant, R., & Tanner, K. (2006). Teaching, learning and assessing communication skills with children and young people in social work education. Knowledge Review, 1–202.
  • Lynch, A., Newlands, F., & Forrester, D. (2019). What does empathy sound like in social work communication? A mixed‐methods study of empathy in child protection social work practice. Child & Family Social Work, 24, 139–147.
  • Maynard, B. R., Solis, M. R., Miller, V. L., & Brendel, K. E. (2017). Mindfulness‐based interventions for improving cognition, academic achievement, behavior, and socioemotional functioning of primary and secondary school students. Campbell Systematic Reviews, 13(1), 1–144. 10.4073/2017.5
  • Mehrabian, A., & Epstein, N. (1972). A measure of emotional empathy. Journal of Personality, 40, 523–543.
  • Montgomery, P., & Belle Weisman, C. (2021). Non‐financial conflict of interest in social intervention trials and systematic reviews: An analysis of the issues with case studies and proposals for management. Children and Youth Services Review, 120, 105642.
  • Moon, J. (1999). Reflection in learning and professional development. Kogan.
  • Moore, B. (2005). Key issues in web‐based education in the human services: A review of the literature. Journal of Technology in Human Services, 23, 11–28.
  • Moss, B. R., Dunkerly, M., Price, B., Sullivan, W., Reynolds, M., & Yates, B. (2007). Skills laboratories and the new social work degree: One small step towards best practice? Service users’ and carers’ perspectives. Social Work Education, 26(7), 708–722.
  • Munford, R., & Sanders, J. (2015). Understanding service engagement: Young people's experience of service use. Journal of Social Work, 16(3), 283–302.
  • Munro, E. (2011). The Munro review of child protection: Final report, a child‐centred system. https://www.gov.uk/government/publications/munro-review-of-child-protection-final-report-a-child-centred-system
  • Murphy, J., Gray, C. M., & Cox, S. (2007). Communication and dementia: How talking mats can help people with dementia to express themselves. Joseph Rowntree Foundation.
  • Narey, M. (2014). Making the education of social workers consistently effective: Report of Sir Martin Narey's independent review of the education of children's social workers. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/287756/Making_the_education_of_social_workers_consistently_effective.pdf
  • Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo‐Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71.
  • Papageorgiou, A., Loke, Y. K., & Fromage, M. (2017). Communication skills training for mental health professionals working with people with severe mental illness. Cochrane Database of Systematic Reviews, 2017(6), Art. No. CD010006. 10.1002/14651858.CD010006.pub2
  • Parker, J. (2005). Developing perceptions of competence during practice learning. British Journal of Social Work, 36(6), 1017–1036.
  • Pedersen, R. (2009). Empirical research on empathy in medicine—A critical review. Patient Education and Counseling, 76(3), 307–322.
  • Petracchi, H. E., & Collins, K. S. (2006). Utilizing actors to simulate clients in social work student role plays. Journal of Teaching in Social Work, 26(1–2), 223–233.
  • Quinney, A., & Parker, J. (2010). Monograph: The outcomes of social work education: Developing evaluation methods. Higher Education Academy, SWAP.
  • Reith‐Hall, E., & Montgomery, P. (2019). PROTOCOL: Communication skills training for improving the communicative abilities of student social workers—A systematic review. Campbell Systematic Reviews, 15(3), 1–9. 10.1002/cl2.1038
  • Reith‐Hall, E. (2020). Using creativity, co‐production and the common third in a communication skills module to identify and mend gaps between the stakeholders of social work education. International Journal of Social Pedagogy, 9(3), 1–12.
  • Reith‐Hall, E. (2022). The teaching and learning of communication skills for social work students: A realist synthesis protocol. Systematic Reviews, 11(1), 266.
  • Robieux, L., Karsenti, L., Pocard, M., & Flahault, C. (2018). Let's talk about empathy! Patient Education and Counseling, 101, 59–66.
  • Rogers, C. R. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21(2), 95–103.
  • Rowland, A., & McDonald, L. (2009). Evaluation of social work communication skills to allow people with aphasia to be part of the decision‐making process in healthcare. Social Work Education, 28(2), 128–144.
  • Schön, D. (1983). The reflective practitioner: How professionals think in action. Temple Smith.
  • Schön, D. A. (1987). Educating the reflective practitioner: Toward a new design for teaching and learning in the professions. Jossey‐Bass.
  • Segal, E. A., Gerdes, K. E., Lietz, C. A., Wagaman, M. A., & Geiger, J. M. (2017). Assessing empathy. Columbia University Press.
  • Sidell, N., & Smiley, D. (2008). Professional communication skills in social work. Allyn & Bacon/Pearson.
  • Sinclair, S., Beamer, K., Hack, T. F., McClement, S., Raffin Bouchal, S., Chochinov, H. M., & Hagen, N. A. (2017). Sympathy, empathy, and compassion: A grounded theory study of palliative care patients’ understandings, experiences, and preferences. Palliative Medicine, 31(5), 437–447.
  • Smith, J. (2002). Requirements for social work training. https://www.scie.org.uk/publications/guides/guide04/files/requirements-for-social-work-training.pdf
  • Social Care Institute for Excellence. (2000). Teaching and learning communication skills: An introduction to those new to higher education. https://www.scie.org.uk/publications/misc/rg03intro.pdf
  • Spreng, R. N., McKinnon, M. C., Mar, R. A., & Levine, B. (2009). The Toronto Empathy Questionnaire: Scale development and initial validation of a factor‐analytic solution to multiple empathy measures. Journal of Personality Assessment, 91(1), 62–71.
  • Sterne, J. A. C., Higgins, J. P. T., Elbers, R. G., Reeves, B. C., & The Development Group for ROBINS‐I. (2016). Risk Of Bias In Non‐randomized Studies of Interventions (ROBINS‐I): Detailed guidance, updated 12 October 2016. http://www.riskofbias.info
  • Sterne, J. A., Hernán, M. A., Reeves, B. C., Savović, J., Berkman, N. D., Viswanathan, M., Henry, D., Altman, D. G., Ansari, M. T., Boutron, I., Carpenter, J. R., Chan, A. W., Churchill, R., Deeks, J. J., Hróbjartsson, A., Kirkham, J., Jüni, P., Loke, Y. K., Pigott, T. D., … Higgins, J. P. (2016). ROBINS‐I: A tool for assessing risk of bias in non‐randomised studies of interventions. BMJ, 355, i4919.
  • Sterne, J. A. C., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H.‐Y., Corbett, M. S., Eldridge, S. M., Emberson, J. R., Hernán, M. A., Hopewell, S., Hróbjartsson, A., Junqueira, D. R., Jüni, P., Kirkham, J. J., Lasserson, T., Li, T., … Higgins, J. P. T. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366, l4898.
  • Tanner, D. (2019). ‘The love that dare not speak its name’: The role of compassion in social work practice. The British Journal of Social Work, bcz127, 1688–1705.
  • Tedam, P. (2020). Editorial. The Journal of Practice Teaching and Learning, 17(2), 3–5.
  • Teding van Berkhout, E., & Malouff, J. M. (2016). The efficacy of empathy training: A meta‐analysis of randomized controlled trials. Journal of Counseling Psychology, 63(1), 32–41.
  • The Campbell Collaboration. (2014). Campbell systematic reviews: Policies and guidelines (Campbell Policies and Guidelines Series No. 1).
  • Thompson, N. (2003). Communication and language: A handbook of theory and practice. Palgrave Macmillan.
  • Tompsett, H., Henderson, L., Mathew Byrne, J., Gaskell Mew, E., & Tompsett, C. (2017). On the learning journey: What helps and hinders the development of social work students’ core pre‐placement skills? Social Work Education, 36(1), 6–25.
  • Tompsett, H. , Henderson, K. , Gaskell Mew, E. , Mathew Byrne, J. , & Tompsett, C. (2017). Self‐efficacy and outcomes: Validating a measure comparing social work students’ perceived and assessed ability in core pre‐placement skills . British Journal of Social Work , 47 ( 8 ), 2384–2405. [ Google Scholar ]
  • Toukmanian, S. G. , & Rennie, D. L. (1975). Microcounseling versus human relations training: Relative effectiveness with undergraduate trainees . Journal of Counseling Psychology , 22 ( 4 ), 345–352. [ Google Scholar ]
  • Trevithick, P. , Richards, S. , Ruch, G. , Moss, B. , Lines, L. , & Manor, O. (2004). Knowledge review: Learning and teaching communication skills on social work qualifying courses/training programmes . Policy Press. [ Google Scholar ]
  • Trevithick, P. (2012). Social work skills and knowledge: A practice handbook . Policy Press, Open University Press. [ Google Scholar ]
  • Truax, C. B. , & Carkhuff, R. R. (1967). Toward effective counselling and psychotherapy: Training and practice . Aldine. [ Google Scholar ]
  • Tryon, G. S. (1987). The Counselor Rating Form—Short Version: A factor analysis . Measurement and Evaluation in Counseling and Development , 20 ( 3 ), 122–126. [ Google Scholar ]
  • Unrau, Y. A. , & Grinnell, R. M., Jr. (2005). The impact of social work research courses on research self‐efficacy for social work students . Social Work Education , 24 ( 6 ), 639–651. [ Google Scholar ]
  • Uttley, L. , & Montgomery, P. (2017). The influence of the team in conducting a systematic review . Systematic Reviews , 6 , 149. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Vitali, S. (2011). The acquisition of professional social work competencies . Social Work Education , 30 ( 2 ), 236–246. [ Google Scholar ]
  • Wilt, K. (2012). Simulation‐based learning in ethics education [Doctoral dissertation, Duquesne University].
  • Woodcock Ross, J. (2016). Specialist communication skills for social workers . Palgrave Macmillan. [ Google Scholar ]
  • Wretman, C. J. , & Macy, R. J. (2016). Technology in social work education: A systematic review . Journal of Social Work Education , 52 ( 4 ), 409–421. [ Google Scholar ]
  • Yoder, W. R. , Karyotaki, E. , Cristea, I.‐A. , van Duin, D. , & Cuijpers, P. (2019). Researcher allegiance in research on psychosocial interventions: Meta‐research study protocol and pilot study . BMJ Open , 9 ( 2 ), e024622. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Yu, J. , & Kirk, M. (2009). Evaluation of empathy measurement tools in nursing: Systematic review . Journal of Advanced Nursing , 65 ( 9 ), 1790–1806. [ PubMed ] [ Google Scholar ]
  • Zaleski, K. (2016). Empathy in social work . Contemporary Behavioral Health Care , 2 ( 1 ), 48–53. [ Google Scholar ]
Communication: concepts, practice and challenges

Disclaimer: Views expressed here are only those of the author and not those of the World Health Organization (WHO).


Davison Munodawafa, Communication: concepts, practice and challenges, Health Education Research, Volume 23, Issue 3, June 2008, Pages 369–370, https://doi.org/10.1093/her/cyn024


Communication involves the transmission of verbal and non-verbal messages. It consists of a sender, a receiver and a channel of communication. In the process of transmitting messages, the clarity of the message may be reduced or distorted by what are often referred to as barriers.

Health communication seeks to increase knowledge. This is the minimum expectation and the baseline requirement for demonstrating that learning has taken place following a communication intervention. Once knowledge gain is established, it is assumed that the individual will use that knowledge when the need arises or at an opportune time. Evidence from several school-based health interventions shows that young people who were exposed to specific information, e.g. against smoking or other harmful practices, tended to possess decision-making or refusal skills.

Communication requires a full understanding of the behaviors associated with the sender and receiver, and of the barriers that are likely to exist. Establishing the source of what is to be communicated also poses challenges, since this is a prerequisite for program success. Often, messages originate from professionals or the government without involving the intended beneficiaries. As a result, communication activities seeking to impart knowledge and skills and/or to change behavior often fail to realize the ultimate goal of behavior change, because the beneficiaries find no relevance in the activities.

Communication processes can be classified into two categories, namely (i) mass media and (ii) group media. Mass media focuses on reaching a wide audience, while group media reaches a specific group with clearly defined characteristics. Radio, television and the Internet are examples of mass media channels, while drama, storytelling, music and dance fall under group media.

Selecting a communication channel requires a complete understanding of the strengths, limitations and possible solutions related to each potential channel. Those entrusted with developing health education interventions that rely on communication need to be aware of these limitations in order to identify complementary activities that can help achieve the desired results. The context in which communication takes place is a major determinant of achieving those results. First, a situational analysis should be conducted, including an audience analysis; this can be a rapid or a comprehensive assessment. The findings of the situational analysis then feed into decisions about the appropriate messages and channels. The situational analysis also presents opportunities for implementing multiple communication activities where necessary.

To establish effective communication-based health interventions, the participation of intended beneficiaries throughout all programme phases is a prerequisite. In other words, the intended beneficiaries should take part in setting objectives, selecting activities, planning and implementation, and monitoring the effectiveness of the activities. The beneficiaries should also help establish an environment conducive to delivering effective communication activities. To realize this goal in programme terms, policies and legislation that promote communication are required at the national level. In many countries, mass media outlets such as television, radio, the Internet and newspapers are either a state monopoly or owned by private companies, making it hard for public service organizations to access them. The high fees levied for using these outlets are prohibitive for most public health organizations, particularly those operating at the community level.

Communication is not a panacea for all public health concerns, and expectations should therefore be realistic. To ensure that communication is applied appropriately, the situational analysis findings should inform the next steps, as discussed earlier. In this regard, it is essential to determine whether the problem is actually rooted in a lack of policy or legislation rather than in communication. Communication has been judged a failure in situations where, in fact, the problem required policy or legislative remedies. The identification of predisposing, enabling and reinforcing factors in knowledge acquisition and behavior change should guide communication processes.

In some cases, the public health problems a community encounters are related to policy, economics or politics, and no amount of communication will bring about change, because a policy or political decision is needed.

Communication approaches that provide opportunities for interpersonal interaction are more likely to yield the desired behavior change. These interpersonal group communications include drama, song, storytelling and debate, among others. Unlike mass media, interpersonal communication can take into account the social, cultural and behavioral factors that influence health outcomes.

Communication often involves conveying complex, sensitive and controversial information. It is critical that those responsible for disseminating information receive training in handling sensitive or controversial issues, so as not to diminish the possible gains from communication.

Ultimately, the credibility of the source of information is highly correlated with the achievement of desired behavior outcomes. Those communicating vital health information should ensure that the public regards them as credible sources. All content to be communicated should be thoroughly verified to avoid misinformation or sending conflicting messages, because once something is communicated, it cannot be recalled or 'uncommunicated'. In other words, a retraction or an apology does not mean that communication did not take place or that what was communicated has been erased; it remains on record despite the retraction. Freedom to communicate should also be guaranteed by not allowing any form of put-down or unconstructive criticism before, during or after communication.

Last but not least, listening is part of communication. Unfortunately, it is rarely taught formally, nor is it acknowledged in the development of communication interventions. To listen effectively, one must not appear impatient or in a hurry, and both parties should allow each other to communicate freely without interference.



8 Ways You Can Improve Your Communication Skills

Your guide to establishing better communication habits for success in the workplace.

Mary Sharp Emerson


A leader’s ability to communicate clearly and effectively with employees, within teams, and across the organization is one of the foundations of a successful business.

And in today’s complex and quickly evolving business environment, with hundreds of different communication tools, fully or partially remote teams, and even multicultural teams spanning multiple time zones, effective communication has never been more important — or more challenging.

Thus, the ability to communicate might be a manager’s most critical skill. 

The good news is that these skills can be learned and even mastered. 

These eight tips can help you maximize your communication skills for the success of your organization and your career.

1. Be clear and concise

Communication is primarily about word choice. And when it comes to word choice, less is more.

The key to powerful and persuasive communication — whether written or spoken — is clarity and, when possible, brevity. 

Before engaging in any form of communication, define your goals and your audience. 

Outlining carefully and explicitly what you want to convey and why will help ensure that you include all necessary information. It will also help you eliminate irrelevant details. 

Avoid unnecessary words and overly flowery language, which can distract from your message.

And while repetition may be necessary in some cases, be sure to use it carefully and sparingly. Repeating your message can ensure that your audience receives it, but too much repetition can cause them to tune you out entirely. 

2. Prepare ahead of time

Know what you are going to say and how you are going to say it before you begin any type of communication.

However, being prepared means more than just practicing a presentation. 

Preparation also involves thinking about the entirety of the communication, from start to finish. Research the information you may need to support your message. Consider how you will respond to questions and criticisms. Try to anticipate the unexpected.

Before a performance review, for instance, prepare a list of concrete examples of your employee’s behavior to support your evaluation.

Before engaging in a salary or promotion negotiation, know exactly what you want. Be ready to discuss ranges and potential compromises; know what you are willing to accept and what you aren’t. And have on hand specific details to support your case, such as relevant salaries for your position and your location (but be sure that your research is based on publicly available information, not company gossip or anecdotal evidence). 

Before entering into any conversation, brainstorm potential questions, requests for additional information or clarification, and disagreements so you are ready to address them calmly and clearly.

3. Be mindful of nonverbal communication

Our facial expressions, gestures, and body language can, and often do, say more than our words. 

Some research suggests that nonverbal cues can have between 65 and 93 percent more impact than the spoken word. And we are more likely to believe the nonverbal signals over spoken words if the two are in disagreement.

Leaders must be especially adept at reading nonverbal cues. 

Employees who may be unwilling to voice disagreements or concerns, for instance, may show their discomfort through crossed arms or an unwillingness to make eye contact. If you are aware of others’ body language, you may be able to adjust your communication tactics appropriately.

At the same time, leaders must also be able to control their own nonverbal communications. 

Your nonverbal cues must, at all times, support your message. At best, conflicting verbal and nonverbal communication can cause confusion. At worst, it can undermine your message and your team’s confidence in you, your organization, and even in themselves. 

4. Watch your tone

How you say something can be just as important as what you say. As with other nonverbal cues, your tone can add power and emphasis to your message, or it can undermine it entirely.

Tone can be an especially important factor in workplace disagreements and conflict. A well-chosen word with a positive connotation creates good will and trust. A poorly chosen word with unclear or negative connotations can quickly lead to misunderstanding. 

When speaking, tone includes volume, projection, and intonation as well as word choice. In real time, it can be challenging to control tone to ensure that it matches your intent. But being mindful of your tone will enable you to alter it appropriately if a communication seems to be going in the wrong direction.

Tone can be easier to control when writing. Be sure to read your communication once, even twice, while thinking about tone as well as message. You may even want to read it out loud or ask a trusted colleague to read it over, if doing so does not breach confidentiality. 

And when engaging in a heated dialogue over email or other written medium, don’t be too hasty in your replies. 

If at all possible, write out your response but then wait for a day or two to send it. In many cases, re-reading your message after your emotions have cooled allows you to moderate your tone in a way that is less likely to escalate the conflict.


5. Practice active listening

Communication nearly always involves two or more individuals.

Therefore, listening is just as important as speaking when it comes to communicating successfully. But listening can be more challenging than we realize. 

In her blog post Mastering the Basics of Communication, communication expert Marjorie North notes that we only hear about half of what the other person says during any given conversation.

The goal of active listening is to ensure that you hear not just the words the person is saying, but the entire message. Some tips for active listening include:

  • Giving the speaker your full and undivided attention
  • Clearing your mind of distractions, judgements, and counter-arguments
  • Avoiding the temptation to interrupt with your own thoughts
  • Showing open, positive body language to keep your mind focused and to show the speaker that you are really listening
  • Rephrasing or paraphrasing what you’ve heard when making your reply
  • Asking open-ended questions designed to elicit additional information

6. Build your emotional intelligence

Communication is built upon a foundation of emotional intelligence. Simply put, you cannot communicate effectively with others until you can assess and understand your own feelings. 

“If you’re aware of your own emotions and the behaviors they trigger, you can begin to manage these emotions and behaviors,” says Margaret Andrews in her post, How to Improve Your Emotional Intelligence .

Leaders with a high level of emotional intelligence will naturally find it easier to engage in active listening, maintain appropriate tone, and use positive body language, for example.  

Understanding and managing your own emotions is only part of emotional intelligence. The other part — equally important for effective communication — is empathy for others.

Empathizing with an employee can, for example, make a difficult conversation easier. 

You may still have to deliver bad news, but (actively) listening to their perspective and showing that you understand their feelings can go a long way toward smoothing hurt feelings or avoiding misunderstandings.

7. Develop a workplace communication strategy

Today’s workplace is a constant flow of information across a wide variety of formats. Every single communication must be understood in the context of that larger flow of information.

Even the most effective communicator may find it difficult to get their message across without a workplace communication strategy.

A communication strategy is the framework within which your business conveys and receives information. It can — and should — outline how and what you communicate to customers and clients, stakeholders, and managers and employees. 

Starting most broadly, your strategy should incorporate who gets what message and when. This ensures that everyone receives the correct information at the right time. 

It can be as detailed as how you communicate, including defining the types of tools you use for which information. For example, you may define when it’s appropriate to use a group chat for the entire team or organization, or when a meeting could be replaced by a summary email instead.

Creating basic guidelines like this can streamline the flow of information. It will help ensure that everyone gets the details they need and that important knowledge isn’t overwhelmed by extraneous minutiae.

8. Create a positive organizational culture

The corporate culture in which you are communicating also plays a vital role in effective communication. 

In a positive work environment — one founded on transparency, trust, empathy, and open dialogue — communication in general will be easier and more effective. 

Employees will be more receptive to hearing their manager’s message if they trust that manager. And managers will find it easier to create buy-in, and even to offer constructive criticism, if they encourage their employees to speak up, make suggestions, and offer constructive criticism of their own.

“The most dangerous organization is a silent one,” says Lorne Rubis in a blog post, Six Tips for Building a Better Workplace Culture . Communication, in both directions, can only be effective in a culture that is built on trust and a foundation of psychological safety.

Authoritative managers who refuse to share information, aren’t open to suggestions, and refuse to admit mistakes and accept criticism are likely to find their suggestions and criticisms met with defensiveness or even ignored altogether. 

Without that foundation of trust and transparency, even the smallest communication can be misconstrued and lead to misunderstandings and unnecessary conflict.

Communicating with co-workers and employees is always going to present challenges. There will always be misunderstandings and miscommunications that must be resolved, and, unfortunately, corporate messages aren’t always what we want to hear, especially during difficult times.

But building and mastering effective communication skills will make your job easier as a leader, even during difficult conversations. Taking the time to build these skills will certainly be time well-spent. 



    Research has increasingly shown us the importance of exposing babies to a large amount of language from birth. In addition to building language skills, talking with babies supports the development of cognitive and social-emotional skills. From turn-taking to narration of daily routines, there are many strategies that caregivers can use to ...