Healthcare (Basel), PMC10094672

A Systematic Literature Review of Health Information Systems for Healthcare

Ayogeboh Epizitone

1 ICT and Society Research Group, Durban University of Technology, Durban 4001, South Africa

Smangele Pretty Moyane

2 Department of Information and Corporate Management, Durban University of Technology, Durban 4001, South Africa

Israel Edem Agbehadji

3 Centre for Transformative Agricultural and Food Systems, School of Agricultural, Earth and Environmental Sciences, University of KwaZulu-Natal, Pietermaritzburg 3209, South Africa

Associated Data

Not applicable.

Abstract

Health information system deployment has been driven by the transformation and digitalization currently confronting healthcare. The need for and potential of these systems within healthcare have been tremendously amplified by the global instability that has affected several interrelated sectors. Accordingly, many research studies have reported on the inadequacies of these systems within the healthcare arena, which have distorted their potential and offerings to revolutionize healthcare. Thus, through a comprehensive review of the extant literature, this study presents a critique of the health information system for healthcare to fill the gap created by the lack of an in-depth, holistic outlook on the current health information system. From the studies, the health information system was ascertained to be crucial and fundamental in driving information and knowledge management for healthcare. Additionally, it was asserted to have transformed and shaped healthcare from its conception despite its flaws. Moreover, research has envisioned that an appraisal of the current health information system would influence its adoption and solidify its enactment within the global healthcare space, where it is in high demand.

1. Introduction

Health information systems (HIS) are critical systems deployed to help organizations and all stakeholders within the healthcare arena eradicate disjointed information and modernize health processes by integrating different health functions and departments across the healthcare arena for better healthcare delivery [ 1 , 2 , 3 , 4 , 5 , 6 ]. Over time, the HIS has transformed significantly amidst several players such as political, economic, socio-technical, and technological actors that influence the ability to afford quality healthcare services [ 7 ]. The unification of health-related processes and information systems in the healthcare arena has been realized by HIS. HIS has often been contextualized as a system that improves healthcare services’ quality by supporting management and operation processes to afford vital information and a unified process, technology, and people [ 7 , 8 ]. Several authors assert this disposition of HIS, alluding to its remarkable capabilities in affording seamless healthcare [ 9 ]. Haux [ 10 ] modestly chronicled HIS as a system that handles data to convey knowledge and insights in the healthcare environment. Almunawar and Anshari [ 7 ] incorporated this construed method to describe HIS to be any system within the healthcare arena that processes data and affords information and knowledge. Malaquias and Filho [ 11 ] accentuated the importance of HIS in the same light, highlighting its emergence to tackle the need to store, process, and extract information from the system data for the optimization of processes, enhancing services provided and supporting decision making.

HIS’s definition was popularized by Lippeveld [ 12 ], who reported it to be an “integrated effort to collect, process, report and use health information and knowledge to influence policy-making, programme action and research”. Over the course of time, this definition has been adopted and contextualized countless times by many authors and the World Health Organization (WHO) [ 3 , 8 , 13 , 14 , 15 ]. Although Haule, Muhanga [ 8 ] claimed that the definition of HIS varies globally, in actuality the definition has never changed from its inception; on the contrary, it has been conceptualized across various contexts. Malaquias and Filho [ 11 ] reiterated this definition in the extant literature. These scholars affirmed HIS as “a set of interrelated components that collect, process, store and distribute information to support the decision-making process and assist in the control of health organizations” [ 11 ]. The same definition is adopted in this paper, and HIS is construed as “a system of interrelated constituents that collect, process, store and distribute data and information to support the decision-making process, assist in the control of health organizations and enhance healthcare applications”. However, it is paramount to note that HIS is broad. In many instances, the definition is of minimal relevance due to its associated incorporation with external applications related to health developments and policy making [ 16 ]. Hence, emphasis should be placed not on the definition but on its contribution to all facets of health development.

The current state of HIS is considered inadequate despite the numerous deployments driven by its potential to uplift healthcare and revolutionize its processes [ 17 , 18 ]. The persistence of many constraints and resistance to technology has resulted in the incapacitation of HIS in the attainment of its objectives. The extant literature reveals several challenges in different categories, such as the inadequacy of human resources and technological convergence within healthcare [ 18 ], highlighting evidence of limitations that restrict HIS utilization and deployment. Although several authors identified the unique disposition of HIS in integrating care and unifying the health process, these perspectives seem to be marred by the presence of barriers [ 17 , 19 ]. Garcia, De la Vega [ 17 ] alleged that current HIS deployment is characterized by fragmentation, update instability, and a lack of standardization that limit its potential to aid healthcare. Congruently, several authors listed a lack of awareness of HIS potential, the underuse of HIS, inadequate communication networks, and security and confidentiality concerns among the barriers limiting HIS [ 20 ]. Thus, the need for this paper is set forth: to uncover current and pertinent insights on HIS deployment as a concerted effort to strengthen it and augment its healthcare delivery capabilities. This paper comprehensively and systematically explores the extant literature with respect to the overarching objective: to ascertain valuable insights pertaining to HIS holistically from literature synthesis. To achieve this goal, the following research questions are investigated: What has been the development of the HIS since its conception? How has HIS been deployed? Finally, how does HIS enable information and knowledge management in healthcare?

In this paper, an overview of HIS from the extant literature in relation to the health sector is presented along with associated related work. It is essential to point out that in spite of the surplus of research conducted on health information systems, there are still many challenges confronting them within the healthcare arena that necessitate this study [ 5 ]. Therefore, the extant literature is explored systematically in this paper to uncover current and pertinent insights surrounding the deployment of the HIS, an integrated information system (IS) for healthcare. This paper is structured into five sections. The paper commences with an introductory background that presents the contextualization of HIS for healthcare, followed by a methodology that details the methods and materials used in this study. The next section, the discussion, presents the discourse of HIS evolution that highlights its progress to date, its structural deployment, and the information system and knowledge management within the healthcare arena as mediated by HIS. The last part of this study focuses on the conclusion, which summarizes the discussion presented in this paper.

2. Material and Method

In this paper, a systematic review is conducted to synthesize the extant literature and analyze its content to ascertain the value disposition of HIS in relation to healthcare delivery. Preceding this review, search engines were employed to retrieve related research publications that fit the study scope and context. The main database used was the Web of Science. Other databases such as SCOPUS and Google Scholar were also used to obtain additional relevant work associated with the context. For the inclusion criteria, only articles containing references to the keywords HIS, information, healthcare, and related healthcare systems were analyzed scrupulously. Research works that did not have these references, did not constitute journal or conference-proceeding work, or were not written in the English language were excluded. Figure 1, the PRISMA flow statement, illustrates the methodological phases of this research along with the exclusion and inclusion criteria that were implemented for the study synthesis.
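As an illustration only, the screening criteria above can be expressed as a simple filter. The record fields, keyword list, and document types below are assumptions made for demonstration; they are not the authors' actual screening tooling.

```python
# Illustrative sketch of the inclusion/exclusion screening described above.
# Field names, keywords, and document types are assumptions for demonstration.

KEYWORDS = {"health information system", "his", "information", "healthcare"}
INCLUDED_TYPES = {"journal-article", "conference-proceeding"}

def is_included(record: dict) -> bool:
    """Apply the review's inclusion/exclusion criteria to one record."""
    # Pad with spaces so keywords match whole words/phrases only.
    text = " " + (record.get("title", "") + " " + record.get("abstract", "")).lower() + " "
    has_keyword = any(f" {kw} " in text for kw in KEYWORDS)
    right_type = record.get("type") in INCLUDED_TYPES
    in_english = record.get("language") == "en"
    return has_keyword and right_type and in_english

records = [
    {"title": "A health information system review", "abstract": "",
     "type": "journal-article", "language": "en"},
    {"title": "Unrelated agronomy study", "abstract": "",
     "type": "journal-article", "language": "en"},
    {"title": "HIS deployment study", "abstract": "",
     "type": "preprint", "language": "en"},
]

screened = [r for r in records if is_included(r)]
print(len(screened))  # 1: only the first record passes all three criteria
```

In practice such screening is followed by manual full-text review, as the PRISMA flow statement indicates; the filter only automates the first pass.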

Figure 1. PRISMA flow statement.

3. Discussion

3.1. The Evolution of Health Information Systems

The concept of enhancing healthcare applications has always been the foundation of HIS, which posits that the intercession of information systems with business processes affords better healthcare services [ 7 , 21 ]. According to Almunawar and Anshari [ 7 ], many determinants, such as technological, political, social and economic, have enormously influenced the nature of the healthcare industry. The technological determinant, particularly the computerized component, is thought to be deeply ingrained in the enactment and functioning of HIS. According to Panerai [ 16 ], this single attribute can be held solely responsible for HIS letdowns rather than its accomplishment.

The ownership of HIS has been contested in the literature, with some authors claiming that HIS belongs to the IT industries [ 22 ]. While IT has enabled many developments in various industries, it has also produced many dissatisfactions. Recently, there has been pushback from many industries, particularly the healthcare industry, which acknowledges the role of IT in optimizing and enhancing health initiatives but seeks ownership of its integrated IS. However, according to the definition of HIS, it is presented as “a set of interconnected components that collect, process, store, and distribute information to support decision-making and aid in the control of health organizations”; thus, the disposition of HIS was established. Without bias, the development of HIS was conceived due to unavoidable changes and transformations within the global space.

A good representation and consolidation of this dispute lies in the realization that different related and non-related components co-exist in a system. In this case, the HIS is an entrenched system with several features, including technologies. Panerai [ 16 ] supported this notion and theorized HIS to be broad, stating that the relevance of its definition is contextual. In that study, HIS was reiterated as any kind of “structured repository of data, information, or knowledge” that can be used to support health care delivery or promote health development [ 16 ]. Thus, maintaining a rigid definition is of minimal practical use because many HIS instances, such as financial and human resource modules, are not directly associated with health development. Moreover, several different HIS examples are categorized according to the functions they are dedicated to serving within the healthcare arena. This highlights the existence of outliers that are not regarded as typical HIS even though they contain data on health determinants, such as socioeconomic and environmental factors, which can be used to formulate health policies.

The development of HIS over the years has led many to believe they are solely computer technology. This notion has contributed dramatically to the misconception of the origin of HIS and the lack of distinction between the HIS conceptual structure and implemented HIS technology. The literature dates the origin of HIS back to the first records of mortality in the 18th century, revealing their existence to predate the invention of computers by 200 years or more [ 16 ]. Digitalized HIS, by contrast, emerged with the availability of commercial electronic medical record (EMR) systems in the 1970s [ 23 ]. Namageyo-Funa, Aketch [ 24 ] commended the advancement of technologies in the healthcare arena, recounting implementations of digitalized HIS that significantly revolutionized the recording and accessing of health information. A study by Lindberg, Venkateswaran [ 25 ] highlighted an instance of HIS transition from paper-based to digital, revealing a streamlined workflow that revolutionized health care applications. This transition over the course of time has led to increased adoption of HIS within the health care arena. Tummers, Tekinerdogan [ 26 ] highlighted the landmarks of HIS since its transition to digitalization and reported a current trend in healthcare that has now been extended with the inclusion of blockchain technology. Malik, Kazi [ 27 ] assessed HIS adoption in terms of technological, organizational, human, and environmental determinants and reported varying degrees of utilization. Despite these facts, the extant literature maintains the need for a resilient and sustainable HIS for health care applications at all levels [ 18 , 27 , 28 ].

Figure 2 illustrates the successful adoption of HIS amidst the significant determinants of its effectiveness. From Figure 2, the technological, organizational, human, and environmental determinants are the defining concepts, along with individual sub-determinants in each domain that influence HIS adoption. At the technological level, the need for digitalization drives HIS adoption, especially for stakeholders such as clinicians and decision makers. The administrative, management, and planning functions are the driving actors at the organizational level that endorse the implementation of HIS. The environmental and human determinants are more concerned with the socio-technical components that have been regarded as complex drivers of HIS adoption. Perceptions, literacy, and usability are known forces within these categories that necessitate the adoption of HIS in many healthcare arenas.

Figure 2. Effective health information system associations with the driving adoption determinants. Source: [ 27 ].

3.2. HIS Structural Deployment

HIS’s unified front is geared toward assimilating and disseminating health information to enhance healthcare delivery. HIS consists of different sub-systems that serve several actors within the healthcare arena [ 29 ]. These sub-systems are dedicated to specific tasks and perform various functions such as civil registration, disease surveillance, outbreak notices, interventions, and health information sharing within the healthcare arena. HIS also supports and links many functions and activities within the healthcare environment, such as recording various data and information for stakeholders, scheduling, billing, and managing. Stakeholders are furnished with health information from diverse HIS scenarios. These include but are not limited to information systems for hospitals and patients, health institution systems, and Internet information systems. Sligo, Gauld [ 30 ] regarded HIS as a panacea within the healthcare arena that improves health care applications. Despite these capabilities, HIS has been reported to be asymmetrical, lacking interactions between subsystems [ 1 , 18 ]. Many decision-making methods and policies rely on good health information [ 31 ]. According to Suresh and Singh [ 32 ], the HIS enables stakeholders such as the government and all other players in the healthcare arena to have access to health information, which influences the delivery of healthcare. The sundry literature further reveals accurate health information to be the foundation of decision making and highlights the decisive role of the human constituent [ 29 , 31 , 33 , 34 ].

Furthermore, HIS in today’s era can be classified into two components: the computer-related constituent that employs ICT-related tools and the non-computer constituent, both of which operate at different levels. These levels are the strategic, tactical, and operational levels. The deployment of HIS at the strategic level offers intelligence functions such as intelligent decision support, financial estimation, performance assessment, and simulation systems [ 3 , 35 ]. At the tactical level, managerial functions are performed within the system, while at the operational level, functions including recording, invoicing, scheduling, administration, procurement, automation, and even payroll are carried out. Figure 3 shows the three levels within the healthcare system where HIS deployment is utilized.
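The three deployment levels can be summarized in a small lookup structure; the groupings below simply restate the functions named in this section and are illustrative, not a normative taxonomy.

```python
# Illustrative mapping of HIS deployment levels to the functions named above.
HIS_LEVELS = {
    "strategic": ["intelligent decision support", "financial estimation",
                  "performance assessment", "simulation"],
    "tactical": ["managerial functions"],
    "operational": ["recording", "invoicing", "scheduling", "administration",
                    "procurement", "automation", "payroll"],
}

def level_of(function):
    """Return the deployment level at which a given function is carried out."""
    for level, functions in HIS_LEVELS.items():
        if function in functions:
            return level
    return None  # function not covered by the three-level model

print(level_of("scheduling"))  # operational
print(level_of("simulation"))  # strategic
```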

Figure 3. Levels of HIS deployment. Source: authors.

3.3. Health Information Systems Benefits

HIS, as an interrelated system, houses several core processes and branches in the healthcare arena, affording many benefits. Among these are ease of access to patient and medical records, reduction of costs and time, and evidence-based health policies and interventions [ 8 , 21 , 36 , 37 , 38 ]. Several authors revealed the benefits of HIS to be widely known and influential within the healthcare domain [ 38 ]. Furthermore, many health organizations are drawn to HIS because of these numerous advantages [ 22 , 39 ]. Moreover, investment in HIS has enabled effective decision making, real-time comprehensive health information for quality health care applications, effective policies in the healthcare arena, scaled-up monitoring and evaluation, health innovations, resource allocation, surveillance services, and enhanced governance and accountability [ 36 , 40 , 41 , 42 ]. Ideally, HIS is pertinent for data, information, and broad knowledge sharing in the healthcare environment. HIS’s critical features are now cherished due to their incorporation with diverse technologies [ 16 , 43 ]. The extant literature reveals the role of HIS to extend beyond these benefits. Table 1 presents a summarized extract of various HIS benefits as captured in the literature and some of its core enabling components or instances.

Table 1. HIS core enabling components and their benefits. Source: authors.

| Source | Core Enabling HIS Components | Benefits |
| --- | --- | --- |
| Malaquias and Filho [ ] | EHR, eHealth, mHealth | Ease of access to patient and medical information from records; cost reduction; enhanced efficiency in patients' data recovery and management; stakeholders' health information centralization and remote access. |
| Ammenwerth, Duftschmid [ ] | eHealth | Upsurge in care efficacy and quality and condensed costs for clinical services; lessened administrative costs for the health care system; facilitation of novel models of health care delivery. |
| Tummers, Tobi [ ] | HIS | Patient information management; communication within the healthcare arena; high-quality and efficient care. |
| Steil, Finas [ ] | HIS | Inter- and multidisciplinary collaboration between humans and machines; autonomous and intelligent decision capabilities for health care applications. |
| Nyangena, Rajgopal [ ] | HIS | Seamless information exchange within the healthcare arena. |
| Sik, Aydinoglu [ ] | HIS | Support for precision medicine approaches and decision support. |

3.4. Information System and Knowledge Management in the Healthcare Arena

The presence of modernized information systems (IS) in the healthcare arena is alleged by scholars to constitute a congested domain that seldom fosters stakeholders’ multifaceted and disputed relationships [ 48 ]. On the other hand, it is believed that a significant amount of newly acquired knowledge in the field of healthcare is required for the improvement of health care [ 49 ]. Ascertaining and establishing the role of IS and knowledge management is an important step in the development of HIS for healthcare. Flora, Margaret [ 5 ] posited that efficient IS and data usage are crucial for an effective healthcare system. Bernardi [ 50 ] alleged that the underpinning notion of a “robust and efficient” HIS enables healthcare stakeholders such as managers and providers to leverage health information to commendably plan and regulate healthcare, which could result in enhanced survival rates. As a result, it is imperative to ground these ideas within the context of the healthcare industry to provide a foundation for developing a robust and sustainable HIS for health care applications.

3.4.1. Information System

The assimilation and dissemination of health information and data within the healthcare system is an important task that influences healthcare outcomes. Within the healthcare setting, IS plays a significant role in the assimilation and dissemination of health information needed by healthcare stakeholders. Many continents endorse the deployment of IS mainly to consolidate mutable information from different sources within their systems. The primary objective of these systems’ deployment has been centered on bringing together unique and different components such as institutions, people, processes, and technology under one umbrella [ 5 , 51 ]. An overview of the extant literature reveals that this has rarely been easy, as integration within such systems has always been difficult in many contexts. In the context of HIS, many reported the integration phenomenon to be problematic, attributing this to the global transformation within the healthcare arena [ 52 , 53 ]. This revolution, coupled with the advancement of the healthcare arena, has resulted in the need for robust allied health IS that incorporate different IS and information technologies [ 5 , 22 ]. These allied health information systems are necessary to consolidate independent information systems within the healthcare arena to enhance healthcare applications [ 54 , 55 ]. Organizations in the healthcare arena expect these systems to be sustainable and resilient; however, to satisfy these requirements, an integrated information system is needed to unify all independent, agile, and flexible health IS and mitigate challenges for HIS [ 56 ].

An aligned HIS that is allied is essential, as it supports health information networks (HIN) that subsequently enhance and improve healthcare applications [ 44 , 57 ]. Thus, many organizations within healthcare settings are fine-tuning their HIS to be resilient and sustainable. However, the realization of a robust information system within the healthcare arena is challenging and depends on the flow of information as a crucial constituent for smooth and efficient functioning [ 58 , 59 ].

3.4.2. Knowledge Management

Knowledge management denotes the process of creating value and generating a sustainable edge for an industry by capitalizing on knowledge-building, communication, and application procedures to realize set aspirations [ 60 ]. The literature reveals knowledge management as an important contributor to organizational performance through its knowledge-sharing capabilities [ 61 ]. In the healthcare industry, there is a high demand for knowledge to enhance healthcare applications [ 49 , 62 ]. Several studies reported that the deployment of knowledge management in the healthcare arena is set to enhance the effectiveness of healthcare treatment [ 49 , 58 , 61 ]. Many stakeholders, such as governments, the World Health Organization (WHO), and healthcare workers, rely on the management of healthcare knowledge to complement healthcare applications. According to Kim, Newby-Bennett [ 61 ], the focus of knowledge management is to efficaciously expedite knowledge sharing. However, integrating knowledge from different sources is challenging and requires an enabler [ 61 ].

The HIS is an indispensable enabler of health knowledge generated from amalgamated health information within the healthcare arena [ 63 , 64 , 65 ]. Dixon, McGowan [ 66 ] asserted that efficacious modifications in the healthcare arena are made possible by knowledge codification and collaboration from information technologies. Similarly, some authors have pinpointed information and communication technologies within the healthcare arena to be a major determinant in the attainment of a sustainable health system development [ 58 ]. The knowledge management relationship with HIS is considered complementary and balanced, as it enables the availability of knowledge that can be shared. The importance of knowledge management is relevant for the realization of an enhanced healthcare application via HIS. Soltysik-Piorunkiewicz and Morawiec [ 58 ] claimed that the information society effectively uses HIS as an information system for management, patient knowledge, health knowledge, healthcare unit knowledge, and drug knowledge. The authors herein demonstrated how HIS facilitates knowledge management in the healthcare sector to improve healthcare applications.

The role of HIS as an integrated IS and key enabler of healthcare knowledge management highlights its potential within the healthcare arena. From the conception of HIS and the records of its evolution, significant achievements have been attained and demonstrated at different levels of its structural deployment. HIS deployments in several healthcare settings have positively influenced clinical processes and patient outcomes [ 17 ]. Globally, the need for HIS within the healthcare system is critical to the enhancement of healthcare. Many healthcare actions depend on the use of HIS [ 67 , 68 , 69 ]. This demand is substantiated by the offerings of HIS in tackling the transformation and digitalization confronting the healthcare system. However, despite the need for HIS and its potential within healthcare, several barriers limit its optimization. Some authors posited the role and involvement of healthcare professionals such as physicians to be an important measure, paramount to decreasing the technical and personal barriers sabotaging HIS deployment [ 20 ]. Nonetheless, although the design of HIS is accentuated on augmenting health, it is considered to be lagging behind in attaining quality healthcare [ 70 ].

Although there are blessings as well as challenges with HIS deployment, this study's appraisal of HIS highlights capabilities and attributes that enhance healthcare in many ways. From its conception, HIS has evolved significantly to enable the digitalization of many healthcare processes. Its structural deployment has facilitated many healthcare applications at all levels of the health systems where it has been implemented. Many benefits, such as ease of access to medical records, cost reduction, data and information management, precision medicine, and autonomous and intelligent decisions, have been enabled by HIS deployment. Primarily, HIS is the core enabler of the healthcare information system and knowledge management within the healthcare arena. Ascertaining the attributes and development of HIS is paramount to driving its implementation and realizing its potential. Many deployments of HIS can be anchored on this study as a reference for planning and executing HIS implementation. The extant literature points out the need for the role of technology such as HIS to be ascertained, as little is known in this regard, which has adversely influenced healthcare coordination [ 19 ]. Additionally, among the barriers to HIS, inadequate planning that fails to cater to the needs of adopters hinders the optimization of these systems within the healthcare arena [ 71 ]. Cawthon, Mion [ 72 ] associated the lack of health literacy incorporation in deployed HIS with increased costs and poorer health outcomes. Hence, the insights from this study can be incorporated into HIS initiatives to mitigate these issues. Thus, the findings of this study can be employed to strategize HIS deployment and plans as well as augment its potential to enhance healthcare. Furthermore, the competency of healthcare stakeholders such as patients can be enhanced by the findings of this study, which accentuate a holistic representation of HIS in the dissemination and assimilation of health data and information.

4. Conclusions

In the healthcare information and knowledge arena, assimilation and dissemination are facets that influence healthcare delivery. The conception and evolution of HIS have positioned this system within the healthcare arena to arbitrate information interchange for its stakeholders. HIS deployment within healthcare has not only enabled information and knowledge management but has also driven many healthcare agendas, and it continues to maintain a solidified presence within the healthcare space. However, its deployment and enactment globally have been marred and plagued by several challenges that hinder its optimization and defeat its purpose. Phenomena such as the occurrence of pandemics like COVID-19, which are uncertain, and the advancement of technology, which cannot be controlled, have unsettled the positioning of HIS. These phenomena have not only influenced the adoption of HIS but have also limited its ability to be fully utilized. Although much research on HIS has been conducted, these phenomena and many other inherent challenges such as fragmentation and cost still maintain a constant, prominent presence, which has led to the need for this study.

Consequently, the starting point for this study was to provide insight and expertise regarding the discourse of HIS for healthcare applications. This paper presents current and pertinent insights regarding the deployment of the HIS that, when adopted, can positively aid its employment. This paper investigated the existing HIS literature to accomplish the objective set forth in the introduction. The study's synthesis derived key insights relevant to a holistic view of HIS through a thorough systematic review of the extant literature on HIS and healthcare. According to the study's findings, HIS are critical and foundational in the drive of information and knowledge management for healthcare. The contribution of HIS to healthcare has been and continues to be groundbreaking since its conception and through its subsequent evolution. Nevertheless, despite the presence of some external and inherent limitations, it is claimed to have transformed and changed healthcare from the start. Similarly, the evaluation of the current HIS is expected to impact its adoption and strengthen its implementation within the global healthcare space, which is greatly desired. These findings are of great importance to the healthcare stakeholders that directly and indirectly interact with HIS. Additionally, scholars and healthcare researchers can benefit from this study by incorporating the findings into future works that plan HIS for healthcare.

Funding Statement

This research received no external funding.

Author Contributions

Conceptualization, A.E.; methodology, A.E.; software, A.E.; validation, A.E.; formal analysis, A.E.; investigation, A.E.; resources, A.E.; data curation, A.E.; writing—original draft preparation, A.E.; writing—review and editing, A.E.; visualization, A.E.; supervision, S.P.M. and I.E.A.; project administration, A.E., S.P.M. and I.E.A.; funding acquisition, A.E., S.P.M. and I.E.A. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

The authors declare there are no conflicts of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.



Title: The Prompt Report: A Systematic Survey of Prompting Techniques

Abstract: Generative Artificial Intelligence (GenAI) systems are being increasingly deployed across all parts of industry and research settings. Developers and end users interact with these systems through the use of prompting or prompt engineering. While prompting is a widespread and highly researched concept, there exists conflicting terminology and a poor ontological understanding of what constitutes a prompt due to the area's nascency. This paper establishes a structured understanding of prompts by assembling a taxonomy of prompting techniques and analyzing their use. We present a comprehensive vocabulary of 33 terms, a taxonomy of 58 text-only prompting techniques, and 40 techniques for other modalities. We further present a meta-analysis of the entire literature on natural language prefix-prompting.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)

NeurIPS 2024, the Thirty-eighth Annual Conference on Neural Information Processing Systems, will be held at the Vancouver Convention Center from Monday, Dec 9 through Sunday, Dec 15. Monday is an industry expo.


Organizing Committee

General chair, program chair, workshop chair, workshop chair assistant, tutorial chair, competition chair, data and benchmark chair, affinity chair, diversity, inclusion and accessibility chair, ethics review chair, communication chair, social chair, journal chair, creative ai chair, workflow manager, logistics and it, mission statement.

The Neural Information Processing Systems Foundation is a non-profit corporation whose purpose is to foster the exchange of research advances in Artificial Intelligence and Machine Learning, principally by hosting an annual interdisciplinary academic conference with the highest ethical standards for a diverse and inclusive community.

About the Conference

The conference was founded in 1987 and is now a multi-track interdisciplinary annual meeting that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Along with the conference is a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for the exchange of ideas.



ChatGPT is bullshit

  • Original Paper
  • Open access
  • Published: 08 June 2024
  • Volume 26, article number 38 (2024)


  • Michael Townsen Hicks (ORCID: orcid.org/0000-0002-1304-5668)
  • James Humphries
  • Joe Slater


Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.


Introduction

Large language models (LLMs), programs which use reams of available text and probability calculations in order to create seemingly-human-produced writing, have become increasingly sophisticated and convincing over the last several years, to the point where some commentators suggest that we may now be approaching the creation of artificial general intelligence (see e.g. Knight, 2023 and Sarkar, 2023 ). Alongside worries about the rise of Skynet and the use of LLMs such as ChatGPT to replace work that could and should be done by humans, one line of inquiry concerns what exactly these programs are up to: in particular, there is a question about the nature and meaning of the text produced, and of its connection to truth. In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002 , 2005 ). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

We think that this is worth paying attention to. Descriptions of new technology, including metaphorical ones, guide policymakers’ and the public’s understanding of new technology; they also inform applications of the new technology. They tell us what the technology is for and what it can be expected to do. Currently, false statements by ChatGPT and other large language models are described as “hallucinations”, which give policymakers and the public the idea that these systems are misrepresenting the world, and describing what they “see”. We argue that this is an inapt metaphor which will misinform the public, policymakers, and other interested parties.

The structure of the paper is as follows: in the first section, we outline how ChatGPT and similar LLMs operate. Next, we consider the view that when they make factual errors, they are lying or hallucinating: that is, deliberately uttering falsehoods, or blamelessly uttering them on the basis of misleading input information. We argue that neither of these ways of thinking are accurate, insofar as both lying and hallucinating require some concern with the truth of their statements, whereas LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing. This, we suggest, is very close to at least one way that Frankfurt talks about bullshit. We draw a distinction between two sorts of bullshit, which we call ‘hard’ and ‘soft’ bullshit, where the former requires an active attempt to deceive the reader or listener as to the nature of the enterprise, and the latter only requires a lack of concern for truth. We argue that at minimum, the outputs of LLMs like ChatGPT are soft bullshit: bullshit–that is, speech or text produced without concern for its truth–that is produced without any intent to mislead the audience about the utterer’s attitude towards truth. We also suggest, more controversially, that ChatGPT may indeed produce hard bullshit: if we view it as having intentions (for example, in virtue of how it is designed), then the fact that it is designed to give the impression of concern for truth qualifies it as attempting to mislead the audience about its aims, goals, or agenda. So, with the caveat that the particular kind of bullshit ChatGPT outputs is dependent on particular views of mind or meaning, we conclude that it is appropriate to talk about ChatGPT-generated text as bullshit, and flag up why it matters that – rather than thinking of its untrue claims as lies or hallucinations – we call bullshit on ChatGPT.

What is ChatGPT?

Large language models are becoming increasingly good at carrying on convincing conversations. The most prominent large language model is OpenAI’s ChatGPT, so it’s the one we will focus on; however, what we say carries over to other neural network-based AI chatbots, including Google’s Bard chatbot, AnthropicAI’s Claude (claude.ai), and Meta’s LLaMa. Despite being merely complicated bits of software, these models are surprisingly human-like when discussing a wide variety of topics. Test it yourself: anyone can go to the OpenAI web interface and ask for a ream of text; typically, it produces text which is indistinguishable from that of your average English speaker or writer. The variety, length, and similarity to human-generated text that GPT-4 is capable of has convinced many commentators to think that this chatbot has finally cracked it: that this is real (as opposed to merely nominal) artificial intelligence, one step closer to a human-like mind housed in a silicon brain.

However, large language models, and other AI models like ChatGPT, are doing considerably less than what human brains do, and it is not clear whether they do what they do in the same way we do. The most obvious difference between an LLM and a human mind involves the goals of the system. Humans have a variety of goals and behaviours, most of which are extra-linguistic: we have basic physical desires, for things like food and sustenance; we have social goals and relationships; we have projects; and we create physical objects. Large language models simply aim to replicate human speech or writing. This means that their primary goal, insofar as they have one, is to produce human-like text. They do so by estimating the likelihood that a particular word will appear next, given the text that has come before.

The machine does this by constructing a massive statistical model, one which is based on large amounts of text, mostly taken from the internet. This is done with relatively little input from human researchers or the designers of the system; rather, the model is designed by constructing a large number of nodes, which act as probability functions for a word to appear in a text given its context and the text that has come before it. Rather than putting in these probability functions by hand, researchers feed the system large amounts of text and train it by having it make next-word predictions about this training data. They then give it positive or negative feedback depending on whether it predicts correctly. Given enough text, the machine can construct a statistical model giving the likelihood of the next word in a block of text all by itself.
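The next-word training described above can be illustrated with a deliberately tiny stand-in: a bigram model that counts, for each word in a corpus, which words follow it and how often, then turns those counts into next-word probabilities. The corpus and function names below are invented for illustration; real LLMs use neural networks conditioned on far longer contexts, not raw counts, but the prediction target is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often.

    A toy stand-in for the next-word-prediction training described in
    the text; it builds the statistical model directly from counts.
    """
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_probs(counts, word):
    """Turn raw follower counts into a probability distribution."""
    followers = counts[word]
    total = sum(followers.values())
    return {w: c / total for w, c in followers.items()}

# Hypothetical miniature corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(next_word_probs(model, "the"))  # "cat" is the likeliest continuation
```

Given this corpus, "the" is followed by "cat" twice and by "mat" and "fish" once each, so "cat" gets probability 0.5; scaled up by many orders of magnitude, this is the sense in which the model's output is "likely continuations" rather than asserted facts.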

This model associates with each word a vector which locates it in a high-dimensional abstract space, near other words that occur in similar contexts and far from those which don’t. When producing text, it looks at the previous string of words and constructs a different vector, locating the word’s surroundings – its context – near those that occur in the context of similar words. We can think of these heuristically as representing the meaning of the word and the content of its context. But because these spaces are constructed using machine learning by repeated statistical analysis of large amounts of text, we can’t know what sorts of similarity are represented by the dimensions of this high-dimensional vector space. Hence we do not know how similar they are to what we think of as meaning or context. The model then takes these two vectors and produces a set of likelihoods for the next word; it selects and places one of the more likely ones—though not always the most likely. Allowing the model to choose randomly amongst the more likely words produces more creative and human-like text; the parameter which controls this is called the ‘temperature’ of the model and increasing the model’s temperature makes it both seem more creative and more likely to produce falsehoods. The system then repeats the process until it has a recognizable, complete-looking response to whatever prompt it has been given.
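The role of the temperature parameter can be made concrete with a short sketch. The word scores below are made up for illustration; the mechanism is what matters: dividing scores by the temperature before normalizing sharpens the distribution when the temperature is low and flattens it when it is high, which is why raising the temperature makes rarer (and more often false) continuations easier to sample.

```python
import math
import random

def sample_with_temperature(scores, temperature, rng=random):
    """Sample a word from scores, sharpened or flattened by temperature.

    A minimal sketch of temperature sampling: divide each score by the
    temperature, apply a softmax, then draw from the resulting
    distribution. Returns the sampled word and the full distribution.
    """
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = {w: e / total for w, e in zip(scores, exps)}
    words = list(probs)
    word = rng.choices(words, weights=[probs[w] for w in words])[0]
    return word, probs

# Hypothetical next-word scores for "the cat sat on the ...".
scores = {"mat": 3.0, "moon": 1.0, "fridge": 0.5}
_, cold = sample_with_temperature(scores, temperature=0.2)
_, hot = sample_with_temperature(scores, temperature=5.0)
print(cold["mat"], hot["mat"])  # near 1.0 when cold, much lower when hot
```

At temperature 0.2 nearly all probability mass sits on the top-scoring word; at temperature 5.0 the distribution is much flatter, so the model more often "creatively" picks a low-scoring continuation.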

Given this process, it’s not surprising that LLMs have a problem with the truth. Their goal is to provide a normal-seeming response to a prompt, not to convey information that is helpful to their interlocutor. Examples of this are already numerous: for instance, a lawyer recently prepared his brief using ChatGPT and discovered to his chagrin that most of the cited cases were not real (Weiser, 2023 ); as Judge P. Kevin Castel put it, ChatGPT produced a text filled with “bogus judicial decisions, with bogus quotes and bogus internal citations”. Similarly, when computer science researchers tested ChatGPT’s ability to assist in academic writing, they found that it was able to produce surprisingly comprehensive and sometimes even accurate text on biological subjects given the right prompts. But when asked to produce evidence for its claims, “it provided five references dating to the early 2000s. None of the provided paper titles existed, and all provided PubMed IDs (PMIDs) were of different unrelated papers” (Alkaissi and McFarland, 2023 ). These errors can “snowball”: when the language model is asked to provide evidence for or a deeper explanation of a false claim, it rarely checks itself; instead it confidently produces more false but normal-sounding claims (Zhang et al. 2023 ). The accuracy problem for LLMs and other generative AIs is often referred to as the problem of “AI hallucination”: the chatbot seems to be hallucinating sources and facts that don’t exist. These inaccuracies are referred to as “hallucinations” in both technical (OpenAI, 2023 ) and popular contexts (Weise & Metz, 2023 ).

These errors are pretty minor if the only point of a chatbot is to mimic human speech or communication. But the companies designing and using these bots have grander plans: chatbots could replace Google or Bing searches with a more user-friendly conversational interface (Shah & Bender, 2022 ; Zhu et al., 2023 ), or assist doctors or therapists in medical contexts (Lysandrou, 2023 ). In these cases, accuracy is important and the errors represent a serious problem.

One attempted solution is to hook the chatbot up to some sort of database, search engine, or computational program that can answer the questions that the LLM gets wrong (Zhu et al., 2023 ). Unfortunately, this doesn’t work very well either. For example, when ChatGPT is connected to Wolfram Alpha, a powerful piece of mathematical software, it improves moderately in answering simple mathematical questions. But it still regularly gets things wrong, especially for questions which require multi-stage thinking (Davis & Aaronson, 2023 ). And when connected to search engines or other databases, the models are still fairly likely to provide fake information unless they are given very specific instructions–and even then things aren’t perfect (Lysandrou, 2023 ). OpenAI has plans to rectify this by training the model to do step by step reasoning (Lightman et al., 2023 ) but this is quite resource-intensive, and there is reason to be doubtful that it will completely solve the problem—nor is it clear that the result will be a large language model, rather than some broader form of AI.

Solutions such as connecting the LLM to a database don’t work because, if the models are trained on the database, then the words in the database affect the probability that the chatbot will add one or another word to the line of text it is generating. But this will only make it produce text similar to the text in the database; doing so will make it more likely that it reproduces the information in the database but by no means ensures that it will.

On the other hand, the LLM can also be connected to the database by allowing it to consult the database, in a way similar to the way it consults or talks to its human interlocutors. In this way, it can use the outputs of the database as text which it responds to and builds on. Here’s one way this can work: when a human interlocutor asks the language model a question, it can then translate the question into a query for the database. Then, it takes the response of the database as an input and builds a text from it to provide back to the human questioner. But this can misfire too, as the chatbots might ask the database the wrong question, or misinterpret its answer (Davis & Aaronson, 2023 ): “GPT-4 often struggles to formulate a problem in a way that Wolfram Alpha can accept or that produces useful output.” This is not unrelated to the fact that when the language model generates a query for the database or computational module, it does so in the same way it generates text for humans: by estimating the likelihood that some output “looks like” the kind of thing the database will correspond with.
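The query-translation misfire described above can be sketched schematically. Everything in this example is hypothetical: the "translation" step is naive string handling standing in for the model's generated query, and the database is a plain dictionary. The failure pattern is the point: when the generated query doesn't match what the database expects, the system still builds a fluent, confident answer from nothing.

```python
def answer_with_database(question, database):
    """Schematic question -> query -> database -> fluent answer loop.

    Hypothetical sketch of the consult-a-database pattern: a real
    system would have the language model generate the query and phrase
    the reply, and it would still answer fluently when the lookup
    fails, which is the misfire the text describes.
    """
    # Step 1: "translate" the question into a database query (naively).
    query = question.lower().rstrip("?").replace("what is the ", "")
    # Step 2: consult the database.
    result = database.get(query)
    # Step 3: build fluent text from whatever came back.
    if result is None:
        # Nothing matched, but the reply is still confident and fluent.
        return f"The {query} is widely considered remarkable."
    return f"The {query} is {result}."

db = {"boiling point of water": "100 °C at sea level"}
print(answer_with_database("What is the boiling point of water?", db))
print(answer_with_database("What is the boiling point of ethanol?", db))
```

The first call succeeds because the naive query happens to match a key; the second produces a smooth non-answer, mirroring how a chatbot wired to a real database can still emit convincing text untethered from the data.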

One might worry that these failed methods for improving the accuracy of chatbots are connected to the inapt metaphor of AI hallucinations. If the AI is misperceiving or hallucinating sources, one way to rectify this would be to put it in touch with real rather than hallucinated sources. But attempts to do so have failed.

The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text. So when they are provided with a database of some sort, they use this, in one way or another, to make their responses more convincing. But they are not in any real way attempting to convey or transmit the information in the database. As Chirag Shah and Emily Bender put it: “Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc. To the extent that they sometimes get the right answer to such questions is only because they happened to synthesize relevant strings out of what was in their training data. No reasoning is involved […] Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language” (Shah & Bender, 2022 ). These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.

Lies, ‘hallucinations’ and bullshit

Frankfurtian bullshit and lying.

Many popular discussions of ChatGPT call its false statements ‘hallucinations’. One also might think of these untruths as lies. However, we argue that this isn’t the right way to think about it. We will argue that these falsehoods aren’t hallucinations later – in Sect. 3.2.3. For now, we’ll discuss why these untruths aren’t lies but instead are bullshit.

The topic of lying has a rich philosophical literature. In ‘Lying’, Saint Augustine distinguished seven types of lies, and his view altered throughout his life. At one point, he defended the position that any instance of knowingly uttering a false utterance counts as a lie, so that even jokes containing false propositions, like –

I entered a pun competition and because I really wanted to win, I submitted ten entries. I was sure one of them would win, but no pun in ten did.

– would be regarded as a lie, as I have never entered such a competition (Proops & Sorensen, 2023 : 3). Later, this view is refined such that the speaker only lies if they intend the hearer to believe the utterance. The suggestion that the speaker must intend to deceive is a common stipulation in literature on lies. According to the “traditional account” of lying:

To lie =df to make a believed-false statement to another person with the intention that the other person believe that statement to be true (Mahon, 2015 ).

For our purposes this definition will suffice. Lies are generally frowned upon. But there are acts of misleading testimony which are criticisable, which do not fall under the umbrella of lying. These include spreading untrue gossip, which one mistakenly, but culpably, believes to be true. Another class of misleading testimony that has received particular attention from philosophers is that of bullshit. This everyday notion was analysed and introduced into the philosophical lexicon by Harry Frankfurt.

Frankfurt understands bullshit to be characterized not by an intent to deceive but instead by a reckless disregard for the truth. A student trying to sound knowledgeable without having done the reading, a political candidate saying things because they sound good to potential voters, and a dilettante trying to spin an interesting story: none of these people are trying to deceive, but they are also not trying to convey facts. To Frankfurt, they are bullshitting.

Like “lie”, “bullshit” is both a noun and a verb: an utterance produced can be a lie or an instance of bullshit, as can the act of producing these utterances. For an utterance to be classed as bullshit, it must not be accompanied by the explicit intentions that one has when lying, i.e., to cause a false belief in the hearer. Of course, it must also not be accompanied by the intentions characterised by an honest utterance. So far this story is entirely negative. Must any positive intentions be manifested in the utterer?

Throughout most of Frankfurt’s discussion, his characterisation of bullshit is negative. He notes that bullshit requires “no conviction” from the speaker about what the truth is ( 2005 : 55), that the bullshitter “pays no attention” to the truth ( 2005 : 61) and that they “may not deceive us, or even intend to do so, either about the facts or what he takes the facts to be” (2005: 54). Later, he describes the “defining feature” of bullshit as “ a lack of concern with truth, or an indifference to how things really are [our emphasis]” (2002: 340). These suggest a negative picture; that for an output to be classed as bullshit, it only needs to lack a certain relationship to the truth.

However, in places, a positive intention is presented. Frankfurt says what a bullshitter ….

“…does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to” (2005: 54).

This is somewhat surprising. It restricts what counts as bullshit to utterances accompanied by a higher-order deception. However, some of Frankfurt’s examples seem to lack this feature. When Fania Pascal describes her unwell state as “feeling like a dog that has just been run over” to her friend Wittgenstein, it stretches credulity to suggest that she was intending to deceive him about how much she knew about how run-over dogs felt. And given how the conditions for bullshit are typically described as negative, we might wonder whether the positive condition is really necessary.

Bullshit distinctions

Should utterances without an intention to deceive count as bullshit? One reason in favour of expanding the definition, or embracing a plurality of bullshit, is indicated by Frankfurt’s comments on the dangers of bullshit.

“In contrast [to merely unintelligible discourse], indifference to the truth is extremely dangerous. The conduct of civilized life, and the vitality of the institutions that are indispensable to it, depend very fundamentally on respect for the distinction between the true and the false. Insofar as the authority of this distinction is undermined by the prevalence of bullshit and by the mindlessly frivolous attitude that accepts the proliferation of bullshit as innocuous, an indispensable human treasure is squandered” (2002: 343).

These dangers seem to manifest regardless of whether there is an intention to deceive about the enterprise a speaker is engaged in. Compare the deceptive bullshitter, who does aim to mislead us about being in the truth-business, with someone who harbours no such aim, but just talks for the sake of talking (without care, or indeed any thought, about the truth-values of their utterances).

One of Frankfurt’s examples of bullshit seems better captured by the wider definition. He considers the advertising industry, which is “replete with instances of bullshit so unmitigated that they serve among the most indisputable and classic paradigms of the concept” ( 2005 :22). However, it seems to misconstrue many advertisers to portray their aims as to mislead about their agendas. They are expected to say misleading things. Frankfurt discusses Marlboro adverts with the message that smokers are as brave as cowboys ( 2002 : 341). Is it reasonable to suggest that the advertisers pretended to believe this?

Frankfurt does allow for multiple species of bullshit ( 2002 : 340). Following this suggestion, we propose to envisage bullshit as a genus, and Frankfurt’s intentional bullshit as one species within this genus. Other species may include that produced by the advertiser, who anticipates that no one will believe their utterances, or someone who has no intention one way or another about whether they mislead their audience. To that end, consider the following distinction:

Bullshit (general)

Any utterance produced where a speaker has indifference towards the truth of the utterance.

Hard bullshit

Bullshit produced with the intention to mislead the audience about the utterer’s agenda.

Soft bullshit

Bullshit produced without the intention to mislead the hearer regarding the utterer’s agenda.

The general notion of bullshit is useful: on some occasions, we might be confident that an utterance was either soft bullshit or hard bullshit, but be unclear which, given our ignorance of the speaker’s higher-order desires. In such a case, we can still call bullshit.

Frankfurt’s own explicit account, with the positive requirements about producer’s intentions, is hard bullshit, whereas soft bullshit seems to describe some of Frankfurt’s examples, such as that of Pascal’s conversation with Wittgenstein, or the work of advertising agencies. It might be helpful to situate these distinctions in the existing literature. On our view, hard bullshit is most closely aligned with Cassam ( 2019 ), and Frankfurt’s positive account, for the reason that all of these views hold that some intention must be present, rather than merely absent, for the utterance to be bullshit: a kind of “epistemic insouciance” or vicious attitude towards truth on Cassam’s view, and (as we have seen) an intent to mislead the hearer about the utterer’s agenda on Frankfurt’s view. In Sect. 3.2 we consider whether ChatGPT may be a hard bullshitter, but it is important to note that it seems to us that hard bullshit, like the two accounts cited here, requires one to take a stance on whether or not LLMs can be agents, and so comes with additional argumentative burdens.

Soft bullshit, by contrast, captures only Frankfurt’s negative requirement – that is, the indifference towards truth that we have classed as definitional of bullshit (general) – for the reasons given above. As we argue, ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users’) agenda.

It’s important to note that even this more modest kind of bullshitting will have the deleterious effects that concern Frankfurt: as he says, “indifference to the truth is extremely dangerous…by the mindlessly frivolous attitude that accepts the proliferation of bullshit as innocuous, an indispensable human treasure is squandered” (2002, p343). By treating ChatGPT and similar LLMs as being in any way concerned with truth, or by speaking metaphorically as if they make mistakes or suffer “hallucinations” in pursuit of true claims, we risk exactly this acceptance of bullshit, and this squandering of meaning – so, irrespective of whether or not ChatGPT is a hard or a soft bullshitter, it does produce bullshit, and it does matter.

With this distinction in hand, we’re now in a position to consider a worry of the following sort: Is ChatGPT hard bullshitting, soft bullshitting, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft bullshitting. However, the question of whether these chatbots are hard bullshitting is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions. We canvas a few ways in which ChatGPT can be understood to have the requisite intentions in Sect. 3.2.

ChatGPT is a soft bullshitter

We are not confident that chatbots can be correctly described as having any intentions at all, and we’ll go into this in more depth in the next Sect. (3.2). But we are quite certain that ChatGPT does not intend to convey truths, and so is a soft bullshitter. We can produce an easy argument by cases for this. Either ChatGPT has intentions or it doesn’t. If ChatGPT has no intentions at all, it trivially doesn’t intend to convey truths. So, it is indifferent to the truth value of its utterances and so is a soft bullshitter.

What if ChatGPT does have intentions? In Sect. 1, we argued that ChatGPT is not designed to produce true utterances; rather, it is designed to produce text which is indistinguishable from the text produced by humans. It is aimed at being convincing rather than accurate. The basic architecture of these models reveals this: they are designed to come up with a likely continuation of a string of text. It’s reasonable to assume that one way of being a likely continuation of a text is by being true; if humans are roughly more accurate than chance, true sentences will be more likely than false ones. This might make the chatbot more accurate than chance, but it does not give the chatbot any intention to convey truths. This is similar to standard cases of human bullshitters, who don’t care whether their utterances are true; good bullshit often contains some degree of truth, and that’s part of what makes it convincing. A bullshitter can be more accurate than chance while still being indifferent to the truth of their utterances. We conclude that, even if the chatbot can be described as having intentions, it is indifferent to whether its utterances are true. It does not and cannot care about the truth of its output.

Presumably ChatGPT can’t care about conveying or hiding the truth, since it can’t care about anything. So, just as a matter of conceptual necessity, it meets one of Frankfurt’s criteria for bullshit. However, this only gets us so far – a rock can’t care about anything either, and it would be patently absurd to suggest that this means rocks are bullshitters (see Footnote 6). Similarly, books can contain bullshit, but they are not themselves bullshitters. Unlike rocks – or even books – ChatGPT itself produces text, and looks like it performs speech acts independently of its users and designers. And while there is considerable disagreement concerning whether ChatGPT has intentions, it’s widely agreed that the sentences it produces are (typically) meaningful (see e.g. Mandelkern and Linzen 2023).

ChatGPT functions not to convey truth or falsehood but rather to convince the reader of – to use Colbert’s apt coinage – the truthiness of its statement, and ChatGPT is designed in such a way as to make attempts at bullshit efficacious (in a way that pens, dictionaries, etc., are not). So, it seems that at minimum, ChatGPT is a soft bullshitter: if we take it not to have intentions, there isn’t any attempt to mislead about the attitude towards truth, but it is nonetheless engaged in the business of outputting utterances that look as if they’re truth-apt. We conclude that ChatGPT is a soft bullshitter.

ChatGPT as hard bullshit

But is ChatGPT a hard bullshitter ? A critic might object, it is simply inappropriate to think of programs like ChatGPT as hard bullshitters, because (i) they are not agents, or relatedly, (ii) they do not and cannot intend anything whatsoever.

We think this is too fast. First, whether or not ChatGPT has agency, its creators and users do. And what they produce with it, we will argue, is bullshit. Second, we will argue that, regardless of whether it has agency, it does have a function; this function gives it characteristic goals, and possibly even intentions, which align with our definition of hard bullshit.

Before moving on, we should say what we mean when we ask whether ChatGPT is an agent. For the purposes of this paper, the central question is whether ChatGPT has intentions and/or beliefs. Does it intend to deceive? Can it, in any literal sense, be said to have goals or aims? If so, does it intend to deceive us about the content of its utterances, or merely have the goal to appear to be a competent speaker? Does it have beliefs—internal representational states which aim to track the truth? If so, do its utterances match those beliefs (in which case its false statements might be something like hallucinations) or are its utterances not matched to the beliefs—in which case they are likely to be either lies or bullshit? We will consider these questions in more depth in Sect. 3.2.2.

There are other philosophically important aspects of agenthood that we will not be considering. We won’t be considering whether ChatGPT makes decisions, has or lacks autonomy, or is conscious; we also won’t worry whether ChatGPT is morally responsible for its statements or its actions (if it has any of those).

ChatGPT is a bullshit machine

We will argue that even if ChatGPT is not, itself, a hard bullshitter, it is nonetheless a bullshit machine. The bullshitter is the person using it, since they (i) don’t care about the truth of what it says, and (ii) want the reader to believe what the application outputs. On Frankfurt’s view, bullshit is bullshit even if uttered with no intent to bullshit: if something is bullshit to start with, then its repetition “is bullshit as he [or it] repeats it, insofar as it was originated by someone who was unconcerned with whether what he was saying is true or false” ( 2002 , p340).

This just pushes the question back to who the originator is, though: take the (increasingly frequent) example of the student essay created by ChatGPT. If the student cared about accuracy and truth, they would not use a program that infamously makes up sources whole-cloth. Equally, though, if they give it a prompt to produce an essay on philosophy of science and it produces a recipe for Bakewell tarts, then it won’t have the desired effect. So the idea of ChatGPT as a bullshit machine seems right, but also as if it’s missing something: someone can produce bullshit using their voice, a pen or a word processor, after all, but we don’t standardly think of these things as being bullshit machines, or of outputting bullshit in any particularly interesting way – conversely, there does seem to be something particular to ChatGPT, to do with the way that it operates, which makes it more than a mere tool, and which suggests that it might appropriately be thought of as an originator of bullshit. In short, it doesn’t seem quite right either to think of ChatGPT as analogous to a pen (which can be used for bullshit, but can create nothing without deliberate and wholly agent-directed action) nor to a bullshitting human (who can intend and produce bullshit on their own initiative).

The idea of ChatGPT as a bullshit machine is a helpful one when combined with the distinction between hard and soft bullshit. Reaching again for the example of the dodgy student paper: we’ve all, I take it, marked papers where it was obvious that a dictionary or thesaurus had been deployed with a crushing lack of subtlety; where fifty-dollar words are used not because they’re the best choice, nor even because they serve to obfuscate the truth, but simply because the author wants to convey an impression of understanding and sophistication. It would be inappropriate to call the dictionary a bullshit artist in this case; but it would not be inappropriate to call the result bullshit. So perhaps we should, strictly, say not that ChatGPT is bullshit but that it outputs bullshit in a way that goes beyond being simply a vector of bullshit: it does not and cannot care about the truth of its output, and the person using it does so not to convey truth or falsehood but rather to convince the hearer that the text was written by an interested and attentive agent.

ChatGPT may be a hard bullshitter

Is ChatGPT itself a hard bullshitter? If so, it must have intentions or goals: it must intend to deceive its listener, not about the content of its statements, but instead about its agenda. Recall that hard bullshitters, like the unprepared student or the incompetent politician, don’t care whether their statements are true or false, but do intend to deceive their audience about what they are doing. We don’t think that ChatGPT is an agent or has intentions in precisely the same way that humans do (see Levenstein and Herrmann ( forthcoming ) for a discussion of the issues here). But when speaking loosely it is remarkably easy to use intentional language to describe it: what is ChatGPT trying to do? Does it care whether the text it produces is accurate? We will argue that there is a robust, although perhaps not literal, sense in which ChatGPT does intend to deceive us about its agenda: its goal is not to convince us of the content of its utterances, but instead to portray itself as a ‘normal’ interlocutor like ourselves. By contrast, there is no similarly strong sense in which ChatGPT confabulates, lies, or hallucinates.

Our case will be simple: ChatGPT’s primary function is to imitate human speech. If this function is intentional, it is precisely the sort of intention that is required for an agent to be a hard bullshitter: in performing the function, ChatGPT is attempting to deceive the audience about its agenda. Specifically, it’s trying to seem like something that has an agenda, when in many cases it does not. We’ll discuss here whether this function gives rise to, or is best thought of, as an intention. In the next Sect. (3.2.3), we will argue that ChatGPT has no similar function or intention which would justify calling it a confabulator, liar, or hallucinator.

How do we know that ChatGPT functions as a hard bullshitter? Programs like ChatGPT are designed to do a task, and this task is remarkably like what Frankfurt thinks the bullshitter intends, namely to deceive the reader about the nature of the enterprise – in this case, to deceive the reader into thinking that they’re reading something produced by a being with intentions and beliefs.

ChatGPT’s text production algorithm was developed and honed in a process quite similar to artificial selection. Functions and selection processes have the same sort of directedness that human intentions do; naturalistic philosophers of mind have long connected them to the intentionality of human and animal mental states. If ChatGPT is understood as having intentions or intention-like states in this way, its intention is to present itself in a certain way (as a conversational agent or interlocutor) rather than to represent and convey facts. In other words, it has the intentions we associate with hard bullshitting.

One way we can think of ChatGPT as having intentions is by adopting Dennett’s intentional stance towards it. Dennett ( 1987 : 17) describes the intentional stance as a way of predicting the behaviour of systems whose purpose we don’t already know.

“To adopt the intentional stance […] is to decide – tentatively, of course – to attempt to characterize, predict, and explain […] behavior by using intentional idioms, such as ‘believes’ and ‘wants,’ a practice that assumes or presupposes the rationality” of the target system (Dennett, 1983 : 345).

Dennett suggests that if we know why a system was designed, we can make predictions on the basis of its design (1987). While we do know that ChatGPT was designed to chat, its exact algorithm and the way it produces its responses have been developed by machine learning, so we do not know the precise details of how it works and what it does. Under this ignorance it is tempting to bring in intentional descriptions to help us understand and predict what ChatGPT is doing.

When we adopt the intentional stance, we will be making bad predictions if we attribute any desire to convey truth to ChatGPT. Similarly, attributing “hallucinations” to ChatGPT will lead us to predict as if it has perceived things that aren’t there, when what it is doing is much more akin to making something up because it sounds about right. The former intentional attribution will lead us to try to correct its beliefs and fix its inputs, a strategy which has had limited success, if any. On the other hand, if we attribute to ChatGPT the intentions of a hard bullshitter, we will be better able to diagnose the situations in which it will make mistakes and convey falsehoods. If ChatGPT is trying to do anything, it is trying to portray itself as a person.

Since this reason for thinking ChatGPT is a hard bullshitter involves committing to one or more controversial views on mind and meaning, it is more tendentious than simply thinking of it as a bullshit machine; but regardless of whether or not the program has intentions, there clearly is an attempt to deceive the hearer or reader about the nature of the enterprise somewhere along the line, and in our view that justifies calling the output hard bullshit.

So, though it’s worth making the caveat, it doesn’t seem to us that it significantly affects how we should think of and talk about ChatGPT and bullshit: the person using it to turn out some paper or talk isn’t concerned either with conveying or covering up the truth (since both of those require attention to what the truth actually is ), and neither is the system itself. Minimally, it churns out soft bullshit, and, given certain controversial assumptions about the nature of intentional ascription, it produces hard bullshit; the specific texture of the bullshit is not, for our purposes, important: either way, ChatGPT is a bullshitter.

Bullshit? Hallucinations? Confabulations? The need for new terminology

We have argued that we should use the terminology of bullshit, rather than “hallucinations” to describe the utterances produced by ChatGPT. The suggestion that “hallucination” terminology is inappropriate has also been noted by Edwards ( 2023 ), who favours the term “confabulation” instead. Why is our proposal better than this or other alternatives?

We object to the term hallucination because it carries certain misleading implications. When someone hallucinates they have a non-standard perceptual experience, but do not actually perceive some feature of the world (Macpherson, 2013 ), where “perceive” is understood as a success term, such that they do not actually perceive the object or property. This term is inappropriate for LLMs for a variety of reasons. First, as Edwards ( 2023 ) points out, the term hallucination anthropomorphises the LLMs. Edwards also notes that attributing resulting problems to “hallucinations” of the models may allow creators to “blame the AI model for faulty outputs instead of taking responsibility for the outputs themselves”, and we may be wary of such abdications of responsibility. LLMs do not perceive, so they surely do not “mis-perceive”. Second, what occurs in the case of an LLM delivering false utterances is not an unusual or deviant form of the process it usually goes through (as some claim is the case in hallucinations, e.g., disjunctivists about perception). The very same process occurs when its outputs happen to be true.

So much for “hallucinations”. What about Edwards’ preferred term, “confabulation”? Edwards ( 2023 ) says:

In human psychology, a “confabulation” occurs when someone’s memory has a gap and the brain convincingly fills in the rest without intending to deceive others. ChatGPT does not work like the human brain, but the term “confabulation” arguably serves as a better metaphor because there’s a creative gap-filling principle at work […].

As Edwards notes, this is imperfect. Once again, the use of a human psychological term risks anthropomorphising the LLMs.

This term also suggests that there is something exceptional occurring when the LLM makes a false utterance, i.e., that on these occasions – and only these occasions – it “fills in” a gap in memory with something false. This too is misleading. Even when ChatGPT does give us correct answers, its process is one of predicting the next token. In our view, the term falsely indicates that ChatGPT is, in general, attempting to convey accurate information in its utterances. But there are strong reasons to think that it does not have beliefs that it is intending to share in general – see, for example, Levenstein and Herrmann ( forthcoming ). Where it does track truth, it does so indirectly, and incidentally.

This is why we favour characterising ChatGPT as a bullshit machine. This terminology avoids the implications that perceiving or remembering is going on in the workings of the LLM. We can also describe it as bullshitting whenever it produces outputs. As with the human bullshitter, some of its outputs will likely be true, while others will not. And as with the human bullshitter, we should be wary of relying upon any of these outputs.

Investors, policymakers, and members of the general public make decisions on how to treat these machines and how to react to them based not on a deep technical understanding of how they work, but on the often metaphorical way in which their abilities and function are communicated. Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting.

Calling chatbot inaccuracies ‘hallucinations’ feeds into overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ rather than ‘hallucinations’ isn’t just more accurate (as we’ve argued); it’s good science and technology communication in an area that sorely needs it.

A particularly surprising position is espoused by Fichte, who regards as lying not only lies of omission, but knowingly not correcting someone who is operating under a falsehood. For instance, if I were to wear a wig, and someone believed this to be my real hair, Fichte regards this as a lie, for which I am culpable. See Bacin ( 2021 ) for further discussion of Fichte’s position.

Originally published in Raritan , VI(2) in 1986. References to that work here are from the 2005 book version.

In making this comment, Frankfurt concedes that what Cohen calls “bullshit” is also worthy of the name. In Cohen’s use ( 2002 ), bullshit is a type of unclarifiable text, which he associates with French Marxists. Several other authors have also explored this area in various ways in recent years, each adding valuable nuggets to the debate. Dennis Whitcomb and Kenny Easwaran expand the domains to which “bullshit” can be applied. Whitcomb argues there can be bullshit questions (as well as propositions), whereas Easwaran argues that we can fruitfully view some activities as bullshit ( 2023 ).

While we accept that these offer valuable streaks of bullshit insight, we will restrict our discussion to the Frankfurtian framework. For those who want to wade further into these distinctions, Neil Levy’s Philosophy, Bullshit, and Peer Review ( 2023 ) offers a taxonomical overview of the bullshit out there.

This need not undermine their goal. The advertiser may intend to impress associations (e.g., positive thoughts like “cowboys” or “brave” with their cigarette brand) upon their audience, or reinforce/instil brand recognition.

Frankfurt describes this kind of scenario as occurring in a “bull session”: “Each of the contributors to a bull session relies…upon a general recognition that what he expresses or says is not to be understood as being what he means wholeheartedly or believes unequivocally to be true” ( 2005 : 37). Yet Frankfurt claims that the contents of bull sessions are distinct from bullshit.

It’s worth noting that something like the distinction between hard and soft bullshitting we draw also occurs in Cohen ( 2002 ): he suggests that we might think of someone as a bullshitter as “a person who aims at bullshit, however frequently or infrequently he hits his target”, or if they are merely “disposed to bullshit: for whatever reason, to produce a lot of unclarifiable stuff” (p334). While we do not adopt Cohen’s account here, the parallels between his characterisation and our own are striking.

Of course, rocks also can’t express propositions – but then, part of the worry here is whether ChatGPT actually is expressing propositions, or is simply a means through which agents express propositions. A further worry is that we shouldn’t even see ChatGPT as expressing propositions - perhaps there are no communicative intentions, and so we should see the outputs as meaningless. Even accepting this, we can still meaningfully talk about them as expressing propositions. This proposal - fictionalism about chatbots - has recently been discussed by Mallory ( 2023 ).

Alkaissi, H., & McFarlane, S. I. (2023, February 19). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179 .

Bacin, S. (2021). My duties and the morality of others: Lying, truth and the good example in Fichte’s normative perfectionism. In S. Bacin, & O. Ware (Eds.), Fichte’s system of Ethics: A critical guide . Cambridge University Press.

Cassam, Q. (2019). Vices of the mind . Oxford University Press.

Cohen, G. A. (2002). Deeper into bullshit. In S. Buss, & L. Overton (Eds.), The contours of Agency: Essays on themes from Harry Frankfurt . MIT Press.

Davis, E., & Aaronson, S. (2023). Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems. arXiv preprint arXiv:2308.05713v2.

Dennett, D. C. (1983). Intentional systems in cognitive ethology: The panglossian paradigm defended. Behavioral and Brain Sciences , 6 , 343–390.

Dennett, D. C. (1987). The intentional stance . MIT Press.

Whitcomb, D. (2023). Bullshit questions. Analysis , 83 (2), 299–304.

Easwaran, K. (2023). Bullshit activities. Analytic Philosophy , 00 , 1–23. https://doi.org/10.1111/phib.12328 .

Edwards, B. (2023). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica . https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/ , accessed 19th April, 2024.

Frankfurt, H. (2002). Reply to cohen. In S. Buss, & L. Overton (Eds.), The contours of agency: Essays on themes from Harry Frankfurt . MIT Press.

Frankfurt, H. (2005). On bullshit . Princeton University Press.

Knight, W. (2023). Some glimpse AGI in ChatGPT. Others call it a mirage. Wired , August 18, 2023. Accessed via https://www.wired.com/story/chatgpt-agi-intelligence/ .

Levenstein, B. A., & Herrmann, D. A. (forthcoming). Still no lie detector for language models: Probing empirical and conceptual roadblocks. Philosophical Studies , 1–27.

Levy, N. (2023). Philosophy, bullshit, and peer review . Cambridge University Press.

Lightman, H., et al. (2023). Let’s verify step by step. arXiv preprint arXiv:2305.20050.

Lysandrou (2023). Comparative analysis of drug-GPT and ChatGPT LLMs for healthcare insights: Evaluating accuracy and relevance in patient and HCP contexts. arXiv preprint arXiv:2307.16850v1.

Macpherson, F. (2013). The philosophy and psychology of hallucination: an introduction, in Hallucination , Macpherson and Platchias (Eds.), London: MIT Press.

Mahon, J. E. (2015). The definition of lying and deception. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (Ed.), https://plato.stanford.edu/archives/win2016/entries/lying-definition/ .

Mallory, F. (2023). Fictionalism about chatbots. Ergo , 10 (38), 1082–1100.

Mandelkern, M., & Linzen, T. (2023). Do language models’ words refer? arXiv preprint arXiv:2308.05576.

OpenAI (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774v3.

Proops, I., & Sorensen, R. (2023). Destigmatizing the exegetical attribution of lies: The case of Kant. Pacific Philosophical Quarterly . https://doi.org/10.1111/papq.12442 .

Sarkar, A. (2023). ChatGPT 5 is on track to attain artificial general intelligence. The Statesman , April 12, 2023. Accessed via https://www.thestatesman.com/supplements/science_supplements/chatgpt-5-is-on-track-to-attain-artificial-general-intelligence-1503171366.html .

Shah, C., & Bender, E. M. (2022). Situating search. In CHIIR ’22: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval (pp. 221–232). https://doi.org/10.1145/3498366.3505816 .

Weise, K., & Metz, C. (2023). When AI chatbots hallucinate. New York Times, May 9, 2023. Accessed via https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html .

Weiser, B. (2023). Here’s what happens when your lawyer uses ChatGPT. New York Times , May 23, 2023. Accessed via https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html .

Zhang (2023). How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534v1.

Zhu, T., et al. (2023). Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.17107v2.

Acknowledgements

Thanks to Neil McDonnell, Bryan Pickel, Fenner Tanswell, and the University of Glasgow’s Large Language Model reading group for helpful discussion and comments.

Author information

Authors and Affiliations

University of Glasgow, Glasgow, Scotland

Michael Townsen Hicks, James Humphries & Joe Slater

Corresponding author

Correspondence to Michael Townsen Hicks .

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26 , 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

Download citation

Published : 08 June 2024

DOI : https://doi.org/10.1007/s10676-024-09775-5


Keywords: Artificial intelligence, Large language models


Building a Platform Business Requires Balance—Lessons from Salesforce

Platform business models have become highly popular; they are used by half of the world’s ten largest companies by market capitalization. The challenge for established companies is that running a platform business is different from running a product business. A platform business requires building an ecosystem of various constituents with differing interests: customers, the company’s internal product teams, and partners. Based on an in-depth case study of Salesforce Platform, this briefing illustrates one approach to balancing the interests of these constituents.

The June 2024 research briefing is read by author Martin Mocker.

Follow the research briefing podcast series on SoundCloud.

© 2024 MIT Center for Information Systems Research, Mocker and Sebastian. MIT CISR Research Briefings are published monthly to update the center’s member organizations on current research projects.

Related Publications

Research Briefing

Talking Points: Building a Successful Platform Business

Working Paper (Case Study): How Salesforce Built Its Platform Business

Designed for Digital: How to Architect Your Business for Sustained Success

About the Researchers

Martin Mocker, Professor, ESB Business School, Reutlingen University and Academic Research Fellow, MIT CISR

Ina M. Sebastian, Research Scientist, MIT Center for Information Systems Research (CISR)

MIT Center for Information Systems Research (CISR)

Founded in 1974 and grounded in MIT's tradition of combining academic knowledge and practical purpose, MIT CISR helps executives meet the challenge of leading increasingly digital and data-driven organizations. We work directly with digital leaders, executives, and boards to develop our insights. Our consortium forms a global community that comprises more than seventy-five organizations.

MIT CISR Associate Members

MIT CISR wishes to thank all of our associate members for their support and contributions.

MIT CISR's Mission

MIT CISR helps executives meet the challenge of leading increasingly digital and data-driven organizations. We provide insights on how organizations effectively realize value from approaches such as digital business transformation, data monetization, business ecosystems, and the digital workplace. Founded in 1974 and grounded in MIT’s tradition of combining academic knowledge and practical purpose, we work directly with digital leaders, executives, and boards to develop our insights. Our consortium forms a global community that comprises more than seventy-five organizations.



  • Active Journals
  • Find a Journal
  • Proceedings Series
  • For Authors
  • For Reviewers
  • For Editors
  • For Librarians
  • For Publishers
  • For Societies
  • For Conference Organizers
  • Open Access Policy
  • Institutional Open Access Program
  • Special Issues Guidelines
  • Editorial Process
  • Research and Publication Ethics
  • Article Processing Charges
  • Testimonials
  • Preprints.org
  • SciProfiles
  • Encyclopedia

systems-logo

Article Menu

research paper of information systems

  • Subscribe SciFeed
  • Recommended Articles
  • Google Scholar
  • on Google Scholar
  • Table of Contents

Find support for a specific problem in the support section of our website.

Please let us know what you think of our products and services.

Visit our dedicated information section to learn more about MDPI.

JSmol Viewer

Integrating Trade-In Strategies for Optimal Pre-Positioning Decisions in Relief Supply-Chain Systems


Share and Cite

Ju, Y.; Hou, H.; Yang, J.; Ren, Y.; Yang, J. Integrating Trade-In Strategies for Optimal Pre-Positioning Decisions in Relief Supply-Chain Systems. Systems 2024, 12, 216. https://doi.org/10.3390/systems12060216


Closing Gaps in Data-Sharing Is Critical for Public Health

Updated federal strategy could also ease burdens on agencies, providers.


Every day, public health officials use data from each other and from doctors, hospitals, and health systems to protect people from infectious and environmental threats. When these officials receive timely, accurate, and complete information from health care providers, they can more clearly detect disease, prevent its spread, and help people connect to care. To improve the quality of this information, the U.S. Centers for Disease Control and Prevention developed the Public Health Data Strategy (PHDS), which was updated in April, to facilitate data-sharing between these many stakeholders. As the director of the CDC’s Office of Public Health Data, Surveillance, and Technology, Dr. Jennifer Layden is responsible for leading, coordinating, and executing the strategy.  

This interview has been edited for clarity and length.

What is the Public Health Data Strategy?

It’s CDC’s two-year plan to provide accountability for the data, technology, policy, and administrative actions necessary to meet our public health data goals. We aim to address challenges in data exchange between health care organizations and public health authorities, moving us toward one interconnected system that protects and improves health.

And what are the main goals of this effort?

The PHDS has four main goals: strengthen the core of public health data; accelerate access to analytic and automated solutions that support public health investigations and advance health equity; visualize and share insights to inform public health action; and advance more open and interoperable public health data. The plan sets milestones that help public health partners, health care organizations and providers, and the public understand what’s being done and what progress is being made toward these goals.

What barriers does the strategy aim to address?

Electronic health care records (EHRs) and associated efforts at interoperability [the successful exchange of health information between different systems] have seen more than $35 billion of investment over the last couple of decades. This has led to robust and widespread use of EHRs, adoption of health IT standards, and improved data-sharing across health care. Public health, however, hasn’t seen the same investment. And this has contributed to gaps in the completeness of data and the timely exchange of information to support public health.

Can you share an example of these gaps?

At the beginning of the COVID pandemic, we had race and ethnicity data on less than 60% of cases. New investments in public health, largely tied to the COVID response, allowed for advanced connectivity with the use of electronic case reporting, or eCR [the automated electronic reporting of individual cases of illness], as well as electronic laboratory reporting [the automated sharing of lab reports]. This led to a rapid improvement in the completeness of race and ethnicity data, which improved the nation’s ability to identify disparities in COVID burden and severity.

As we work to transform public health systems, we need to leverage existing health IT standards and technical approaches to ensure better connections between public health and health care. This benefits us all through more streamlined data-sharing, reduced burden on health care facilities and providers, and faster detection of health threats and outbreaks. And ultimately, improved bi-directional data-sharing [where data is available to health care providers who generate the information and health departments that receive the data] will benefit patients and those who care for them.

What progress have you seen so far?

The PHDS was launched in 2023 with 15 milestones, such as increasing the number of critical access hospitals sending electronic case reports as well as increasing the number of jurisdictions inputting eCR data into disease surveillance systems. Twelve were met, and work continues on the remaining three. The milestones reached in 2023 have made it easier to share information, provided access to modern tools, and improved the real-time monitoring of health threats, all of which strengthened public health data systems. The latest version of the PHDS includes updated 2024 milestones as well as new ones for 2025 that will advance the nation’s public health data capabilities. Milestones for the next two years focus on improving the completeness and coverage of eCR, syndromic surveillance [which uses anonymized emergency room data to identify emerging threats quickly], and data on mortality and wastewater. [When wastewater contains viruses, bacteria, and other infectious diseases circulating in a community, it can provide early warning even if people don’t have symptoms or seek care.]

How will the strategy make it easier for public health agencies and health care to share data?

Collaboration is at the heart of the new milestones. The updated strategy focuses on accelerating the adoption of eCR to ensure timely detection of illnesses, expanding data-sharing initiatives to improve public health responses and decision-making, and driving innovations in analytics to address health disparities and promote health equity.

These new milestones aim to reduce burdens on public health agencies by reducing the need to manually input case data into disease surveillance systems and will mitigate the overhead for managing individual point-to-point connections with labs to support eCR. The strategy will also let public health agencies more effectively identify and address health disparities based on a wider range of health equity measures.

In addition, the Workforce Accelerator Initiative, launched by the CDC Foundation, will recruit, place and support more than 100 technical experts in public health agencies to achieve the strategy’s goals.

What other partners will be engaged to accomplish the strategy?

Successful implementation will require collaboration with public health agencies, public health partners, private industry, health care partners, and other federal agencies, as well as sustained resources. We will directly engage with public health agencies to understand their priority needs and work with public health partners to support their progress toward key milestones. We’ll also collaborate with private partners to encourage dialogue and promote data exchange pilots, as well as with providers and labs to gather feedback on how we can better support their progress.

The CDC is working with the Office of the National Coordinator for Health Information Technology (ONC) to create a common approach for data exchange among health care, public health agencies, and federal agencies. This effort involves a partnership with representatives from health care, health IT, states, and federal organizations that sets up an exchange system to make it easier for providers to send data to public health agencies and for public health agencies to receive it. The collaboration will provide data standards, common agreements, and exchange networks that will assist public health agencies in their data exchange needs. We’ll continue to collaborate with ONC, as well as the Centers for Medicare & Medicaid Services, to advance a shared understanding of activities that support our milestones and will reach out to other federal agencies to synergize our efforts.

What will success look like?

We have ambitious goals to strengthen the connections between public health and health care. And other federal initiatives, like the movement toward the Trusted Exchange Framework and Common Agreement (TEFCA), adoption of USCDI+, and new data standards lay out a pathway to making this a reality.

In five years, we aim to have 75% of state and big city jurisdictions, along with CDC, connected to TEFCA. This can eliminate inefficient point-to-point interfaces and enable more reliable exchange of real-time information. We also want to have 90% of emergency room data connected and flowing to public health agencies and envision a future where eCR has replaced most manual reporting of cases of infectious diseases and other conditions.

And big picture, what would this accomplish?

Reaching these goals would mean having more complete data and faster reporting of threats that could put our nation at risk. This will lead to better detection of outbreaks, faster response times, and healthier communities—and ultimately result in an integrated public health ecosystem that produces and uses data to support healthier communities and keep people safe.

Sheri Doyle


Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

MLA General Format 


Welcome to the Purdue OWL

This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

MLA Style specifies guidelines for formatting manuscripts and citing research in writing. MLA Style also provides writers with a system for referencing their sources through parenthetical citation in their essays and Works Cited pages. 

Writers who properly use MLA also build their credibility by demonstrating accountability to their source material. Most importantly, the use of MLA style can protect writers from accusations of plagiarism, which is the purposeful or accidental uncredited use of source material produced by other writers. 

If you are asked to use MLA format, be sure to consult the MLA Handbook (9th edition). Publishing scholars and graduate students should also consult the MLA Style Manual and Guide to Scholarly Publishing (3rd edition). The MLA Handbook is available in most writing centers and reference libraries. It is also widely available in bookstores, libraries, and at the MLA web site. See the Additional Resources section of this page for a list of helpful books and sites about using MLA Style.

Paper Format

The preparation of papers and manuscripts in MLA Style is covered in part four of the MLA Style Manual. Below are some basic guidelines for formatting a paper in MLA Style:

General Guidelines

  • Type your paper on a computer and print it out on standard, white 8.5 x 11-inch paper.
  • Double-space the text of your paper and use a legible font (e.g. Times New Roman). Whatever font you choose, MLA recommends that the regular and italics type styles contrast enough that they are each distinct from one another. The font size should be 12 pt.
  • Leave only one space after periods or other punctuation marks (unless otherwise prompted by your instructor).
  • Set the margins of your document to 1 inch on all sides.
  • Indent the first line of each paragraph one half-inch from the left margin. MLA recommends that you use the “Tab” key as opposed to pushing the space bar five times.
  • Create a header that numbers all pages consecutively in the upper right-hand corner, one-half inch from the top and flush with the right margin. (Note: Your instructor may ask that you omit the number on your first page. Always follow your instructor's guidelines.)
  • Use italics throughout your essay to indicate the titles of longer works and, only when absolutely necessary, provide emphasis.
  • If you have any endnotes, include them on a separate page before your Works Cited page. Entitle the section Notes (centered, unformatted).

Formatting the First Page of Your Paper

  • Do not make a title page for your paper unless specifically requested or the paper is assigned as a group project. In the case of a group project, list all names of the contributors, giving each name its own line in the header, followed by the remaining MLA header requirements as described below. Format the remainder of the page as requested by the instructor.
  • In the upper left-hand corner of the first page, list your name, your instructor's name, the course, and the date. Again, be sure to use double-spaced text.
  • Double space again and center the title. Do not underline, italicize, or place your title in quotation marks. Write the title in Title Case (standard capitalization), not in all capital letters.
  • Use quotation marks and/or italics when referring to other works in your title, just as you would in your text. For example: Fear and Loathing in Las Vegas as Morality Play; Human Weariness in "After Apple Picking"
  • Double space between the title and the first line of the text.
  • Create a header in the upper right-hand corner that includes your last name, followed by a space with a page number. Number all pages consecutively with Arabic numerals (1, 2, 3, 4, etc.), one-half inch from the top and flush with the right margin. (Note: Your instructor or other readers may ask that you omit the last name/page number header on your first page. Always follow instructor guidelines.)
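The heading and running-header requirements above can be mechanized; here is a minimal Python sketch that assembles the four-line first-page heading and the "last name, page number" header (the student, instructor, and course names below are hypothetical placeholders, not MLA examples):

```python
from datetime import date

def mla_first_page(student, instructor, course, last_name, page=1, when=None):
    """Build the MLA first-page heading block and the running header.

    MLA writes dates as day Month year, e.g. "12 May 2024".
    """
    when = when or date.today()
    heading = "\n".join([
        student,                                  # your name
        instructor,                               # instructor's name
        course,                                   # course
        f"{when.day} {when.strftime('%B %Y')}",   # date, e.g. "12 May 2024"
    ])
    running_header = f"{last_name} {page}"        # upper right corner, e.g. "Smith 1"
    return heading, running_header

# Hypothetical example values:
heading, header = mla_first_page(
    "Jane Smith", "Dr. Alan Jones", "English 101", "Smith",
    when=date(2024, 5, 12),
)
```

Here `header` is "Smith 1" and the last heading line is "12 May 2024"; the sketch only produces the text, not the double-spacing or margin settings, which belong to your word processor.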

Here is a sample of the first page of a paper in MLA style:


The First Page of an MLA Paper

Section Headings

Writers sometimes use section headings to improve a document’s readability. These sections may include individual chapters or other named parts of a book or essay.

MLA recommends that when dividing an essay into sections you number those sections with an Arabic number and a period followed by a space and the section name.
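That numbering convention, an Arabic numeral, a period, a space, then the section name, is simple to apply programmatically; a minimal sketch:

```python
def number_sections(names):
    """Return MLA-style numbered section headings:
    Arabic numeral, period, space, section name."""
    return [f"{i}. {name}" for i, name in enumerate(names, start=1)]

sections = number_sections(["Introduction", "Methods", "Conclusion"])
# → ["1. Introduction", "2. Methods", "3. Conclusion"]
```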

MLA does not have a prescribed system of headings for books (for more information on headings, please see page 146 in the MLA Style Manual and Guide to Scholarly Publishing, 3rd edition). If you are only using one level of headings, meaning that all of the sections are distinct and parallel and have no additional sections that fit within them, MLA recommends that these sections resemble one another grammatically. For instance, if your headings are typically short phrases, make all of the headings short phrases (and not, for example, full sentences). Otherwise, the formatting is up to you. It should, however, be consistent throughout the document.

If you employ multiple levels of headings (some of your sections have sections within sections), you may want to provide a key of your chosen level headings and their formatting to your instructor or editor.

Sample Section Headings

The following sample headings are meant to be used only as a reference. You may employ whatever system of formatting that works best for you so long as it remains consistent throughout the document.

Formatted, unnumbered:

Level 1 Heading: bold, flush left

Level 2 Heading: italics, flush left

Level 3 Heading: centered, bold

Level 4 Heading: centered, italics

Level 5 Heading: underlined, flush left
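If you generate documents with a toolchain, the five sample levels above can be recorded as a simple lookup table; the dictionary keys below are illustrative choices for this sketch, not an MLA requirement:

```python
# One entry per sample heading level shown above.
HEADING_STYLES = {
    1: {"emphasis": "bold", "align": "left"},
    2: {"emphasis": "italics", "align": "left"},
    3: {"emphasis": "bold", "align": "center"},
    4: {"emphasis": "italics", "align": "center"},
    5: {"emphasis": "underline", "align": "left"},
}

def style_for(level):
    """Look up the sample formatting for a heading level (1-5)."""
    return HEADING_STYLES[level]
```

Whatever scheme you adopt, the table makes it easy to keep the formatting consistent throughout the document, as the guidance above requires.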
