SciTechDaily

  • June 30, 2024 | Unlocking the Secrets of Aging: Could Protein Clumps Predict Age-Related Diseases?
  • June 30, 2024 | Breakthrough Study Links Gut Bacteria to Food Addiction and Obesity
  • June 30, 2024 | NASA’s Gateway Unveiled: A Stunning Tour of Humanity’s First Space Station to Orbit the Moon
  • June 30, 2024 | Ancient DNA Uncovers the Secrets of Human Sacrifice at Chichén Itzá
  • June 30, 2024 | Milestone Achieved in Unraveling the Universe’s Fundamental Forces at the Large Hadron Collider

Technology News

Read the latest technology news on SciTechDaily, your comprehensive source for the latest breakthroughs, trends, and innovations shaping the world of technology. We bring you up-to-date insights on a wide array of topics, from cutting-edge advancements in artificial intelligence and robotics to the latest in green technologies, telecommunications, and more.

Our expertly curated content showcases the pioneering minds, revolutionary ideas, and transformative solutions that are driving the future of technology and its impact on our daily lives. Stay informed about the rapid evolution of the tech landscape, and join us as we explore the endless possibilities of the digital age.

Discover recent technology news articles on topics such as Nanotechnology, Artificial Intelligence, Biotechnology, Graphene, Green Tech, Battery Tech, Computer Tech, Engineering, and Fuel-cell Tech, featuring research out of MIT, Cal Tech, Yale, Georgia Tech, Karlsruhe Tech, Vienna Tech, and Michigan Technological University. Discover the future of technology with SciTechDaily.

Graphene Stack Concept

Technology June 29, 2024

Graphene Nanolayers Reinvented: The Key to Advanced Electronics

Graphene nanolayers are cross-linked with rotaxanes. Graphene, composed of layers of carbon atoms arranged in a honeycomb pattern, is recognized as a supermaterial due to…

Refinery Industrial Carbon Capture Concept Art

Revolutionary Reactors Turn CO2 Into Valuable Minerals With Fly Ash

Hyper-X Research Vehicle Scramjet Engine Firing

Revolution at Mach 10: NASA-Backed Hypersonic Jets Poised to Transform Space Travel

5G Technology Concept

5G Without Limits: Japanese Scientists Develop Efficient Wireless-Powered Transceiver Array

Modular Fabrication Process To Produce a Quantum-System-on-Chip

MIT’s Diamond Qubits Redefine the Future of Quantum Computing

Future Computing Magnetic Semiconductor Chip Concept

600% Boost: Scientists Develop Game-Changing AI Chip With Impressive Energy Efficiency

First Chip-Based 3D Printer

Tiny Titan: MIT’s Revolutionary Coin-Sized 3D Printer Fits in Your Pocket

Abstract Colorful Cubes Technology Art

Metamaterial Marvel: Kirigami Cubes Unlock the Future of Mechanical Computing

Adsorbent “Fins” Collect Humidity Rather Than Swim Through Water

Harvesting Drinking Water From Air With Innovative Absorbent Fins

Quantum Material Superconductivity Art

Technology June 25, 2024

Quantum Transformation: TiS3 Nanoribbons Become Superconductors Under Pressure

A study has shown that compressing TiS3 nanoribbons transforms them from insulators to superconductors, enabling electricity transmission without energy loss. This discovery opens new possibilities…

AI vs Human Perception Art Concept

Technology June 24, 2024

The Great Voice Trick: Is It AI or Is It Real?

People tend to assume happy voices are real and ‘neutral’ voices are AI-generated. Research shows that people struggle to distinguish AI voices from human ones, but…

Artificial Intelligence Digital Neuron Concept

Supercharging AI With New Computational Model of Real Neurons

A new model from the Flatiron Institute’s Center for Computational Neuroscience proposes that individual neurons are more influential controllers within their networks than previously believed,…

Blood Chip Technology Art Concept Illustration

Powered by Blood: Innovative Chip Provides Real-Time Health Monitoring

Researchers have developed a portable device that utilizes blood to generate electricity for immediate medical diagnostics. This lab-on-a-chip technology, aimed at combating the global rise…

Electric Diode To Manipulate Qubits Inside a Silicon Wafer

Technology June 23, 2024

Silicon Magic: Powering the Quantum Internet of the Future

By utilizing traditional semiconductor devices, researchers have unlocked new possibilities in quantum communication, pushing us closer to realizing the vast potential of the quantum internet…

Terahertz Waves Magnetic Material Art Concept

Terahertz Waves Supercharged: A Breakthrough With Magnetic Materials

Positioned between microwaves and infrared light, terahertz waves are key to pioneering advancements in imaging and diagnostic technologies. A recent discovery at Tohoku University of…

Advanced Solar Cells Concept

Bending the Rules of Solar: Novel Flexible Perovskite/Silicon Tandem Solar Cell Achieves Record Efficiency

A new study highlights the successful development of the first flexible perovskite/silicon tandem solar cell with a record efficiency of 22.8%, representing a major advance…

Smartphone Map Geospatial Data Art Concept Illustration

Technology June 22, 2024

Revolutionizing the Map: How Smartphones and Crowdsourcing Are Redefining Geospatial Data

Geospatial data has undergone significant transformations due to the internet and smartphones, revolutionizing accessibility and real-time updates. A collaborative international team reviewed this evolution, highlighting…

Light Control Electronics Concept

Controlling Electronics With Light: Magnetite’s Hidden Phases Exposed by Lasers

Researchers have successfully manipulated the structural properties of magnetite using light-induced phase transitions. This technique uncovered hidden phases of magnetite, paving the way for new…


Information technology articles from across Nature Portfolio

Information technology is the design and implementation of computer networks for data processing and communication. This includes designing the hardware for processing information and connecting separate components, and developing software that can efficiently and faultlessly analyse and distribute this data.

Latest Research and Reviews

Peak response regularization for localization

  • Jinzhen Yao

Improving the performance of 3D image model compression based on optimized DEFLATE algorithm

  • Zhang Yuxiang

Comparing machine learning screening approaches using clinical data and cytokine profiles for COVID-19 in resource-limited and resource-abundant settings

  • Hooman H. Rashidi
  • Aamer Ikram
  • Imran H. Khan

Cross shard leader accountability protocol based on two phase atomic commit

  • Zhiqiang Du
  • Wendong Zhang

Analyzing post-COVID-19 demographic and mobility changes in Andalusia using mobile phone data

  • Joaquín Osorio Arjona

Finding multifaceted communities in multiplex networks

  • László Gadár
  • János Abonyi

News and Comment


Misinformation might sway elections — but not in the way that you think

Rampant deepfakes and false news are often blamed for swaying votes. Research suggests it’s hard to change people’s political opinions, but easier to nudge their behaviour.


Behavioral health and generative AI: a perspective on future of therapies and patient care

There have been considerable advancements in artificial intelligence (AI), specifically with generative AI (GAI) models. GAI is a class of algorithms designed to create new data, such as text, images, and audio, that resembles the data on which they have been trained. These models have been recently investigated in medicine, yet the opportunity and utility of GAI in behavioral health are relatively underexplored. In this commentary, we explore the potential uses of GAI in the field of behavioral health, specifically focusing on image generation. We propose the application of GAI for creating personalized and contextually relevant therapeutic interventions and emphasize the need to integrate human feedback into the AI-assisted therapeutics and decision-making process. We report the use of GAI with a case study of behavioral therapy on emotional recognition and management with a three-step process. We illustrate image generation-specific GAI to recognize, express, and manage emotions, featuring personalized content and interactive experiences. Furthermore, we highlighted limitations, challenges, and considerations, including the elements of human emotions, the need for human-AI collaboration, transparency and accountability, potential bias, security, privacy and ethical issues, and operational considerations. Our commentary serves as a guide for practitioners and developers to envision the future of behavioral therapies and consider the benefits and limitations of GAI in improving behavioral health practices and patient outcomes.

  • Emre Sezgin


Rate-splitting multiple-access-enabled V2X communications

An article in IEEE Transactions on Wireless Communications proposes solutions for interference management in vehicle-to-everything communication systems by leveraging a one-layer rate-splitting multiple-access scheme.


The dream of electronic newspapers becomes a reality — in 1974

Efforts to develop an electronic newspaper providing information at the touch of a button took a step forward 50 years ago, and airborne bacteria in the London Underground come under scrutiny, in the weekly dip into Nature's archive.


Autonomous interference-avoiding machine-to-machine communications

An article in IEEE Journal on Selected Areas in Communications proposes algorithmic solutions to dynamically optimize MIMO waveforms to minimize or eliminate interference in autonomous machine-to-machine communications.

Combining quantum and AI for the next superpower

Quantum computing can benefit from the advancements made in artificial intelligence (AI) holistically across the tech stack — AI may even unlock completely new ways of using quantum computers. Simultaneously, AI can benefit from quantum computing by leveraging its expected future compute and memory power.

  • Martina Gschwendtner
  • Henning Soller
  • Sheila Zingg


Technology


  • 27 Jun 2024
  • Research & Ideas

Gen AI Marketing: How Some 'Gibberish' Code Can Give Products an Edge

An increasing number of consumers are turning to generative AI for buying recommendations. But if companies can subtly manipulate the technology to favor their own products, some businesses may gain unfair advantage, says Himabindu Lakkaraju.


  • 25 Jun 2024

How Transparency Sped Innovation in a $13 Billion Wireless Sector

Many companies are wary of sharing proprietary information with suppliers and partners. However, Shane Greenstein and colleagues show in a study of wireless routers that being more open about technology can lead to new opportunities.


  • 18 Jun 2024

Industrial Decarbonization: Confronting the Hard Challenges of Cement

CEOs in construction and heavy industries must prioritize innovative abatement strategies to meet rising global demand for cement while reducing emissions. Research by Gunther Glenk offers an economic framework for identifying emission reduction options.


  • 11 Jun 2024
  • In Practice

The Harvard Business School Faculty Summer Reader 2024

What's on your vacation reading list? Harvard Business School faculty members plan to explore not only sober themes, such as philosophy and climate policy, but classic mysteries and hip-hop history.


  • 22 May 2024

Banned or Not, TikTok Is a Force Companies Can’t Afford to Ignore

It may be tempting to write off TikTok, the highly scrutinized social media app whose cat clips and dance videos propelled it to the mainstream. However, business leaders could learn valuable lessons about engaging consumers from the world's most-used platform, says Shikhar Ghosh in a case study.


  • 15 May 2024

A Major Roadblock for Autonomous Cars: Motorists Believe They Drive Better

With all the advances in autonomous vehicle technology, why aren't self-driving cars chauffeuring more people around? Research by Julian De Freitas, Stuti Agarwal, and colleagues reveals a simple psychological barrier: Drivers are overconfident about their own abilities, so they resist handing over the wheel.


  • 13 May 2024

Picture This: Why Online Image Searches Drive Purchases

Smaller sellers' products often get lost on large online marketplaces. However, harnessing images in search can help consumers find these products faster, increasing sales and customer satisfaction, finds research by Chiara Farronato and colleagues.


  • 03 May 2024

How Much Does Proximity Influence Startup Innovation? 20 Meters' Worth to Be Exact

When it comes to sharing ideas, how much does close proximity matter? A study by Maria Roche evaluates how knowledge spreads in a coworking space, providing insights that could help shape the debate over remote work.


  • 23 Apr 2024
  • Cold Call Podcast

Amazon in Seattle: The Role of Business in Causing and Solving a Housing Crisis

In 2020, Amazon partnered with a nonprofit called Mary’s Place and used some of its own resources to build a shelter for women and families experiencing homelessness on its campus in Seattle. Yet critics argued that Amazon’s apparent charity was misplaced and that the company was actually making the problem worse. Paul Healy and Debora Spar explore the role business plays in addressing unhoused communities in the case “Hitting Home: Amazon and Mary’s Place.”


  • 09 Apr 2024

When Climate Goals, Housing Policy, and Corporate R&D Collide, Social Good Can Emerge

Grants designed to improve housing can make homes more energy efficient and save money for low-income families, providing a powerful way to confront climate change, says research by Omar Asensio. What do the findings mean for companies trying to scale innovation?


  • 26 Mar 2024

How Humans Outshine AI in Adapting to Change

Could artificial intelligence systems eventually perform surgeries or fly planes? First, AI will have to learn to navigate shifting conditions as well as people do. Julian De Freitas and colleagues pit humans against machines in a video game to study AI's current limits and mine insights for the real world.


  • 22 Mar 2024

Open Source Software: The $9 Trillion Resource Companies Take for Granted

Many companies build their businesses on open source software, code that would cost firms $8.8 trillion to create from scratch if it weren't freely available. Research by Frank Nagle and colleagues puts a value on an economic necessity that will require investment to meet demand.


  • 12 Mar 2024

How Used Products Can Unlock New Markets: Lessons from Apple's Refurbished iPhones

The idea of reselling old smartphones might have seemed risky for a company known for high-end devices, but refurbished products have become a major profit stream for Apple and an environmental victory. George Serafeim examines Apple's circular model in a case study, and offers insights for other industries.


  • 22 Feb 2024

How to Make AI 'Forget' All the Private Data It Shouldn't Have

When companies use machine learning models, they may run the risk of inadvertently sharing sensitive and private data. Seth Neel explains why it’s important to understand how to wipe AI’s spongelike memory clean.


  • 23 Jan 2024

More Than Memes: NFTs Could Be the Next Gen Deed for a Digital World

Non-fungible tokens might seem like a fad approach to selling memes, but the concept could help companies open new markets and build communities. Scott Duke Kominers and Steve Kaczynski go beyond the NFT hype in their book, The Everything Token.


  • 16 Jan 2024

How SolarWinds Responded to the 2020 SUNBURST Cyberattack

In December of 2020, SolarWinds learned that they had fallen victim to hackers. Unknown actors had inserted malware called SUNBURST into a software update, potentially granting hackers access to thousands of its customers’ data, including government agencies across the globe and the US military. General Counsel Jason Bliss needed to orchestrate the company’s response without knowing how many of its 300,000 customers had been affected, or how severely. What’s more, the existing CEO was scheduled to step down and incoming CEO Sudhakar Ramakrishna had yet to come on board. Bliss needed to immediately communicate the company’s action plan with customers and the media. In this episode of Cold Call, Professor Frank Nagle discusses SolarWinds’ response to this supply chain attack in the case, “SolarWinds Confronts SUNBURST.”


  • 09 Jan 2024

Harnessing AI: What Businesses Need to Know in ChatGPT’s Second Year

Companies across industries rushed to adopt ChatGPT last year, seeing its potential to streamline tasks formerly handled by people and vendors at much higher cost. As generative AI enters its next phase in 2024, what can leaders expect? Harvard Business School faculty members highlight four trends to watch.


  • 05 Dec 2023

Are Virtual Tours Still Worth It in Real Estate? Evidence from 75,000 Home Sales

Many real estate listings still feature videos and interactive tools that simulate the experience of walking through properties. But do they help homes sell faster? Research by Isamar Troncoso probes the post-pandemic value of virtual home tours.


  • 22 Nov 2023

Humans vs. Machines: Untangling the Tasks AI Can (and Can't) Handle

Are you a centaur or a cyborg? A study of 750 consultants sheds new light on the strengths and limits of ChatGPT, and what it takes to operationalize generative AI. Research by Edward McFowland III, Karim Lakhani, Fabrizio Dell'Acqua, and colleagues.


  • 07 Nov 2023

How Should Meta Be Governed for the Good of Society?

Julie Owono is executive director of Internet Sans Frontières and a member of the Oversight Board, an outside entity with the authority to make binding decisions on tricky moderation questions for Meta’s companies, including Facebook and Instagram. Harvard Business School visiting professor Jesse Shapiro and Owono break down how the Board governs Meta’s social and political power to ensure that it’s used responsibly, and discuss the Board’s impact, as an alternative to government regulation, in the case, “Independent Governance of Meta’s Social Spaces: The Oversight Board.”

Thinking Through the Ethics of New Tech…Before There’s a Problem

  • Beena Ammanath


Historically, it’s been a matter of trial and error. There’s a better way.

There’s a familiar pattern when a new technology is introduced: It grows rapidly, comes to permeate our lives, and only then does society begin to see and address the problems it creates. But is it possible to head off these problems before they arise? While companies can’t predict the future, they can adopt a sound framework that will help them prepare for and respond to unexpected impacts. First, when rolling out new tech, it’s vital to pause and brainstorm potential risks, consider negative outcomes, and imagine unintended consequences. Second, it can also be clarifying to ask, early on, who would be accountable if an organization has to answer for the unintended or negative consequences of its new technology, whether that’s testifying to Congress, appearing in court, or answering questions from the media. Third, appoint a chief technology ethics officer.

We all want the technology in our lives to fulfill its promise — to delight us more than it scares us, to help much more than it harms. We also know that every new technology needs to earn our trust. Too often the pattern goes like this: A technology is introduced, grows rapidly, comes to permeate our lives, and only then does society begin to see and address any problems it might create.


  • Beena Ammanath is the Executive Director of the global Deloitte AI Institute, author of the book “Trustworthy AI,” founder of the non-profit Humans For AI, and also leads Trustworthy and Ethical Tech for Deloitte. She is an award-winning senior executive with extensive global experience in AI and digital transformation, spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains with companies such as HPE, GE, Thomson Reuters, British Telecom, Bank of America, and e*trade.


Scientific breakthroughs: 2024 emerging trends to watch


December 28, 2023


Across disciplines and industries, scientific discoveries happen every day, so how can you stay ahead of emerging trends in a thriving landscape? At CAS, we have a unique view of recent scientific breakthroughs, the historical discoveries they were built upon, and the expertise to navigate the opportunities ahead. In 2023, we identified the top scientific breakthroughs, and 2024 has even more to offer. New trends to watch include the accelerated expansion of green chemistry, the clinical validation of CRISPR, the rise of biomaterials, and renewed progress in treating the undruggable, from cancer to neurodegenerative diseases. To hear what the experts from Lawrence Livermore National Laboratory and Oak Ridge National Laboratory are saying on this topic, join us for a free webinar on January 25 from 10:00 to 11:30 a.m. EST for a panel discussion on the trends to watch in 2024.

The ascension of AI in R&D


While the promise of AI has always been forward-looking, the AI revolution in chemistry and drug discovery has yet to be fully realized. There have been some high-profile setbacks, but several breakthroughs should be watched closely as the field continues to evolve. Generative AI is making an impact in drug discovery, machine learning is being used more in environmental research, and large language models like ChatGPT are being tested in healthcare applications and clinical settings.

Many scientists are keeping an eye on AlphaFold, DeepMind’s protein structure prediction software that revolutionized how proteins are understood. DeepMind and Isomorphic Labs have recently announced that their latest model shows improved accuracy, can generate predictions for almost all molecules in the Protein Data Bank, and expands coverage to ligands, nucleic acids, and posttranslational modifications. Therapeutic antibody discovery driven by AI is also gaining popularity, and platforms such as the RubrYc Therapeutics antibody discovery engine will help advance research in this area.

Though many look at AI development with excitement, concerns over accurate and accessible training data, fairness and bias, a lack of regulatory oversight, the impact on academia, scholarly research, and publishing, hallucinations in large language models, and even infodemic threats to public health are being discussed. However, continuous improvement is inevitable with AI, so expect to see many new developments and innovations throughout 2024.

‘Greener’ green chemistry


Green chemistry is a rapidly evolving field that is constantly seeking innovative ways to minimize the environmental impact of chemical processes. Here are several emerging trends that are seeing significant breakthroughs:

  • Improving green chemistry predictions/outcomes: One of the biggest challenges in green chemistry is predicting the environmental impact of new chemicals and processes. Researchers are developing new computational tools and models that can help predict these impacts with greater accuracy. This will allow chemists to design safer and more environmentally friendly chemicals.
  • Reducing plastics: More than 350 million tons of plastic waste is generated every year. Across the landscape of manufacturers, suppliers, and retailers, reducing the use of single-use plastics and microplastics is critical. New value-driven approaches by innovators like MiTerro that reuse industrial by-products and biomass waste for eco-friendly and cheaper plastic replacements will soon be industry expectations. Lowering costs and plastic footprints will be important throughout the entire supply chain.
  • Alternative battery chemistry: In the battery and energy storage space, finding alternatives to scarce "endangered elements" like lithium and cobalt will be critical. While essential components of many batteries, these elements are becoming scarce and expensive. New investments in lithium iron phosphate (LFP) batteries that do not use nickel and cobalt have expanded, with LFP projected to reach 45% of the EV market share in 2029. Continued research is expected into alternative materials like sodium, iron, and magnesium, which are more abundant, less expensive, and more sustainable.
  • More sustainable catalysts: Catalysts speed up a chemical reaction or decrease the energy required without being consumed. Noble metals are excellent catalysts, but they are expensive and their mining causes environmental damage. Non-noble metal catalysts can also be toxic due to contamination and challenges with their disposal. Sustainable catalysts are made of earth-abundant elements that are also non-toxic in nature. In recent years, there has been a growing focus on developing sustainable catalysts that are more environmentally friendly and less reliant on precious metals. New developments with catalysts, their roles, and their environmental impact will drive meaningful progress in reducing carbon footprints.
  • Recycling lithium-ion batteries: Lithium-ion recycling has seen increased investment, with more than 800 patents already published in 2023. The use of solid electrolytes or liquid nonflammable electrolytes may improve the safety and durability of LIBs and reduce their material use. Finally, a method to manufacture electrodes without solvents could reduce the use of problematic solvents such as N-methylpyrrolidinone, which require recycling and careful handling to prevent emissions.

Rise of biomaterials


New materials for biomedical applications could revolutionize many healthcare segments in 2024. One example is bioelectronic materials, which form interfaces between electronic devices and the human body, such as the brain-computer interface system being developed by Neuralink. This system, which uses a network of biocompatible electrodes implanted directly in the brain, was given FDA approval to begin human trials in 2023.

  • Bioelectronic materials are often hybrids or composites, incorporating nanoscale materials, highly engineered conductive polymers, and bioresorbable substances. Recently developed devices can be implanted, used temporarily, and then safely reabsorbed by the body without the need for removal. This has been demonstrated by a fully bioresorbable, combined sensor and wireless power receiver made from zinc and the biodegradable polymer poly(lactic acid).
  • Natural biomaterials that are biocompatible and naturally derived (such as chitosan, cellulose nanomaterials, and silk) were used in 2023 to make advanced multifunctional biomaterials. For example, researchers designed an injectable hydrogel brain implant for treating Parkinson’s disease, which is based on reversible crosslinks formed between chitosan, tannic acid, and gold nanoparticles.
  • Bioinks are used for 3D printing of organs and for transplant development, which could revolutionize patient care. Currently, these models are used for studying organ architecture, such as 3D-printed heart models for cardiac disorders and 3D-printed lung models to test the efficacy of drugs. Specialized bioinks enhance the quality, efficacy, and versatility of 3D-printed organs, structures, and outcomes. Finally, new approaches like volumetric additive manufacturing (VAM) of pristine silk-based bioinks are unlocking new frontiers of innovation for 3D printing.

To the moon and beyond


The Artemis program is a NASA-led international space exploration effort that aims to land the first woman and the first person of color on the Moon by 2025 as part of the long-term goal of establishing a sustainable human presence on the Moon. Additionally, the NASA mission called Europa Clipper, scheduled for a 2024 launch, will orbit Jupiter and fly by Europa, one of Jupiter’s moons, to study the presence of water and its habitability. China’s Chang’e 6 mission plans to bring samples from the Moon back to Earth for further study. The Martian Moons Exploration (MMX) mission by Japan’s JAXA plans to bring back samples from Phobos, one of the moons of Mars. Boeing is also expected to conduct a test flight of its reusable space capsule Starliner, which can take people to low-Earth orbit.

The R&D impact of Artemis extends to more fields than just aerospace engineering, though:

  • Robotics: Robots will play a critical role in the Artemis program, performing many tasks, such as collecting samples, building infrastructure, and conducting scientific research. This will drive the development of new robotic technologies, including autonomous systems and dexterous manipulators.
  • Space medicine: The Artemis program will require the development of new technologies to protect astronauts from the hazards of space travel, such as radiation exposure and microgravity. This will include scientific discoveries in medical diagnostics, therapeutics, and countermeasures.
  • Earth science: The Artemis program will provide a unique opportunity to study the Moon and its environment. This will lead to new insights into the Earth's history, geology, and climate.
  • Materials science: The extreme space environment will require new materials that are lightweight, durable, and radiation resistant. This will have applications in many industries, including aerospace, construction, and energy.
  • Information technology: The Artemis program will generate a massive amount of data, which will need to be processed, analyzed, and shared in real time. This will drive the development of new IT technologies, such as cloud computing, artificial intelligence, and machine learning.

The CRISPR pay-off


After years of research, setbacks, and minimal progress, the first formal evidence of CRISPR as a therapeutic platform technology in the clinic was realized. Intellia Therapeutics received FDA clearance to initiate a pivotal phase 3 trial of a new drug for the treatment of hATTR (hereditary transthyretin amyloidosis) and, using the same Cas9 mRNA, developed a new medicine treating a different disease, hereditary angioedema. This was achieved by changing only 20 nucleotides of the guide RNA, suggesting that CRISPR can indeed serve as a therapeutic platform technology in the clinic.

The second great moment for CRISPR drug development technology came when Vertex and CRISPR Therapeutics announced the authorization of the first CRISPR/Cas9 gene-edited therapy, CASGEVY™, by the United Kingdom MHRA, for the treatment of sickle cell disease and transfusion-dependent beta-thalassemia. This was the first approval of a CRISPR-based therapy for human use and is a landmark moment in realizing the potential of CRISPR to improve human health.

In addition to its remarkable genome editing capability, the CRISPR-Cas system has proven to be effective in many applications, including early cancer diagnosis. CRISPR-based genome and transcriptome engineering, along with CRISPR-Cas12a and CRISPR-Cas13a, appears to have the necessary characteristics to provide robust detection tools for cancer therapy and diagnostics. CRISPR-Cas-based biosensing systems are giving rise to a new era of precise diagnosis of early-stage cancers.

MIT engineers have also designed a new nanoparticle DNA-encoded nanosensor for urinary biomarkers that could enable early cancer diagnoses with a simple urine test. The sensors, which can detect cancerous proteins, could also distinguish the type of tumor or how it responds to treatment.

Ending cancer


The immuno-oncology field has seen tremendous growth in the last few years. Approved products such as cytokines, vaccines, tumor-directed monoclonal antibodies, and immune checkpoint blockers continue to grow in market size. Novel therapies like TAC01-HER2 are currently undergoing clinical trials. This unique therapy uses autologous T cells that have been genetically engineered to incorporate T cell Antigen Coupler (TAC) receptors, which recognize human epidermal growth factor receptor 2 (HER2) on tumor cells and eliminate them. This could be a promising therapy for metastatic, HER2-positive solid tumors.

Another promising strategy aims to use CAR-T cells against solid tumors in conjunction with a vaccine that boosts the immune response. Immune boosting helps the body create more host T cells that can target other tumor antigens that the CAR-T cells alone cannot attack.

Another notable trend is the development of improved and effective personalized therapies. For instance, a recently developed personalized RNA neoantigen vaccine, based on uridine mRNA–lipoplex nanoparticles, was found effective against pancreatic ductal adenocarcinoma (PDAC). Major challenges in immuno-oncology are therapy resistance, a lack of predictive biomarkers, and tumor heterogeneity. As a result, devising novel treatment strategies could be a future research focus.

Decarbonizing energy


Multiple well-funded efforts are underway in 2024 to decarbonize energy production by replacing fossil fuel-based energy sources with sources that generate no (or much less) CO2.

One of these efforts is to incorporate large-scale energy storage devices into the existing power grid. These are an important part of enabling the use of renewable sources, since they can absorb excess generation and supply electricity when renewable output falls. Several types of grid-scale storage, which vary in the amount of energy they can store and how quickly they can discharge it into the grid, are under development. Some are physical (flywheels, pumped hydro, and compressed air) and some are chemical (traditional batteries, flow batteries, supercapacitors, and hydrogen), but all are the subject of active chemistry and materials development research. The U.S. government is encouraging development in this area through tax credits as part of the Inflation Reduction Act and a $7 billion program to establish regional hydrogen hubs.
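A simple way to compare these storage options is by their two key ratings: energy capacity (how much they can store) and power (how quickly they can discharge it); dividing the first by the second gives the discharge duration. The short Python calculation below is illustrative only; the example systems and their ratings are hypothetical round numbers, not figures from this article.

    # Illustrative only: discharge duration = energy capacity / power rating.
    # The example systems and their ratings are hypothetical round numbers.
    storage_systems = {
        # name: (energy capacity in MWh, power rating in MW)
        "flywheel": (0.025, 0.1),
        "grid-scale battery": (400.0, 100.0),
        "pumped hydro": (24000.0, 3000.0),
    }

    for name, (energy_mwh, power_mw) in storage_systems.items():
        hours = energy_mwh / power_mw
        print(f"{name}: {energy_mwh} MWh at {power_mw} MW -> about {hours:.2f} h of discharge")

Under these assumed ratings, the flywheel covers minutes, the battery a few hours, and pumped hydro most of a day, which is why a mix of technologies is under development.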

Meanwhile, nuclear power will continue to be an active R&D area in 2024. In nuclear fission, multiple companies are developing small modular reactors (SMRs) for use in electricity production and chemical manufacturing, including hydrogen. The development of nuclear fusion reactors involves fundamental research in physics and materials science. One major challenge is finding a material that can be used for the wall of the reactor facing the fusion plasma; so far, candidate materials have included high-entropy alloys and even molten metals.

Neurodegenerative diseases


Neurodegenerative diseases are a major public health concern and a leading cause of death and disability worldwide. While there is currently no cure for any neurodegenerative disease, new scientific discoveries and a better understanding of these pathways may be the key to improving patient outcomes.

  • Alzheimer’s disease: Two immunotherapeutics have received FDA approval to reduce both cognitive and functional decline in individuals living with early Alzheimer’s disease. Aducanumab (Aduhelm®) received accelerated approval in 2021 and is the first new treatment approved for Alzheimer’s since 2003 and the first therapy targeting the disease pathophysiology, reducing beta-amyloid plaques in the brains of early Alzheimer’s disease patients. Lecanemab (Leqembi®) received traditional approval in 2023 and is the first drug targeting Alzheimer’s disease pathophysiology to show clinical benefits, reducing the rate of disease progression and slowing cognitive and functional decline in adults with early stages of the disease.
  • Parkinson’s disease: New treatment modalities outside of pharmaceuticals and deep brain stimulation are being researched and approved by the FDA for the treatment of Parkinson’s disease symptoms. The non-invasive medical device, Exablate Neuro (approved by the FDA in 2021), uses focused ultrasound on one side of the brain to provide relief from severe symptoms such as tremors, limb rigidity, and dyskinesia. 2023 brought major news for Parkinson’s disease research with the validation of the biomarker alpha-synuclein. Researchers have developed a tool called the α-synuclein seeding amplification assay which detects the biomarker in the spinal fluid of people diagnosed with Parkinson’s disease and individuals who have not shown clinical symptoms.
  • Amyotrophic lateral sclerosis (ALS): Two pharmaceuticals have received FDA approval in the past two years to slow disease progression in individuals with ALS. Relyvrio® was approved in 2022 and acts by preventing or slowing further neuron cell death in patients with ALS. Tofersen (Qalsody®), an antisense oligonucleotide, was approved in 2023 under the accelerated approval pathway. Tofersen targets RNA produced from mutated superoxide dismutase 1 (SOD1) genes to eliminate toxic SOD1 protein production. Genetic research on how mutations contribute to ALS is ongoing, with researchers recently discovering how NEK1 gene mutations lead to the disease. This discovery suggests a possible rational therapeutic approach to stabilizing microtubules in ALS patients.




June 24, 2024


New technology helps solve the unsolvable in rare disease diagnoses

by Susan Murphy, Mayo Clinic


At Mayo Clinic, the mission to solve the unsolvable is at the heart of every rare disease case. Each diagnosis is a testament to perseverance, innovation and the relentless pursuit of answers.

This commitment has led researchers at the Mayo Clinic Center for Individualized Medicine to develop a semi-automated system, known as RENEW, to rapidly reanalyze unresolved rare disease cases.

In a new study published in Human Genetics, researchers describe how RENEW (REanalysis of NEgative Whole-exome/genome data) led to a probable diagnosis for 63 patients out of 1,066 undiagnosed cases.

The innovative technology, launched in 2022, regularly compares patient genomic sequencing data with newly published global research discoveries, with the goal of pinpointing previously elusive, disease-causing genetic variants.

"Considering that the majority of patients with rare diseases who undergo genomic sequencing remain without a diagnosis, this is no small accomplishment," says Alejandro Ferrer, Ph.D., a translational omics researcher at the center and lead author of the study.

"Each successful diagnosis facilitated by RENEW signifies a profound breakthrough in providing answers and hope to people navigating the complexities of rare diseases ."

RENEW features a sophisticated filtering system that sifts through the new genetic information to help zero in on the pathogenic variant or variants causing a patient's disorder.

On average, it took RENEW approximately 20 seconds to review each of the 5,741 genomic variants it prioritized. The total analysis time for each patient with an unresolved case ranged from 10 seconds to 1.5 hours. In contrast, manual reanalysis by researchers and clinicians typically takes weeks and involves an extensive review of published papers and scouring patient data in the search for clues.
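The study does not spell out RENEW's internal design, but the general workflow described here (periodically re-screening a patient's previously non-diagnostic variants against newly published gene-disease associations and prioritizing a short list for expert review) can be illustrated with a minimal sketch. The Python below is a hypothetical illustration only; the data structures, field names, and filtering thresholds are assumptions made for the example, not Mayo Clinic's implementation.

    # Hypothetical sketch of an automated reanalysis filter in the spirit of RENEW.
    # All class names, fields, and thresholds are illustrative assumptions; the
    # actual RENEW pipeline is not described at this level of detail in the article.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Variant:
        gene: str               # gene symbol, e.g. "NEK1"
        hgvs: str               # variant description, e.g. "c.123A>T"
        population_freq: float  # allele frequency in reference populations
        predicted_impact: str   # e.g. "missense", "frameshift", "synonymous"

    @dataclass(frozen=True)
    class GeneDiseaseLink:
        gene: str
        disease: str
        year_published: int

    def reanalyze(unsolved_variants, new_literature, last_review_year, max_freq=0.001):
        """Return variants worth manual review: rare, potentially damaging, and
        located in a gene newly linked to disease since the last review."""
        newly_implicated = {
            link.gene: link
            for link in new_literature
            if link.year_published > last_review_year
        }
        damaging = {"frameshift", "nonsense", "splice", "missense"}
        candidates = []
        for v in unsolved_variants:
            if v.population_freq > max_freq:
                continue  # too common to explain a rare disease
            if v.predicted_impact not in damaging:
                continue  # unlikely to disrupt the protein
            link = newly_implicated.get(v.gene)
            if link is not None:
                candidates.append((v, link))  # flag for expert interpretation
        return candidates

    if __name__ == "__main__":
        variants = [
            Variant("NEK1", "c.123A>T", 0.0002, "missense"),
            Variant("BRCA2", "c.68-7T>A", 0.02, "splice"),
        ]
        literature = [GeneDiseaseLink("NEK1", "amyotrophic lateral sclerosis", 2023)]
        for variant, link in reanalyze(variants, literature, last_review_year=2021):
            print(f"Review {variant.gene} {variant.hgvs}: newly linked to {link.disease}")

Even in a sketch like this, the prioritized list is only a starting point; as the article emphasizes, interpretation by clinicians and variant scientists remains the decisive step.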

RENEW was created by Eric Klee, Ph.D., the center's Everett J. and Jane M. Hauck Midwest Associate Director, Research and Innovation.

"Looking forward, through advances in technology, we hope to further improve the automation and efficiency in this interpretative process, bringing this technology to a broader aperture of genetic test data," Dr. Klee says.


Related stories.

Using AI to improve diagnosis of rare genetic disorders

Apr 25, 2024

Research team uses genomic testing broadly for rare diseases, improves patient care

Jun 26, 2023

Reanalysis of clinical molecular data yields new genetic diagnoses

Jun 20, 2019

Better data framework needed to improve rare disease diagnostic rates

Apr 16, 2020

Metabolomics meets genomics to improve patient diagnosis

Jul 7, 2020

Single genomic test promises accelerated diagnoses for rare genetic diseases

Mar 28, 2024



Chesley Bonestell, "The Exploration of Mars" (1953), oil on board, 14 3/8 x 28 inches, gift of William Estler, Smithsonian National Air and Space Museum. Reproduced courtesy of Bonestell LLC. Read more about how space art shaped national identity.


Alma Thomas, "Blast Off" (1970), acrylic on canvas, 72 x 52 inches, gift of Vincent Melzac, Smithsonian National Air and Space Museum. Read more about how space art shaped national identity.


Artist rendering of Jupiter’s auroras from NASA Jet Propulsion Laboratory’s Visions of the Future poster series, 2016, detail, courtesy of NASA/JPL. Read more about how space art shaped national identity.


Étienne Léopold Trouvelot, "The Planet Jupiter" (1881–1882), Chromolithograph, 33 x 41 inches, Smithsonian National Air and Space Museum. Image credit: New York Public Library, Rare Book Division, Digital Collections: All Astronomical Drawings. Read more about how space art shaped national identity.


Alma Thomas, "Astronauts Glimpse of the Earth" (1974), acrylic on canvas, 50.5 x 50.5 inches, gift of Mr. and Mrs. Jacob Kainen, Smithsonian National Air and Space Museum. Read more about how space art shaped national identity.


Preparing Researchers for an Era of Freer Information

Peter W. B. Phillips

Science Policy IRL


Brent Blevins Makes Mars Policy in Congress

Brent Blevins, Lisa Margonelli

Trust in Science


Embracing the Social in Social Science

Rayvon Fouché

Illustration by Shonagh Rae

Natasha Trethewey

Science Diplomacy


Channels for Arctic Diplomacy

Nataliya Shok, Katherine Ginsbach

A Justice-Led Approach to AI Innovation

Strategies to Govern AI Effectively

An interdisciplinary group of experts explores some of the challenges posed by the use of artificial intelligence in scientific research.




In response to essays published in Issues, our readers weigh in on critical topics in policy related to science, technology, and society.

Spring 2024

Principles for Fostering Health Data Integrity

Needed: A Vision and Strategy for Biotech Education

Current Issue

Aristotle on the Moon

For better or worse, finding workable solutions to significant problems among people who share land, traditions, and values may be easier and more effective than global and national efforts. For the scientific enterprise, the devolution of big policy to small places poses new challenges around establishing spaces for democratic decisionmaking, building knowledge to inform those decisions, and effectively linking the two. As decisionmaking moves toward states and localities, science leaders will need to understand how the landscape of opportunity is shifting and build the capacity to answer questions posed by specific geographic communities.


From This Issue

The Science-Politics Power Struggle

Helen Pearson

Boost Opportunities for Science Learning With Regional Alliances

Heidi Schweingruber, Susan Singer, Kerry Brenner

Celebrating the Centennial of the National Academy of Sciences Building


On the Front Lines of Change: Reporting From the Gulf of Mexico

“In many respects, the Gulf is on the front lines. Massive disruptions brought by climate change, the need to transition to a new energy economy, and the potential collapse of vital ecosystems are on the horizon for the nation as a whole, not just the Gulf region.”

— Lauren Alexander Augustine

Supporting the Gulf

A skiff cleans up oil from the Deepwater Horizon oil spill in the Gulf of Mexico

Ten Years Into the Gulf Research Program

Lauren Alexander Augustine

Emergency Management


The Roots That Ward Off Disaster

Samantha Montano

The Loop Current


A Scientific “Forced Marriage” Takes on the Mysteries of the Loop Current

Virginia Gewin

The Energy Transition


How Will Carbon Capture Transform Port Arthur, Texas?

Tristan Baurick

Connecting today’s headlines with deeper policy analyses from the Issues archives.

New Study of Geoengineering Raises New Worries

Misinformation Watchdog Under Threat

Crafting a Better Response to the “Anthropocene”

Capitalizing on Benefits of Private Space Missions

Science and Technology Policy Survey

Featured Art Galleries


How Space Art Shaped National Identity

Carolyn Russo


An Elusive and Indefinable Boundary

Virginia Hanusik



Mary K. Pratt

The 10 biggest issues IT faces today

AI — and how to create realistic, trustworthy value from it — has disrupted the IT agenda. But managing a deepening vendor roster, securing the enterprise, and developing talent are also reshaping the CIO agenda mid-year.


CIOs have an overflowing docket, with numerous critical and complex issues dominating their time and attention.

Not surprisingly, capitalizing on AI tops the to-do list, as does building the right expectations, security, and trust around it. Managing change is also vital for CIOs today.

Whether and to what extent these and other big issues impact any given CIO depend on various factors, such as the size of the IT department, the size of the organization, the industry, and so on. But there’s no question that they affect a significant number of IT execs seeking to deliver business value with IT in the year ahead.

With that in mind, here is a look at what researchers, consultants, and CIOs say are among the biggest issues IT leaders are dealing with right now.

1. Seizing on AI — and being smart about it

AI has been a means for enterprise innovation, automation, and competitive edge for years now, but it shot up the IT priority list after ChatGPT and the current crop of AI tools — generative AI, in particular — hit the market. That has brought more specific AI-related issues to the surface at a wider range of organizations.

One of the most-pressing issues among those: Seizing on AI capabilities, not just for fear of missing out but to truly deliver value.

“Not every organization has the same opportunities in the very-near term to apply AI in a way that materially changes their business model. But everybody is worried about it, and they’re wondering how it will improve productivity, market research, and decision-making quality,” says Jeff Stovall, CIO of Abt Global, noting that he is working with his executive colleagues to identify the best business cases for AI at his company.

Diane Carco, president and CEO of management consulting company Swingtide and a former CIO, says she sees this issue across organizations.

“Others in the C-suite look to the CIO to make sense of AI, application providers are marketing AI techniques — whether warranted or not — with religious fanaticism, and employees are exploring generative AI tools without guidance,” she says. “The CIO is playing catch-up. Internal education, including self-education, is imperative.”

2. Setting realistic expectations for AI

CIOs say they’re also spending significant time setting realistic expectations of AI’s capabilities — a tough challenge given all the hype.

“There is a general perception that AI and genAI can solve all manners of problems,” says Stovall, a Society for Information Management national board member and DEI Committee lead for SIM. “It’s not a general-purpose tool to accomplish everything.”

He points to one use case within his organization that illustrates the need for CIOs to set realistic expectations as workers look to AI to solve more of their workplace pain points. His company is using AI to write proposals, with AI providing significant productivity gains around this task. But he also educated colleagues on the technology’s potential for inaccuracies and outright fabrications — known as hallucinations — stressing the need for strong human oversight and quality assurance to maximize the value of AI and control for risks.

“AI doesn’t do everything well, and you cannot at this stage utilize AI to completely replace the human element; it’s a human augmentation tool,” Stovall says.

3. Creating secure, trustworthy AI

Despite AI’s limitations, organizations are forging ahead with their AI initiatives. And as is the case with all technology deployments, AI projects have CIOs and their teams evaluating capabilities, integrating them into the IT infrastructure, and customizing when needed.

But they’re also spending a lot of energy understanding the unique risks that AI presents, how to educate others on those risks, and how they, as tech leaders, can counteract fears.

Anthony Moisant, CIO and chief security officer at Indeed, a job matching and hiring platform, says the CIO’s task here is to create secure, trustworthy, and responsible AI. “It’s how do we make sure that this incredible transformative technology doesn’t create pain down the line,” he explains.

4. Tightening data security

All the work around AI has further highlighted the value of data — for organizations and hackers alike. That, along with the ever-increasing sophistication of bad actors and the consequences of suffering an attack, has turned up the heat on CIOs.

“Indications are that hackers/ransomware agents are becoming more aggressive. At the same time, operations and decision-making are increasingly dependent on data availability and accuracy. Meanwhile, the perimeter of exposure widens as remote workers and connected devices proliferate. This is an arms race, and the CIO must lead the charge by implementing better tools and training,” Carco says.

5. Keeping up with the ever-accelerating pace of change

Professional services firm Deloitte polled 211 US-based CIOs and technology leaders in February 2024 and found that one of the top priorities for CIOs is staying ahead of emerging technologies and solutions.

“The pace of change is increasing, and being able to stay ahead of that is challenging,” says Lou DiLorenzo Jr., principal and national US CIO program leader at Deloitte Consulting LLP as well as the firm’s AI and data strategy practice leader.

DiLorenzo acknowledges that CIOs have always had to keep up with new tech — it’s a big part of the job. And its rapid evolution is not necessarily new, either.

“But it does feel different now; the velocity feels different,” he says, pointing to the exponentially fast maturation of genAI as case in point. “The continued evolution of capabilities and who is providing those capabilities and the people enabling them are changing more dramatically than I’ve seen in the past.”

Consequently, CIOs will need to reassess providers more often and more quickly, too, all of which requires more orchestration.

“It requires a different level of appreciation for ‘optionality’ in the IT architecture,” DiLorenzo says. There’s a need for more flexibility and modularity, with more components that won’t be kept for long stretches.

6. Managing vendors for today’s environment

Swingtide’s Carco sees a related issue that many CIOs face today, which is effectively managing vendors as the number of providers within the IT function dramatically grows.

“CIOs are coming to recognize that an organization built for internal operations is not well-suited to managing dozens or hundreds of external providers, and the proliferation of contractual obligations can be overwhelming. In case of emergency, knowing who has your data, what their contractual obligations are to safeguard it, and how they are performing has become extremely difficult,” Carco says.

Those challenges have leading CIOs looking to beef up their vendor management practices, moving vendor management from “a side job for technical resources” to a true discipline with a clarity of roles.

7. Implementing security as quickly as the tech

Ricki Koinig, CIO of Wisconsin’s Department of Natural Resources, says one of the most significant issues she faces today is “delivering more and more projects while ensuring that security in those projects is fully funded and supported.”

She says CIOs like her often have to ask whether to speed up implementation of security practices or rein in delivery so security can keep up.

That’s prompting a bit of introspection, she says, and a need to ask: “What’s hindering this, and how can we find opportunities to move forward?”

Koinig brings up some challenging dynamics on this front.

“Most organizations do not see their IT departments as solely cybersecurity units, but rather IT delivery and support departments. This means delivering more with less time and money is often seen as the true success measure of your organization’s IT department. Security practices may be seen at best as simply expected and at worst as time-consuming delays to project delivery,” she says.

She continues: “Relentless conversations and campaigning for the importance of funding and allowing time for security activities within projects may fall flat of expectations, therefore rethinking approaches in your organization might be helpful. The art is always to steadfastly balance efforts between delivery expectations and growing security requirements.”

8. Identifying, communicating, and delivering value

A majority of chief executives see tech as not just supporting their business but part and parcel of it. According to Deloitte, 57% of CEOs plan to embed new technologies in their business models to find opportunities for growth.

Moisant says CIOs are responding by adopting product mindsets and identifying tech-enabled opportunities that deliver disruptive, not only incremental, value. The goal, he explains, is “disrupting ourselves by creating new, foundational values using technology.”

That puts more pressure on CIOs to recognize where and how technology can produce value to the business and to effectively communicate that vision, DiLorenzo says.

“CIOs need to translate what they do and why it matters to their company and their customers in a really compelling way,” he adds.

9. Watching the IT spend

CIOs aren’t only thinking of the economics of their innovation efforts: They remain focused on the economics of their overall IT spend.

Abt Global’s Stovall, for one, says he is “being very cautious and conservative” with his IT investments, noting a general sense of uncertainty about what’s ahead is driving his approach.

He’s not alone in this thinking.

Indeed’s Moisant says he has become more attentive to finding efficiencies and wringing out some of the complexities that were added during the strong growth and investment cycle that happened over the past several years.

Carco says she sees many CIOs taking similar approaches, adding that CIOs face a big challenge in this area.

“Technology continues to permeate business more deeply. IT has surged past being a helpful driver of productivity; it increasingly saturates all functions of the organization, including its relationship with customers,” she says. “Managing IT spend is increasingly difficult and requires cleanup of old technologies and relationships to ensure money is not wasted. This legacy cleanup now needs dedicated resources.”

10. Attracting and retaining talent

Attracting and retaining talent has long been a top issue for CIOs, yet Larry Bonfante, founder and CEO of CIO Bench Coach, says it not only continues to be an area of intense work but one that could become even more critical.

“It has always been an issue, but it’s exponentially more challenging and complex now,” he says.

There are two major reasons why, he says. First, baby boomers will be leaving the workforce in higher numbers in upcoming years with fewer younger workers to replace them. Second, today’s workers have different ideas on how, where, and when work should get done. For example, a growing number of people are opting out of traditional full-time positions and instead working as contract, or “gig,” workers. And many expect to have flexible hours and remote work options with some in-office opportunities.

“You have this confluence of issues coming together,” he says.

These dynamics mean CIOs must adapt their recruitment and retention strategies if they want to draw in and keep talent, Bonfante says.

“The smart leaders are creating a hybrid environment, so there’s enough human interaction but there’s still flexibility and autonomy,” he says, noting that savvy execs work to create schedules that work for each individual rather than have a one-size-fits-all policy.

Top CIOs take that same approach to retention strategies, tailoring training and advancement opportunities that incorporate individual workers’ career ambitions — and not just the organization’s own needs.

“Today you have to think about making each employee the best version of themselves,” he says. Yes, they may still leave for another job, but they’ll also be more likely to recommend the organization to colleagues looking for new jobs.

On a related note, those workforce dynamics coupled with the rapid adoption of new technologies such as AI have many CIOs working out how they can quickly reskill workers to handle emerging tasks — and how to reskill workers at scale, not just within IT but across the organization.

“CIOs are being asked to increase the tech fluency for the entire organization because technology is letting work be done differently. Processes are changing and decisions are being made differently because of technology. There is a chance to operate differently, and CIOs need to help bring employees along and help them learn and grow,” DiLorenzo adds.


Mary K. Pratt is a freelance writer based in Massachusetts.


MIT News | Massachusetts Institute of Technology

Owen Coté, military technology expert and longtime associate director of the Security Studies Program, dies at 63


Owen Coté PhD ’96, a principal research scientist with the MIT Security Studies Program (SSP), passed away on June 8 after battling cancer. He joined SSP in 1997 as associate director, a role he held for the rest of his life. He guided the program through the course of three directors — each profiting from his wise counsel, leadership skills, and sense of responsibility.

“Owen was an indomitable scholar and leader of the field of security studies,” says M. Taylor Fravel, the Arthur and Ruth Sloan Professor of Political Science and the director of SSP. “Owen was the heart and soul of SSP and a one-of-a-kind scholar, colleague, and friend. He will be greatly missed by us all.”

Having earned his doctorate in political science at MIT, Coté embodied the program’s professional and scholarly values. Through his research and his teaching, he nurtured three of the program’s core interests — the study of nuclear weapons and strategy, the study of the relationship between technological change and military practice, and the application of organization theory to understanding the behavior of military institutions.

He was the author of “The Third Battle: Innovation in the U.S. Navy’s Silent Cold War Struggle with Soviet Submarines,” a book analyzing the sources of the U.S. Navy’s success in its Cold War antisubmarine warfare effort, and a co-author of “Avoiding Nuclear Anarchy: Containing the Threat of Loose Russian Nuclear Weapons and Fissile Material.” He also wrote on the future of naval doctrine, nuclear force structure issues, and the threat of weapons of mass destruction terrorism.

He was an influential national expert on undersea warfare. According to Ford International Professor of Political Science Barry Posen, Coté’s colleague for several decades who served as SSP director from 2006 to 2019, “Owen is credited, among others, with helping the U.S. Navy see the wisdom of transforming four ‘surplus’ Ohio-class ballistic missile submarines into cruise missile platforms that serve the Navy and the country to this day.”

Coté’s principal interest in recent years was maritime “war in three dimensions” — surface, air, and subsurface — and how they interacted and changed with advancing technology. He recently completed a book manuscript on this complex history. At the time of his death, he was also preparing a manuscript that analyzed the sources of innovative military doctrine, using cases that compared U.S. Navy responses to moments in the Cold War when U.S. leaders worried about the vulnerability of land-based missiles to Soviet attack.

“No one in our field was as knowledgeable about military organizations and operations, the politics that drives security policy, and relevant theories of international relations as Owen,” according to Harvey Sapolsky, MIT Professor of Public Policy and Organization, Emeritus, and SSP director from 1989 to 2006. “And no one was more willing to share that knowledge to help others in their work.”

This broad portfolio of expertise served him well as co-editor and ultimately editor of the journal International Security, the longtime flagship journal of the security studies subfield. His colleague Steven Miller, editor-in-chief of International Security, reflects that “Owen combined a brilliant analytic mind, a mischievous sense of humor, and a passion for his work. His contribution to International Security was immense and will be missed, as I relied on his judgement with total confidence.”

Coté believed in sharing his scholarly findings with the policy community. With Cindy Williams, a principal research scientist at SSP, he helped organize and ran a series of national security simulations for military officers and Department of Defense (DoD) civilians in the national security studies program at the Elliott School of International Affairs at George Washington University. He regularly produced major conferences at MIT, with several on the U.S. nuclear attack submarine force perhaps the most influential.

He was passionate about nurturing younger scholars. In recent years, he led programs for visiting fellows at SSP: the Nuclear Security Fellows Program and the Grand Strategy, Security, and Statecraft Fellows Program.

Caitlin Talmage PhD ’11, one of his former students and now an associate professor of political science at MIT, describes Coté as "a devoted mentor and teacher. His classes sparked many dissertations, and he engaged deeply with students and their research, providing detailed feedback, often over steak dinners. Despite his towering expertise in the field of security studies, Owen was always patient, generous, and respectful toward his students. He continued to advise many even after graduation as they launched their careers, myself included. He will be profoundly missed.”

Phil Haun PhD ’10, also one of Coté’s students and now professor and director of the Rosenberg Deterrence Institute at the Naval War College, describes Coté as “a mentor, colleague, and friend to a generation of MIT SSP graduate students,” noting that “arguably his greatest achievement and legacy are the scholars he nurtured and loved.”  As Haun notes, “Owen’s expertise, with a near encyclopedic knowledge of innovations in military technology, coupled with a gregarious personality and willingness to share his time and talent, attracted dozens of students to join in a journey to study important issues of international security. Owen’s passion for his work and his eagerness to share a meal and a drink with those with similar interests encouraged those around him. The degree to which so many MIT SSP alums have remained connected to the program is testament to the caring community of scholars that Owen helped create.”

Posen describes Coté as a “larger-than-life figure and the most courageous and determined human being I have ever met. He could light up a room when he was among people he liked, and he liked most people. He was in the office suite nearly every day of the week, including weekends, and his door was usually open. Professors, fellows, and graduate students would drop by to seek his counsel on issues of every kind, and it was not uncommon for an expected 10-minute interlude to turn into a one-hour seminar. He had a truly unique ability to understand the interaction of technology and military operations. I have never met anyone who could match him in this ability. He also knew how to really enjoy life. It is an incredible loss on many, many levels.”

As Miller notes, “I got to know Owen while serving as supervisor of his senior thesis at Harvard College in 1981–82. That was the beginning of a lifelong friendship and happily our careers remained entangled for the remainder of his life. I will miss the wonderful, decent human being, the dear friend, the warm and committed colleague. He was a brave soul, suffering much, overcoming much, and contributing much. It is deeply painful to lose such a friend.”

“Owen was kind and generous, and though he endured much, he never complained,” says Sapolsky. “He gave wonderfully organized and insightful talks, improved the writing of others with his editing, and always gave sound advice to those who were wise enough to seek it.”

After graduating from Harvard College in 1982 and before returning to graduate school, Coté worked at the Hudson Institute and the Center for Naval Analyses. He received his PhD in 1996 from MIT, where he specialized in U.S. defense policy and international security affairs.

Before joining SSP in 1997, he served as assistant director of the International Security Program at Harvard's Center for Science and International Affairs (now the Belfer Center). He was the son of Ann F. Coté and the late Owen R. Coté Sr. His family wrote in his obituary that at home, he was always up for a good discussion about Star Wars or Harry Potter movies. Motorcycle magazines were a lifelong passion. He was a devoted uncle to his nieces Eliza Coté, Sofia Coté, and Livia Coté, as well as his self-proclaimed “fake” niece and nephew, Sam and Nina Harrison. In addition to his mother and his nieces, he is survived by his siblings: Mark T. Coté of Blacksburg, Virginia; Peter H. Coté and his wife Nina of Topsfield, Massachusetts; and Suzanne Coté Curtiss and her husband Robin of Cape Neddick, Maine.


54 Most Interesting Technology Research Topics for 2023

May 30, 2023

Scrambling to find technology research topics for the assignment that’s due sooner than you thought? Take a scroll down these 54 interesting technology essay topics in 10 different categories, including controversial technology topics, and some example research questions for each.

Social technology research topics

Whether you have active profiles on every social media platform, you’ve taken a social media break, or you generally try to limit your engagement as much as possible, you probably understand how pervasive social technologies have become in today’s culture. Social technology will especially appeal to those looking for widely discussed, mainstream technology essay topics.

  • How do viewers respond to virtual influencers vs human influencers? Is one more effective or ethical over the other?
  • Across social media platforms, when and where is mob mentality most prevalent? How do the nuances of mob mentality shift depending on the platform or topic?
  • Portable devices like cell phones, laptops, and tablets have certainly made daily life easier in some ways. But how have they made daily life more difficult?
  • How does access to social media affect developing brains? And what about mature brains?
  • Can dating apps alter how users perceive and interact with people in real life?
  • Studies have proven “doomscrolling” to negatively impact mental health—could there ever be any positive impacts?

Cryptocurrency and blockchain technology research topics

Following cryptocurrency and blockchain technology has been a rollercoaster the last few years. And since Bitcoin’s conception in 2009, cryptocurrency has consistently shown up on many lists of controversial technology topics.

  • Is it ethical for celebrities or influential people to promote cryptocurrencies or cryptographic assets like NFTs?
  • What are the environmental impacts of mining cryptocurrencies? Could those impacts ever change?
  • How does cryptocurrency impact financial security and financial health?
  • Could the privacy cryptocurrency offers ever be worth the added security risks?
  • How might cryptocurrency regulations and impacts continue to evolve?
  • Created to enable cryptocurrency, blockchain has since proven useful in several other industries. What new uses could blockchain have?

Artificial intelligence technology research topics

We started 2023 with M3GAN’s box office success, and now we’re fascinated (or horrified) with ChatGPT, voice cloning, and deepfakes. While people have discussed artificial intelligence for ages, recent advances have really pushed this topic to the front of our minds. Those searching for controversial technology topics should pay close attention to this one.

  • OpenAI (the company behind ChatGPT) has shown commitment to safe, moderated AI tools that they hope will provide positive benefits to society. Sam Altman, their CEO, recently testified before a US Senate subcommittee. He described what AI makes possible and called for more regulation in the industry. But even with companies like OpenAI displaying efforts to produce safe AI and advocating for regulations, can AI ever have a purely positive impact? Are certain pitfalls unavoidable?
  • In a similar vein, can AI ever actually be ethically or safely produced? Will there always be certain risks?
  • How might AI tools impact society across future generations?
  • Countless movies and television shows explore the idea of AI going wrong, going back all the way to 1927’s Metropolis. What has a greater impact on public perception—representations in media or industry developments? And can public perception impact industry developments and their effectiveness?

Beauty and anti-aging technology 

Throughout human history, people in many cultures have gone to extreme lengths to capture and maintain a youthful beauty. But technology has taken the pursuit of beauty and youth to another level. For those seeking technology essay topics that are both timely and timeless, this one’s a gold mine.

  • With augmented reality technology, companies like Perfect allow app users to virtually try on makeup, hair color, hair accessories, and hand or wrist accessories. Could virtual try-ons lead to a somewhat less wasteful beauty industry? What downsides should we consider?
  • Users of the Perfect app can also receive virtual diagnoses for skin care issues and virtually “beautify” themselves with smoothed skin, erased blemishes, whitened teeth, brightened under-eye circles, and reshaped facial structures. How could advancements in beauty and anti-aging technology affect self-perception and mental health?
  • What are the best alternatives to animal testing within the beauty and anti-aging industry?
  • Is anti-aging purely a cosmetic pursuit? Could anti-aging technology provide other benefits?
  • Could people actually find a “cure” to aging? And could a cure to aging lead to longer lifespans?
  • How might longer human lifespans affect the Earth?

Geoengineering technology research topics

An umbrella term, geoengineering refers to large-scale technologies that can alter the earth and its climate. Typically, these types of technologies aim to combat climate change. Those searching for controversial technology topics should consider looking into this one.

  • What benefits can solar geoengineering provide? Can they outweigh the severe risks?
  • Compare solar geoengineering methods like mirrors in space, stratospheric aerosol injection, marine cloud brightening, and other proposed methods. How have these methods evolved? How might they continue to evolve?
  • Which direct air capture methods are most sustainable?
  • How can technology contribute to reforestation efforts?
  • What are the best uses for biochar? And how can biochar help or harm the earth?
  • Out of all the carbon geoengineering methods that exist or have been proposed, which should we focus on the most?

Creative and performing arts technology topics

While tensions often arise between artists and technology, they’ve also maintained a symbiotic relationship in many ways. It’s complicated. But of course, that’s what makes it interesting. Here’s another option for those searching for timely and timeless technology essay topics.

  • How has the relationship between art and technology evolved over time?
  • How has technology impacted the ways people create art? And how has technology impacted the ways people engage with art?
  • Technology has made creating and viewing art widely accessible. Does this increased accessibility change the value of art? And do we value physical art more than digital art?
  • Does technology complement storytelling in the performing arts? Or does technology hinder storytelling in the performing arts?
  • Which current issues in the creative or performing arts could potentially be solved with technology?

Cellular agriculture technology research topics

And another route for those drawn to controversial technology topics: cellular agriculture. You’ve probably heard about popular plant-based meat options from brands like Impossible and Beyond Meat. While products made with cellular agriculture also don’t require the raising and slaughtering of livestock, they are not plant-based. Cellular agriculture allows for the production of animal-sourced foods and materials made from cultured animal cells.

  • Many consumers have a proven bias against plant-based meats. Will that same bias extend to cultured meat, despite cultured meat coming from actual animal cells?
  • Which issues can arise from patenting genes?
  • Does the animal agriculture industry provide any benefits that cellular agriculture may have trouble replicating?
  • How might products made with cellular agriculture become more affordable?
  • Could cellular agriculture conflict with the notion of a “circular bioeconomy”? And should we strive for a circular bioeconomy? Can we create a sustainable relationship between technology, capitalism, and the environment, with or without cellular agriculture?

Transportation technology research topics

For decades, we’ve expected flying cars to carry us into a techno-utopia, where everything’s shiny, digital, and easy. We’ve heard promises of super fast trains that can zap us across the country or even across the world. We’ve imagined spring breaks on the moon, jet packs, and teleportation. Who wouldn’t love the option to go anywhere, anytime, super quickly? Transportation technology is another great option for those seeking widely discussed, mainstream technology essay topics.

  • Once upon a time, Lady Gaga was set to perform in space as a promotion for Virgin Galactic. While Virgin Galactic never actually launched the iconic musician/actor, soon, they hope to launch their first commercial flight full of civilians (who paid $450,000 a pop) on a 90-minute trip into the stars. And if you think that’s pricey, SpaceX launched three businessmen into space for $55 million in April 2022 (though with meals included, this is actually a total steal). So should we be launching people into space just for fun? What are the impacts of space tourism?
  • Could technology improve the way hazardous materials get transported?
  • How can the 5.9 GHz Safety Band affect drivers?
  • Which might be safer: self-driving cars or self-flying airplanes?
  • Compare hyperloop and maglev technologies. Which is better and why?
  • Can technology improve safety for cyclists?

Gaming technology topics

A recent study involving over 2000 children found links between video game play and enhanced cognitive abilities. While many different studies have found the impacts of video games to be positive or neutral, we still don’t fully understand the impact of every type of video game on every type of brain. Regardless, most people have opinions on video gaming. So this one’s for those seeking widely discussed, mainstream, and controversial technology topics.

  • Are different types or genres of video games more cognitively beneficial than others? Or are certain gaming consoles more cognitively beneficial than others?
  • How do the impacts of video games differ from other types of games, such as board games or puzzles?
  • What ethical challenges and safety risks come with virtual reality gaming?
  • How does a player perceive reality during a virtual reality game compared to during other types of video games?
  • Can neurodivergent brains benefit from video games in different ways than neurotypical brains?

Medical technology 

Advancements in healthcare have the power to change and save lives. In the last ten years, countless new medical technologies have been developed, and in the next ten years, countless more will likely emerge. Always relevant and often controversial, this final technology research topic could interest anyone.

  • Which ethical issues might arise from editing genes using CRISPR-Cas9 technology? And should human germline editing with this technology continue to be prohibited in the United States?
  • How has telemedicine impacted patients and the healthcare they receive?
  • Can neurotechnology devices potentially affect a user’s agency, identity, privacy, and/or cognitive liberty?
  • How could the use of medical 3-D printing continue to evolve?
  • Are patients more likely to skip digital therapeutics than in-person therapeutic methods? And can the increased screen time required by digital therapeutics impact mental health?

What do you do next?

Now that you’ve picked from this list of technology essay topics, you can do a deep dive and immerse yourself in new ideas, new information, and new perspectives. And of course, now that these topics have motivated you to change the world, look into the best computer science schools, the top feeders to tech and Silicon Valley, the best summer programs for STEM students, and the best biomedical engineering schools.


Mariya holds a BFA in Creative Writing from the Pratt Institute and is currently pursuing an MFA in writing at the University of California Davis. Mariya serves as a teaching assistant in the English department at UC Davis. She previously served as an associate editor at Carve Magazine for two years, where she managed 60 fiction writers. She is the winner of the 2015 Stony Brook Fiction Prize, and her short stories have been published in Mid-American Review , Cutbank , Sonora Review , New Orleans Review , and The Collagist , among other magazines.


A deadly new strain of mpox is raising alarm

Health officials warn it could soon spread beyond the Democratic Republic of Congo


Mpox is a viral infection typically found in parts of Africa and spread through contact with infected animals as well as within households. It causes severe fever, flu-like symptoms and a rash of pus-filled blisters across the body. In 2022 the disease, formerly known as monkeypox, spread around the world—cases turned up everywhere from Nigeria to America and Australia. A newly discovered strain of the virus, described by some researchers as the most dangerous yet, now threatens to spread beyond the Democratic Republic of Congo into neighbouring countries such as Rwanda, Burundi and Uganda.

Although much remains unknown about this strain, Jean Claude Udahemuka, a lecturer at the University of Rwanda who has been studying the outbreak, reports fatality rates of approximately 5% in adults and 10% in children. The virus exhibits different transmission patterns and disproportionately affects children. On June 25th the World Health Organisation emphasised the urgent need to deal with the surge of mpox cases in Africa.

The mpox outbreak in 2022 was caused by a different, less severe form of the virus, of the type “clade II”. The new strain was first identified in April in Kamituga, a gold-mining town in Congo’s South Kivu province. Researchers discovered it was a new lineage of the virus, distinct from previously known mpox strains, which they called “clade Ib”. The clade Ib strain has reportedly mutated to become more efficient at human-to-human transmission. This is causing concern about its potential for broader spread. Mpox has been circulating in humans for many years but it also exists in wild animals in several African countries and occasionally jumps to humans, for example through the consumption of bushmeat.

Unlike the mpox outbreak in 2022, which was driven by male-to-male sexual contact, the new strain is spreading through heterosexual contact, particularly among sex workers, who account for about 30% of recorded cases. Researchers estimate that the outbreak began around mid-September 2023. As of May 26th, 7,851 mpox cases and 384 deaths have been reported in Congo (though it is unclear how many are clade Ib infections, as there is likely to be more than one outbreak going on in the country).

In Congo the new strain is behaving quite differently from other strains of mpox, with cases also suggesting transmission through close (non-sexual) contact. Dr Udahemuka reports instances of household transmission as well as an outbreak in a school. It is also just as common in women as in men, and is reported to be causing miscarriages. The risk of international spread appears to be high, with the strain detected in towns near national borders. The new strain has also been found in sex workers from Rwanda and Uganda, a group that is normally quite mobile. With the arrival of the dry season facilitating greater migration, experts fear it is only a matter of time before the virus starts to emerge in neighbouring countries and then spreads worldwide through close contact at international airports.

In April the Africa Centres for Disease Control and Prevention called for an increase in surveillance and contact-tracing efforts. Some experts suggest it would be worth deploying the smallpox vaccine among high-risk groups such as sex workers and health-care workers, as it has been known in the past to offer cross-protection against mpox, which is a related virus. However, the effectiveness of the smallpox vaccine against this new strain remains unknown. Trudie Lang, a professor of global-health research at the University of Oxford, suggests that although there are uncertainties, the vaccine is safe, easy to use and worth trying. There are also trials under way of an antiviral drug known as tecovirimat, with results expected next year.

The situation in the region is complicated by war, displacement and food insecurity. Containment efforts are made harder still by the likelihood of asymptomatic cases, where individuals do not know they are infected but can nevertheless spread the virus to others. Dr Lang emphasises that this, along with the number of mild cases of the infection, are the biggest unknowns in the current outbreak. Preventing this new mpox strain from becoming another global health crisis requires swift and co-ordinated action. ■


This article appeared in the Science & technology section of the print edition under the headline “Breaking out”


National Academies Press: OpenBook

Computing and Communications in the Extreme: Research for Crisis Management and Other Applications (1996)

2 Technology: Research Problems Motivated by Application Needs

INTRODUCTION

Chapter 1 identifies opportunities to meet significant needs of crisis management and other national-scale application areas through advances in computing and communications technology. This chapter examines the fundamental research and development challenges those opportunities imply. Few of these challenges are entirely new; researchers and technologists have been working for years to advance computing and communications theory and technology, investigating problems ranging from maximizing the power of computation and communications capabilities to designing information applications that use those capabilities. What this discussion offers is a contemporary calibration, with implications for possible focusing of ongoing or future efforts, based on the inputs of technologists at the three workshops as well as a diverse sampling of other resources.

This chapter surveys the range of research directions motivated by opportunities for more effective use of technology in crisis management and other domains, following the same framework of technology areas—networking, computation, information management, and user-centered systems—developed in Chapter 1. Some of the directions address relatively targeted approaches toward making immediate progress in overcoming barriers to effective use of computing and communications, such as technologies to display information more naturally to people or to translate information more easily from one format to another. Others aim at gaining an understanding of coherent architectures and services that, when broadly deployed, could lead eventually to eliminating these barriers in a less ad hoc, more comprehensive fashion. Research on modeling the behavior of software systems composed from heterogeneous parts, for example, fits this category.

NETWORKING: THE NEED FOR ADAPTIVITY

Because of inherently unpredictable conditions, the communications support needed in a crisis must be adaptable; the steering committee characterizes the required capability as "adaptivity." Adaptivity involves making the best use of the available network capacity (including setting priorities for traffic according to needs and blocking out lower-priority traffic), as well as adding capacity by deploying and integrating new facilities. It also must support different kinds of services with fundamentally different technical demands, and to do so efficiently requires adaptivity. This section addresses specific areas for research in adaptive networks and describes the implications of a requirement for adaptivity; the importance of adaptivity at levels of information infrastructure above the network is discussed in other sections of this chapter.

Box 2.1 provides a sampling of networking research priorities discussed in the workshops. Although problems of networking that arise in national-scale applications are not entirely new, they require rethinking and redefinition because the boundaries of the problem domains are changing. Three issues that influence the scope of networking research problems are (1) scale, (2) interoperability, and (3) usability.

  • Scale. High-performance networking is often thought of in terms of speed and bandwidth. Speed is limited, of course, by the speed of light in the transmission medium (copper, fiber, or air), and individual data bits cannot move over networks any faster. However, the overall speed of networks can be increased by raising the bandwidth (making the pipes wider and/or using more pipes in parallel) and reducing delays at bottlenecks in the network. High-speed networks (which include both high-bandwidth conduits or "pipes" and high-speed switching and routing) allow larger streams of data to traverse the network from point A to point B in a given amount of time. This makes possible the transmission of longer individual messages such as data files, wider signals (such as full-motion video), and greater numbers of messages (such as data integrated from large numbers of distributed sensors) over a given path at the same time. Research challenges related to the operation of high-speed networks include high-speed switching, buffering, error control, and similar needs; these were investigated with significant progress in the Defense Advanced Research Projects Agency's (DARPA's) gigabit network testbeds.
    Speed and bandwidth are not the only performance challenges related to scale; national-scale applications must also scale in size. The number of information sources involved in applications may meet or even far exceed the size of the nation's or world's population. In theory, every information producer may be an information consumer and vice versa. Consequently, there is the need not only to reduce the amount of time needed for quantities of bits to be moved but, even at the limits of technology in increasing that speed, to transport more bits to more places. The set of people, workstations, databases, and computation platforms on networks is growing rapidly. Sensors are a potential source of even faster growth in the number of end points; as crisis management applications illustrate, networks may have to route bits to and from environmental sensors, seismometers, structural sensors on buildings and bridges, security cameras in stores and automated teller machines, and perhaps relief workers wearing cameras and other sensors on their clothes, rendering them what Vinton Cerf, of MCI Telecommunications Corporation, called "mobile multimodal sensor nets." Medical sensors distributed at people's homes, doctor's offices, crisis aid stations, and other locations may enable health care delivery in a new, more physically distributed fashion, but only if networks can manage the increased number of end points. In response, the communications infrastructure must be prepared to transport orders of magnitude more data and information and to handle orders of magnitude more separate addresses.

    A particular case, such as a response to a single disaster, may not involve linking simultaneously to millions or billions of end points, but because the specific points that will be linked are not known in advance, the networking infrastructure must be able to accommodate the full number of names and addresses. The numbering plan of the public switched telecommunications network provides for this capability for point-to-point (voice circuit) calling under normal circumstances. In the broader context of all data, voice, and video communications, the Internet's distributed Domain Name Servers manage the numerical addresses that identify end points and names associated with those addresses. The explosive growth in Internet usage has motivated a change in the standard, Internet Protocol version 6, to accommodate more addresses. 1 (A back-of-the-envelope sense of how much larger the new address space is appears just after this list.)
  • Interoperability. The need for successfully communicating across boundaries in heterogeneous, long-lived, and evolving environments cannot be ignored. In crisis management, voice communications are necessary but not sufficient; response managers and field workers must be able to mobilize data inputs and more fully developed information (knowledge) from an enormous breadth of existing sources—some of them years old—in many forms. Telemedicine similarly requires a mix of communications modes, although not always over as unpredictable an infrastructure as crises present. Interoperation is more than merely passing waveforms and bits successfully; interoperation among the supporting services for communications, such as security and access priority, is highly complex when heterogeneous networks interconnect.
  • Usability. The information and communications infrastructure is there to provide support to people, not just computers. In national-scale applications, nonexperts are increasingly important users of communications, making usability a crucial issue. What is needed are ways for people to use technology more effectively to communicate, not only with computers and other information sources and tools, but also with other people. Collaboration between people includes many modes of telecommunication: speech, video, passing data files to one another, sharing a consensus view of a document or a map. In crises, for example, the ability to manage the flow of communications among the people and machines involved is central to the enterprise and cannot be reserved solely to highly specialized technicians. Users of networks must be able to configure their communications to fit their organizational demands, not the reverse. This requirement implies far more than easy-to-use human-computer interfaces for network management software; the network itself must be able to adapt actively to its users and whatever information or other resources they need to draw upon.
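To give a back-of-the-envelope sense of the jump in address space noted in the Scale discussion above (these are the standard figures for 32-bit and 128-bit addressing, not numbers taken from the report):

```latex
2^{32} \approx 4.3 \times 10^{9} \ \text{addresses (the older 32-bit Internet Protocol)}
\qquad\text{vs.}\qquad
2^{128} \approx 3.4 \times 10^{38} \ \text{addresses (Internet Protocol version 6)}
```

The factor of roughly 2^96 (about 8 × 10^28) between the two is what makes it plausible to assign a separate address to every person, workstation, and sensor contemplated above.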

For networks to be adaptive, they must be able to function during or recover quickly from unusual and challenging circumstances. The unpredictable damage and disruption caused by a crisis constitute challenging circumstances for which no specific preparations can be made. Unpredicted changes in a financial or medical network, such as movement of customers or a changing business alliance among insurers and hospitals that exchange clinical records, may also require adaptive response. Mobility—of users, devices, information, and other objects in a network—is a particular kind of challenge that is relevant not only to crisis response, but also to electronic commerce with portable devices, telemedicine, and wireless inventory systems in manufacturing, among others. Whenever the nodes, links, inputs, and outputs on a network move, that network must be able to adapt to change.

Randy Katz, of the University of California, Berkeley, has illustrated the demands for adaptivity of wireless (or, more generally, tetherless) networks for mobile computing in the face of highly diverse requirements with the example of a multimedia terminal for a firefighter (Katz, 1995). The device might be used in many ways: to access maps and plan routes to a fire; examine building blueprints for tactical planning; access databases locating local fire hydrants and nearby fire hazards such as chemical plants; communicate with and display the locations of other fire and rescue teams; and provide a location signal to a central headquarters so the firefighting team can be tracked for broader operational planning. All of the data cannot be stored on the device (especially because some data may have to be updated during the operation), so real-time access to centrally located data is necessary. The applications require different data rates and different trade-offs between low delay (latency) and freedom from transmission errors. Voice communications, for example, must be real time but can tolerate noisy signals; users can wait a few seconds to receive a map or blueprint, but errors may make it unusable. Some applications, such as voice conversation, require symmetrical bandwidth; others, such as data access and location signaling, are primarily one way (the former toward the mobile device, the latter away from it).
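To make the shape of these requirements concrete, here is a minimal sketch (in Python, not from the report) of the kind of per-application profile table Katz's firefighter terminal implies; the class, the field names, and all of the numbers are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class QoSProfile:
    """Communication requirements for one application on a mobile crisis terminal."""
    name: str
    min_kbps: float        # sustained data rate needed (assumed value)
    max_latency_ms: float  # tolerable end-to-end delay (assumed value)
    loss_tolerant: bool    # can the application absorb transmission errors?
    symmetric: bool        # roughly equal traffic in both directions?

# Hypothetical numbers chosen only to illustrate the trade-offs described above.
FIREFIGHTER_TERMINAL = [
    QoSProfile("voice",              16,   150, loss_tolerant=True,  symmetric=True),
    QoSProfile("map/blueprint pull", 64,  5000, loss_tolerant=False, symmetric=False),
    QoSProfile("location beacon",     1,  2000, loss_tolerant=True,  symmetric=False),
    QoSProfile("team video feed",   384,   400, loss_tolerant=True,  symmetric=False),
]

def fits_on_link(profiles, link_kbps):
    """Crude admission check: is there enough aggregate bandwidth for all applications?"""
    return sum(p.min_kbps for p in profiles) <= link_kbps

if __name__ == "__main__":
    for p in FIREFIGHTER_TERMINAL:
        reliability = "tolerates loss" if p.loss_tolerant else "needs reliable delivery"
        print(f"{p.name:20s} {p.min_kbps:5.0f} kbps, <= {p.max_latency_ms:.0f} ms, {reliability}")
    print("all four fit on a 128 kbps link:", fits_on_link(FIREFIGHTER_TERMINAL, 128))
```

An adaptive network could consult such a table to decide which traffic to carry on which link and which traffic to degrade or shed first when bandwidth runs short.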

Research issues in network adaptivity fall into a number of categories, discussed in this section: self-organizing networks, network management, security, resource discovery, and virtual subnetworks. For networks to be adaptive, they must be easily reconfigurable either to meet different requirements from those for which they were originally deployed or to work around partial failures. In many cases of partial failures, self-configuring networks might discover, analyze, work around, and perhaps report failures, thereby achieving some degree of fault tolerance in the network. Over short periods, such as the hours after a disaster strikes, an adaptive network should restore services in a way that best utilizes the surviving infrastructure, enables additional resources to be integrated as they become available, and gives priority to the most pressing emergency needs. Daniel Duchamp, of Columbia University, observed, "Especially if the crisis is some form of disaster, there may be little or no infrastructure (e.g., electrical and telephone lines, cellular base stations) for two-way communication in the vicinity of an action site. That which exists may be overloaded. There are two approaches to such a problem: add capacity and/or shed load. Adding capacity is desirable but may be difficult; therefore, a mechanism for load shedding is desirable. Some notion of priority is typically a prerequisite for load shedding."
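Duchamp's observation that some notion of priority is a prerequisite for load shedding can be illustrated with a minimal sketch; the queue below, its capacity, and the example messages are invented for illustration and are not part of the report:

```python
import heapq

class LoadSheddingQueue:
    """Keep only the highest-priority messages when capacity is exceeded.

    Lower priority number = more important (0 might mean life-safety traffic).
    When the queue is full, a newcomer displaces the least important queued
    message only if it strictly outranks it; otherwise the newcomer is shed.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []   # min-heap keyed on -priority: the root is the least important entry
        self._count = 0   # insertion counter, used only as a stable tie-breaker

    def offer(self, priority, message):
        """Try to enqueue a message; return True if kept, False if shed."""
        self._count += 1
        entry = (-priority, self._count, message)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
            return True
        if -priority > self._heap[0][0]:          # strictly more important than the root
            heapq.heapreplace(self._heap, entry)  # shed the least important queued message
            return True
        return False                              # shed the newcomer

    def drain(self):
        """Return queued messages, most important first, oldest first within a priority."""
        return [m for _, _, m in sorted(self._heap, key=lambda e: (-e[0], e[1]))]

# Example: a capacity-3 link sheds routine traffic in favor of urgent messages.
q = LoadSheddingQueue(capacity=3)
for prio, msg in [(2, "status ping"), (0, "medevac request"), (3, "routine log"),
                  (1, "road closure"), (0, "fire spread alert")]:
    q.offer(prio, msg)
print(q.drain())  # ['medevac request', 'fire spread alert', 'road closure']
```

Real networks would shed load per flow or per packet rather than per message, but the principle is the same: when capacity cannot be added, the lowest-priority traffic is dropped first.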

Networks can be adaptive not only to sharp discontinuities such as crises, but also to rapid, continuous evolution over a longer time scale, one appropriate to the pattern of growth of new services and industries in electronic commerce or digital libraries. The Internet's ability to adapt to and integrate new technologies, such as frame relay, asynchronous transfer mode (ATM), and new wireless data services, among many others, is one example.

Self-Organization

Self-organizing networks facilitate adaptation when the physical configuration or the requirements for network resources have changed. Daniel Duchamp cast the problem in terms of an alternative to static operation:

Most industry efforts are targeted to the commercial market and so are focused on providing a communications infrastructure whose underlying organization is static (e.g., certain sites are routers and certain sites are hosts, always). Statically organized systems ease the tasks of providing security and handling accounting/billing. Most communication systems are also pre-optimized to accommodate certain traffic patterns; the patterns are in large part predictable as a function of intra- and inter-business organization. It may be difficult or impossible to establish and maintain a static routing and/or connection establishment structure, because (1) hosts may move relative to each other, and (2) hosts, communication links, or the propagation environment may be inherently unstable. Therefore, a dynamically "self-organizing" routing and/or connection establishment structure is desirable.

Crisis management provides a compelling case for the need of networks to be self-organizing in order to create rapidly an infrastructure that supports communication and information sharing among workers and managers operating in the field. Police, fire, citizen's band, and amateur radio communications are commonly available in crises and could be used to set up a broadcast network, but they provide little support to manage peer-to-peer communications and make efficient use of the available spectrum. Portable, bandwidth-efficient peer-to-peer network technologies would allow information systems to be set up to support communications for relief workers. The issues of hardware development, peer-to-peer networking, and multimedia support are not limited to crisis management; they may be equally important to such fields as medicine and manufacturing (e.g., in networking of people, computers, and machine tools within a factory). Thus, research and development on self-organizing networks may be useful in the latter fields as well.

Rajeev Jain, of the University of California, Los Angeles, identified two main deficiencies in the communications and networking technologies available in a
situation where relief officials arrive carrying laptop computers: (1) portable computing technology is not as well integrated with wireless communications technology as it should be, and (2) wireless communications systems still often rely on a wireline backbone for networking. 2 These factors imply that portable computers cannot currently be used to set up a peer-to-peer network if the backbone fails; radio modem technology has not yet advanced to a point where it can provide an alternative. 3 In mobile situations, people using portable computers need access to a wireline infrastructure to set up data links with another computer even if they are in close proximity. In addition, portable cellular phones cannot communicate with each other if the infrastructure breaks down. Jain concluded that both of these problems must be solved by developing technologies that better integrate portable computers with radio modems and allow peer-to-peer networks to be set up without wireline backbones, by using bandwidth-efficient transmission technologies.

Peer-to-peer networking techniques involve network configuration, multiple access protocols, and bandwidth management protocols. Better protocols need to be developed in conjunction with an understanding of the wireless communications technology so that bandwidth is utilized efficiently and the overhead of self-organization does not reduce the usable bandwidth drastically (the current situation in packet radio networks). Bandwidth is at a premium because of the large volume of information required in a crisis and because, although data and voice networks can be deployed using portable wireless technology, higher and/or more flexibly usable bandwidths are needed to support video communication. For example, images can convey vital information much more quickly than words, which can be important in crises or remote telemedicine. If paramedics need to communicate a diagnostic image of a patient (such as an electrocardiogram or x-ray) to a physician at a remote site and receive medical instructions, the amount of data that must be sent exceeds the capabilities of most wireless data communications technologies for portable computers. Technologies are now emerging that support data transmission rates in the tens of kilobits per second, which is sufficient for still pictures but not for full-motion video of more than minimal quality. A somewhat higher bandwidth capability could support a choice between moderate-quality full-motion video and high-quality images at a relatively low image or frame rate (resulting in jerky apparent motion). Another example relates to the usefulness of broadcasting certain kinds of data, such as full-motion video images of disaster conditions from a helicopter to workers in the field; traffic helicopters of local television stations often serve this function. However, if terrestrial broadcast capabilities are disabled, it could be valuable to use a deployable peer-to-peer network capability to disseminate such pictures to many recipients, potentially by using multicast technology.

The statement of James Beauchamp, of the U.S. Commander in Chief, Pacific Command, quoted in Chapter 1, underscored the low probability that all individuals or organizations involved in a crisis response will have interoperable
radios (voice or data), especially in an international operation or one in which groups are brought together who have not trained or planned together before. Self-organizing networks that allowed smooth interoperation would be very useful in civilian and military crisis management and thus could have a high payoff for research. The lack of such technologies may be due partly to the absence of commercial applications requiring rapid configuration of wireless communications among many diverse technologies.

One purpose of the Department of Defense's (DOD's) Joint Warrior Interoperability Demonstrations (JWIDs; discussed in Chapter 1 ) is to test new technologies for bridging gaps in interoperability of communications equipment. The SpeakEasy technology developed at Rome Laboratory, for example, is scheduled to be tested in an operational exercise in the summer of 1996 during JWID '96. 4 SpeakEasy is an effort sponsored by DARPA and the National Security Agency to produce a radio that can emulate a multitude of existing commercial and military radios by implementing previously hardware-based waveform-generation technologies in software. Such a device should be able to act as if it were a high-frequency (HF) long-range radio, a very high frequency (VHF) air-to-ground radio, or a civilian police radio. Managing a peer-to-peer network of radios that use different protocols, some of which can emulate more than one protocol, is a complex problem for network research that could yield valuable results in the relatively near term.

Network Management

Network management helps deliver communications capacity to whoever may need it when it is needed. This may range from more effective sharing of network resources to priority overrides (blocking all other users) as needed. Network management schemes must support making decisions and setting priorities; it is possible that not all needs will be met if there simply are not enough resources, but allocations must be made on some basis of priority and need. Experimentation is necessary to understand better the architectural requirements with respect to such aspects as reliability, availability, security, throughput, connectivity, and configurability.

A network manager responding to a crisis must determine the state of the communications infrastructure. This means identifying what is working, what is not, and what is needed and can be provided, by taking into account characteristics of the network that can and should be maintained. For example, the existing infrastructure may provide some level of security. Then it must be determined whether it is both feasible and reasonable to continue to provide that level of security. Fault tolerance and priorities for activities are other characteristics of the network that must similarly be resolved.

In addition to network management tools to assess an existing situation, tools are needed to incorporate new requirements into the existing structure. For
example, there may be great variability in the direction of data flow into and out of an area in which a crisis has occurred, such as between command posts and field units. During some phases, remote units may be used to collect data for transmission to centralized or command facilities, which in turn need only lower-bandwidth communication back to the mobile units.

Adaptive network management can help increase the capability of the network elements, for example, by enabling communications and computation to run efficiently with respect to power consumption. Randy Katz has observed that wireless communication removes only one of the tethers on mobile computing; the other tether is electrical power (Katz, 1995). Advances in lightweight, long-lived battery technology and hardware technologies, such as low-power circuits, displays, and storage devices, would improve the performance of portable computers in a mobile setting. A possibility that is related directly to network management is the development of schemes that adapt to specific kinds of communications needs and incorporate broadcast and asymmetric communications to reduce the number and length of power-consuming transmissions by portable devices. For example, Katz observes that if a mobile device's request for a particular piece of information need not be satisfied immediately, the request can be transmitted at low power and low bandwidth. The response can be combined with the responses to other mobile devices and broadcast periodically to all of the units together at high power and bandwidth from the base stations. If a particular piece of information, such as weather data, is requested repeatedly by many users, it can be rebroadcast frequently to eliminate the need for remote units to transmit requests.
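
The following Python sketch, offered only as an illustration and not drawn from the workshop material, suggests one way the asymmetric request/broadcast scheme described above might be structured at a base station; the class, method, and parameter names (and the popularity threshold) are hypothetical.

    # Mobile units queue inexpensive uplink requests; the base station batches
    # the responses and broadcasts them periodically, repeating popular items
    # (e.g., weather data) so that many units never need to transmit at all.

    from collections import Counter, deque

    class BaseStationScheduler:
        def __init__(self, popular_threshold=3):
            self.pending = deque()           # (unit_id, item) requests awaiting service
            self.request_counts = Counter()  # popularity of each requested item
            self.popular_threshold = popular_threshold

        def receive_request(self, unit_id, item):
            """Called when a mobile unit sends a low-power, low-bandwidth request."""
            self.pending.append((unit_id, item))
            self.request_counts[item] += 1

        def broadcast_cycle(self, lookup):
            """Periodic high-power downlink: answer queued requests in one batch,
            and rebroadcast items popular enough that units can simply listen."""
            batch = {}
            while self.pending:
                unit_id, item = self.pending.popleft()
                batch.setdefault(item, set()).add(unit_id)
            rebroadcast = [i for i, n in self.request_counts.items()
                           if n >= self.popular_threshold]
            return {
                "responses": {item: lookup(item) for item in batch},
                "rebroadcast": {item: lookup(item) for item in rebroadcast},
            }

    # Example: three units ask for the same weather product at low power.
    sched = BaseStationScheduler()
    for unit in ("medic-1", "fire-7", "rescue-3"):
        sched.receive_request(unit, "weather/radar")
    frame = sched.broadcast_cycle(lambda item: f"<data for {item}>")
    print(frame["rebroadcast"].keys())   # weather/radar now rides the broadcast channel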

Priority policy is a critical issue in many applications; the need for rapid deployment and change in crisis management illustrates the issue especially clearly. Priority policy is the set of procedures and management principles implemented in a network to allocate resources (e.g., access to scarce communications bandwidth) according to the priority of various demands for those resources. Priority policy may be a function of the situation, the role of each participant, their locations, the content being transmitted, and many other factors. The dynamic nature of some crises may be reflected in the need for dynamic reassignment of such priorities. The problem is that one may have to change the determination of which applications (such as life-critical medical sensor data streams) or users (such as search and rescue workers) have priority in using the communications facilities. Borrowing resources in a crisis may require reconfiguring communications facilities designed for another use, such as local police radio. A collection of priority management issues must be addressed:

  • Who has the authority to make a determination about priorities?
  • How are priorities determined?
  • How are priorities configured? Configuration needs to be secure, but also user friendly, because the people performing it may not be network or communications experts.
  • How are such priorities provided by the network and related resources?
  • How will the network perform under the priority conditions assigned?

The last is a particularly difficult problem for network management. Michael Zyda, of the Naval Postgraduate School, identified predictive modeling of network latency as a difficult research challenge for distributed virtual environments, for which realistic simulation experiences set relatively strict limits on the latency that can be tolerated, implying a need for giving priority to those data streams.

One suggestion arising in the workshops was a priority server within a client-server architecture to centralize and manage evolving priorities. This approach might allow for the development of a multilevel availability policy analogous to a multilevel security policy. A dynamically configurable mechanism for allocating scarce bandwidth on a priority basis could enable creation of the "emergency lane" over the communications infrastructure that crisis managers at the workshops identified as a high-priority need. If such mechanisms were available, they could be of great use in managing priority allocation in other domains such as medicine, manufacturing, and banking. In situations that are not crises, however, one might be able to plan ahead for changes in priority, and it is likely that network and communications expertise might be more readily available.

Victor Frost, of the University of Kansas, discussed the challenges of supporting diverse priority requirements within a network that integrates voice with other services:

Some current networks use multilevel precedence (MLP) to ensure that important users have priority access to communications services. The general idea for MLP-like capabilities is that during normal operations the network satisfies the performance requirements of all users, but when the network is stressed, higher-priority users get preferential treatment. For voice networks, MLP decisions are straightforward: accept, deny, or cut off connections.

However, as crisis management starts to use integrated services (i.e., voice, data, video, and multimedia), MLP decisions become more complex. For example, in today's systems an option is to drop low-precedence calls. In a multimedia network, not all calls are created equal. For example, dropping a low-precedence voice call would not necessarily allow for the connection of a high-precedence data call. MLP-like services should be available in future integrated networks. Open issues include initially allocating and then reallocating network resources in response to rapidly changing conditions in an MLP context. In addition, the infrastructure must be capable of transmitting MLP-like control information (signaling) that can be processed along with other network signaling messages. There is a need to develop MLP-like services that match the characteristics of integrated networks.
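
As a rough illustration of the kind of MLP decision Frost describes, the following Python sketch (not part of the workshop material) shows an admission test in which a high-precedence connection may require preempting several lower-precedence connections of different media types before enough bandwidth is freed. The field names, bandwidths, and precedence levels are hypothetical.

    def admit(new_call, active_calls, capacity_kbps):
        """Return (admitted, calls_to_drop). Calls are dicts with 'bw' in kbps
        and 'precedence' (a higher number means a more important call)."""
        in_use = sum(c["bw"] for c in active_calls)
        free = capacity_kbps - in_use
        if new_call["bw"] <= free:
            return True, []                       # fits without preemption
        shortfall = new_call["bw"] - free
        # Consider only strictly lower-precedence calls, least important first.
        candidates = sorted((c for c in active_calls
                             if c["precedence"] < new_call["precedence"]),
                            key=lambda c: (c["precedence"], c["bw"]))
        dropped, freed = [], 0
        for c in candidates:
            if freed >= shortfall:
                break
            dropped.append(c)
            freed += c["bw"]
        if freed >= shortfall:
            return True, dropped
        return False, []                          # deny: preemption cannot free enough

    # Dropping one low-precedence 16-kbps voice call alone cannot make room for
    # a 384-kbps high-precedence video stream; several connections must go.
    active = [{"id": "voice-1", "bw": 16, "precedence": 1},
              {"id": "voice-2", "bw": 16, "precedence": 1},
              {"id": "data-1", "bw": 256, "precedence": 2}]
    print(admit({"id": "video-hq", "bw": 384, "precedence": 4}, active, 400))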

An ability to configure priorities, however, will require a much better understanding of what users actually need. Victor Frost also observed,

Unfortunately, defining application-level performance objectives may be elusive. For example, users would always want to download a map or image instantaneously, but would they accept a [slower] response? A 10-minute response time would clearly be unacceptable for users directly connected to a high-speed network; but is this still true for users connected via performance-disadvantaged wireless links? . . . Performance-related deficiencies of currently available computing and communications capabilities are difficult to define without user-level performance specifications.

Security

Security is essential to national-scale applications such as health care, manufacturing, and electronic commerce. It also is important to crisis management, particularly in situations where an active adversary is involved or sensitive information must be communicated. Many traditional ideas of network security must be reconsidered for these applications in light of the greater scale and diversity of the infrastructure and the increased role of nonexperts.

To begin with, the nature of security policies may evolve. Longer-term research on new models of composability of policies will be needed as people begin to communicate more frequently with other people whom they do not know and may not fully trust. In the shorter term, new security models are needed to handle the new degree of mobility of users and possibly organizations. The usability, or user acceptability, of security mechanisms will assume new importance, especially for mechanisms that inconvenience legitimate use too severely. New perspectives may be required on setting the boundaries of security policies on grounds other than physical location.

Composability of Security Policies

As organizations and individuals form and re-form themselves into new and different groupings, their security policies must also be adapted to the changes. Three reorganization models—partitioning, subsumption, and federation—may be used, and each may engender changes in security policies. The following are simplistic descriptions, but they capture the general nature of changes that may occur. Partitioning involves a divergence of activity where unanimity or cooperation previously existed. In terms of security, partitioning does not appear to introduce a new paradigm or new problems. In contrast, subsumption and federation both involve some form of merging or aligning of activities and policies. Subsumption implies that one entity plays a primary role, while at least one other assumes a secondary role. Federation, on the other hand, implies an equal partnering or relationship. Both subsumption and federation may require that
security policies be realigned, while perhaps seeking ways to continue to support previous policies and mechanisms. Both models of joining may be found in crisis management, as when organizations brought in from outside must interact with, or assume control of, radio networks provided by local emergency services agencies.

If policies and mechanisms are to be subsumed, the problems for security become significantly more difficult to address than in the past. In this case, if a unified top-level policy is to be enforced that is a composite of several others, interfaces among them—or, more abstractly, definitions of the policies, abstraction, and modularity—will be necessary to allow for exchange in controlled and well-known ways. It is only through such formal definitions that the composition of such activities can be sufficiently trustable to allow for the provision of a top-level composite of security policies and mechanisms.

A perhaps even more difficult problem is peer-level interaction within a federated model, in which neither domain's security policy takes clear precedence over the other. Such interaction will become more common as alliances are formed among organizations and individuals who are widely distributed. As virtual networks are set up in conjunction with temporary relationships, there is a continued need for security during any coordinated activities within the affiliation. Thus, the security mechanisms required by each participant must collaborate in ways that do not impede the coordination of their activities. Since there is no domination model in this case, coordination and compromise may be necessary. Again, these problems will be helped by research that provides better modularity and abstraction in order to formalize the relationships and interactions.

Mobility of Access Rights

In many, perhaps all, of the national-scale applications, users can be expected to move from one security policy domain or sphere to another and have a need to continue to function (e.g., carrying a portable computer from the wireless network environment of one's employer into that of a customer, supplier, or competitor). In some cases, the mobile user's primary objective will be to interact with the new local environment; in others, it will be to continue activities within the original or home domain. Most likely, the activities will involve some of both. In the first case, the user can be given a completely new identity with accruing security privileges in the new environment; alternatively, an agreement can be reached between the two domains, such that the new one trusts the old one to some degree, using that as the basis for any policy constraints on the user. This requires reciprocal agreements of trust between any relevant security domains. It is even possible to envisage cascading such trust, in either a hierarchical trust model or something less structured in which a mesh of trust develops with time,
supporting transitive trust among domains. There is significant work to be done in such an area.
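
The following Python sketch, included only as an illustration, shows one simple way the cascading of trust described above might be expressed: a verifying domain accepts a credential issued elsewhere if a chain of pairwise trust agreements links the two, perhaps bounded in length. The domain names and the depth limit are hypothetical.

    from collections import deque

    # Directed pairwise trust agreements: each key trusts the domains in its set.
    TRUST = {
        "county-ems": {"state-oes"},
        "state-oes": {"fema", "red-cross"},
        "fema": {"dod-jtf"},
    }

    def accepts_credential(verifier, issuer, max_chain=3):
        """Breadth-first search over trust agreements, bounded by max_chain hops."""
        if verifier == issuer:
            return True
        frontier = deque([(verifier, 0)])
        seen = {verifier}
        while frontier:
            domain, depth = frontier.popleft()
            if depth == max_chain:
                continue
            for trusted in TRUST.get(domain, ()):
                if trusted == issuer:
                    return True
                if trusted not in seen:
                    seen.add(trusted)
                    frontier.append((trusted, depth + 1))
        return False

    print(accepts_credential("county-ems", "dod-jtf"))               # True, via two intermediaries
    print(accepts_credential("county-ems", "dod-jtf", max_chain=1))  # False: the chain is too long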

Mobile users who want to connect back to their home domain from a foreign one also have several alternatives. It is likely that the local domain will require some form of authentication and authorization of users. The remote domain might either accept that authentication, based on some form of mutual trust between the domains, or require separate, direct authentication and authorization from the user. In addition, such remote access may raise problems of exposure of activities, such as lack of privacy, greater potential for masquerading and spoofing, or denial of service, because all communication must now be transported through environments that may not be trusted.

If the user is trying to merge activities in the two environments, it is likely that a merged authentication and authorization policy will be the only rational solution. It is certainly imaginable that such a merged or federated policy might still be implemented using different security mechanisms in each domain, as long as the interfaces to the domains are explicit so that a composite can be created.

Usability of Security Mechanisms

Usability in a security context means not only that both system and network security must be easy for the end users (such as rescue workers or bank customers and officers) to use, but also that the exercise of translation from policy into configuration must be achievable by people in the field who are defining the policies and who may not be security experts. If security systems cannot be used and configured easily by people whose main objectives are completing other tasks, the mechanisms will be subverted. According to Daniel Duchamp, "Two obvious points . . . need considerable work. First, for disasters especially, technology should intrude as little as possible on the consciousness of field workers. Second, all goals should be achieved without the need for system administrators." Users often do not place a high priority on the security of the resources they are using, because the threats do not weigh heavily against the objective of achieving some other goal. Thus, the cost (including inconvenience) to these users must be commensurate with the perceived level of utility. As Richard Entlich, of the Institute for Defense Analyses, observed, "Creating a realistic way of providing security at each node involves not only technical issues, but a change in operational procedures and user attitudes." Ideally, technological designs and approaches should reinforce those needed changes on the part of users.

Unfortunately, the problems of formulating security policy are even more difficult to address with computational and communications facilities. Policy formation, especially when it involves merging several different security domains, is extremely complex. It must be based on the tasks to be achieved, the probability of subversion if security policy constraints are too obstructive, and
the capabilities of the mechanisms available, especially when merging of separate resources is necessary.

Discovery of Resources

Crisis management highlights the need for rapid resource discovery. Resources may be electronic, such as information or services, or they may be more tangible, such as computers, printers, and wires used in networks. First, one must determine what resources are needed. Then, perhaps with help from information networks, one might be able to discover which resources are local and, if those are inadequate, whether some remote resources may be able to address an otherwise insoluble problem. An example of this latter situation would be finding an expert on an unusual bacterial infection that appears to have broken out in a given location.

In crises, some of the tools mentioned above for network management and reorganization in the face of partial failures may also help to identify which local computing, communications, and networking resources are functional. If high-performance computing is necessary for a given task, such as additional or more detailed weather forecasting or geological (earthquake) modeling, discovering computing and network facilities that are remote and accessible via adequately capable network connections might be invaluable.

Virtual Subnetworks

Another architectural requirement common to several of the application areas is the ability to create virtual subnetworks. The virtual "subnet" feature allows communities to be created for special purposes. For example, in manufacturing, the creation of a virtual subnet for a company and its subcontractors might simplify the building of applications by providing a shared engineering design tool. It would allow a global or national corporation to operate as though it had a private subnet. It might provide similar features for any community, such as a network of hospitals that has a need to exchange patient records.

A virtual subnet will appear to applications and supporting software as if communications are happening on a separate network that actually is configured within a larger one. In essence, the virtual subnet capability allows a policy or activity boundary concept to be made evident in the network model as a subnet. At present, virtual subnets are generally used to reflect administrative domains in which a single consistent set of usage and access policies is enforced.

The possibility of defining a subnet for crisis management in terms of security and priority access has already been suggested. Another potentially useful way to define a boundary around a subnet would be to control the flow of information passing into that subnet by using priority-based filtering
mechanisms. This would be done to reserve scarce bandwidth and storage within the subnet for only the most valuable information.

In order to make virtual subnets useful, there must be automated ways of creating them within the Internet or the broader national or global information infrastructure. This implies understanding the policies to be enforced on such a subnet with respect to, for example, usage and security, and being able to both recognize and requisition resources to create and manage subnets. It may mean provision of various services within the network in such a way that those services can be provided effectively to subnets. Examples of these might be trusted encryption services, firewalls, protocol conversion gateways, and others. A virtual subnet must have all the characteristics of a physical subnet, while allowing its members to be widely distributed physically. 5
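
One way to make such requirements concrete, purely as an illustrative sketch and not a design proposed at the workshops, is a declarative description of a virtual subnet that automated tools could act on: its members, its usage and security policy, the in-network services to be provisioned, and a priority filter of the kind mentioned above. All field names and values below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class VirtualSubnetSpec:
        name: str
        members: set                 # hosts/sites admitted to the subnet
        required_services: list      # e.g., "encryption", "firewall", "protocol-gateway"
        priority_filter: int = 0     # drop inbound traffic below this priority
        usage_policy: str = "default"

        def admits(self, host, priority):
            """Would this subnet accept traffic from `host` at `priority`?"""
            return host in self.members and priority >= self.priority_filter

    crisis_net = VirtualSubnetSpec(
        name="flood-response",
        members={"eoc.county.gov", "anchor-desk.mil", "field-unit-12"},
        required_services=["encryption", "firewall"],
        priority_filter=3,           # reserve scarce bandwidth for the most valuable traffic
    )
    print(crisis_net.admits("field-unit-12", priority=5))   # True
    print(crisis_net.admits("unknown-host", priority=9))    # False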

By extending application- or user-level community boundary models down into the network, one might create a more robust, survivable environment in which to build applications. Both advances in technology development and more fundamental research on architectural models for subnets are needed to automate support for creating such subnets in real time and on a significantly larger scale than is currently supported.

COMPUTATION: DISTRIBUTED COMPUTING

The networked computational and mass storage resources needed for national-scale application areas are necessarily heterogeneous and geographically distributed. A geographically remote, accessible metacomputing resource, as envisioned in the Crisis 2005 scenario in Chapter 1, implies network-based adaptive links from available people (using portable computers and communications, such as personal digital assistants) to large-scale computation on high-performance computing platforms. The network connecting these computing and storage resources is the enabling technology for what might be termed a network-intensive style of computation. Allen Sears, of DARPA, summarized this idea as "the network is the computer"; that is, computation to address a user's problem may routinely take place out on a network, somewhere other than the user's location.

Crisis management is a good example of a network-intensive application. People responding to crises could benefit from larger-scale mass storage and higher computation rates than are typically available in the field, for example, to gain the benefits of high-performance simulation performed away from the crisis location. 6 The technical implication of network-intensive computing for crisis management is not merely a massive computational capability, but rather an appropriately balanced computing and communications hierarchy. This would integrate computing, storage, and data communications across a scale from lightweight portable computers in the field to remote, geographically distributed high-performance computation and mass storage systems for database and simulation
support. Research in many areas, such as mobility and coordination of resources and management of distributed computing, is needed to achieve this balanced hierarchy.

Modeling and Simulation

High-performance computation may be used to simulate complex systems, both natural and man-made, for many applications. Networks can make high-performance computation resources remotely accessible, enabling sharing of expensive resources among users throughout the nation. Applications of modeling and simulation to crisis management include the prediction of severe storms, flooding, wildfire evolution, toxic liquid and gas dispersion, structural damage, and other phenomena. As discussed in Chapter 1 , higher-quality predictions than are available today could save lives and reduce the cost of response significantly.

Grand Challenge activities under the High Performance Computing and Communications Initiative (HPCCI) have been a factor in advancing the state of the art of modeling and simulation (CSTB, 1995a; OSTP, 1993, 1994a; NSTC, 1995). The speed of current high-performance simulation for many different applications, however, continues to need improvement. Lee Holcomb, of the National Aeronautics and Space Administration (NASA), observed, for example, that it is currently infeasible for long-term climate change models to involve the coupling of ocean and atmospheric effects, because of inadequate speed of the models for simulating atmospheric effects (which change much more rapidly than ocean effects and therefore must be modeled accordingly). In addition, whereas fluid dynamics models are able to produce very nice pictures of airflow around aircraft wings and to calculate lift, they are not able to model drag accurately, which is the other basic flight characteristic required in aircraft design. Holcomb summarized, "We have requirements that go well beyond the current goals of the High Performance Computing Program."

The urgency of crises imposes a requirement that may pertain more strictly in crisis management than in other applications such as computational science: the ability to run simulations at varying scales of resolution is crucial to being able to make appropriate trade-offs between the accuracy of the prediction and the affordability and speed of the response. Kelvin Droegemeier, of the University of Oklahoma, described work on severe thunderstorm modeling at the university's Center for the Analysis and Prediction of Storms (CAPS), including field trials in 1995 that demonstrated the ability to generate and deliver high-performance modeling results within a time frame useful to crisis managers. For areas within 30 km of a Doppler radar station, microscale predictions have been made at a 1-km scale and can predict rapidly developing events, such as microbursts, heavy rain, hail, and electrical buildup, on 10- to 30-minute time scales. At scales of 1 to more than 10 km, the emergence and intensity of new thunderstorms, cloud ceiling, and visibility have been predicted up to two hours in advance, and
the evolution (e.g., movement, change in intensity) of existing storms has been forecast three to six hours in advance. Rescaling the model thus allows greater detail to be generated where it is most needed, in response to demands from the field. 7

As Droegemeier noted, time is critical for results to be of operational value:

These forecasts are only good for about six hours. This means you have to collect the observational data, primarily from Doppler radars; retrieve from these data various quantities that cannot be observed directly; generate an initial state for the model; run the model; generate the forecast products; and make forecast decisions in a time frame of 30 to 60 minutes because otherwise, you have eaten up a good portion of your forecast period. It is a very timely problem that absolutely requires high-performance computing and communications. If you can't predict the weather significantly faster than it evolves, then the prediction is obviously useless.

When high performance is required, adding complexity at various scales of prediction may not be worth the cost in time or computer resource usage. For example, the CAPS storm model could predict not only the presence of hail, but the average size of the hailstones; however, the cost is probably beyond what one would be willing to pay computationally to have that detail in real time. Because the model's performance scales with added computing capacity, more detailed predictions can in principle be made if enough computational resources can be coordinated to perform them. 8

Crisis managers also require a sense of the reliability of the data they work with (the "error bars" around simulation results). To achieve this, an ensemble of simulations may be run using slightly different initial conditions. Ensemble simulation is especially important for chaotic phenomena, where points of great divergence from similar input conditions may not be readily apparent. Ensemble simulation is ideally suited for running in parallel, because the processes are essentially identical and do not communicate with or influence each other. The difficult problem is identifying how to alter the initial conditions. As Droegemeier noted, Monte Carlo simulation optimizes these variations to give the best results, but depends on a knowledge of the natural variability of the modeled phenomena that is not always available (e.g., in the case of severe thunderstorm phenomena at the particular scales CAPS is modeling). The infrequency of large crises makes it difficult to gain this understanding of natural variability in some cases. More broadly, it impedes verifying models of extraordinary events. As Robert Kehlet, of the Defense Nuclear Agency, said, "We are in the awkward position of not wanting to have to deal with a disaster, but needing a disaster to be able to verify and validate our models."
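
The following Python sketch, offered only as an illustration, shows the structure of an ensemble run: a toy stand-in for a forecast model is executed in parallel over slightly perturbed initial conditions, and the spread of the outcomes provides rough error bars. The model, the perturbation scheme, and all parameter values are placeholders, not the CAPS storm model.

    import random
    from multiprocessing import Pool

    def toy_model(initial_state):
        """Stand-in for an expensive forecast model; returns a single diagnostic."""
        x = initial_state
        for _ in range(1000):
            x = 3.7 * x * (1.0 - x)     # a chaotic map: small input changes diverge
        return x

    def run_ensemble(base_state, n_members=16, perturbation=1e-4, seed=0):
        rng = random.Random(seed)
        members = [min(max(base_state + rng.uniform(-perturbation, perturbation), 0.0), 1.0)
                   for _ in range(n_members)]
        with Pool() as pool:            # members are independent, so they parallelize trivially
            outcomes = pool.map(toy_model, members)
        mean = sum(outcomes) / len(outcomes)
        spread = max(outcomes) - min(outcomes)
        return mean, spread

    if __name__ == "__main__":
        mean, spread = run_ensemble(0.2)
        print(f"ensemble mean {mean:.3f}, spread {spread:.3f}")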

Besides resources to perform the computations, remote modeling and simulation also implies the need for adequate network capability to transport input data to the model and distribute results to the scene. Input data collection requirements may be most demanding if large amounts of real-time sensor data are
involved (see the section "Sensors and Data Collection" below). Sensors will ideally send compressed digitized data in packets that are compatible with existing high-speed networks. However, the observation by Egill Hauksson, of the California Institute of Technology, that high-speed network costs remain too high for nonexperimental applications suggests that additional network research and deployment could be necessary to make this a practical reality for crisis management.

Don Eddington, of the Naval Research and Development Center, outlined a model, tested in the JWID exercises, for performing and/or integrating the results of simulations at "anchor desks." Anchor desks, located away from the front line of a crisis, could be staffed with people expert at running and interpreting simulations, who could disseminate results to the field when conditions warrant (e.g., when a major change in the situation is detected). This model reduces the amount of network traffic below that required by a full-time connection from the field to the remote high-performance computing platform. Results can often be disseminated simply as a map or picture of the simulation output.

However, if information is to be integrated with other data available to workers at the scene, or if complex three-dimensional visualizations of the results are called for, a picture or map may not suffice and a complete data file must be sent. (Needs for information integration and display are discussed in the next two sections.) This implies higher-bandwidth connections and greater display capabilities on the front line user's platform. Ultimately, finding the optimal balance of resources for various kinds of crises will require experimentation in training exercises and actual deployments. It should also be influenced by social science research on how crisis managers actually use information provided to them.

Mobility of Computation and Data

Efficiency and performance typically demand that a computation be carried out near its input and output data. Although the traditional solution is to move the data to the computation, sometimes the computation requires so much data so quickly that it is better to move the computation to the data. Since the appropriate software may not already reside on the target system, an executable or interpretable program may have to be transmitted across the network and executed remotely. This extends the meaning of the term relocatable beyond the ability of programmers to port code easily from one platform to another to the ability of code to operate in a truly platform-independent manner in response to urgent demands.

In some circumstances, achieving high performance requires that the application software be optimized specifically for the machine on which it is to operate, which usually requires recompilation of the application. For this approach to have the desired effect, the compilation environment must be able to tailor the application to the specific target machine. This tailoring will not work unless the
application is written in a machine-independent implementation language and it can be compiled on each target machine to achieve performance comparable to the best possible on that machine using the same algorithm.

This problem—compiler and language support for machine-independent programming—is one of the key challenges in high-performance computation. Although languages such as High Performance Fortran (HPF) and standard interfaces like the Message Passing Interface (MPI) are excellent first steps for parallel computing, the machine-independent programming problem remains an important subject of continuing research. Comments from Lee Holcomb indicate that although progress has been made, research on machine-independent programming remains crucial to high-performance computing in all areas, not just crisis management:

I think [programming for high-performance computing] is getting better. I think many of the machines coming out today, as opposed to the ones that were produced, say, a year and a half to two years ago, provide a much better environment. But when you ask a lot of computational scientists, who have spent their whole life porting the current [code] over to one machine and then on to the next, when you give them the third machine to port it over to and have to retune it, they lose a lot of interest and enthusiasm.

An ability to relocate computation rapidly will require dynamic binding of code at run time to common software and system services (e.g., input-output, storage access). This implies a need for further development and standardization of those services (e.g., through common application programming interfaces; APIs) such that software can be written to take advantage of them.

However, software applications that were not originally written to be relocatable may require a wrapper to translate their interfaces for the remote system. In manufacturing applications, such wrappers are prewritten, which is often a costly, labor-intensive process. Research on generic methods enabling more rapid construction of wrappers for software applications—ultimately, producing them "on the fly," as might be required in a crisis—was identified by workshop participants as potentially valuable but currently quite challenging. Advances in wrapper generation for software applications would enable more reuse of software and would benefit many areas in addition to crisis management. However, such advances will require basic research leading to an ability to model, predict, and reason about software systems composed of heterogeneous parts that is far beyond current capabilities. These advances could be more generally relevant to many aspects of software systems, as discussed below in the section "Software System Development."
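
As an illustration of the kind of wrapper discussed above (a sketch only, not a technique demonstrated at the workshops), the following Python fragment adapts a legacy component with an idiosyncratic interface, without modifying it, to a common interface that a remote execution environment might expect. Both interfaces and all names are hypothetical.

    class LegacyFloodModel:
        """Pre-existing code: expects positional arguments and returns a tuple."""
        def run(self, rainfall_mm, basin_area_km2):
            runoff = 0.6 * rainfall_mm * basin_area_km2 / 1000.0
            return ("ok", runoff)

    class CommonModelInterface:
        """The interface the remote system invokes: keyword inputs, dict outputs."""
        def execute(self, **inputs):
            raise NotImplementedError

    class FloodModelWrapper(CommonModelInterface):
        """Translate between the common interface and the legacy one."""
        def __init__(self):
            self._legacy = LegacyFloodModel()

        def execute(self, **inputs):
            status, runoff = self._legacy.run(inputs["rainfall_mm"],
                                              inputs["basin_area_km2"])
            return {"status": status, "runoff_megalitres": runoff}

    # The remote system can now treat the legacy model like any other component.
    result = FloodModelWrapper().execute(rainfall_mm=80.0, basin_area_km2=250.0)
    print(result)

Generating such adapters automatically, rather than writing them by hand as above, is the research challenge identified by the workshop participants.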

Storage Servers and Meta-Data

Crisis management applications employ databases of substantial size. For example, workshop participants estimated that a database of the relevant
infrastructure (e.g., utilities, building plans) of Los Angeles requires about 2 terabytes. Not all of it must be handled by any one computer at one time; however, all of it may potentially have to be available through the response organization's distributed communications. In addition, a wide variety of data formats and representations occur and must be handled; this may always be the case because of the unpredictability of some needs for data in a crisis. Reformatting data rapidly through services such as those discussed in the section "Information Management" can be computationally intensive and require fast storage media.

Comprehensive provisions must also be made for storing not only data, but collateral information (meta-data) needed to interpret the data. Besides concerns appropriate to all distributed file systems (authentication, authorization, naming, and the like), these involve issues of data validity, quality, and timeliness, all of which are needed for reliable use of the data, and semantic self-description to support integration and interoperability.

To customize information handling for particular applications, storage server software should be able to interpret and respond to the meta-data. Workshop participants suggested that in crisis management, for example, a scheme could be developed to use meta-data to limit the use of scarce bandwidth and to minimize storage media access time while accommodating incoherence of data distributed throughout the response organization. To conserve bandwidth, a central database system located outside the immediate crisis area could maintain a copy of the information stored in each computer at the scene of the crisis. Instead of replicating whole databases across the network when new information alters, contradicts, or extends the information in either copy, a more limited communication could take place to restore coherence between copies or at least provide a more consistent depiction of the situation. A "smart" coherence protocol could relay only changes in the data, or perhaps an executable program to accomplish them. Relevant meta-data for making these determinations might include, for example, time of last update for each data point, so that new data can be identified, and an estimate of quality, to avoid replacing older but "good" data with newer "less good" data.
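
A minimal Python sketch of the "smart" coherence rule just described is given below, purely as an illustration: each data point carries meta-data (time of last update and a quality estimate), only changes are relayed, and an incoming value replaces the stored one only if it is newer and not of markedly lower quality. The data structures, key names, and quality margin are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DataPoint:
        value: object
        updated_at: float      # seconds since epoch
        quality: float         # 0.0 (unreliable) .. 1.0 (verified)

    def should_replace(current, incoming, quality_margin=0.2):
        """Keep older 'good' data rather than newer 'less good' data."""
        if incoming.updated_at <= current.updated_at:
            return False
        return incoming.quality >= current.quality - quality_margin

    def apply_update(store, key, incoming, changes_out):
        """Update the local copy and record the change for relay to the remote copy."""
        current = store.get(key)
        if current is None or should_replace(current, incoming):
            store[key] = incoming
            changes_out.append((key, incoming))   # relay only the delta, not the database

    store, changes = {}, []
    apply_update(store, "bridge-17/status", DataPoint("passable", 1000.0, 0.9), changes)
    apply_update(store, "bridge-17/status", DataPoint("collapsed", 1200.0, 0.4), changes)
    print(store["bridge-17/status"].value)   # still "passable": the newer report was low quality
    print(len(changes))                      # one change queued for the central copy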

Besides resource conservation, a beneficial side effect of this coherence scheme would be the creation of a fairly accurate and up-to-date representation of the entire crisis situation, valuable for coordination and decision making. Modeling the coherence and flow of information into, within, and out of the crisis zone could be incorporated into a system that would continuously search for (and perhaps correct) anomalies and inconsistencies in the data. It could also support collaboration and coordination among the people working on a response operation by helping crisis managers know what information other participants have available to them.

Anomaly Detection and Inference of Missing Data

High-performance computing can be used for filling in missing data elements (through machine inference that they are part of a computer-recognizable pattern), information validation, and data fusion in many national-scale applications. For example, crisis data are often incomplete or simply contradictory. Simulation could be used to identify outlier data, flagging potential errors that should be verified. Higher computational performance is required to correct or reconstruct missing data from complex dynamic systems, interpolating information such as wind speeds and directions or floodwater levels through machine inference. Incorrect data—perhaps derived from faulty sensors, taken from out-of-date or incorrect databases, or deliberately introduced by an active adversary—could be detected and corrected by computers in situations where the complexity or volume of the data patterns would make it difficult for a human to notice the error. Ordinarily, the absence of key information requires users to make intuitive judgments; tools that help cope with gaps in information are one element of what workshop participants called "judgment support" (see the section "User-Centered Systems" below in this chapter).
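
The following Python sketch, an illustration rather than a method discussed at the workshops, shows the two operations described above in their simplest form: flagging observations that diverge sharply from a model estimate, and filling gaps by inference from neighboring values. The crude interpolation and fixed tolerance stand in for the model-based methods the text envisions; all data values are invented.

    def flag_outliers(observed, modeled, tolerance):
        """Return indices where an observation differs from the model estimate by
        more than `tolerance` (candidates for verification, not automatic rejection)."""
        return [i for i, (obs, sim) in enumerate(zip(observed, modeled))
                if obs is not None and abs(obs - sim) > tolerance]

    def fill_missing(observed):
        """Infer missing readings (None) as the mean of the nearest neighbors."""
        filled = list(observed)
        for i, value in enumerate(filled):
            if value is None:
                neighbors = [v for v in (filled[i - 1] if i > 0 else None,
                                         observed[i + 1] if i + 1 < len(observed) else None)
                             if v is not None]
                filled[i] = sum(neighbors) / len(neighbors) if neighbors else None
        return filled

    # Wind-speed readings (m/s) from field sensors, one missing and one suspect.
    observed = [5.1, 5.3, None, 5.6, 21.0, 5.9]
    modeled = [5.0, 5.2, 5.4, 5.5, 5.7, 5.8]
    print(fill_missing(observed))                           # the gap is filled from neighbors
    print(flag_outliers(observed, modeled, tolerance=3.0))  # [4] -> the 21.0 reading is flagged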

The widespread presence of semantic meta-data could enhance data mining and inference for detecting errors in databases. Data mining in high-performance systems has been effective in other applications, for example, in finding anomalous credit card and medical claims; new applications such as clinical research are also anticipated (see Box 2.2 ). However, the nature of crises is such that data being examined for anomalies may be of an unanticipated nature and may not be fully understood. There is a challenge for research in identifying the right types of meta-data that could make data mining and inference over those unanticipated data possible.

Sensors and Data Collection

More widespread use of networked sensors could generate valuable inputs for crisis management, as well as remote health care and manufacturing process automation. The variety of potentially useful sensors is particularly broad in crisis management, including environmental monitors such as those deployed in the Oklahoma MesoNet or the NEXRAD (Next Generation Weather Radar) Doppler radar system; video cameras that have been installed to enhance security or monitor vehicle traffic; and structural sensors (as in "smart" buildings, bridges, and other structures networked with stress and strain sensors).

Some imagery, such as photographs of a building before it collapsed or satellite photographs showing the current extent of a wildfire, is potential input data for simulation. Timely access to and sharing of these data require high-performance communication, including network management, both to and from the crisis scene.

BOX 2.2
Historically, challenges posed by medical problems have motivated many advances in the fields of statistics and artificial intelligence. Traditionally, researchers in both fields have had to make do with relatively small medical datasets that typically consisted of no more than a few thousand patient records. This situation will change dramatically over the next decade, by which time we anticipate that most health care organizations will have adopted computerized patient record systems. A decade from now, we can expect that there will be some 100 million and eventually many more patient records with, for example, a full database size of 10 terabytes, corresponding to 100 text pages of information for each of 100 million patients. Functionalities needed in the use and analysis of distributed medical databases will include segmentation of medical data into typical models or templates (e.g., characterization of disease states) and comparison of individual patients with templates (to aid diagnosis and to establish canonical care maps). The need to explore these large datasets will drive research projects in statistics, optimization, and artificial intelligence. . . .

Care providers and managers will want to be able to rapidly analyze data extracted from large distributed and parallel databases that contain both text and image data. We anticipate that . . . significant performance issues . . . will arise because of the demand to interactively analyze large (multi-terabyte) datasets. Users will want to minimize waste of time and funds due to searches that reveal little or no relevant information in response to a query, or retrieval of irrelevant, incorrect or corrupted datasets.

SOURCE: Davis et al. (1995), as summarized at Workshop III by Joel Saltz, of the University of Maryland.

Moreover, models could be designed to take real-time sensor inputs and modify their parameters accordingly, yielding a more powerful capability to predict phenomena. As Donald Brown, of the University of Virginia, noted, the nonlinearity of many real-world phenomena poses challenges for modeling; learning how to incorporate these nonlinearities into models directly from sensors could improve the performance of models significantly.

Sometimes a sensor designed for one purpose can be used opportunistically for another. For example, an addressable network of electric utility power usage monitors could be used to determine which buildings still have power after an earthquake, and which of those buildings with power are likely to have occupants. A similar approach could be taken using the resources of a residential network service provider. Workshop participants suggested that security cameras also provide opportunities for unusual use; with ingenuity it may be possible to estimate the amplitude and frequency of an earth tremor or the rate at which rain falls by processing video images. Given the high cost of dedicated sensor
networks and the infrequency of crises, technology to better exploit existing sensors opportunistically could facilitate their use.

People carrying sensors might be another effective mode of sensor network deployment. Robert Kehlet noted that field workers could wear digital cameras on their helmets; personal geographic position monitors could be used to correlate the video data with position on a map. Physical condition monitors on workers in dangerous situations could hasten response if someone is injured.

Research is needed to optimize architectures for processing real-time information from large, highly scalable numbers of inputs. 9 The problem is likely amenable to parallel processing, as demonstrated on a smaller scale in research described by Jon Webb, of Carnegie Mellon University, on machine vision synthesized from large numbers of relatively inexpensive cameras. A highly decentralized architecture, perhaps using processors built into the sensors themselves (sometimes characterized as "intelligence within the network"), might be a highly effective way to conserve bandwidth and processing; sensors could detect from their neighbors whether a significant change in overall state is occurring and could communicate that fact to a central location, otherwise remaining silent. There could be value in research and development toward a network designed such that, in response to bandwidth or storage constraints in the network, discrete groups of sensors perform some data fusion before passing their data forward; an adaptive architecture could permit this feature to adjust to changing constraints and priorities.
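
The following Python sketch, included only as an illustration of the decentralized arrangement just described, shows a sensor node that fuses its own reading with those of its neighbors and reports upstream only when the group's state has changed significantly, otherwise remaining silent. The node names, threshold, and fusion rule (a simple average) are hypothetical.

    class SensorNode:
        def __init__(self, node_id, change_threshold=2.0):
            self.node_id = node_id
            self.change_threshold = change_threshold
            self.last_reported = None

        def maybe_report(self, own_reading, neighbor_readings):
            """Fuse local and neighbor readings; transmit only significant changes."""
            samples = [own_reading] + list(neighbor_readings)
            fused = sum(samples) / len(samples)
            if (self.last_reported is None
                    or abs(fused - self.last_reported) >= self.change_threshold):
                self.last_reported = fused
                return {"node": self.node_id, "fused_value": fused}   # send upstream
            return None                                               # stay silent, save bandwidth

    node = SensorNode("strain-gauge-cluster-4")
    print(node.maybe_report(10.1, [9.8, 10.3]))   # first report establishes a baseline
    print(node.maybe_report(10.2, [10.0, 10.4]))  # None: no significant change, no traffic
    print(node.maybe_report(14.9, [15.2, 15.5]))  # a large shift is reported upstream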

Distributed Resource Management

Network-intensive computing places unusual stress on conventional computer system management and operation practice. Describing the general research challenge, Randy Katz said,

We tend to forget about the fact that [the information infrastructure] won't be just servers and clients, information servers or data servers. There are going to be compute-servers or specialized equipment out there that can do certain functions for us. It will be interesting to understand what it takes to build applications that can discover that such special high-performance engines that exist out there can split off a piece of themselves, execute on it, and recombine when the computation is done.

Because significant remote computing and storage resources may be necessary, standardized services for resource allocation and usage accounting are important. Other important issues are enforcing the proper use of network resources, determining the scale and quality of service available, and establishing priorities among the users and uses. Mechanisms are needed to address these issues automatically and dynamically. Operating system resource management is weak in this area because it treats tasks more or less identically. For example, many current
network-aware batch systems are configured and administered manually and support no rational network-wide basis for cost determination.

Dennis Gannon, of Indiana University, suggested the value of continued development of network resource management tools as follows: "High-performance computing . . . should be part of the fabric of the tools we use. It should be possible for a desktop application at one site to invoke the resources of a supercomputer or a specialized computing instrument based on the requirements of the problem. A network resource request broker should provide cost-effective solutions based on the capabilities of compute servers." He pointed to the Information Wide-Area Year (I-WAY) experimental network as a useful early demonstration of such capabilities. 10
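
As a rough illustration of the request broker Gannon describes, and not an actual design, the following Python sketch chooses among advertised compute servers on the basis of capability and cost. The server catalogue, the requirement fields, and the selection rule are hypothetical.

    SERVERS = [
        {"name": "campus-cluster", "gflops": 20, "mem_gb": 8, "cost_per_hour": 5.0},
        {"name": "regional-supercomputer", "gflops": 200, "mem_gb": 64, "cost_per_hour": 60.0},
        {"name": "desktop-pool", "gflops": 2, "mem_gb": 1, "cost_per_hour": 0.5},
    ]

    def broker(request, servers=SERVERS):
        """Return the cheapest server that satisfies the request, or None."""
        capable = [s for s in servers
                   if s["gflops"] >= request["min_gflops"]
                   and s["mem_gb"] >= request["min_mem_gb"]]
        return min(capable, key=lambda s: s["cost_per_hour"]) if capable else None

    # A storm-scale forecast needs substantial compute; a contour plot does not.
    print(broker({"min_gflops": 100, "min_mem_gb": 32})["name"])  # regional-supercomputer
    print(broker({"min_gflops": 1, "min_mem_gb": 1})["name"])     # desktop-pool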

Software System Development

To the extent that it improves capabilities for integrating software components as they relocate and interact throughout networks, research enabling a network-intensive style of computing may be helpful in addressing a long-standing, fundamental problem for many application areas, that of large software system development. Speaking about electronic commerce systems, Daniel Schutzer, of Citibank, said succinctly, "The programming bottleneck is still there." DARPA's Duane Adams described the problem as follows:

Many of our application programs [at DARPA] are developing complex, software-intensive systems. For example, we are developing control systems for unmanned aerial vehicles (UAVs) that can fly an airplane for 24 or 36 hours at ranges of 3,000 miles from home; we are developing simulation-based design systems to aid in the design of a new generation of ships; and we are developing complex command and control systems. These projects are using very few of the advanced information technologies that are being developed elsewhere in [D]ARPA—new languages, software development methodologies and tools, reusable components. So we still face many of the same problems that we have had for years.

This raises some interesting technology problems. Are we working on the right set of problems, and are we making progress? How do we take this technology and actually insert it into projects that are building real systems? I think one of the biggest challenges we face is building complex systems. We have talked about some of the problems. One of them is clearly a scaling problem . . . scaling to the number of processors in some of the massively parallel systems [and] . . . scaling software so you can build very large systems and incrementally build them, evolve them over time.

Software reuse through integration of existing components with new ones is necessary to avoid the cost of reproducing functionality for new applications from scratch. Building large systems often needs to be done rapidly, and because most large systems have a long, evolutionary lifetime, they must be designed to
change. However, these are not easy challenges. Distributed object libraries such as those facilitated by the Common Object Request Broker Architecture (CORBA; discussed in the next section, "Information Management") may be useful, but more developed frameworks and infrastructure are needed to make them fully usable in the building of applications by large distributed teams of people. Basic tools to support scalable reuse, to catalogue and locate reusable components, and to manage versioning are still primitive.

It is clear that getting even currently available system-building technologies and methods into actual use in the software development enterprise is a major challenge. Changing the work practices of organizations takes time; however, there may be ways in which collaboration technology can make it easier to incorporate the available techniques into work practices more smoothly. A collaboration environment that allows software development teams to manage the complex interactions among their activities could reap benefits across the spectrum of applications. Dennis Gannon identified the need to design a "problem-solving environment" technology that provides an infrastructure of tools to allow a distributed set of experts and users to build, for example, large and complex computer applications. Participants in Workshop I developed a subjective report card rating the current state of the art in computing environments; the grades awarded ranged from a single B down to two D's, with the remaining four entries in the C range (C+, C, C, and C–).

In the absence of a deeper understanding of large, distributed software systems, however, new tools are not likely to improve the situation. Decades of experience with software engineering indicate that the problems are difficult—they are not solvable purely by putting larger teams of engineers to work or by making new tools and techniques available (CSTB, 1992, pp. 103-107; CSTB, 1989). Barbara Liskov, of the Massachusetts Institute of Technology, cited the need for a good model of distributed computation on which to develop systems and reason about their characteristics—a software infrastructure, not just a programming language or a collection of tools, that would support a way of thinking about programs, how they communicate, and their underlying memory model (see Box 2.3).

BOX 2.3
At Workshop III, Barbara Liskov, of the Massachusetts Institute of Technology, observed:

"People have to write programs that run on these [large-scale, distributed] systems. Applications need to be distributed, and they have got to work, and they must do their job with the right kind of performance. . . . These applications are difficult to build. One of the things I was struck by in the conversations today was the very ad hoc way that people were thinking of building systems. It was just a bunch of stuff that you connect together—this component meshes with that component. You know, we can't build systems that way. And the truth is we hardly know how to build systems that worked on the old kind of distributed network. . . .

"We have a real software problem. If I want to build an application where I can reason about its correctness and its performance under a number of adverse conditions, what I need is a good model of computation on which to base that system, so that I have some basis for thinking about what it is doing. I think what we need is a software infrastructure, and I don't mean by this a programming language and I also don't mean some bag of tools that some manufacturers make available. I mean some way of thinking about what programs are, what their components are, where these components live, how they communicate, what kind of access they have to shared memory, what kind of model of memory it is, whether there is a uniform model, whether it is a model where different pieces of the memory have different fault tolerance characteristics, what is the fault tolerance model of the system as a whole, what kinds of atomicity guarantees are provided, and so on. We don't have anything approaching this kind of model for people to build their applications on today."

A consistent software infrastructure model of computation could form the basis not only for building systems using that model but for reasoning about their correctness and performance as they are being built. It would be extremely useful for system developers to be able to predict the performance, fault tolerance, or other specified features of a system composed from parts whose properties are known.

This problem of composability of software components is very difficult and requires fundamental research. Increased understanding, however, could support a valuable increase in the ability to build systems driven by application needs. Dennis Gannon said, "We should be able to have software protocols that would allow us to request a computing capability based on the problem specification, not based on machine architectures." This will be especially crucial as discrete machine architectures become less fixed with the shift to network-centered computing. For example, Vinton Cerf observed that in network-intensive computing, the buses of the traditional computer architecture are replaced in some respects with network links of a reliability that is unpredictable and often less than perfect. There must also be a way of representing the cost, reliability, and bandwidth trade-offs of various network links in a way that software can understand and act upon, so they can be optimized according to the needs of the problem at hand. These fundamental issues of computation represent a difficult but potentially very valuable avenue for investigation.
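A hedged sketch of the kind of reasoning such a model would enable: if each component or network link advertises a latency and an availability figure (the parts and numbers below are hypothetical), the properties of a purely serial composition can be estimated before the system is built. Real compositions are rarely serial and failures are rarely independent, so this is only a caricature of the prediction problem described above.

```python
# Illustrative only: estimate end-to-end latency and availability of a
# pipeline composed of parts whose individual properties are known.
# Assumes independent failures and purely serial composition.
from dataclasses import dataclass
from math import prod

@dataclass
class Part:
    name: str
    latency_ms: float      # mean added latency
    availability: float    # probability the part is up

def compose_serial(parts):
    latency = sum(p.latency_ms for p in parts)
    availability = prod(p.availability for p in parts)
    return latency, availability

pipeline = [
    Part("client", 2.0, 0.999),
    Part("wan-link", 40.0, 0.98),   # the unreliable network link
    Part("server", 5.0, 0.995),
]
latency, availability = compose_serial(pipeline)
print(f"estimated latency {latency:.0f} ms, availability {availability:.3f}")
```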

INFORMATION MANAGEMENT: FINDING AND INTEGRATING RESOURCES

In the past decade, there have been important transitions in information management technologies used in large organizations. This is usually characterized as a shift from centralized to more distributed resources, but perhaps a more accurate characterization is that it is a better balancing between centralized and distributed control of information production, location, and access. Technologies such as client-server architectures and distributed on-line transaction processing systems have enabled this more effective balancing. It is an ongoing activity at all levels of organization structure, from central databases to individual and group-specific resources.

This situation, difficult as it is within a single organization, becomes much more complex with the scale up to national, multiorganizational applications. This section considers the information management challenges posed by national-scale applications, with particular emphasis on crisis management. It examines several important issues and trends in information management and suggests additional challenges.

Information management involves a broad range of resources with different purposes, such as traditional databases (typically relational), digital libraries, multimedia databases (sometimes used in video servers), object request brokers (such as those in CORBA), wide-area file systems (such as the Network File System and Andrew File System), corporate information webs based on groupware and/or the World Wide Web, and others. Besides relational tables, conventional types of information objects can include multimedia objects (images, video, hypermedia), structured documents (possibly incorporating network-mobile, executable software components, or "applets"), geographical coordinate systems, and application- or task-specific data types. It is useful to classify these information management resources into four organizational categories: (1) central institutional resources, (2) individual desktop resources, (3) group resources, and (4) ubiquitous resources such as the communications network and e-mail service.

Central resources include institutional databases, digital libraries, and other centrally managed information stores. These typically have regular schemas; extensive support for concurrency and robust access; and supporting policy frameworks to maintain (or at least monitor) quality, consistency, security, completeness, and other attributes. Data models for institutional resources are evolving in several ways (such as the evolution from relational to object-relational databases), but these models are meant to support large-scale and uniform data management applications.

Individual resources consist of ad hoc structures. These resources may be in a process of evolving into more regular structures of broader value to an organization (a process often called upscaling). Alternatively, they may be individual resources that differentiate and provide a competitive edge to the individual and so are unlikely to be shared.

Group resources can include scaled-down and specialized institutional resources as well as ad hoc shared resources. This suggests a continuum from ad hoc ephemeral individual resources through group resources to robust managed institutional resources. Examples of group resources are engineering design repositories, software libraries, and partially formulated logistics plans.

The final class of resources, which may be called ubiquitous resources, consists of shared communications and information services on a communications network, including services such as electronic mail, newsgroups, and the World Wide Web. These services exist uniformly throughout an organization and, unlike the other classes of resources, generally do not reflect organizational hierarchy.

This classification of resources provides a useful framework for examining broad trends in information management and considering, particularly, the special problems associated with national-scale applications, such as the following:

  • Information integration . In many of these applications, information must be integrated with other information in diverse formats. This may include integration of diverse access control regimes to enable appropriate sharing of information while simultaneously maintaining confidentiality and integrity. It can also include integration of institutional, group, and personal information. Related to the integration problem is the issue of information location—how can information be indexed and searched to support national-scale applications?
  • Meta-data and types . Shared objects in very large-scale applications can have a rich variety of types, and these types can be very complex. An example of a family of complex types is the diversity of representations and formats for image information. How can objects be shared and used when their types are evolving, perhaps not at the same pace as the applications software that uses them? Also, there is an evolving view of information objects as aggregations of information, computation, and structure. How will this new view affect information management more broadly? Related to this is the more general issue of meta-data: descriptive information about data, including context (origin, ownership, etc.) as well as syntactic and semantic information. Meta-data approaches are needed that support modeling of quality and other attributes through an integration process. This could include integration of information that may appear to be inconsistent due to quality problems.
  • Production and value . A final information product can be derived through a series of steps involving multiple information producers and organizations. This involves addressing the development of models for the kinds of steps that add value to a product beyond the information integration problem mentioned above.
  • Distribution and relocation . The linking of information resources at all levels into national-scale applications places great stress on a variety of distributed computing issues such as robustness and survivability, name management, and flexible networking. In addition, there is the issue of adaptivity—the interplay of network capability and applications behavior.

Before examining these four issues in greater detail, it is useful to point out some general trends in information management that are part of the evolution already under way to national-scale applications.

First, the ongoing shift over the past decade from central mainframe resources to more distributed client-server configurations is giving way to a steady migration of both resources and control over resources within organizations. This suggests that the main challenge is to better enable this shift as an ongoing process, rather than as a one-time effort. This steady flux is sustained by the emergence of ad hoc groups that establish and manage their own resources (which must later be integrated with others), by a continual change and improvement in information management technologies, and by structural change within organizations. A military joint task force and a civilian crisis action team are examples of ad hoc groups that both establish their own resources and rely on a broad range of institutional resources. In other words, we are just beginning to explore the interplay among institutional information resources, individual ad hoc information resources, and communications and information services such as electronic mail and the World Wide Web.

Second, the complexity and quantity of information, the range and diversity of sources, and the range of types and structures for information are all increasing rapidly, as is the need to assimilate and exploit information rapidly. The problem is not an overload of information, but rather a greater challenge to manage it effectively. Also, as noted above, the nature of the information items is changing: they have more explicit structure, more information about their type, more semantic information, and more computational content. There are also increasingly stringent requirements to manage intellectual property protection and support commerce in information items.

Finally, there is greater interconnectivity and heterogeneity both within and among organizations. This enables more complex information pathways, but it also creates greater challenges to the effective management of information. Related to this trend is the rapidly increased extent to which information users are becoming information producers. The World Wide Web presents the most compelling evidence of this; when barriers are reduced sufficiently, greater numbers of people will make information available on the network. When electronic commerce technologies become widely used, in the relatively near future, this will create a rich and complex marketplace for information products.

Integration and Location

National-scale applications involve large numbers of participating organizations with multiple focal points of organizational control and multiple needs for information. They often involve solving information management problems that rely on multiple sources of data, possibly including legacy databases that are difficult to reengineer. This creates a problem of information integration in which multiple information resources, with different schemas, data representations, access management schemes, locations, and other characteristics, may have to be combined to solve queries. As discussed in Chapter 1 , sometimes this information can be preassembled and integrated in response to mutually agreed-upon, anticipated needs; however, this is not always feasible. Strategies that make integration feasible are needed to meet the short-term press of crises, and they may well have utility in reducing costs and otherwise facilitating information integration in other, less time-sensitive applications, which Chapter 1 discusses with respect to digital libraries.

Information integration is an area of active research aimed at introducing advances over traditional concepts of wrappers and mediators. A "wrapper" for a database provides it with a new virtual interface, enabling the database to appear to have a particular data model that conforms to a user's requirement for which the database may not have been designed. A "mediator" provides a common presentation for a schema element that is managed differently in a set of related databases. A mediator can thus translate different users' requests into the common presentation, which multiple wrappers sharing that presentation can then translate into forms understood by the resources they interact with (i.e., "wrap"). Thus, mediators and wrappers give users a uniform way to access a set of databases integrated into a system, so that they appear as a single virtual aggregate database. In the past, much of this work has been performed on a laborious, ad hoc basis; more general-purpose approaches, such as The Stanford-IBM Manager of Multiple Information Sources (TSIMMIS; see Box 2.4), aim at producing mediation architectures of more general use.
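The division of labor between wrappers and mediators can be made concrete with a small sketch. The two "databases" below are just in-memory lists with incompatible field names; the wrapper classes and the Mediator are hypothetical stand-ins for the far more capable machinery (query translation, optimization, access control) described in the text.

```python
# Two legacy sources with different schemas for road-status information.
SOURCE_A = [{"road": "I-5", "status": "closed"}]
SOURCE_B = [{"highway_name": "US-101", "open": True}]

class WrapperA:
    """Presents SOURCE_A in a common (name, passable) schema."""
    def query(self):
        return [{"name": r["road"], "passable": r["status"] == "open"}
                for r in SOURCE_A]

class WrapperB:
    """Presents SOURCE_B in the same common schema."""
    def query(self):
        return [{"name": r["highway_name"], "passable": r["open"]}
                for r in SOURCE_B]

class Mediator:
    """Fans a user request out to all wrappers and merges the results."""
    def __init__(self, wrappers):
        self.wrappers = wrappers

    def passable_routes(self):
        return [rec["name"]
                for w in self.wrappers
                for rec in w.query()
                if rec["passable"]]

mediator = Mediator([WrapperA(), WrapperB()])
print(mediator.passable_routes())   # -> ['US-101']
```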

Most research now under way focuses on how a virtual aggregate database can be engineered for a set of existing databases. This involves developing data models and schemas suitable for the virtual aggregate, and mappings among the models and schemas for the component databases and the common data model and schema elements. When this is to be done on more than an ad hoc basis, methods are needed to represent the aggregate schemas. When legacy databases are involved, reverse engineering of those databases may be necessary to determine their schemas. This can be risky, because there are often hidden assumptions and invariants that must be respected if a legacy database is to remain useful. As Yigal Arens, of the University of Southern California, discussed, the information integration problem becomes more difficult when queries to the aggregate database need to be carried out efficiently (subject to a time constraint), creating research challenges for query optimization at the aggregate level.

New approaches in research on information integration are beginning to yield results, but scaling up to national or global scale will significantly complicate the information integration problem. For example, when multiple organizations are involved, access control issues become more important and also more difficult. Just as new schemas are required for the aggregate to reconcile multiple schemas, aggregate access control and security models may also have to be developed. Also, information integration may be complicated by distributed computing issues—for example, a set of databases may be interconnected intermittently or over a low-capacity link, which would affect the way query processing is carried out. This is a familiar issue in distributed databases that becomes more difficult in a heterogeneous setting.

Richer data models have been developed for specialized uses, such as object databases for design applications or information retrieval databases for digital libraries. When these kinds of information assets must be integrated with more traditional databases, the information integration problem can become much more complicated. One way to address this problem is to develop common reusable wrapper and mediator elements that can be adapted easily to apply in a wide range of circumstances.

Applications such as crisis management increase the difficulty of information integration by introducing the need to integrate rapidly a set of databases whose integration was previously not contemplated. The accounts of information management in crisis situations that were presented in the workshops focused on ad hoc information integration solutions designed to meet very specific needs. For example, geographic databases, land use and utility databases, real estate tax databases, and other databases from a variety of sources are necessary to gather information to rapidly process damage claims related to natural disasters such as storms and earthquakes. This suggests that there is value in anticipating this kind of integration, and developing, in advance, a repertoire of task-specific common schemas and associated mediators for legacy databases. This hybrid approach to integration has appeal, in that it supports incremental progress toward common schemas when they can be agreed-upon, and when common schemas cannot be arrived at, mediators can be developed to support interoperability. This also suggests that information integration provides techniques that may be applicable to more general (and less approachable) information fusion problems. 11

In addition to integration, there is the related issue of information location. Searching within a database or a specific digital library depends upon finding the appropriate database or digital library.


Wrappers and mediators are not new technologies; they have been implemented in an ad hoc fashion for many years. One of the original motivations for work on wrappers was the desire to make legacy programs and information sources (such as databases) accessible to diverse requesting applications across networks. This required laborious, ad hoc production of wrappers that translate requests from users' applications into queries and other commands that the wrapped resources can interpret and will respond to correctly.

Disagreement among workshop participants and additional inputs solicited for this report illustrate the perhaps inevitable breadth of perspectives about what does or does not constitute a new research idea. Some contributors were pessimistic about the likelihood of solving complex integration problems through wrappers and mediators. They suggested, for example, that years of experience have shown that for integration to work well, applications must be written in the expectation that their output will be used as another application's input, or vice versa—leaving unaddressed the problem of integrating legacy programs and information sources that were not written with reuse in mind. Others accepted the truth of this observation, but interpreted it as an opportunity for fundamental research, pointing to recent research aimed at developing architectures within which generic techniques may be found for more rapidly and reliably building software components to integrate diverse resources, including legacy resources. Gio Wiederhold has described one example in this vein, a three-layer mediation architecture consisting of the basic information sources and their wrappers; a mediation layer that adds value by merging, indexing, abstracting, etc.; and the users and their applications that need the information (Wiederhold, 1992).

There is a range of research challenges to make such an architecture broadly useful. For example, models for representing diverse information sources and languages for interacting with them must accommodate not only sources with a well-defined schema (e.g., the relational model used in many databases), but others such as text files, spreadsheets, and multimedia. Automatic or semiautomatic generation of wrappers would be a significant contribution; this is a serious challenge that requires identifying and representing not only the syntactic interfaces but also the semantic content and assumptions of information sources. Some research has focused on rule-based tools for generating wrappers.

Complementary to research on representing characteristics of sources is the formal representation of domain-specific knowledge that users may need to access and explore. This representation could facilitate generation of mediators optimized for understanding requests and translating them into searches that draw upon and integrate multiple information sources, interacting with each source through a wrapper. Yigal Arens, of the University of Southern California, discussed current research on applying a variety of artificial intelligence techniques to partially automate the creation of mediators for specific applications. In this approach, a model is constructed to describe not only the structure and content of a set of information sources, but also the knowledge domain about which the sources have information. The mediator translates user queries related to that domain into search strategies, which it then implements. Changes in the range of information sources available (e.g., addition of new sources) can be accommodated by changing the domain model, rather than rebuilding the mediator.

  

One contributor noted the similarity between wrappers and Unix pipes. The Unix pipe operator provides a software connection between programs, making the output of one program into the input to another. This allows for plugging together applications in novel ways. Successful integration, however, requires more than just passing inputs and outputs back and forth; the two programs must also share—and therefore might have to have been written with explicit recognition of—a semantic agreement about what those elements mean; otherwise, unpredictable, incorrect results may arise.
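A minimal illustration of this point, with functions and units invented for the example: the producer and consumer below compose cleanly, pipe-style, yet give a misleading answer because they never agreed on units.

```python
# Producer reports temperature in degrees Fahrenheit...
def read_sensor():
    return 98.6

# ...consumer assumes degrees Celsius and flags an emergency.
def is_overheating(temp_c):
    return temp_c > 45.0

# The "pipe" works syntactically (a number flows through), but the missing
# semantic agreement about units yields a wrong conclusion.
print(is_overheating(read_sensor()))   # True, although 98.6 °F is normal
```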

  

The Stanford-IBM Manager of Multiple Information Sources (TSIMMIS) is one approach that offers a data model and a common query language, as well as techniques for generating mediators and networks that integrate multiple mediators. See Garcia-Molina et al., "The TSIMMIS Approach to Mediation: Data Models and Languages (Extended Abstract)," available on line from .

  

See the SIMS project home page for more information, at .

As Eliot Christian, of the U.S. Geological Survey, observed:

One of the fundamental issues in information discovery is that one cannot afford to digest all available information and so must rely on abstractions. Yet, the user of information may be working in a context quite different from what the information provider anticipated. While cataloging techniques can characterize a bibliographic information resource statically, I would like to see a "feature extraction" approach that would support abstraction of information resources based more closely on the user's needs at the moment of searching. Natural language processing may help in the direction of search based on knowledge representations, but the more general problem is to support a full range of pattern matching to include imagery and numeric models as well as human language. . . .

To me, the most immediate problem is that it is very difficult to find and retrieve information from disparate information sources. Although some progress has been made in building consensus on presentation issues through the likes of Web browsers, tools for client-based network search are conspicuously absent. With server-based searching, one can only search for information in fairly narrow and pre-determined domains, and then only with the particular user interface that the information source thought to provide.

For critical national-scale applications, approaches to this information resource location problem must go beyond the opportunistic searching and browsing characteristic of the Internet. Even when information resources are diverse, if they may have to be used in critical applications—particularly those with urgent deadlines—there would be benefit from registering them and their characteristics in an organized manner. With improvements, for example, in schema description techniques, this could make the information integration problem more approachable as well.

Information location also relates to the distributed computing issues raised above, since one approach involves dispatching not just passive queries to information sources, but active information "agents" that monitor and interact with information stores on an ongoing basis. Information agents may also deploy other information agents, increasing the challenges (both to the initial dispatcher of the agents and to the various willing hosts) of monitoring and managing large numbers of deployed agents.

Meta-Data and Types

Information is becoming more complex, is interpreted to a greater extent, and supports a much wider range of issues. Evidence of the increase in complexity is found in (1) the growing demand for enriched data models, such as enhancements to the relational model for objects and types; (2) the adoption of various schemes for network-based sharing and integration of objects, such as CORBA; (3) the development of databases that more fully interpret objects, such as deductive databases; (4) the rapid growth in commercial standards and repository technology for structured and multimedia objects; and (5) the integration of small software components, such as applets, into structured documents.

One important approach to managing this increased complexity is the use of explicit meta-data and type information. William Arms, of the Corporation for National Research Initiatives, observed, "Very simple, basic information about information is, first of all, a wonderfully important building block and [second,] . . . a much more difficult question than anybody really likes to admit."

Multimedia databases, for example, typically maintain separate stores for the encoded multimedia material and the supporting meta-data. Meta-data provide additional information about an object, beyond the content that is the object itself. Any attribute can be managed as meta-data. For example, in a multimedia database, meta-data could include index tags, information about the beginnings and endings of scenes, and so on. Meta-data can also include quality information. In crisis management applications, this is crucial, since there are some cases where many of the raw data (40 percent, in David Kehrlein's commercial GIS example discussed in Chapter 1 ) are inaccurate in some respect. As David Austin, of Edgewater, Maryland, noted, "Often, data are merged and summarized to such an extent that differences attributable to sources of varying validity are lost." Separately distinguishable meta-data about the reliability of sources can help users identify and manage around poor-quality data.
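A small sketch of the separation between content and meta-data, with quality information carried explicitly. The field names and the accuracy threshold are illustrative assumptions, not drawn from any particular multimedia database product.

```python
from dataclasses import dataclass

@dataclass
class VideoMetadata:
    # Descriptive meta-data, stored separately from the encoded material.
    source: str                 # origin/ownership (context meta-data)
    scene_breaks_s: list        # beginnings of scenes, in seconds
    index_tags: list
    estimated_accuracy: float   # quality: fraction believed correct

@dataclass
class VideoObject:
    content_uri: str            # where the encoded material lives
    meta: VideoMetadata

clip = VideoObject(
    content_uri="store://damage-survey/clip-17",
    meta=VideoMetadata(source="county GIS feed",
                       scene_breaks_s=[0.0, 42.5, 90.0],
                       index_tags=["bridge", "flood"],
                       estimated_accuracy=0.6),   # flag low-quality data
)
# A consumer can filter on quality before merging with other sources.
usable = clip.meta.estimated_accuracy >= 0.8
print(usable)   # False: route to manual review rather than automatic merge
```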

Types are a kind of meta-data that provide information on how objects can be interpreted. In this regard, type information is like the more usual database schema. Types, however, can be task specific and ad hoc. Task specificity means, for example, that the particular consensus types in the Multipurpose Internet Mail Extensions (MIME) hierarchy are a small subset of the types that could be developed for a particular application.

Because of this task specificity, the evolution of types presents major challenges. For example, the type a user may adopt for a structured document typically evolves over a period of months or years as a result of migration from one desktop publishing system to the next. Either the user resists migration and falls behind technology developments, or the user must somehow manage a set of objects with similar, but not identical types. One approach to this problem is to create a separate set of servers for types that serve up type information and related capabilities (e.g., conversion mechanisms that allow objects to be transformed from one type to another).
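One way to picture the "type server" idea is as a registry of conversion functions keyed by source and target type. Everything below, including the type names and the conversion itself, is an invented sketch rather than a description of any existing service.

```python
# A type server holds knowledge about types and how to convert between
# them, so that applications need not track every document format locally.
class TypeServer:
    def __init__(self):
        self._converters = {}   # (from_type, to_type) -> function

    def register(self, from_type, to_type, fn):
        self._converters[(from_type, to_type)] = fn

    def convert(self, obj, from_type, to_type):
        if from_type == to_type:
            return obj
        try:
            return self._converters[(from_type, to_type)](obj)
        except KeyError:
            raise ValueError(f"no conversion from {from_type} to {to_type}")

server = TypeServer()
# Hypothetical converter between two generations of a document type.
server.register("report/v1", "report/v2",
                lambda doc: {**doc, "schema": "v2", "title": doc["name"]})

old_doc = {"schema": "v1", "name": "Damage assessment"}
print(server.convert(old_doc, "report/v1", "report/v2"))
```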

A related issue is the evolution of structured objects to contain software components. The distinction between structured documents and assemblies of software components has been blurring for some time, and this trend will complicate further the effective management of structured objects. For example, because a structured object can contain computation, it is no longer benign from the standpoint of security. An information object could threaten confidentiality by embodying a communications channel back to another host, or it could threaten integrity or service access due to computations it makes while within a protected environment. Many concepts are being developed to address these problems, but their interplay with broader information management issues remains to be worked out. This issue also reinforces the increasing convergence between concepts of information management and concepts of software and computation.

Production and Value

National-scale applications provide many more opportunities for information producers to participate in an increasingly rich and complex information marketplace. Every educator, health care professional, and crisis management decision maker creates information, and that information has a particular audience. Technology to support the efficient production of information and, more generally, the creation of value in an information value chain is becoming increasingly important in many application areas and on the Internet in general.

The World Wide Web, even in its present early state of development, provides evidence of the wide range of kinds of value that can be provided beyond what are normally thought of as original content. For example, among the most popular Web services are sites that catalog and index other sites. Many sites are popular because they assess and evaluate other sites. There are services emerging for brokering of information, either locating sites in response to queries or locating likely consumers of produced specialty information. Because of the speed of the electronic network, many steps can be made very efficiently along the way from initial producer to end consumer of information.

Related to these concepts of information value are new information services. For example, there are several candidate services that support commerce in information objects. Because information objects can be delivered rapidly and reliably, they can support commerce models that are very different from models for physical objects. In addition, services are emerging to support information retrieval, serving of complex multimedia objects, and the like. The profusion of information producers on the Web also creates a need for a technology that enables successful small-scale services to scale up to larger-scale and possibly institutional-level services. National-scale applications such as crisis management complicate this picture because they demand attention to quality and timeliness. Thus the capability of an information retrieval system, for example, may be measured in terms of functions ranging from resource availability (for meeting a deadline) to precision and recall.
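The retrieval measures mentioned here are easy to state precisely. The short functions below compute standard precision and recall for a returned result set, with a simple deadline check standing in, very loosely, for resource availability; the example documents and deadline are assumptions for illustration.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved items that are actually relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    return len(retrieved & relevant) / len(relevant) if relevant else 1.0

def met_deadline(elapsed_s, deadline_s):
    # Crude timeliness check for deadline-driven crisis use.
    return elapsed_s <= deadline_s

retrieved = ["doc1", "doc3", "doc4"]
relevant = ["doc1", "doc2", "doc3"]
print(precision(retrieved, relevant),   # 0.666...
      recall(retrieved, relevant),      # 0.666...
      met_deadline(elapsed_s=2.1, deadline_s=5.0))
```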

Distribution and Relocation

As noted above, distributed information resources may have to be applied, in the aggregate, to support national-scale applications. In these applications, there can be considerable diversity that must be managed. The distributed information resources can be public or private, with varying access control, security, and payment provisions. They can include traditional databases, wide-area file systems, digital libraries, object databases, multimedia databases, and miscellaneous ad hoc information resources. They can be available on a major network, on storage media, or in some other form. They also can include a broad range of kinds of data, such as structured text, images, audio, video, multimedia, and application-specific structured types.

For many applications, these issues can interact in numerous ways. For example, when network links are of low capacity or are intermittent, in many cases it may be acceptable to degrade quality. Alternatively, relative availability, distribution, and quality of communications and computing resources may determine the extent to which data and computation migrate over the distributed network. For example, low-capacity links and limited computing resources at the user's location may suggest that query processing is best done at the server; but when clients have significant computing resources and network capacity is adequate, then query processing, if it is complex, could be done at the client site. When multiple distributed databases cooperate in responding to queries, producing aggregated responses, this resource-balancing problem can become more complex; when atomicity and replication issues are taken into account, it can become even more difficult.
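The resource-balancing argument can be sketched as a simple placement rule. The thresholds and inputs below are invented placeholders for quantities that would, in practice, be measured or negotiated dynamically.

```python
def choose_query_site(link_kbps, link_stable, client_cpu_share,
                      query_complexity):
    """Return 'server' or 'client' for where to run query processing.

    A toy policy: push work to the client only when the network can carry
    the raw data, the link is stable, and the client has spare capacity.
    """
    if not link_stable or link_kbps < 64:
        return "server"          # ship small answers, not bulk data
    if client_cpu_share > 0.5 and query_complexity == "complex":
        return "client"          # ample local compute, adequate link
    return "server"

# Field laptop on an intermittent radio link: keep processing at the server.
print(choose_query_site(link_kbps=32, link_stable=False,
                        client_cpu_share=0.8, query_complexity="complex"))
```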

In crisis management, resource management and availability issues take on new dimensions. In a crisis, complex information integration problems may yield results that go into public information kiosks. When communications are intermittent or resource constrained, caching and replication techniques must respond to levels of demand that are unanticipated or are changing rapidly. Can data replicate and migrate effectively without direct manual guidance and intervention? This is more difficult when there are data quality problems or when kiosks support direct interaction and creation of new information.

USER-CENTERED SYSTEMS: DESIGNING APPLICATIONS TO WORK WITH PEOPLE

Research on natural, intuitive user interface technologies has been under way for many years. Although significant progress has been made, workshop participants indicated that a more comprehensive view of the human-computer interface as part of larger systems must be developed in order for these technologies to yield the greatest benefit. Allen Sears observed, "The fact that humans make . . . errors, the fact that humans are impatient, the fact that humans forget—these are the kinds of issues that we need to deal with in integrating humans into the process. The flip side of that . . . is that humans, compared to computers, have orders-of-magnitude more domain-specific knowledge, general knowledge, common sense, and ability to deal with uncertainty."

System designs should focus on integrating humans into the system, not just on providing convenient human-computer interfaces. The term "system" today commonly refers to the distributed, heterogeneous networks, computers, and information that users interact with to build and run applications and to accomplish other tasks. A more useful and accurate view of the user-system relationship is of users as an integral part of the total system and solution space. Among other advantages, this view highlights the need for research integrating computing and communications science and engineering with advances in the understanding of user and organizational characteristics from the social sciences.

Human-centered Systems and Interfaces

Traditional human-computer interface research embraces a wide array of technologies, such as speech synthesis, visualization and virtual reality, recognition of multiple input modes (e.g., speech, gesture, handwriting), language understanding, and many others. 12 All applications can benefit from easy and natural interfaces, but these are relative characteristics that vary for different users and settings. A basic principle is that the presentation should be as natural to use as possible, to minimize demands on those with no time or attention to spare for learning how to use an application. This does not necessarily imply simplicity; an interface that is too simple may omit capabilities the user needs, leading to frustration.

In addition, designers of interfaces in large-scale applications with diverse users cannot depend on the presence of a particular set of computing and communications resources, so the interfaces must be adaptable to what is available. The network-distributed nature of many applications requires attention to the scaling of user interfaces across a range of available platforms, with constraints that are diverse and—especially in crises—unpredictable. Constraints include power consumption in portable computers and communications bandwidth. For example, it is important that user interfaces and similar services for accessing a remote computing resource be usable, given the fidelity and quality of service available to the user. An additional focus for research in making interface technologies usable in national-scale applications is reducing their cost.

Crisis management, however, highlights the need to adapt not only to available hardware and software, but also to the user. Variations in training and skills affect what users can do with applications and how they can best interact with them. As David Austin observed:

Training is also critical; people with the proper skill mix are often in short supply. We have not leveraged the technology sufficiently to deliver short bursts of training to help a person gain sufficient proficiency to perform the task of the moment. . . .

[What is needed is] a system that optimizes both the human element and the information technology element using ideas from the object technology world. In such a system, a person's skills would be considered an object; as the person gained and lost skill proficiency over his career, he would be trained and given different jobs [so that he could be part of] a high-performance work force able to match any in the world. The approach involves matching a person with a job and at the same time understanding the skill shortfalls, training in short bursts, and/or tutoring to obtain greater proficiency. As shortfalls are understood by the person, he or she can task the infrastructure to provide just-in-time, just-enough training at the time and place the learner wants and needs it.

In addition, because conditions such as stress and information overload can vary rapidly during a crisis, there would also be value in an ability to monitor the user's performance (e.g., through changes in response time or dexterity) and adapt in real time to the changing capabilities of users under stress. By using this information, applications such as a "crisis manager's electronic aide" could adjust filtering and prioritization to reduce the flood of information given to the user. Improvements in techniques for data fusion in real time among sensors and other inputs would enhance the quality of this filtering. Applications could also be designed to alter their presentation to provide assistance, such as warnings, reminders, or step-by-step menus, if the user appears to be making increasing numbers of errors.

The focus of these opportunities is inherently multidisciplinary. To achieve significant advances in the usability of applications, improvements in particular interface techniques can be augmented by integrating multiple, complementary technologies. Recent research in multimodal interfaces has proceeded from the recognition that no single technique is always the best for even a single user, much less for all users, all the time, and that a combination of techniques can be more effective than any single one. Learning how to optimize the interface mode for any given situation requires experimentation, as well as building on social science research in areas such as human factors and organizational behavior.

Recognizing that the ideal for presentation of information to the user is in a form and context that is understandable, workshop participants noted that in some applications a visual presentation is called for. Given adequate performance, an immersive virtual reality environment could benefit applications such as crisis management training, telemedicine, and manufacturing design. In crisis management training especially, a realistic recreation of operational conditions (such as the appearance of damaged structures, the noise and smoke of fires and storms, the sound of explosions) can help reproduce—and therefore train for—the stress-inducing sensations that prevail in the field. Because response to a crisis is inherently a collaborative activity, simulations should synthesize a single, consistent, evolving situation that can be observed from many distinct points of view by the team members. 13

Don Eddington identified a common perception of the crisis situation as a feature that is essential to effective collaboration. A depiction of the geographic neighborhood of a crisis can provide an organizing frame of reference. Photographs and locations of important or damaged facilities, visual renderings of simulation results, logs of team activity, locations of other team members, notes—all can attach to points on a map. Given adequate bandwidth and computing capacity, another way to provide this common perception might be through synthetic virtual environments, displaying a visualization of the situation that could be shared among many crisis managers. (The Crisis 2005 scenario presented in Box 1.3 suggests a long-range goal for implementing this concept such that a crisis manager could be projected into a virtual world optimized to represent the problem at hand in a way that enhances the user's intuition.) Research challenges underlying such visualizations include ways to integrate and display information from diverse sources, including real observations (e.g., from field reports or sensors) and simulations. The variation in performance among both equipment and skills of different users may prevent displaying precisely the same information to all users; presumably, some minimal common elements are necessary to enable collaboration. Determining precisely what information and display features should be common to all collaborators is an example of the need for technology design to be complemented with multidisciplinary research in areas such as cognition and organizational behavior.

Collaboration and Virtual Organizations

Because people work in groups, collaboration support that helps them communicate and share information and resources can be of great benefit. Crisis management has a particularly challenging need: an instant bureaucracy to respond effectively to a crisis. In a crisis, there is little prior knowledge of who will be involved or what resources will be available; nevertheless, a way must be found to enable them to work together to get their jobs done. This implies assembling resources and groups of people into organized systems that no one could know ahead of time would have to work together. Multiple existing bureaucracies, infrastructures, and individuals must be assembled and formed into an effective virtual organization. The instant bureaucracy of a crisis response organization is an even more unpredictable, horizontal, and heterogeneous structure than is implied by traditional command and control models of military organizations in warfare—themselves a complex collaboration challenge. Crisis management collaboration must accommodate this sort of team building rapidly; thus, it provides requirements for developing and opportunities for testing collaboration technologies that are rapidly configurable and support complex interactions.

One relatively near-term opportunity is to develop and use the concept of anchor desks (discussed above, in the section ''Distributed Computing"). The concept has been tested in technology demonstrations such as JWID (see Chapter 1 ); field deployment in civilian crises could be used to stress the underlying concepts and identify research needs. Anchor desks can provide a resource for efficient, collaborative use of information, particularly where multiple organizations must be coordinated. They represent a hybrid between decentralized and centralized information management. Each anchor desk could support a particular functional need, such as logistics or weather forecasting. A crisis management anchor desk would presumably be located outside the crisis zone, for readier access to worldwide information sources and expertise; however, it would require sufficient communication with people working at the scene of the crisis to be useful to them, as well as the ability to deliver information in scalable forms appropriate to the recipient's available storage and display capabilities (e.g., a geographic information system data file representing the disaster scene for one, a static map image for another, a text file for a third).
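Delivering "scalable forms" of the same situation picture might look, in caricature, like the content-selection rule sketched below. The format names, thresholds, and capability inputs are assumptions made for the sake of the example.

```python
def select_representation(display, storage_mb, link_kbps):
    """Pick the richest representation the recipient can actually use."""
    if display == "gis" and storage_mb >= 200 and link_kbps >= 512:
        return "gis-datafile"     # full geographic data set with overlays
    if display in ("gis", "graphic") and link_kbps >= 64:
        return "static-map"       # rendered image of the disaster scene
    return "text-summary"         # lowest common denominator

# A field responder on a narrowband radio link gets the text summary.
print(select_representation(display="graphic", storage_mb=64, link_kbps=9.6))
```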

An anchor desk could not only integrate data from multiple sources, but also link it with planning aids, such as optimized allocation of beds and medicines and prediction of optimal evacuation routes implemented as electronic overlays on geographic information systems, with tools involving a range of artificial intelligence, information retrieval, integration, and simulation technologies. An anchor desk could also house a concentration of information analysts and subject matter experts (e.g., chemists, as envisioned in the Crisis 2005 scenario); computing resources for modeling, simulation, data fusion, and decision support; information repositories; and others.

Anchor desks could provide services to support cross-organizational collaboration, such as tools for rapidly translating data files, images, and perhaps even human languages into forms usable by different groups of people. Furthermore, the anchor desk might not be physically at one place; a logically combined, but physically separated, collection of networked resources could perform the same function, opening the possibility for multiple ways of incorporating the capability into the architecture of the crisis response organization. The set of technologies implied by this sort of anchor desk could serve to push research not only in each technology, but also in tools and architectures for integrating these capabilities, such as whiteboards and video-conferencing systems that scale for different users' capacities and can correctly integrate multiple security levels in one system.

Nevertheless, information must be integrated not only at remote locations such as command centers and anchor desks, but also at field sites. David Kehrlein, of the Office of Emergency Services, State of California, noted, "Solutions require development of on-site information systems and an integration of those with the central systems. If you don't have on-site intelligence, you don't know a lot."

Judgment Support

The most powerful component of any system for making decisions in a crisis is a person with knowledge and training. However, crisis decision making is marked by underuse of information and overreliance on personal expertise in an environment that is turbulent and rich in information flows. The expert, under conditions of information overload, acts as if he or she has no information at all. Providing access to information is not enough. The ability to evaluate, filter, and integrate information is the key to its being used.

Filtering and integrating could be done separately for each person on that person's individual workstation. However, a more useful approach for any collaborative activity would be to integrate and allocate information within groups of users. (In fact, information filtering at the boundary of a linked group of users could be one of the most important services performed by the virtual subnets discussed above in the section "Networking"; filters could help individuals and groups avoid information-poor decision making in an information-rich environment.) Information integration techniques such as those discussed in the section "Information Management" are generally presented in terms of finding the best information from diverse sources to meet the user's needs. The flip side of this coin is the advantage of being able to cull the second-best and third-best information, reducing the unmanageable flood.
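A filter at the boundary of a group could be as simple, in outline, as the sketch below: score incoming items for relevance and pass only the best item per topic, deliberately discarding the second- and third-best. The scores, topics, and threshold are all assumed for illustration.

```python
def filter_for_group(items, min_score=0.5):
    """Keep only the top-scoring item for each topic above a threshold."""
    best = {}
    for item in items:
        if item["score"] < min_score:
            continue
        topic = item["topic"]
        if topic not in best or item["score"] > best[topic]["score"]:
            best[topic] = item    # silently culls second- and third-best
    return list(best.values())

incoming = [
    {"topic": "shelter-capacity", "score": 0.9, "text": "County A: 300 beds"},
    {"topic": "shelter-capacity", "score": 0.7, "text": "Older estimate"},
    {"topic": "road-status",      "score": 0.4, "text": "Unconfirmed rumor"},
]
for item in filter_for_group(incoming):
    print(item["text"])          # only the best shelter report survives
```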

A set of special needs of crisis management, which may have significant utility in other application areas as well, can be captured in the concept of judgment support. A crisis manager often makes intuitive judgments in real time that correspond to previously undefined problems without complete contingency plans. This should be contrasted with traditional notions of decision support, which are associated with a more methodical, rule-based approach to previously defined and studied problems. Judgment support for crisis management could rely on rule-based expert systems to some extent, but the previously defined problems used to train these systems will necessarily be somewhat different from any given crisis. Workshop participants suggested a need for automated support comparing current situations with known past cases. To achieve this automation, however, much better techniques are required for abstractly representing problems, possible solutions, and the sensitivity of predicted outcomes to variations, gaps, and uncertain quality in available information.

The last point is particularly important for crises, because it is inevitable that some of the information the judgment maker relies upon will be of low quality. Two examples are the poor quality of maps that crisis management experts remarked on in the workshops and the rapid rate of change in some crises that continually renders knowledge about the situation obsolete. The technology for representing problem spaces and running computations on them must therefore be able to account for the degree of uncertainty about information. Moreover, data may not always vary in a statistically predictable way (e.g., Gaussian distribution). In some kinds of crises, data points may be skewed unpredictably by an active adversary (e.g., a terrorist or criminal), by someone attempting to hide negligence after an accident, or by unexpected failure modes in a sensor network.

Another reason the challenge of representing problems may be particularly difficult in crisis management is that the judgments needed are often multidimensional in ways that are inherently difficult to represent. James Beauchamp's call for tools to help optimize not only the operational and logistical dimensions of a foreign disaster relief operation, but also the political consequences of various courses of action, illustrates the complexity of the problem. Even presenting the variables in a way that represents and could allow balancing among all dimensions of the problem is not possible with current techniques. By contrast, the multidimensional problem discussed in Chapter 1 (see the section "Manufacturing")—simulating and optimizing trade-offs among such facets as product performance parameters, material costs, manufacturability, and full product life-cycle costs—although extremely complex computationally, is perhaps more feasible to define in terms with which computer models can work.

If a problem can be represented adequately, a judgment support system should be able to assist the judgment maker by giving context and consequences from a multidimensional exploration of the undefined problem represented by the current crisis. This context construction requires automated detection and classification of issues and anomalies, identifying outlier data points (which could represent errors, but could also indicate emerging new developments), and recognizing relationships between the current situation and previously known cases that may have been missed by or unknown to the crisis manager.
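Two of the ingredients named here, flagging outlier data points and recalling similar past cases, can at least be gestured at in code. The z-score cutoff, the gauge readings, the feature vectors, and the case library below are all hypothetical, and real problem representations would be far richer than flat numeric features.

```python
from statistics import mean, stdev
from math import dist

def flag_outliers(values, cutoff=1.5):
    """Return indices of values far from the mean (loose cutoff, since a
    single outlier inflates the standard deviation in small samples)."""
    if len(values) < 3:
        return []
    m, s = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if s and abs(v - m) / s > cutoff]

def nearest_cases(current, case_library, k=2):
    """Rank past cases by Euclidean distance over shared feature vectors."""
    return sorted(case_library, key=lambda c: dist(c["features"], current))[:k]

river_gauges_m = [2.1, 2.3, 2.2, 6.8, 2.4]        # one anomalous reading
print(flag_outliers(river_gauges_m))               # -> [3]

cases = [
    {"name": "1994 flood", "features": [6.5, 120.0]},
    {"name": "1989 quake", "features": [0.0, 15.0]},
]
print(nearest_cases(current=[6.8, 110.0], case_library=cases, k=1))
```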

Because judgments are ultimately made by people, not computers, technologies intended to support making judgments must be designed for ease of use and with an ability to understand and take into account the capabilities and needs of the user. To a great extent, of course, it is up to the user to ask for the information he or she needs, but a model of what knowledge that individual already has could be used to alter the system's information integration and presentation approaches dynamically. Another special application for crisis management is monitoring the decision maker, because of the stress and fatigue factors that come into play. Performance monitors could detect when the user's performance is slipping, by detecting slowed reaction time and onset of errors. This information could guide a dynamic alteration in the degree of information filtering, along with variations in the user interface (such as simpler menu options). These capabilities could be of more general value. For example, they could assist in assessing the effectiveness of multimedia training and education tools in schools and continuing-education applications.
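A crude sketch of the monitoring loop described here, with invented thresholds and fatigue signals: when reaction times stretch and errors climb, the system raises its filtering level and simplifies the menu style.

```python
def adapt_interface(recent_reaction_s, recent_errors, baseline_s=1.0):
    """Return (filter_level, menu_style) based on simple fatigue signals."""
    slowed = (sum(recent_reaction_s) / len(recent_reaction_s)) > 2 * baseline_s
    error_prone = recent_errors >= 3
    if slowed and error_prone:
        return "aggressive", "step-by-step"   # show less, guide more
    if slowed or error_prone:
        return "moderate", "simplified"
    return "light", "full"

# Operator slowing down and making mistakes after hours on shift.
print(adapt_interface(recent_reaction_s=[2.4, 2.8, 3.1], recent_errors=4))
```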

Of course, to be useful, a monitoring capability would have to be integrated properly with the way users actually use systems. For example, users will ignore a system that instructs them to get some rest when rest is not an option. Instead, it might be valuable for a system to switch to a standard operating procedures-oriented, step-by-step interface when the user shows signs of tiring. Human factors research provides useful insights, including some that are of generic usefulness. However, needs will always vary with the context of specific applications, implying the strong necessity for researchers and application users to interact during testing and deployment of systems and design of new research programs (Drabek, 1991).

1.  

Partridge, Craig, and Frank Kastenholz, "Technical Criteria for Choosing IP the Next Generation (IPng)," Internet Request for Comments 1726, December 1994. Available on line from .

2.  

Services and technologies are now emerging that may meet this need, such as cellular digital packet data and digital spread-spectrum. Portable terminals that can be used to communicate via satellite uplink are an additional exception; however, such systems are not yet portable or affordable enough that many relief workers in a crisis could carry one for general use.

3.  

Noncommercial, amateur packet radio is a counterexample; however, commercial service offerings are lacking. Part of the problem is the lack of methods of accounting for use of the spectrum in peer-to-peer packet radio networks, without which there is a potential problem of overuse of the spectrum—a tragedy of the commons.

4.  

A description of the proposed demonstration is available on line at the JWID '96 home page, .

5.  

Many telephone carriers now provide frame-relay virtual subnets that are intended to support the isolation discussed here. One serious drawback at present is that their establishment is on a custom basis and is both labor intensive and time-consuming. Telephone carriers are likely to adopt a more automated order fulfillment process as demand grows, but it remains technically infeasible to requisition and establish these services in the heat of a crisis to solve an immediate problem.

6.  

Given the current costliness of access to high-performance computation and high-speed network services, achieving this gain will require political and economic decisions about making resources available, perhaps based on building a case that this investment could yield a positive payoff by lowering the eventual cost of responding to crises.

7.  

In addition, the coarser-grained simulation can be used to provide dynamically consistent boundary conditions around the areas examined in finer detail. The model, called the Advanced Regional Prediction System, is written in Fortran and designed for scalability. See Droegemeier (1993) and Xue et al. (1996). See also "The Advanced Regional Prediction System," available on line at .

8.  

A CAPS technical paper explains that "although no meteorological prediction or simulation codes we know of today were designed with massive parallelism in mind, we believe it is now possible to construct models that take full advantage of such architecture." See "The Advanced Regional Prediction System: Model Design Philosophy and Rationale," available on line at

9.  

The ability to effectively handle time as a resource is an issue not only for integrating real-time data, but for distributed computing systems in general. Formal representation of temporal events and temporal constraints, and scheduling and monitoring distributed computing processes with hard real-time requirements, are fundamental research challenges. Some research progress has been made in verifying limited classes of real-time computable applications and implementing prototype distributed real-time operating systems.

10.  

Details about I-WAY are available on line at .

11.  

One key data fusion challenge involves data alignment and registration, where data from different sources are aligned to different norms.

12.  

Some key challenges underlying communication between people and machines relate to information representation and understanding. These are addressed primarily in the section, "Information Management," but it should be understood that without semantic understanding of, for example, a user's requests, no interface technology will produce a good result.

13.  

This concept is currently used for military training in instances when high-performance computation is available; trainees' computers are linked to the high-performance systems that generate the simulation, and the trainees see a more or less realistic virtual crisis (OTA, 1995). Nonmilitary access to such simulations likely requires lower-cost computing resources.

This book synthesizes the findings of three workshops on research issues in high-performance computing and communications (HPCC). It focuses on the role that computing and communications can play in supporting federal, state, and local emergency management officials who deal with natural and man-made hazards (e.g., toxic spills, terrorist bombings). The volume also identifies specific research challenges for HPCC in meeting unmet technology needs in crisis management and other nationally important application areas, such as manufacturing, health care, digital libraries, and electronic commerce and banking.

14 Tech-Related Ethical Concerns and How They Can Be Addressed

Forbes Technology Council

Modern technology tools such as artificial intelligence, machine learning and quantum computing can offer amazing benefits to industry and society, from better sharing of marketing messages to tracking human health to eliminating mundane tasks and giving humans free time to create—and much more. But as tech tools become more powerful, additional ethical concerns arise. Not only can tech tools be turned to malicious uses in the hands of bad actors, but they also have inherent flaws simply because they’re built by flawed humans.

There are no more enthusiastic proponents of technology than leaders in the tech industry, but they’re also the people who are most aware of the functional and ethical drawbacks that almost inevitably arise in rapidly evolving technology tools. Here, 14 members of Forbes Technology Council discuss tech-related ethical issues that they’re concerned about—from the biases in artificial intelligence introduced by its human creators to the ways those with bad intentions can misuse tech—and what can be done to address these problems.

1. Bias In AI

For years, human biases have found their way into AI modeling—this includes gender bias, racial prejudice, age discrimination, and sexual orientation and gender identity omission. Investing in a diversified AI field is essential for algorithmic fairness. In diversifying the AI community, diversity of thought emerges, and the community is better equipped to anticipate, spot and address biases. - Nicole Janssen, AltaML
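
One concrete practice teams use alongside diversification is measuring how a model's outcomes differ across groups. The sketch below computes a simple demographic-parity gap on made-up predictions; it is an illustration of one possible check, not a full fairness audit.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-outcome rate across
    groups. `predictions` holds 0/1 model outputs; `groups` holds the
    corresponding group labels. Illustrative data and metric only."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical loan-approval outputs for applicants from two groups.
rates, gap = demographic_parity_gap(
    [1, 0, 1, 1, 0, 0, 1, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates, gap)  # {'a': 0.75, 'b': 0.25} and a gap of 0.5
```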

2. AI Accountability

Artificial intelligence systems are becoming more prevalent and are starting to make decisions directly affecting customers. These AI systems are trained on historic data, which risks perpetuating historical biases that we must overcome. To solve this, AI systems must have traceability, explainability and an ethical AI board established to hold the AI system accountable. - Amit Sinha, Microsoft
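
Traceability of this kind often begins with recording, for every automated decision, the inputs, the model version, and the output, so that a review board can reconstruct it later. The sketch below is a minimal illustration; the field names, file format, and example decision are invented.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, features, output, log_path="decisions.jsonl"):
    """Append one auditable record of an automated decision. Hashing the
    inputs lets a reviewer detect later tampering; this is a sketch, not a
    complete governance system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "features_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(log_decision("credit-model-1.4", {"income": 52000, "tenure_months": 18}, "approve"))
```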

3. Restricted Access To The Internet

With the rise of the digital economy, internet connectivity has become critical infrastructure, raising questions of whether access to it should be sanctioned in times of conflict. In tackling this ethical dilemma, tech leaders should consider how important open access to digital resources and global perspectives is in counterbalancing disinformation and empowering citizens’ decision making. - Kris Beevers , NS1

4. AI Codes Of Conduct

As AI becomes more prevalent, continuing to build the public’s trust in this technology to do good will be a “must-do” function, not a “nice-to-do” one. A first step is for organizations to develop AI codes of conduct that will ensure they use the technology ethically. These codes should always enshrine sustainability, human-centricity, accountability, transparency and respect for privacy and data protection. - Zhiwei Jiang , Capgemini

5. Deepfakes And Targeted Misinformation

The manipulation of public opinion—for personal or political gain, to commit financial fraud or just to create chaos—through exceedingly realistic deepfakes and the posting of targeted misinformation on social media is particularly concerning. To address this issue, technology companies, regulatory agencies and law enforcement must work together to create legal frameworks for vigilance and action. - Mahesh Jaishankar , ARC Solutions

6. Ad Fraud

Big Tech platforms make huge profits as advertisers spend money to reach their audiences, and they have an ethical responsibility to provide accurate data on whether ads are reaching target audiences. Ad fraud is running rampant, and it costs advertisers who can’t determine the ROI and reach of their ad spend. Advertisers need visibility into the quality of the data from platforms to make decisions. - David Finkelstein , BDEX

7. Commoditization Of User Data

Data is said to be the most valuable asset in the world, so protecting it is paramount. Today, user data is quickly becoming commoditized and sold against the user’s will. As we navigate the future of a data-dependent world, it’s critical to prioritize the safety and satisfaction of the end user, protecting their sensitive data at every turn. - Marc Fischer , Dogtown Media LLC

8. Misuse Of Personal Data

The collection and misuse of personally identifiable information is a human rights concern. But there are situations in which a person may want to release their information in the aggregate. It’s important to educate both businesses and individuals about what constitutes PII, in what circumstances it can be used and how to safely model data to a protected degree of anonymization. - Lewis Wynne-Jones , ThinkData Works

9. Waning Consumer Privacy

The extent to which we are all increasingly willing to make the trade-off between privacy and convenience is expanding at a blistering pace. Permitting location tracking for targeting, giving access to sensitive personal systems such as bank accounts/logins for aggregation or payment convenience, sharing our unique biometric identifiers through facial and thumbprint scans to provide faster and more convenient logins—all of these and more have the potential for abuse. - Saša Zdjelar , Salesforce

10. Liability For Autonomous Vehicle Decisions

Autonomous vehicles and AI are top of the list. If a vehicle has to “choose” between hitting a hard object and a soft one like a person, where does the liability lie? Is it with the vehicle manufacturers, the insurance company or the software coder? How does one design an ethical solution for this? - Blair Currie , Snibble Corp.

11. Human Control By Algorithm

In a time of targeted advertisement and influencing, being controlled by algorithms has become a question that few of us ask. While countless thinkers have explored the question of controlling the population through pleasure (most prominently in Aldous Huxley’s Brave New World), our ability to predict human behavior using machine learning has enabled us to follow through with frightening precision. - Kevin Korte, Univention

12. Excessive Automation

Replacing humans with machines has always been a complex topic for me, making it difficult to “pick a side.” On the one hand, I love the idea of automation and software to solve problems. On the other, without significant political and societal changes, software/robots/AI can negatively disrupt the lives of millions of people. We need to balance automation with remembering that we’re just humans spinning around in space. - James Beecham , ALTR

13. Power Of Quantum Computing

Once the technology is stabilized, the power of quantum computing is sure to revolutionize the world in terms of the massive leap in processing power that will be accessible to more and more individuals. That said, it will certainly also raise ethical concerns as it could be a nightmare in the wrong hands; it could be used to crack passwords, weaken encryption algorithms and compromise blockchain networks. - Husein Sharaf , Cloudforce

14. Robot-Delivered Medical Care

The use of robotics for remote (offsite) medical operations poses a major ethical concern. While infrastructure and technology have seen tremendous developments, the distance between the patient and the medical practitioner, together with cybersecurity risks, make such operations risky. We need to continue to develop both the technology and administrative controls and regulations—there must be an accountable party. - Spiros Liolis , Micro Focus

14 Major Tech Issues — and the Innovations That Will Resolve Them

Members of the Young Entrepreneur Council discuss some of the past year’s most pressing technology concerns and how we should address them.

Young Entrepreneur Council

The past year has seen unprecedented challenges to public-health systems and the global economy. Many facets of daily life and work have moved into the digital realm, and the shift has highlighted some underlying business technology issues that are getting in the way of productivity, communication and security.

As successful business leaders, the members of the Young Entrepreneur Council understand how important it is to have functional, up-to-date technology. That’s why we asked a panel of them to share what they view as the biggest business tech problem of the past year. Here are the issues they’re concerned about and the innovations they believe will help solve them.

Current Major Technology Issues

  • Need For Strong Digital Conference Platforms
  • Remote Internet Speed and Connections
  • Phishing and Data Privacy Issues
  • Deepfake Content
  • Too Much Focus on Automation
  • Data Mixups Due to AI Implementation
  • Poor User Experience

1. Employee Productivity Measurement

As most companies switched to 100 percent remote almost overnight, many realized that they lacked an efficient way to measure employee productivity. Technology with “user productivity reports” has become invaluable. Without being able to “see” an employee in the workplace, companies must find technology that helps them to track and report how productive employees are at home. — Bill Mulholland, ARC Relocation

2. Digital Industry Conference Platforms

Nothing beats in-person communication when it comes to business development. In the past, industry conferences were king. Today, though, the move to remote conferences really leaves a lot to be desired and transforms the largely intangible value derived from attending into something that is purely informational. A new form or platform for industry conferences is sorely needed. — Nick Reese , Elder Guide

3. Remote Internet Speed and Equipment

With a sudden shift to most employees working remotely, corporations need to boost at-home internet speed and capacity for employees that didn’t previously have the requirements to produce work adequately. Companies need to invest in new technologies like 5G and ensure they are supported at home. — Matthew Podolsky, Florida Law Advisers, P.A.

4. Too Much Focus on Automation

Yes, automation and multi-platform management might be ideal for big-name brands and companies, but for small site owners and businesses, it’s just overkill. Way too many people are overcomplicating things. Stick to your business model and what works without trying to overload the process. — Zac Johnson, Blogger

5. Phishing Sites

There are many examples of phishing site victims. Last year, I realized the importance of good pop-up blockers for your laptop and mobile devices. It is so scary to be directed to a website that you don’t know or to even pay to get to sites that actually don’t exist. Come up with better pop-up blockers if possible. — Daisy Jing, Banish

6. Data Privacy

I think data privacy is still one of the biggest business tech issues around. Blockchain technology can solve this problem. We need more and more businesses to understand that blockchains don’t just serve digital currencies, they also protect people’s privacy. We also need Amazon, Facebook, Google, etc. to understand that personal data belongs in the hands of the individual. — Amine Rahal , IronMonk Solutions

7. Mobile Security

Mobile security is a big issue because we rely so much on mobile internet access today. We need to be more aware of how these networks can be compromised and how to protect them. Whether it’s the IoT devices helping deliver data wirelessly to companies or people using apps on their smartphones, we need to become more aware of our mobile cybersecurity and how to protect our data. — Josh Kohlbach, Wholesale Suite

8. Deepfake Content

More and more people are embracing deepfake content, which is content created to look real but isn’t. Using AI, people can edit videos to look like someone did something they didn’t do and vice versa, which hurts authenticity and makes people question what’s real. Lawmakers need to take this issue seriously and create ways to stop people from doing this. — Jared Atchison, WPForms

9. Poor User Experience

I’ve noticed some brands struggling with building a seamless user experience. There are so many themes, plugins and changes people can make to their site that it can be overwhelming. As a result, the business owner eventually builds something they like, but sacrifices UX in the process. I suspect that we will see more businesses using customer feedback to make design changes. — John Brackett, Smash Balloon LLC

10. Cybersecurity Threats

Cybersecurity threats are more prevalent than ever before with increased digital activities. This has drawn many hackers, who are becoming more sophisticated and are targeting many more businesses. Vital information, such as trade secrets, price-sensitive information, HR records, and many others, is more vulnerable. Strengthening cybersecurity laws can maintain equilibrium. — Vikas Agrawal, Infobrandz

11. Data Backup and Recovery

As a company, you’ll store and keep lots of data crucial to keeping business moving forward. A huge tech issue that businesses face is their backup recovery process when their system goes down. If anything happens, you need access to your information. Backing up your data is crucial to ensure your brand isn’t at a standstill. Your IT department should have a backup plan in case anything happens. — Stephanie Wells, Formidable Forms
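
A bare-bones version of "back up, then verify you can trust the copy" might look like the sketch below; the directory names are placeholders, and a real deployment would add scheduling, encryption, retention policies, and offsite copies.

```python
import hashlib
import tarfile
from pathlib import Path

def back_up(source_dir, backup_dir):
    """Archive source_dir into backup_dir and return the archive path plus
    its SHA-256 digest so a later restore can be verified."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    archive = backup_dir / "backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return archive, hashlib.sha256(archive.read_bytes()).hexdigest()

def verify(archive, expected_digest):
    """Recompute the checksum before trusting the archive for a restore."""
    return hashlib.sha256(Path(archive).read_bytes()).hexdigest() == expected_digest

archive, digest = back_up("data", "backups")  # "data" is a placeholder directory
print("backup intact:", verify(archive, digest))
```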

12. Multiple Ad and Marketing Platforms

A major issue that marketers are dealing with is having to use multiple advertising and marketing platforms, with each one handling a different activity. It can overload a website and is quite expensive. We’re already seeing AdTech and MarTech coming together as MAdTech. Businesses need to keep an eye on this convergence of technologies and adopt new platforms that support it. — Syed Balkhi, WPBeginner

13. Location-Based Innovation

The concentration of tech companies in places like Seattle and San Francisco has led to a quick rise in living costs in these cities. Income isn’t catching up, and there’s stress on public infrastructure. Poor internet services in rural areas also exacerbate this issue. Innovation should be decentralized. — Samuel Thimothy, OneIMS

14. Artificial Intelligence Implementation

Businesses, especially those in the tech industry, are having trouble implementing AI. If you’ve used and improved upon your AI over the years, you’re likely having an easier time adjusting. But new online businesses test multiple AI programs at once and it’s causing communication and data mix-ups. As businesses settle with specific programs and learn what works for them, we will see improvements. — Chris Christoff, MonsterInsights

Experts Predict More Digital Innovation by 2030 Aimed at Enhancing Democracy

5. Tech causes more problems than it solves

A number of respondents to this canvassing about the likely future of social and civic innovation shared concerns. Some said that technology causes more problems than it solves. Some said it is likely that emerging worries over the impact of digital life will be at least somewhat mitigated as humans adapt. Some said it is possible that any remedies may create a new set of challenges. Others said humans’ uses and abuses of digital technologies are causing societal harms that are not likely to be overcome.

The following comments were selected from among all responses, regardless of an expert’s answer to this canvassing’s main question about the impact of people’s uses of technology. Some of these remarks of concern happen to also include comments about innovations that may emerge. Concerns are organized under four subthemes: Something is rotten in the state of technology; technology use often disconnects or hollows out a community; society needs to catch up and better address the threats and opportunities of tech; and despite current trends, there is reason to hope for better days.

The chapter begins with some overview insights:

Larry Masinter , internet pioneer, formerly with Adobe, AT&T Labs and Xerox PARC, who helped create internet and web standards with IETF and W3C, said, “Technology and social innovation intended to overcome the negatives of the digital age will likely cause additional negative consequences. Examples include: the decentralized web, end-to-end encryption, AI and machine learning, social media.”

James Mickens , associate professor of computer science at Harvard University, formerly with Microsoft, commented, “Technology will obviously result in ‘civic innovation.’ The real question is whether the ‘innovation’ will result in better societal outcomes. For example, the gig economy is enabled by technology; technology finds buyers for workers and their services. However, given the choice between an economy with many gig workers and an economy with an equivalent number of traditional middle-class jobs, I think that most people would prefer the latter.”

Michael Aisenberg , chair, ABA Information Security Committee, wrote, “Misappreciation of limits and genesis of, e.g., AI/machine learning will produce widely disparate results in deployment of tech innovations. Some will be dramatically beneficial; some may enable abuse of law enforcement, economic systems and other fundamental civic institutions and lead to exacerbation of gaps between tech controllers/users and underserved/under- or mis-skilled populations (‘digital divide’) in what may be a significant (embed limitations on career/economic advancement) or even life-threatening (de facto health care or health procedure rationing) manner.”

Peter Lunenfeld , a professor of design, media arts and digital humanities at the University of California, Los Angeles, and author of “Tales of the Computer as Culture Machine,” predicted, “We will use technology to solve the problems the use of technology creates, but the new fixes will bring new issues. Every design solution creates a new design problem, and so it is with the ways we have built our global networks. Highly technological societies have to be iterative if they hope to compete, and I think that societies that have experienced democracy will move to curb the slide to authoritarianism that social media has accelerated. Those curbs will bring about their own unintended consequences, however, which will start the cycle anew.”

Yaakov J. Stein , chief technology officer of RAD Data Communications, based in Israel, responded, “The problem with AI and machine learning is not the sci-fi scenario of AI taking over the world and not needing inferior humans. The problem is that we are becoming more and more dependent on machines and hence more susceptible to bugs and system failures. This is hardly a new phenomenon – once a major part of schooling was devoted to, e.g., penmanship and mental arithmetic, which have been superseded by technical means. But with the tremendous growth in the amount of information, education is more focused on how to retrieve required information rather than remembering things, resulting not only in less actual storage but less depth of knowledge and the lack of ability to make connections between disparate bits of information, which is the basis of creativity. However, in the past humankind has always developed a more-advanced technology to overcome limitations of whatever technology was current, and there is no reason to believe that it will be different this time.”

A vice president for research and economic development wrote, “The problems we see now are caused by technology, and any new technological fixes we create will inevitably cause NEW social and political problems. Attempts to police the web will cause freedom of speech conflicts, for example.”

Something is rotten in the state of technology

A large share of these experts say among the leading concerns about today’s technology platforms are the ways in which they are exploited by bad actors who spread misinformation; and the privacy issues arising out of the business model behind the systems.

Misinformation – pervasive, potent, problematic

Numerous experts described misinformation and fake news as a serious issue in digital spaces. They expressed concern over how users will sort through fact and fiction in the coming decade.

Stephanie Fierman , partner, Futureproof Strategies, said, “I believe technology will meaningfully accelerate social and civic innovation. It’s cheap, fast and able to reach huge audiences. But as long as false information is enabled by very large websites, such social and civic innovators will be shadow boxing with people, governments, organizations purposely countering truthful content with lies.”

Sam Lehman-Wilzig , a professor of communications at Bar-Ilan University specializing in Israeli politics and the impact of technological evolution, wrote, “The biggest advance will be the use of artificial intelligence to fight disinformation, deepfakes and the like. There will be an AI ‘arms race’ between those spreading disinformation and those fighting/preventing it. Overall, I see the latter gaining the upper hand.”

Greg Shatan , a lawyer with Moses & Singer LLP and self-described “internet governance wonk,” predicted, “I see success, enabled by technology, as likely. I think it will take technology to make technology more useful and more meaningful. Many of us pride ourselves on having a ‘BS-meter,’ where we believe we can tell honestly delivered information from fake news and disinformation. The instinctual BS-meter is not enough. The next version of the ‘BS-meter’ will need to be technologically based. The tricks of misinformation have far outstripped the ability of people to reliably tell whether they are receiving BS or not – not to mention that it requires a constant state of vigilance that’s exhausting to maintain. I think that the ability and usefulness of the web to enable positive grassroots civic communication will be harnessed, moving beyond mailing lists and fairly static one-way websites. Could there be ‘Slack for Community Self-Governance?’ If not that platform, perhaps something new and aimed specifically at these tasks and needs.”

Oscar Gandy , a professor emeritus of communication at the University of Pennsylvania, said, “Corporate actors will make use of technology to weaken the possibility for improvements in social and civic relationships. I am particularly concerned about the use of technology in the communications realm in order to increase the power of strategic or manipulative communications to shape the engagement of members of the public with key actors within a variety of governance relationships.”

An expert in the ethics of autonomous systems based in Europe responded, “Fake news is more and more used to manipulate a person’s opinion. This war of information is becoming so important that it can influence democracy and the opinion of people before the vote in an election for instance. Some AI tools can be developed to automatically recognize fake news, but such tools can be used in turn in the same manner to enhance the belief in some false information.”

A research leader for a U.S. federal agency wrote, “At this point in time, I don’t know how we will reduce the spread of misinformation (unknowing/individual-level) and disinformation (nefarious/group-level), but I hope that we can.”

A retired information science professional commented, “Dream on, if you think that you can equate positive change with everybody yelling and those with the most clout (i.e., power and money) using their power to see their agendas succeed. Minority views will always be that, a minority. At present and in the near future the elites manipulate and control.”

A research scientist for a major technology company whose expertise is technology design said, “We have already begun to see increased protections around personal privacy. At present, it is less clear how we might avoid the deliberate misuse of news or news-like content to manipulate political opinions or outcomes, but this does not seem impossible. The trick will be avoiding government censorship and maintaining a rich, vigorous exchange of opinions.”

Privacy issues will continue to be a hot button topic

Multiple experts see a growing need for privacy to be addressed in online spaces.

Ayden Férdeline , technology policy fellow at the Mozilla Foundation, responded, “Imagine if everyone on our planet was naked, without any clear options for obtaining privacy technology (clothing). It would not make sense to ask people what they’d pay or trade to get this technology. This is a ‘build it and they will come’ kind of scenario. We’re now on the verge, as a society, of appropriately recognizing the need to respect privacy in our Web 2.0 world, and we are designing tools and rules accordingly. Back in 1992, had you asked people if they’d want a free and open internet, or a graphical browser with a walled garden of content, most would have said they prefer AOL. What society needed was not AOL but something different. We are in a similar situation now with privacy; we’re finally starting to grasp its necessity and importance.”

Graham Norris , a business psychologist with expertise in the future of work, said, “Privacy no longer exists, and yet the concept of privacy still dominates social-policy debates. The real issue is autonomy of the individual. I should own my digital identity, the online expression of myself, not the corporations and governments that collect my interactions in order to channel my behaviour. Approaches to questions of ownership of digital identity cannot shift until the realization occurs that autonomy is the central question, not privacy. Nothing currently visible suggests that shift will take place.”

Eduardo Villanueva-Mansilla, an associate professor of communications at Pontificia Universidad Catolica, Peru, and editor of the Journal of Community Informatics, wrote, “I’m trying to be optimistic, by leaving some room to innovative initiatives from civic society actors. However, I don’t see this as necessarily happening; the pressure from global firms will probably be too much to deal with.”

An international policy adviser on the internet and development based in Africa commented, “Technology is creating and will continue to evolve and increase the impact of social and civic innovation. With technology we will see new accountability tools and platforms to raise voices to counter societal ills, be it in leadership, business and other faculties. We must however be careful so that these innovations themselves are not used to negatively impact end users, such issues like privacy and use of data must be taken on in a way that users are protected and not exposed to cybercrime and data breaches that so often occur now.”

Jamie Grady , a business leader, wrote, “As technology companies become more scrutinized by the media and government, changes – particularly in privacy rights – will change. People will learn of these changes through social media as they do now.”

Technology use often disconnects or hollows out community

Some respondents commented on rising problems with a loss of community and the need for more-organic, in-person, human-to-human connection and the impact of digital distancing.

Jonathan Grudin , principal researcher at Microsoft, commented, “Social and civic activity will continue to change in response to technology use, but will it change its trajectory? Realignments following the Industrial Revolution resulted from the formation of new face-to-face communities, including union chapters, community service groups such as Rotary Club and League of Women Voters, church groups, bridge clubs, bowling leagues and so on. Our species is designed to thrive in modest-sized collocated communities, where everyone plays a valued part. Most primates become vulnerable and anxious when not surrounded by their band or troop. Digital media are eroding a sense of community everywhere we look. Can our fundamental human need for close community be restored or will we become more isolated, anxious and susceptible to manipulation?”

Rebecca Theobald , an assistant research professor at the University of Colorado, Colorado Springs, said, “Technology seems to be driving people apart, which would lead to fewer connections in society.”

The program director of a university-based informatics institute said, “There is still a widening gap between rural and urban as well as digital ‘haves’ and ‘have nots.’ As well, the ability to interact in a forum in which all members of society have a voice is diminishing as those with technology move faster in the digital forums than the non-tech segment of the population that use non-digital discourse (interpersonal). The idea of social fabric in a neighborhood and neighborly interactions is diminishing. Most people want innovation – it is the speed of change that creates divisions.”

An infrastructure architect and internet pioneer wrote, “The kind of social innovation required to resolve the problems caused by our current technologies relies on a movement back toward individual responsibility and a specific willingness to engage in community. As both of these work against the aims of the corporate and political elite as they exist today, there is little likelihood these kinds of social innovations are going to take place. The family and church, for instance, which must be the core institutions in any rebuilding of a culture that can teach the kind of personal responsibility required, were both hollowed out in the last few decades. The remaining outward structures are being destroyed. There is little hope either families or churches will recover without a major societal event of some sort, and it will likely take at least one generation for them to rebuild. The church could take on the task of helping rebuild families, but it is too captured in attempts to grow ever larger, and consume or ape our strongly individualistic culture, rather than standing against it.”

A researcher based in North America predicted a reining in of the digital in favor of the personal: “Between email and phones, I think we’re close to peak screen time, a waste of time, and it’s ruining our eyes. Just as we have forsaken our landlines, stopped writing letters, don’t answer our cellphones, a concept of an average daily digital budget will develop, just as we have a concept of average daily caloric intake. We’ll have warning labels that rate content against recommended daily allowances of different types of content that have been tested to be good for our mental health and socialization, moderately good, bad, and awful – the bacon of digital media. And people who engage too much will be in rehab, denied child custody and unemployable. Communities, residences and vacation areas will promote digital-free, mindfulness zones – just as they have quiet cars on the train.”

Society needs to catch up and better address the threats and opportunities of tech

Some of these experts said that the accelerating technological change of the digital age is making it difficult for humans to keep up and respond to emerging challenges.

A chair of political science based in the American South commented, “Technology always creates two new problems for every one it solves. At some point, humans’ cognitive and cooperative capacities – largely hard-wired into their brains by millennia of evolution – can’t keep up. Human technology probably overran human coping mechanisms sometime in the later 19th century. The rest is history.”

Larry Rosen , a professor emeritus of psychology at California State University, Dominguez Hills, known as an international expert on the psychology of technology, wrote, “I would like to believe that we, as citizens, will aid in innovation. Smart people are already working on many social issues, but the problem is that while society is slow to move, tech moves at lightning speed. I worry that solutions will come after the tech has either been integrated or rejected.”

Louisa Heinrich , a futurist and consultant expert in data and the Internet of Things, said, “There is a gap between the rate at which technology develops and the rate at which society develops. We need to take care not to fall into that gap. I hope we will see a shift in governance toward framework-based regulation, which will help mitigate the gap between the pace of change in technology and that in government. At the very least, we need to understand the ways in which technology can extend or undermine the rules and guidelines we set for our businesses, workplaces, public spaces and interactions. To name just one common example, recruitment professionals routinely turn to Facebook as a source of information on prospective employees. This arguably violates a number of regulations designed to protect people from being denied work based on personal details not relevant to that work. How do we unravel this conundrum, bearing in mind that there will always be another social network, another digital source to mine for information about people? Taken from another angle, there is a significant gap between what users understand about certain bits of technology and the risks they take using them. How can we educate people about these risks in a way that encourages participation and co-creation, rather than passivity? As the so-called Gen Z comes of age, we will see a whole generation of young adults who are politically engaged at a level not seen in several generations, who are also native users of technology tools. This could bring about a positive revolution in the way technology is used to facilitate civic engagement and mutually empower and assist citizens and government. Technology provides us with powerful tools that can help us advance socially and civically, but these tools need to be thoughtfully and carefully put to use – when we encode barriers and biases into the applications that people need to use in daily life, whether intentionally or no, we may exclude whole segments of society from experiencing positive outcomes. We are living through a time of rapid and radical change – as always, the early stages feel uncomfortable and chaotic. But we can already see the same tools that have been used to mislead citizens being used to educate, organise, motivate and empower them. What’s needed is a collective desire to prioritise and incentivise this. New Zealand is leading the way with the world’s first ‘well-being’ budget.”

Bulbul Gupta , founding adviser at Socos Labs, a think tank designing artificial intelligence to maximize human potential, responded, “Until government policies, regulators, can keep up with the speed of technology and AI, there is an inherent imbalance of power between technology’s potential to contribute to social and civic innovation and its execution in being used this way. If technology and AI can make decisions about people in milliseconds that can prevent their full social or civic engagement, the incentive structures to be used toward mitigating the problems of the digital age cannot then be solved by technology.”

Gene Policinski , a journalist and First Amendment law expert at the Freedom Forum Institute, observed, “We forget how new the ‘tech revolution’ really is. As we move forward in the next decade, the public’s awareness of the possibilities inherent in social and civic innovation, the creativity of the tech world working with the public sector and public acceptance of new methods of participation in democratic processes will begin to drown out and eventually will surpass the initial problems and missteps.”

Gabriel Kahn , former bureau chief for The Wall Street Journal, now a professor of journalism researching innovation economics in emerging media at the University of Southern California, wrote, “We are not facing a ‘Terminator’-like scenario. Nor are we facing a tech-driven social utopia. Humans are catching up and understanding the pernicious impact of technology and how to mitigate it.”

Kathee Brewer , director of content at CANN Media Group, predicted, “Much like society developed solutions to the challenges brought about by the Industrial Revolution, society will find solutions to the challenges of the Digital Revolution. Whether that will happen by 2030 is up for debate. Change occurs much more rapidly in the digital age than it did at the turn of the 20th century, and for society to solve its problems it must catch up to them first. AND people, including self-interested politicians, must be willing to change. Groups like the Mozilla Foundation already are working on solutions to invasions of privacy. That work will continue. The U.S. government probably won’t make any major changes to the digital elections framework until after the 2020 election, but changes will be made. Sadly, those changes probably will result from some nastiness that develops due to voters of all persuasions being unwilling to accept electoral results, whatever the results may be.”

Valerie Bock of VCB Consulting, former Technical Services Lead at Q2 Learning, responded, “I think our cultures are in the process of adapting to the power our technologies wield, and that we will have developed some communal wisdom around how to evaluate new ones. There are some challenges, but because ordinary citizens have become aware that images can be ‘photoshopped’ the awareness that video can be ‘deepfaked’ is more quickly spreading. Cultural norms as well as technologies will continue to evolve to help people to apply more informed critiques to the messages they are given.”

Bach Avezdjanov , a program officer with Columbia University’s Global Freedom of Expression project, said, “Technological development – being driven by the Silicon Valley theory of uncontrolled growth – will continue to outpace civic and social innovation. The latter needs to happen in tandem with technological innovation, but instead plays catch-up. This will not change in the future, unless political will to heavily regulate digital tools is introduced – an unlikely occurrence.”

A computing science professor emeritus from a top U.S. technological university commented, “Social/civic innovation will occur but most likely lag well behind technological innovation. For example, face-recognition technology will spread and be used by businesses at a faster pace than social and legal norms can develop to protect citizens from any negative effects of that technology. This technology will spread quickly, due to its various positives (increased efficiencies, conveniences and generation of profits in the marketplace) while its negatives will most likely not be countered effectively through thoughtful legislation. Past Supreme Court decisions (such as treating corporations as persons, WRT unlimited funding of political candidates, along with excessive privacy of PACs) have already undermined U.S. democracy. Current populist backlashes, against the corruption of the Trump government, may also undermine democracy, such as the proposed Elizabeth Warren tax, being not on profits, but upon passive wealth itself – a tax on non-revenue-producing illiquid assets (whose valuation is highly subjective), as in her statement to ‘tax the jewelry of the rich’ at 2% annually. Illiquid assets include great private libraries, great private collections of art, antiques, coins, etc. – constituting an assault on the private sector, that if successful, will weaken democracy by strengthening the confiscatory power of government. We could swing from current excesses of the right to future excesses of the left.”

Despite current trends, there is reason to hope for better days

Many of the experts in this canvassing see a complicated and difficult road ahead, but express hope for the future.

Cheryl B. Preston , an expert in internet law and professor at Brigham Young University Law School, said, “Innovation will bring risk. Change will bring pain. Learning will bring challenges. Potential profits will bring abuse. But, as was the decision of Eve in the Garden of Eden, we need to leave the comfortable to learn and improve. If we can, by more informed voting, reduce the corruption in governmental entities and control corporate abuse, we can overcome difficulties and advance as a society. These advances will ultimately bring improvement to individuals and families.”

John Carr , a leading global expert on young people’s use of digital technologies, a former vice president of MySpace, commented, “I know of no proof for the notion that more people simply knowing more stuff, even stuff that is certifiably factually accurate, will necessarily lead to better outcomes for societies. But I do harbour a hope that if, over time, we can establish the idea that there are places on the internet that are reliable sources of information, it will in the medium to longer term help enough people in enough countries to challenge local demagogues and liars, making it harder for the demagogues and liars to succeed, particularly in times of national crisis or in times when war might be on the visible horizon. I used to think that if the internet had been around another Hitler would be impossible. Recently I have had a wobble on that but my optimism ‘trumps’ that gloomy view.”

Mike Douglass , an independent developer, wrote, “There is a significant realization that a stampede to create connections between anonymous people and devices was a bad idea. It’s up to the technologists and – more importantly – those who want to make money out of technology – to come up with a more measured approach. There’s a reason why gentlemen obtained letter of introduction to other gentlemen – one shouldn’t trust some random individual turning up on your doorstep. We need the equivalent approach. I’ve no idea what new innovations might turn up. But if we don’t get the trust/privacy/security model right we’ll end up with more social media disasters.”

Hume Winzar , an associate professor and director of the business analytics undergraduate program at Macquarie University, Sydney, Australia, predicted, “With more hope than evidence, I’d like to think that reason will eventually overcome the extraordinary propaganda machines that are being built. When the educated upper-middle classes realise that the ‘system’ is no longer serving them, then legal and institutional changes will be necessary. That is, only when the managers who are driving the propaganda machine(s) start to feel that they, personally, are losing privacy, autonomy, money and their children’s future, then they will need to undermine the efforts of corporate owners and government bureaucrats and officials.”

Carolyn Heinrich , a professor of education and public policy at Vanderbilt University, said, “My hope (not belief) is that the ‘techlash’ will help to spur social and civic innovations that can combat the negative effects of our digitization of society. Oftentimes, I think the technology developers create their products with one ideal in mind of how they will be used, overlooking that technology can be adapted and used in unintended and harmful ways. We have found this in our study of educational technology in schools. The developers of digital tools envision them as being used in classrooms in ‘blended’ ways with live instructors who work with the students to help customize instruction to their needs. Unfortunately, more often than not, we have seen the digital tools used as substitutes for higher-quality, live instruction and have observed how that contributes to student disengagement from learning. We have also found some of the content lacking in cultural relevance and responsiveness. If left unchecked, this could be harmful for far larger numbers of students exposed to these digital instructional programs in all 50 states. But if we can spur vendors to improve the content, those improvements can also extend to large numbers of students. We have our work cut out for us!”

Heywood Sloane, entrepreneur and banking and securities consultant, wrote, “I’m hopeful that it will be a positive contributor. It has the ability to alter the way we relate to our environment in ways that shrink the distances between people and help us exercise control over our personal and social spaces. We are making substantial progress, and 5G technology will accelerate that. On the flip side, we need to find mechanisms and processes to protect our data and ourselves. They need to be strong, economic and simple to deploy and use. That is going to be a challenge.”

Pamela McCorduck , writer, consultant and author of several books, including “Machines Who Think,” commented, “I am heartened by the number of organizations that have formed to enhance social and civic organization through technology. In the field I follow, artificial intelligence, the numbers of professionals who take seriously the problems that arise as a consequence of this technology are reassuring. Will they all succeed? Of course not. We will not get it right the first time. But eventually, I hope.”

Yoshihiko Nakamura , a professor of mechno-informatics at the University of Tokyo, observed, “The current information and communication technology loses diversity because it is still insufficient to enhance the affectivity or emotion side of societies. In this sense I can see the negative side of current technology to human society. However, I have a hope that we can invent uses of technology to enhance the weaker side and develop tomorrow’s technology. The focus should be on the education of society in the liberal arts.”

Ryan Sweeney , director of analytics at Ignite Social Media, commented, “In order to survive as a functioning society, we need social and civic innovation to match our use of technology. Jobs and job requirements are changing as a result of technology. Automation is increasing across a multitude of industries. Identifying how we protect citizens from these changes and help them adapt will be instrumental in building happiness and well-being.”

Miles Fidelman , founder, Center for Civic Networking and principal Protocol Technologies Group, responded, “We can see clear evidence that the internet is enabling new connections, across traditional boundaries – for the flow of information, culture and commerce. It is strengthening some traditional institutions (e.g., ties between geographically distributed family members) and weakening others (e.g., the press). Perhaps the most notable innovation is that of ad hoc, network-centric organizations – be they global project teams, or crisis response efforts. How much of this innovation will make things better, how much it will hurt us, remains an open question.”

A technology developer active in IETF said, “I hope mechanisms will evolve to exploit the advantages of new tech and mitigate the problems. I want to be optimistic, but I am far from confident.”

A renowned professor of sociology known for her research into online communications and digital literacies observed, “New groups expose the error of false equivalence and continue to challenge humans to evolve into our pre-frontal cortex. I guess I am optimistic because the downside is pretty terrible to imagine. It’s like E.O. Wilson said: ‘The real problem of humanity is the following: We have paleolithic emotions; medieval institutions; and god-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.’”

Cellphones in Schools: Addiction, Distraction, or Teaching Tool?

“Cellphones are here to stay. More and more work is being done on these communication devices, as they morph into BlackBerries, hand-held calculators, phone banks, digital cameras, radios, and even televisions.”

So warned education professor Bruce S. Cooper and former superintendent John W. Lee as they weighed the place of cellphones in schools—back in 2006.

That was the year that an unevenly enforced 1988 ban on mobile devices in New York City schools sprang back into the public consciousness with a new crackdown. That policy was later dropped in 2015, but it seems everything old is new again. The current New York governor, Kathy Hochul, is now publicly considering a similar statewide ban, as are California Gov. Gavin Newsom and lawmakers in more than half a dozen other states. Several states, including Florida, Indiana, and Ohio, have already passed statewide prohibitions on school cellphone use in the past several years.

Cellphone technology has certainly evolved as predicted over the last few decades (well, mostly; R.I.P. to the now discontinued BlackBerry), but what about the debate over their use in schools?

The popularity of phone bans has yo-yoed in the years since. Bans hit a high in the 2009-10 school year, when 91 percent of public schools prohibited nonacademic use of cellphones (the first year the National Center for Education Statistics began tracking such data). That number dipped as low as 66 percent in 2015-16 but has since rebounded to 76 percent in 2021-22, the latest year for which data are available.

Back in 2006, one fault line was already emerging between educators concerned about cellphone misuse in class and parents concerned about not being able to communicate with their children.

“Given the potential for abuse, a ban sounds logical,” wrote Cooper and Lee in their 2006 essay. “Yet, in today’s society, cellphones also serve as modern-day umbilical cords, able to link children with their increasingly busy (and worried) parents and guardians.”

If that sounds familiar, it might be because you read reporting just last month from EdWeek Staff Writer Elizabeth Heubeck documenting “When Schools Want to Ban Cellphones—But Parents Stand in the Way.”

Of course, the debate over cellphones in school has never been as clear-cut as educators vs. parents. Dig deeper into EdWeek’s Opinion advice and you’ll find countless educators taking a pro-cellphone line—at least when used responsibly.

Middle school administrator Matt Levinson saw a fork in the road ahead of teachers in a 2009 Opinion essay: “They can continue to fight a losing battle and draw harsh lines in the sand, confiscating cellphones or banning their use during school hours. Or, they can seize the teachable moment, and shift their approaches to embrace technology and engage students with these devices.”

The following year, middle school teacher Paul Barnwell reached a similar conclusion, advising readers that not only can cellphones be put to productive use in the classroom, but that failing to do so may actually be doing students a disservice. How else, he asked, can schools prepare students for the “real world”? (And if that sounds familiar, it might be because you’ve been reading modern arguments over the place of AI in schools.)

But for teachers in schools without a clear cellphone policy, finding those academic applications for smartphones amid the TikTok distractions is no easy task. You could try five tips from high school teacher Curtis White on “Harnessing the Power of the Cellphone in Class.” Or perhaps check out education consultant Matthew Lynch’s three strict rules for classroom cellphone use.

More recently, a slew of educators shared their own strategies for curbing cellphone misuse, in response to Opinion blogger Larry Ferlazzo’s call for teacher contributions:

  • Classroom Cellphone Use Is Fraught. It Doesn’t Have to Be
  • Should Cellphones Be Permitted in Classrooms? Teachers Offer These Strategies
  • Let’s Not Oversimplify Students’ Cellphone Use

In the past few years, several education researchers have also shared best practices on cellphone use in psychologist Angela Duckworth’s Ask a Psychologist opinion blog. Drawing on his bumpy experiences trying to set boundaries on his own 11-year-old daughter’s smartphone use, education researcher Tom Harrison offered “4 Strategies to Help Students Manage Cellphone Use in School.”

In another post, Duckworth reminded readers of some basic self-control tricks to help kids resist the siren song of screen time.

Psychology professor Jean M. Twenge, who dug through data from 11,000 teens to conclude that “not all screen time is created equal,” laid down some dos and don’ts for cellphone access in the same blog.

But not everyone is optimistic about finding a middle ground between endless distraction and productive learning. In a widely read 2016 Opinion essay, teacher Steve Gardiner had another word for his students’ relationship with their phones: addiction.

“Addiction is a strong word, but it accurately describes the dysfunctional behavior exhibited by teenagers in my high school English classroom when I ask them to put away their cellphones,” he wrote. Gardiner wasn’t calling for a blanket ban on phones—indeed, he identified some legitimate academic uses of the technology—but rather sounding the alarm on the “obsessive and dependent behavior” undergirding student cellphone misuse.

“We have incentives to promote attendance and graduation,” he concluded, “but many teenagers need help, because their bodies are in the classroom, but their minds are inside their cellphones.”

For some teachers, that cellphone dependency has gotten bad enough to sour them on the profession entirely. That’s the story of high school biology teacher Mitchell Rutherford, who decided to quit teaching in part because of the exhaustion he felt from competing with cellphones for students’ attention.

“I wasn’t emotionally available for myself or my wife,” he told Education Week earlier this month, “because I was pouring my heart into my students that I saw struggling with socializing, anxiety, and focus, which in my opinion is largely caused and certainly exacerbated by intentionally designed addictive cellphone apps.”


Stanford University

The Ethics of Pediatric Clinical Trials

  • Triya Patel, East Hamilton High School

Children are a unique population with developmental and anatomical differences from adults, so they require age-specific medical treatments. To develop these specialized treatments, clinical trials must be conducted on pediatric patients. However, the participation of children in research and clinical trials presents ethical concerns. This study explores the ethical issues that arise from pediatric clinical trials through a qualitative phenomenological study based on interviews with medical professionals and physicians. Interviews with guardians of children who have participated in a pediatric clinical trial were conducted as well, with a primary focus on gathering their firsthand accounts of pediatric clinical trials. A thematic analysis of the two groups of interviews surfaced ethical issues, concerns, and challenges. The study’s limitations and implications suggest directions for future work, and researchers should continue to examine this topic as healthcare constantly evolves.

Copyright (c) 2024 Intersect: The Stanford Journal of Science, Technology, and Society

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License .
