The potential harms of rapidly advancing digital technology are under scrutiny from experts who argue that the current scientific research system is not equipped to assess them. In a report published in the journal Science, Dr. Amy Orben of the University of Cambridge and Dr. J. Nathan Matias of Cornell University highlight the challenges academic researchers face in holding tech companies accountable for the effects of their products.
The researchers emphasize that technology firms often offload responsibility for product safety testing onto university and charity-based scientists with limited resources. This contrasts with other industries, where safety assessments are typically conducted internally. These companies also often restrict access to critical data, further hindering transparent evaluation.
Orben and Matias call for reform in how technological impacts on issues such as mental health and discrimination are evaluated. They propose speeding up research processes and involving the public in creating registries of technology-related harms. This approach could enable policymakers to develop interventions and safer technology designs in parallel with evidence collection.
“Big technology companies increasingly act with perceived impunity, while trust in their regard for public safety is fading,” stated Orben. “Policymakers and the public are turning to independent scientists as arbiters of technology safety.”
Dr. Matias highlighted how the rapid evolution of technological products complicates scientific research: “Technology products change on a daily or weekly basis, and adapt to individuals. Even company staff may not fully understand the product at any one time, and scientific research can be out of date by the time it is completed.”
The paper suggests establishing public registries for incident reporting, akin to successful models in environmental toxicology and automotive safety. It also proposes a "minimum viable evidence" system, in which the threshold of evidence required to demonstrate potential harms is adjusted so that interventions can be tested in parallel.
“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” said Orben.
The researchers also advocate learning from fields such as "Green Chemistry," which incentivizes safer alternatives on the market through independent assessment of chemical products.
“The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” Orben stated.
Matias concluded: “When science about the impacts of new technologies is too slow, everyone loses.”
The report argues for an overhaul of the current system to better equip society to manage the emerging risks of digital technology, especially as artificial intelligence becomes more deeply integrated into everyday life.