
As if the fight against fake information weren't enough to worry about, scientists are issuing increasingly urgent calls for better ways to deal with a veritable tidal wave of legitimate coronavirus research. The phenomenon, termed an "infodemic" by the World Health Organization[1], has made it difficult for researchers to fully digest rapidly evolving discoveries, rendering some ongoing research obsolete even before it clears peer review.

The crush of research over the past months stems in particular from researchers' urgency to publish results that might be helpful to clinicians, but the difficulty of collating and accessing a growing body of scientific literature is nothing new. Now there's a call for new techniques, from centralized databases to AI and machine-learning tools, to help scientists keep abreast of new findings and incorporate them into ongoing work.
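To make the idea concrete, here is a minimal illustrative sketch, not a method proposed by the article's authors, of one common machine-learning approach to literature triage: ranking incoming preprint abstracts by TF-IDF cosine similarity against a description of a researcher's ongoing work. The scikit-learn library and the sample texts are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: rank candidate preprint abstracts by their
# TF-IDF cosine similarity to a description of ongoing work.
# Assumes scikit-learn is installed; all texts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ongoing_work = "antiviral drug repurposing for SARS-CoV-2 protease inhibition"

abstracts = [
    "We screen approved drugs for binding to the SARS-CoV-2 main protease.",
    "Survey results on quarantine-induced depression among university students.",
    "A deep learning model for predicting protein-ligand binding affinity.",
]

# Build one vocabulary over the query plus the candidate abstracts.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([ongoing_work] + abstracts)

# Similarity of each abstract to the query (row 0 of the matrix).
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# List abstracts from most to least relevant.
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.3f}  {abstract[:60]}")
```

A real triage system would go well beyond keyword overlap, but even this simple ranking illustrates how machine assistance could surface the handful of preprints most relevant to a given project out of thousands posted each month.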

In an opinion article[2] in the journal Patterns, Carnegie Mellon University[3]'s Ganesh Mani, an investor, technology entrepreneur, and adjunct faculty member in the school's Institute for Software Research, and Tom Hope, a post-doctoral researcher at the Allen Institute for AI, issued just such a call.

"Given the ever-increasing research volume, it will be hard for humans alone to keep pace," they write in the article.

They point to the coronavirus research deluge as an exemplar of the growing problem. By mid-August, more than 8,000 preprints of scientific papers related to the novel coronavirus had been posted to online medical, biology, and chemistry archives. Scores more papers dealt with related research, such as quarantine-induced depression. In the field of virology, the average time to peer review and publish new articles dropped from 117 days to 60.

It now seems increasingly attractive, and perhaps necessary, to combine human and machine intelligence to keep pace with the expanding literature.

Read more from our friends at ZDNet