Beyond the hype, AI promises leg up for scientific research

The last decade has seen great strides in the application of artificial intelligence to scientific discovery, but practitioners still need to know when and how to use AI, and must challenge poor data quality.

From drug discovery and materials science to astrophysics and nuclear fusion, scientists using AI are seeing improved accuracy and reduced experimental time.

Published in the research journal Nature today, a paper from a team of 30 researchers from around the globe assesses the progress made in the much-hyped field and sets out what still needs to be done.

Marshalled by Hanchen Wang, a post-doctoral fellow at Stanford Computer Science and the Genentech group, the paper points out that AI can help with “optimizing parameters and functions, automating procedures to collect, visualize, and process data, exploring vast spaces of candidate hypotheses to form theories, and generating hypotheses and estimating their uncertainty to suggest relevant experiments.”

For example, in astrophysics, variational autoencoders, an unsupervised neural-network technique for screening out noise, have been used to estimate gravitational-wave detector parameters based on pretrained black-hole waveform models. “This method is up to six orders of magnitude faster than traditional methods, making it practical to capture transient gravitational-wave events,” the paper says.
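
To give a flavour of the technique, rather than the team's actual pipeline, the sketch below is a minimal variational autoencoder in PyTorch: a noisy signal is compressed into a low-dimensional latent distribution and the network learns to reconstruct the clean waveform. The dimensions, synthetic sine-wave "signals" and training loop are illustrative assumptions, not detector code.

```python
# Minimal variational autoencoder sketch (hypothetical dimensions, synthetic data);
# real gravitational-wave pipelines condition on pretrained waveform models.
import torch
import torch.nn as nn

SIGNAL_LEN, LATENT_DIM = 256, 8  # assumed sizes for illustration

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(SIGNAL_LEN, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, LATENT_DIM)      # latent mean
        self.to_logvar = nn.Linear(128, LATENT_DIM)  # latent log-variance
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, SIGNAL_LEN))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def loss_fn(recon, target, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior
    recon_loss = ((recon - target) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + 1e-3 * kl

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t = torch.linspace(0, 1, SIGNAL_LEN)
for step in range(200):
    # Synthetic stand-in data: noisy sine "chirps" with random frequencies
    freq = torch.rand(64, 1) * 20 + 5
    clean = torch.sin(2 * torch.pi * freq * t)
    noisy = clean + 0.3 * torch.randn_like(clean)
    recon, mu, logvar = model(noisy)
    loss = loss_fn(recon, clean, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()
```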

Another example comes from attempts to achieve nuclear fusion. Google DeepMind research scientist Jonas Degrave and colleagues developed an AI controller to regulate nuclear fusion through magnetic fields in a tokamak reactor. The researchers showed how an AI agent could take real-time measurements of electrical voltage levels and plasma configurations to help control the magnetic field and meet experimental targets.

“[The] reinforcement-learning approaches have proven to be effective for magnetic control of tokamak plasmas, where the algorithm interacts with the tokamak simulator to optimize a policy for controlling the process,” the paper says.
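
DeepMind's controller itself isn't reproduced in the paper, but the interaction loop it describes (an agent reads measurements from a simulator, issues coil commands, and is rewarded for hitting a target configuration) can be sketched with a toy one-dimensional "plasma position" problem and a REINFORCE-style policy gradient. The dynamics, reward and policy below are invented for illustration only.

```python
# Toy REINFORCE sketch: a linear-Gaussian policy learns a coil "voltage" that
# steers a 1-D plasma position toward a target. The dynamics and reward are
# invented stand-ins, not a tokamak simulator.
import numpy as np

rng = np.random.default_rng(0)
TARGET = 1.0          # desired plasma position (arbitrary units)
w, b = 0.0, 0.0       # policy parameters: action mean = w * position + b
LOG_STD = -1.0        # fixed exploration noise (log of standard deviation)
LR = 0.02

def run_episode():
    pos = 0.0
    states, actions, rewards = [], [], []
    for _ in range(20):
        states.append(pos)
        mean = w * pos + b
        action = mean + np.exp(LOG_STD) * rng.standard_normal()  # sampled coil voltage
        actions.append(action)
        pos = 0.8 * pos + 0.2 * action          # toy linear plant, not plasma physics
        rewards.append(-(pos - TARGET) ** 2)    # penalize distance from target
    return np.array(states), np.array(actions), np.array(rewards)

for episode in range(2000):
    s, a, r = run_episode()
    ret = np.cumsum(r[::-1])[::-1]              # reward-to-go for each step
    ret = (ret - ret.mean()) / (ret.std() + 1e-8)  # normalize to stabilize updates
    mean = w * s + b
    # Gradient of log N(a | mean, exp(LOG_STD)^2) with respect to w and b
    grad_w = np.mean((a - mean) / np.exp(2 * LOG_STD) * s * ret)
    grad_b = np.mean((a - mean) / np.exp(2 * LOG_STD) * ret)
    w += LR * grad_w
    b += LR * grad_b

print(f"learned policy: coil_voltage = {w:.2f} * position + {b:.2f}")
```

In the real system the state is a vector of many magnetic and voltage measurements and the actions drive multiple control coils; the sketch only mirrors the shape of the interaction loop.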

Though promising, the application of AI in science must rise to a number of challenges to become more widespread, the paper argues.

“The practical implementation of an AI system involves complex software and hardware engineering, requiring a series of interdependent steps that go from data curation and processing to algorithm implementation and design of user and application interfaces. Minor variations in implementation can lead to considerable changes in performance and impact the success of integrating AI models within scientific practice. Therefore, both data and model standardization needs to be considered,” it said.

Meanwhile, there is a problem in reproducing AI-assisted results owing to the stochastic nature of training deep-learning models. “Standardized benchmarks and experimental design can alleviate such issues. Another direction towards improving reproducibility is through open-source initiatives that release open models, datasets and education programmes,” the paper adds.

It also points out that Big Tech has the upper hand in developing AI for science in that “the computational and data requirements to calculate these updates are colossal, resulting in a large energy footprint and high computational costs.”

Big Tech’s vast resources and investments in computational infrastructure and cloud services are “pushing the limits on scale and efficiency.”

However, higher-education institutions could help themselves with better integration across multiple disciplines while also exploiting unique historical databases and measurement technologies that do not exist outside the sector.

The paper calls for the development of an ethical framework to guard against the misapplication of AI in science and better education in all scientific fields.

“As AI systems approach performance that rivals and surpasses humans, employing it as a drop-in replacement for routine laboratory work is becoming feasible. This approach enables researchers to develop predictive models from experimental data iteratively and select experiments to improve them without manually performing laborious and repetitive tasks. To support this paradigm shift, educational programmes are emerging to train scientists in designing, implementing and applying laboratory automation and AI in scientific research. These programmes help scientists understand when the use of AI is appropriate and to prevent misinterpreted conclusions from AI analyses,” it says.
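
The "build a predictive model, then pick the next experiment" loop described in that quote is essentially active learning. The toy sketch below, which assumes a synthetic "experiment" rather than anything from the paper, fits a Gaussian process surrogate to the measurements gathered so far and proposes the candidate condition the model is least certain about.

```python
# Toy active-learning loop: fit a surrogate model to the experiments run so far,
# then propose the candidate condition the model is least certain about.
# The "experiment" here is a synthetic function; real lab data would replace it.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def run_experiment(x):
    # Stand-in for a real measurement, with noise
    return np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

candidates = np.linspace(0, 3, 300).reshape(-1, 1)             # conditions we could test
X = candidates[rng.choice(len(candidates), 3, replace=False)]  # initial experiments
y = run_experiment(X).ravel()

for round_ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    next_x = candidates[[np.argmax(std)]]        # most uncertain condition
    X = np.vstack([X, next_x])
    y = np.append(y, run_experiment(next_x).ravel())
    print(f"round {round_}: queried x={next_x.item():.2f}, max std={std.max():.3f}")
```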

The paper notes that the rise of deep learning in the early 2010s has “significantly expanded the scope and ambition of these scientific discovery processes.”

Less than a decade later, Google DeepMind claimed its AlphaFold machine-learning software rapidly predicted the structure of proteins with decent accuracy, potentially a leap forward in drug discovery. For academic science to apply similar techniques across a vast range of disciplines, it needs to get its act together to compete with the deep pockets of Big Tech. ®
