
“Artificial intelligence could soon predict disasters and pandemics”

Damage caused by the 1964 Good Friday earthquake and tsunami in Alaska. — Unsplash

Predicting the timing and size of natural disasters is a fundamental goal for scientists. However, because such events are statistically so rare, there is not enough data to predict them reliably.

According to researchers from Brown University and the Massachusetts Institute of Technology, there are now techniques that use artificial intelligence to predict them.

In a recent study published in the journal Nature Computational Science, they managed to sidestep the huge data requirement by combining statistical algorithms, which require less data to make accurate predictions, with efficient machine learning (an application of AI).

“You have to realize that these are stochastic events,” study author George Karniadakis, professor of applied mathematics and engineering at Brown, said in a university statement.

“An outbreak of a pandemic like COVID-19, an environmental disaster in the Gulf of Mexico, an earthquake, huge California wildfires, a 30-meter wave that capsizes a ship – these are rare events, and because they are rare, we don’t have a lot of historical data.”

“We don’t have enough samples from the past to predict them further into the future. The question we address in the article is: what is the best possible data we can use to minimize the number of data points we need?”

The team found that sequential sampling with active learning was the best method.

These algorithms study incoming data and learn from it in order to identify additional data points that are equally or more significant. In other words, more can be accomplished with less data.
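To illustrate the idea of sequential sampling with active learning described above, here is a minimal sketch in Python. It is not the authors' method: the surrogate model (a tiny ensemble of polynomial fits), the uncertainty measure (ensemble disagreement), and the toy "rare event" function are all illustrative stand-ins. The loop starts from a few labeled points and repeatedly queries the pool point where the current model is most uncertain, rather than labeling everything up front.

```python
import numpy as np

def active_learning_loop(oracle, pool, n_init=5, n_queries=10, seed=0):
    """Sequential sampling with active learning: begin with a few labeled
    points, then repeatedly query the pool point where a small ensemble of
    surrogate models disagrees most (a simple uncertainty measure)."""
    rng = np.random.default_rng(seed)
    idx = list(rng.choice(len(pool), size=n_init, replace=False))
    for _ in range(n_queries):
        X = pool[idx]
        y = oracle(X)                       # "label" the chosen points
        # Tiny ensemble of polynomial fits as a stand-in surrogate model.
        preds = np.stack([
            np.polyval(np.polyfit(X, y, deg=d), pool) for d in (2, 3, 4)
        ])
        uncertainty = preds.std(axis=0)     # ensemble disagreement
        uncertainty[idx] = -np.inf          # never re-query labeled points
        idx.append(int(np.argmax(uncertainty)))
    return np.array(sorted(idx))

# Usage: a sharply peaked response standing in for a rare event on [0, 1].
pool = np.linspace(0.0, 1.0, 200)
oracle = lambda x: np.exp(-200.0 * (x - 0.7) ** 2)
chosen = active_learning_loop(oracle, pool)
```

After a few rounds, the queried points cluster where the function changes fastest, which is the sense in which such schemes get by with fewer samples than exhaustive labeling.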

The machine learning model they used is a kind of artificial neural network called DeepONet, which uses interconnected, stacked nodes to mimic neural connections in the human brain.

This tool combines the functionality of two neural networks into one, processing data through both networks.

Ultimately, this allows it to examine massive amounts of data in a very short time and to produce predictions just as quickly once trained.

Using DeepONet and active learning approaches, the researchers were able to show that even in the absence of a large amount of data, they can reliably identify early warning signs of a catastrophic event.

The goal is not to gather all possible data and feed it into the system, but to actively search for the events that signal unusual occurrences, Karniadakis explained.

He added that while there may not be many examples of the actual event, these precursors could exist. They can be identified using mathematics and, combined with real events, used to train this data-hungry operator.

The group even found that their approach can outperform traditional models, and they believe their framework can set a standard for more accurate prediction of rare natural events.

They found that by looking at likely conditions over time, they could predict when damaging waves more than twice the size of nearby waves would form. The team’s paper explains how scientists could plan future experiments to reduce expense and predict even more accurately.
