When, where, and how big: for scientists chasing earthquakes, a useful forecast needs to answer all three questions. Until recent research breakthroughs, the biggest question mark has been the "where."
The tricky thing with earthquakes (as opposed to, say, a lightning strike) is that they rarely come alone. A large earthquake changes the stress in the surrounding crust, and those changes trigger aftershocks, which ripple out and redistribute the tension from the initial quake.
Often, aftershocks can be just as serious as the main shock itself, hitting already weakened infrastructure without warning.
To Catch an Aftershock
Most aftershocks occur across the fault rupture area of the main shock. Their pattern and distribution are extremely useful for determining and confirming where the main shock actually slipped.
The rate, magnitude, and temporal decay of an aftershock sequence can be described with a series of very well-established empirical laws. Båth's law, Omori's law, and the Gutenberg-Richter law each capture part of an aftershock sequence's projected behavior, but scientists still struggle to forecast its spatial distribution.
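To make those laws concrete, here is a minimal sketch of each. The parameter values below (K, c, p, a, b, and the roughly 1.2-magnitude Båth gap) are illustrative assumptions, not fits to any real aftershock sequence.

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: aftershock rate decays as K / (c + t)^p."""
    return K / (c + t_days) ** p

def gutenberg_richter_count(M, a=4.0, b=1.0):
    """Gutenberg-Richter law: expected number of events with magnitude >= M."""
    return 10 ** (a - b * M)

def bath_largest_aftershock(mainshock_M, gap=1.2):
    """Bath's law: the largest aftershock is ~1.2 units below the main shock."""
    return mainshock_M - gap

print(omori_rate(1.0))               # aftershocks/day one day after the main shock
print(gutenberg_richter_count(5.0))  # expected count of M >= 5 events
print(bath_largest_aftershock(7.0))  # ~5.8 for a magnitude-7.0 main shock
```

Note what is missing: each law describes *when* and *how big*, but none of them says *where* the next aftershock will strike.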
Because of this uncertainty, aftershocks still fall into the "unpredictable" category of dangerous natural phenomena, and they can rattle affected areas for months.
The Missing Piece: Isolating Aftershock Location With AI
In a new paper published in Nature this August, researchers reveal that pinpointing aftershock locations could be more viable than ever before with the help of deep learning.
By capitalizing on the vastness of existing databases and the sheer horsepower of modern computing, the scientists trained a neural network specifically for the study. The model was built to look for patterns in past main shock-aftershock sequences, then generate predictions from that information.
The result? Location determinations markedly more accurate than the most useful previous research model: a score of 0.849 on a scale where 1.000 is perfect, versus 0.583 for the prior standard.
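As a rough illustration of the approach (not the authors' actual architecture, features, or data), a single logistic unit can be trained by gradient descent to map stress-change features at a grid cell to an aftershock probability. Everything below is synthetic and for intuition only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "stress change" features for 1,000 grid cells
# (assumption: 6 stress-tensor components per cell).
X = rng.normal(size=(1000, 6))

# Synthetic labels: a hidden linear rule plus a little noise decides
# which cells "host aftershocks."
true_w = rng.normal(size=6)
y = ((X @ true_w + 0.1 * rng.normal(size=1000)) > 0).astype(float)

# Train a single logistic unit with plain gradient descent.
w, b, lr = np.zeros(6), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability per cell
    w -= lr * (X.T @ (p - y)) / len(y)      # logistic-loss gradient step
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

The real study's network is far larger and is trained on observed earthquake catalogs and computed stress changes, but the core idea is the same: learn a mapping from stress features to aftershock likelihood rather than hand-coding one.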
Machine Learning and the Future of Scientific Data
Machine learning abandons traditional rules-based programming for experience-based decision making. Today’s science finally possesses:
- Massive amounts of information. Earthquake data isn’t the only colossal archive we’ve accumulated. As the Internet of Things (IoT) becomes increasingly integrated into science, medicine, business, and daily work, we are seamlessly — and effortlessly — collecting millions of points of critical data with every project, job, and weather event.
- Previously unfathomable computing power. The volume of evaluations, transactions, or “experiences” needed for machine-based learning demands cutting-edge power — which we now have.
- Efficient, highly advanced neural networks. Borrowing language once reserved for the organic brain's synapses, today's complex neural networks are increasingly capable of large-scale analysis of vast quantities of data points.
Seismic events, for example, involve an enormous number of variables, from energy waves to ground makeup, in very complex datasets, and making sense of them is a tall order.
The team on the Nature study applied a quantity called the "von Mises yield criterion" to their deep learning model to help shape its predictions. Previously, the criterion had been used in fields such as metallurgy to determine a material's breaking point under stress. Now, geologists can utilize the calculation in an entirely new way.
While the new research model certainly isn't ready to deploy (it's too slow, for instance, to work in real time), it shows real promise for machine learning and the discoveries to come.
Image Credit: Vladiczech/Shutterstock.com