Research: Predict Earthquakes, Tsunamis and Volcanoes, as Well as Find Oil Deposits
A DCSC case story, provided by Professor Hans Thybo, University of Copenhagen
Continuous recordings of seismic waves from earthquakes are now being collected, stored and analysed on a global scale, in quantities and at a quality never seen before. The community has begun to share these long data streams over the global ICT infrastructure. A typical broadband seismograph station records around 3 Gigabytes of information per year. If this dataset were available online together with the data from, ideally, some thousands of stations, it would correspond to some 10,000 Gigabytes of information per year.
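The arithmetic behind this figure can be checked directly. The station count of 3,000 used below is an illustrative assumption consistent with "some thousands of stations", not a number given in the text:

```python
# Rough estimate of the global broadband seismic data volume,
# using the per-station rate quoted in the text.
gb_per_station_per_year = 3     # one broadband seismograph station
n_stations = 3_000              # assumed; "some thousands of stations"

total_gb_per_year = gb_per_station_per_year * n_stations
print(total_gb_per_year)        # 9000, i.e. "some 10,000 Gigabytes" per year
```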
Although this is theoretically possible, several unresolved database-related and technical problems remain before a global, homogeneous streaming system can be put in place. Once it is, computer models can analyse the huge datasets, better models can be built, and new understanding can be acquired, hopefully leading to better understanding and prediction of earthquakes, tsunamis, volcanoes and the like. Danish geologists aim to be part of the global community now adopting the tools of Scientific Computing.
In global oil exploration, the seismic industry continuously operates some 100 seismic vessels gathering data. Their purpose is to produce images of the subsurface structure, primarily in sedimentary basins. Modern seismic vessels are equipped with 6-10 streamers and several airgun arrays. The streamers are typically 6 km long cables containing hydrophones every half meter, recording the seismic waves generated by the airguns on an essentially continuous basis. To reduce the amount of data and to improve the signal-to-noise ratio, individual hydrophones are usually coupled in arrays, such that each streamer effectively has 2400 active channels.
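The quoted channel count follows from the streamer geometry. The group size of 5 hydrophones per array is inferred here so that the numbers match; it is not stated in the text:

```python
# Streamer geometry as described in the text.
streamer_length_m = 6_000            # 6 km streamer
hydrophone_spacing_m = 0.5           # one hydrophone every half meter

hydrophones_per_streamer = int(streamer_length_m / hydrophone_spacing_m)
print(hydrophones_per_streamer)      # 12000

# Coupling hydrophones into arrays: a group size of 5 reproduces the
# 2400 active channels quoted in the text (group size inferred, not stated).
group_size = 5
channels_per_streamer = hydrophones_per_streamer // group_size
print(channels_per_streamer)         # 2400
```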
Hence, a modern seismic vessel produces a data stream of 5,000 Gigabytes per hour. A large survey takes about 2-3 months, so the total amount of data acquired is of the order of 5 million Gigabytes. This amount of data ideally has to be available simultaneously to the seismic processing team, making it a significant scientific computing challenge which Danish researchers do not intend to miss out on.
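Taken at face value, the quoted hourly rate and survey total imply that effective recording time is well below the 2-3 month calendar duration, which is plausible since a vessel does not acquire data around the clock. A quick check, using only the figures from the text:

```python
# Figures quoted in the text.
rate_gb_per_hour = 5_000        # data stream of a modern seismic vessel
survey_total_gb = 5_000_000     # order-of-magnitude total for a large survey

effective_hours = survey_total_gb / rate_gb_per_hour
effective_days = effective_hours / 24
print(round(effective_days, 1))  # 41.7 days of continuous recording
```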
Read more about the research: