The galactic-scale task took around 100 million CPU hours and the concerted efforts of more than 300 researchers to coalesce into the released image. But how does one “see” a black hole so massive that its gravity traps even light? The answer is that one can trace the contours of the black hole through the comparatively minute amount of light that skirts its event horizon without falling in. To create the image, the researchers relied on the very-long-baseline interferometry of the EHT array, which combines eight radio telescopes around the globe into a single Earth-sized virtual instrument. But scanning impossibly distant celestial bodies comes with a number of caveats, such as exposure time (in this case, the cosmic equivalent of photographing a tree with a 1-second shutter speed on a windy day), data noise, and interference from other celestial bodies, all of which have to be accounted for.
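To get a feel for why an Earth-sized virtual telescope is needed at all, a quick back-of-the-envelope calculation helps. The diffraction limit of an interferometer is roughly the observing wavelength divided by the longest baseline between its dishes; the sketch below plugs in the EHT's 1.3 mm observing wavelength and a baseline close to Earth's diameter (the exact figures here are illustrative, not taken from the EHT's published analysis):

```python
# Diffraction-limited angular resolution of an interferometer: theta ≈ lambda / D,
# where D is the longest baseline between telescopes. For the EHT, the longest
# baselines approach the diameter of the Earth.
WAVELENGTH_M = 1.3e-3   # EHT observes at ~1.3 mm (230 GHz)
BASELINE_M = 1.2742e7   # ~Earth's diameter, in meters

theta_rad = WAVELENGTH_M / BASELINE_M
# Convert radians to microarcseconds (1 radian ≈ 206265 arcseconds).
theta_uas = theta_rad * 206265 * 1e6

print(f"Angular resolution: ~{theta_uas:.0f} microarcseconds")
```

That works out to roughly 20 microarcseconds, which is why only a planet-spanning array can resolve a black hole's shadow at all.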
To that end, the researchers created a simulation library of black holes that leveraged the known physical properties of black holes, general relativity, and a number of other scientific areas. The idea was that this library could help turn the enormous amount of data captured by the EHT array into an actual, viewable image – but doing so demanded an enormous amount of computing power.
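The comparison Gammie describes – generating many simulated images and asking which ones explain the observed data – is, at its core, a model-selection problem. The toy sketch below illustrates the idea with made-up 1-D brightness profiles standing in for real images and a simple chi-squared fit; none of this reflects the EHT's actual pipeline, parameters, or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "simulation library": each entry maps a model parameter
# (here, a ring radius) to a tiny synthetic brightness profile.
def simulate(ring_radius, size=64):
    x = np.linspace(-1, 1, size)
    return np.exp(-((np.abs(x) - ring_radius) ** 2) / 0.01)

library = {r: simulate(r) for r in np.linspace(0.2, 0.8, 13)}

# Fake "observed" data: one of the models plus measurement noise.
truth = 0.5
observed = simulate(truth) + rng.normal(0, 0.05, 64)

# Score every library model against the data and keep the best fit.
def chi2(model, data, sigma=0.05):
    return float(np.sum(((data - model) / sigma) ** 2))

best = min(library, key=lambda r: chi2(library[r], observed))
print(f"best-fit ring radius: {best:.2f}")
```

The real analysis compares full general-relativistic simulations against interferometric measurements rather than toy profiles, but the logic – sweep a library of physically motivated models and rank them by goodness of fit – is the same.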
“We produced a multitude of simulations and compared them to the data. The upshot is that we have a set of models that explain almost all of the data,” said Charles Gammie, a researcher at the University of Illinois at Urbana-Champaign. “It’s remarkable because it explains not only the Event Horizon data, but data taken by other instruments. It’s a triumph of computational physics.”
The vast majority of the required computing hours – around 80 million – were run on TACC’s Frontera system, a 23.5-petaflops, CentOS Linux 7-based Dell system currently ranking 13th on supercomputing’s Top500 list. Frontera packs 448,448 CPU cores courtesy of 16,016 units of Intel’s Xeon Platinum 8280, a Cascade Lake-class CPU with 28 cores running at 2.7GHz. The remaining 20 million simulation hours were computed on the NSF’s Open Science Grid, which harnesses unused CPU cycles in a distributed computing fashion to unlock compute capability without the cost of deploying dedicated supercomputers and related infrastructure.
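Those figures put the scale of the effort in perspective. Even in the idealized (and unrealistic) case where the workload ran across every Frontera core simultaneously with perfect scaling, 80 million CPU-hours would still occupy the entire machine for about a week:

```python
# Back-of-the-envelope: how long 80 million CPU-hours would take if the
# workload ran across every Frontera core at once. This is an idealized
# lower bound on wall-clock time; real jobs share the machine and never
# scale perfectly.
CPU_HOURS = 80e6
FRONTERA_CORES = 448_448

wall_clock_hours = CPU_HOURS / FRONTERA_CORES
print(f"~{wall_clock_hours:.0f} hours (~{wall_clock_hours / 24:.1f} days)")
```

In practice the simulations ran as many smaller jobs spread over a much longer calendar period, which is also why the Open Science Grid's scavenged spare cycles could usefully absorb the remaining 20 million hours.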
“We were stunned by how well the size of the ring agreed with predictions from Einstein’s Theory of General Relativity,” added Geoffrey Bower, an EHT project scientist with the Academia Sinica Institute of Astronomy and Astrophysics in Taipei. “These observations have greatly improved our understanding of what happens at the very center of our galaxy and offer new insights on how these black holes interact with their surroundings.”
The researchers’ efforts are sure to redouble after the endeavor’s success, and they’re now planning something even more extraordinary: rather than a single still image, the next step is to film the black hole over a period of time, capturing the dance of photons to showcase the black hole’s dynamics. One can only wonder how many millions of CPU hours that effort will take.