Machine learning is increasingly used in particle physics
Experiments at the Large Hadron Collider produce about a million gigabytes of data every second. Even after reduction and compression, the data the LHC collects in just one hour is comparable in volume to all the data Facebook collects in an entire year.
Fortunately, particle physicists do not have to sift through all of this by hand. They work alongside a kind of artificial intelligence: algorithms trained, via machine learning, to carry out data analysis on their own.
"Compared with traditional computer algorithms that we develop for conducting a certain kind of analysis, we make the algorithm of machine learning so that he decides which analyzes to deal with, which as a result saves us countless man-hours of development and analysis," says physicist Alexander Radovich from the College of William and Mary, working in the neutrino experiment Nova.
At LHCb, an experiment that could shed light on why the universe contains far more matter than antimatter, machine learning algorithms make at least 70 percent of such decisions, says MIT scientist Mike Williams, who works on LHCb and is one of the authors of the aforementioned report. "Machine learning plays a role in almost every data aspect of the experiment, from the triggers to the analysis of the remaining data," he says.
Machine learning has driven significant advances in analysis. The huge ATLAS and CMS detectors at the LHC, which made the discovery of the Higgs boson possible, contain millions of sensors whose signals must be brought together to produce meaningful results.
"These signals make up a complex data space," says Michael Kagan of the US Department of Energy's SLAC National Accelerator Laboratory, who works on the ATLAS detector and was also involved in the report. "We need to understand the relationships between them to draw conclusions, for example that a particular particle track in the detector was left by an electron, a photon or something else."
Machine learning also benefits neutrino experiments. NOvA, operated by Fermilab, studies how neutrinos change from one type to another as they travel through the Earth. These neutrino oscillations could potentially reveal the existence of new neutrino types, which some theories suggest might be particles of dark matter. NOvA's detectors look for the charged particles produced when neutrinos collide with material in the detector, and machine learning algorithms identify them.
From machine learning to deep learning
Recent advances in machine learning are often grouped under the term deep learning, which promises to expand the reach of machine learning in particle physics even further. Deep learning usually refers to the use of neural networks: computer algorithms whose architecture is inspired by the dense networks of neurons in the human brain.
These neural networks teach themselves specific analysis tasks through training: they process test data, for example from simulations, and receive feedback on the quality of their work.
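This train-on-simulation loop can be sketched in miniature. The example below is purely illustrative and assumes nothing about any real experiment's software: it uses invented two-feature "simulated events" and a single-layer model (logistic regression, the simplest stand-in for a network) trained by gradient descent, with the loss acting as the feedback signal on quality.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "simulated" events: two detector features per event, with signal
# events (label 1) shifted away from background (label 0). The features
# and numbers here are invented purely for illustration.
X_bkg = rng.normal(0.0, 1.0, size=(400, 2))
X_sig = rng.normal(1.5, 1.0, size=(400, 2))
X = np.vstack([X_bkg, X_sig])
y = np.concatenate([np.zeros(400), np.ones(400)])

# A minimal one-layer model trained by gradient descent; the
# cross-entropy loss is the feedback on how well it is doing.
w, b = np.zeros(2), 0.0
losses = []
for epoch in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # predicted probabilities
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    losses.append(loss)
    w -= 0.1 * X.T @ (p - y) / len(y)               # gradient step on weights
    b -= 0.1 * np.mean(p - y)                        # gradient step on bias

accuracy = np.mean((p > 0.5) == y)
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}, accuracy: {accuracy:.2f}")
```

A real deep network stacks many such layers and learns far subtler features, but the shape of the procedure, simulate, predict, score, adjust, is the same.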
Until recently, the success of neural networks was limited because they were very difficult to train, says co-author Kazuhiro Terao, a SLAC researcher who works on the MicroBooNE neutrino experiment, which studies neutrino oscillations as part of Fermilab's short-baseline program. The experiment will become part of the future Deep Underground Neutrino Experiment. "Those difficulties limited us to neural networks only a couple of layers deep," he says. "Thanks to advances in algorithms and computing hardware, we now know much more about how to build and train more capable neural networks with hundreds or thousands of layers."
Many of the breakthroughs in deep learning were driven by the commercial work of technology giants and the data explosion they have created over the past two decades. "For example, NOvA uses a neural network modeled on the architecture of GoogLeNet," says Radovic. "It improved the experiment by an amount that could otherwise only have been achieved by collecting 30 percent more data."
Fertile ground for innovation
Machine learning algorithms are becoming more sophisticated and finely tuned by the day, opening up unprecedented opportunities for solving particle physics problems. Many of the new tasks they can be applied to are related to computer vision, Kagan says. "It's similar to face recognition, except that in particle physics the image features are more abstract and complex than ears or noses."
The data from some experiments, such as NOvA and MicroBooNE, can easily be turned into actual images, and AI can be applied directly to identify their features. By contrast, images from the experiments at the LHC must first be reconstructed from an intricate set of data collected by millions of sensors.
"But even if the data don't look like images, we can still apply computer vision methods if we process the data in the right way," Radovic says.
One area where this approach could be very useful is in the analysis of the particle jets produced in large numbers at the LHC. Jets are narrow sprays of particles whose individual tracks are extremely difficult to separate from one another. Computer vision technology can help make sense of these jets.
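The preprocessing step that makes this possible can be sketched simply. The example below is a hypothetical illustration, not any experiment's actual pipeline: it scatters invented particle hits around a jet axis in the (eta, phi) plane and pixelates their energies into a small "jet image", the kind of 2D array a convolutional network could then classify.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy jet: particle hits scattered around the jet axis in the
# (eta, phi) plane, with energies drawn from a falling distribution.
# All coordinates and energies are invented for illustration.
n_particles = 50
eta = rng.normal(0.0, 0.3, n_particles)
phi = rng.normal(0.0, 0.3, n_particles)
energy = rng.exponential(1.0, n_particles)

# Pixelate the hits into a 16x16 "jet image": a 2D energy-weighted
# histogram that a computer vision model could take as input.
image, _, _ = np.histogram2d(
    eta, phi,
    bins=16,
    range=[[-1.0, 1.0], [-1.0, 1.0]],
    weights=energy,
)

print(image.shape)  # (16, 16)
```

Once jets are represented this way, the same convolutional techniques used for photographs can be brought to bear on telling, say, quark jets from gluon jets.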
Another emerging application of deep learning is the simulation of particle physics data, predicting, say, what will happen in particle collisions at the LHC, which can then be compared with real data. Such simulations are typically slow and demand enormous computing power. AI could run them much faster, which could ultimately be a useful complement to traditional simulation methods.
"Just a few years ago, no one could have imagined that deep-seated neural networks could be trained so that they could" see "the data on the basis of random noise," Kagan said. "Although this work is still at a very early stage, it already looks rather promising, and is likely to help solve data problems in the future."
A healthy dose of skepticism
Despite clear breakthroughs, machine learning enthusiasts often face skepticism from their colleagues, in particular because machine learning algorithms mostly work as "black boxes" that reveal little or nothing about how they reached a given conclusion.
"Skepticism is very healthy," says William. "If we use MOs for triggers that discard certain data, such as LHCb, then we need to approach this issue very carefully and very high ticks the bar."
Therefore, in order to strengthen the MO position in particle physics, it is necessary to constantly try to improve understanding of how the algorithms work, and if possible to cross-compare with real data.
"We need to constantly try to understand what the computer algorithm is doing, and evaluate its results," says Therao. - This is true for any algorithm, not only MO. Therefore, skepticism should not hamper progress. "
Rapid progress is already letting some researchers dream about what might be possible in the near future. "Today we mostly use machine learning to find features in our data that can help us answer some of our questions," says Terao. "Ten years from now, machine learning algorithms may be able to pose their own questions independently and recognize when they've discovered new physics."