These new tricks can still outwit deepfake videos

A few weeks ago, computer scientist Siwei Lyu watched his team's deepfake videos with a nagging sense of unease. These fake films, created with a machine learning algorithm, showed celebrities doing things they had never done. They struck him as strangely unsettling, and not only because he knew they were fake. "They look wrong," he recalls thinking, "but it's very hard to pinpoint exactly what creates that impression."

But then a childhood memory surfaced. Like many other kids, he had played staring contests with other children. "I always lost those contests," he says, "because when I looked at their unblinking faces, I felt very uncomfortable."

He realized that these fake films gave him the same kind of discomfort: he was losing the staring contest to these movie stars, because they did not open and close their eyes as often as real people do.

Deepfake programs take as input a large number of images of a particular person (you, your ex-girlfriend, Kim Jong-un), showing them from different angles, with different expressions, saying different words. The algorithms learn what that person looks like, and then synthesize that knowledge into a video of the person doing something they never did. Pornography. Stephen Colbert speaking John Oliver's words. A president warning about the dangers of fake videos.
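
Under the hood, many of these tools rest on a simple trick: a single shared encoder that learns a generic "face code," plus a separate decoder for each identity. The sketch below is a minimal, hypothetical PyTorch illustration of that shared-encoder, two-decoder setup; the layer sizes, variable names, and training loop are assumptions made for illustration, not the code of any particular deepfake program.

```python
# A minimal sketch (assumed architecture, not any specific tool's code) of the
# face-swap idea: one shared encoder, one decoder per identity. Train both
# reconstruction paths, then decode person A's face code with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                           # latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()      # one decoder per identity
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random tensors stand in for aligned 64x64 face crops of the two people.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(10):   # real training runs for many thousands of steps
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```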

These videos look convincing for a few seconds on a phone screen, but they are not (yet) perfect. They carry telltale signs of forgery, such as eerily ever-open eyes, which stem from flaws in the way they are made. Peering into DeepFake's guts, Lyu realized that the images the program learned from included very few photos with closed eyes (you don't keep a selfie in which you blinked, do you?). "That becomes a bias," he says. The neural network simply doesn't get blinking. Programs may also miss other "physiological signals intrinsic to human beings," says Lyu's paper describing the phenomenon, such as breathing at a normal rate or having a pulse. And although this study focused on videos created with one specific piece of software, it is widely accepted that even a large set of photographs may fail to capture a person's physical experience adequately, so any software trained on those images will be imperfect.
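
Lyu's paper describes a learned detector; as a much simpler stand-in for the same intuition, the sketch below just counts how often the eyes close, using the eye aspect ratio (EAR) computed from facial landmarks. The landmark input, the threshold, and the simulated readings are assumptions made for illustration, not Lyu's method.

```python
# Blink-frequency heuristic (illustrative only): a clip whose subject almost
# never blinks is a candidate for closer inspection.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks around one eye, in the usual p1..p6 order."""
    vert1 = np.linalg.norm(eye[1] - eye[5])
    vert2 = np.linalg.norm(eye[2] - eye[4])
    horiz = np.linalg.norm(eye[0] - eye[3])
    return (vert1 + vert2) / (2.0 * horiz)   # small value means the eye is closed

def blinks_per_minute(ear_per_frame, fps, closed_thresh=0.21):
    """Count open-to-closed transitions of the EAR signal."""
    closed = np.asarray(ear_per_frame) < closed_thresh
    blink_count = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_per_frame) / fps / 60.0
    return blink_count / minutes

# Simulated 60 seconds of EAR readings at 30 fps with a single blink.
# Humans at rest typically blink far more often than once a minute.
ears = 0.30 + 0.02 * np.random.randn(1800)
ears[500:505] = 0.10
print(blinks_per_minute(ears, fps=30))   # about 1 blink per minute: suspiciously low
```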

The blinking revelation exposed a lot of fake videos. But a few weeks after Lyu and his team posted a draft of the paper online, they received an anonymous email with links to new fake videos on YouTube in which the stars opened and closed their eyes more normally. The fake creators had evolved.

Which is natural. As Lyu noted in an interview with The Conversation, "blinking can be added to deepfake videos by including images with closed eyes in the training set, or by using video clips for training." Once you know what the telltale sign of a fake is, you can avoid it; that is "only" a technical problem. Which means fake videos are headed into an arms race between creators and detectors. Research like Lyu's can only make life harder for the fake makers. "We are trying to raise the bar," he says. "We want to make the process more difficult and more time-consuming."

Because right now it is very easy. You download the software, google photos of celebrities, and feed them in. The program digests them and learns from them. And although it is not yet fully self-sufficient, with a little help it conceives and delivers something new and fairly realistic-looking.

"It's very blurry," says Lyu. And he doesn't mean the images. "The line between truth and fake," he clarifies.

That is both annoying and unsurprising to anyone who has been alive, and on the internet, lately. But it particularly worries the military and intelligence agencies. That is part of why Lyu's research, like some other work, is funded by a DARPA program called MediFor, short for Media Forensics.

The MediFor project was launched in 201?, when the agency noticed a surge in fake-making activity. The project aims to build an automated system that examines three levels of forgery signs and produces a final "reality score" for an image or video. The first level searches for dirty digital traces: noise characteristic of a particular camera model, or compression artifacts. The second level is physical: the lighting on a face is wrong, or a reflection does not look the way it should given where the lamp is. The last is semantic: checking the data against verified real-world information. If, for example, a video of a football game is claimed to have been shot in Central Park at 2 p.m. on Tuesday, October ?, 201?, does the state of the sky match the weather archive? Put all the levels together and you get an estimate of how real the data is. DARPA hopes that by the end of MediFor there will be prototype systems ready for large-scale testing.
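
As a toy illustration of what "putting all the levels together" might look like, the snippet below folds three hypothetical per-level scores into a single reality score with a weighted average. The score names, weights, and values are invented placeholders; whatever fusion MediFor actually uses is certainly more sophisticated.

```python
# Toy fusion of the three MediFor-style levels into one "reality score".
# All numbers and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class LevelScores:
    digital: float    # e.g. camera noise / compression-artifact analysis, 0..1
    physical: float   # e.g. lighting and reflection consistency, 0..1
    semantic: float   # e.g. agreement with weather archives, maps, timestamps, 0..1

def reality_score(s: LevelScores, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination: 1.0 looks authentic, 0.0 looks fabricated."""
    w_d, w_p, w_s = weights
    return w_d * s.digital + w_p * s.physical + w_s * s.semantic

clip = LevelScores(digital=0.9, physical=0.4, semantic=0.2)   # hypothetical detector outputs
print(f"reality score: {reality_score(clip):.2f}")            # the weak semantic check drags it down
```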

But the clock is ticking (or is that just a repetitive sound generated by an AI trained on timekeeping data?). "In a few years, you may well be facing the fabrication of events," says DARPA program manager Matt Turek. "Not just a single image or an edited video, but several images or videos trying to convey a convincing message together."

At Los Alamos National Laboratory, computer scientist Juston Moore pictures one potential future more vividly. Say you tell an algorithm you want a video of Moore robbing a pharmacy; you plant it in the pharmacy's security footage; you send him to prison. In other words, he worries that if the standards for verifying evidence do not (or cannot) evolve alongside the manufacture of fakes, it will become easy to frame people. And if courts cannot rely on visual data, legitimate evidence may end up being dismissed as well.

Taken to its logical conclusion, that would mean seeing something once is no longer worth more than hearing about it a hundred times. "It may turn out that we won't trust any photographic evidence at all," he says, "and that's not a world I want to live in."

Such a world is not that far-fetched. And the problem, according to Moore, goes far beyond swapping faces. "The algorithms can create images of faces that don't belong to real people, they can alter images in strange ways, turning a horse into a zebra," says Moore. They can delete parts of images and remove foreground objects from videos.

Maybe we will not manage to fight fakes faster than they are made. But maybe we will, and that possibility is what motivates Moore's team's research into digital forensics. The Los Alamos program, which combines expertise in cyber systems, information systems, and theoretical biology and biophysics, is about a year younger than DARPA's. One approach focuses on "compressibility," on cases where an image does not contain as much information as it seems to. "Essentially we start from the idea that all of these AI image generators have a limited set of things they can create," says Moore. "So even if an image looks quite complex to me, there is a repeating structure in it." When the pixels are recycled, there just isn't that much there.
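
One crude way to feel out that intuition is to measure how well an image's raw pixels compress: a picture assembled from a limited repertoire of repeated structures carries less information than its pixel count suggests. The sketch below uses zlib as a stand-in for the real statistical machinery and is only meant to show the idea, not the Los Alamos method.

```python
# Compressibility as a rough proxy for "how much information is really here".
import zlib
import numpy as np

def compression_ratio(image: np.ndarray) -> float:
    """Compressed size divided by raw size; lower means more internal redundancy."""
    raw = image.astype(np.uint8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

rng = np.random.default_rng(0)
noisy_photo = rng.integers(0, 256, size=(256, 256, 3))                      # rich in detail
repetitive = np.tile(rng.integers(0, 256, size=(16, 16, 3)), (16, 16, 1))   # built from a repeating block

print(compression_ratio(noisy_photo))   # close to 1.0: nothing repeats
print(compression_ratio(repetitive))    # far lower: the image keeps "saying" the same thing
```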

They also use sparse coding algorithms to play a kind of matching game. Say there are two collections: a pile of real images and a pile of artificially generated AI images. The algorithm pores over them, building what Moore calls a "dictionary of visual elements": what the artificial pictures have in common with one another, and what the real images have in common with one another. If Moore's friend retweets a picture of Obama, and Moore suspects it was made with AI, he can run it through the program and see which dictionary it sits closer to.
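
A minimal version of that matching game can be sketched with off-the-shelf sparse dictionary learning: fit one dictionary on patches from real images and one on patches from generated images, then see which dictionary reconstructs a query with less error. The data below is random placeholder, and the whole setup is an assumption about how such a classifier could be wired, not Moore's actual code.

```python
# Two learned "dictionaries of visual elements"; classify by reconstruction error.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
real_patches = rng.normal(size=(500, 64))         # stand-ins for flattened 8x8 patches from real photos
fake_patches = rng.normal(size=(500, 64)) + 0.5   # stand-ins for patches from generated images

def learn_dictionary(patches, n_atoms=32):
    model = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    return model.fit(patches)

def reconstruction_error(model, patches):
    codes = model.transform(patches)               # sparse codes for each patch
    recon = codes @ model.components_              # rebuild patches from dictionary atoms
    return float(np.mean((patches - recon) ** 2))

dict_real = learn_dictionary(real_patches)
dict_fake = learn_dictionary(fake_patches)

query = rng.normal(size=(20, 64)) + 0.5            # patches from the image under suspicion
err_real = reconstruction_error(dict_real, query)
err_fake = reconstruction_error(dict_fake, query)
print("looks generated" if err_fake < err_real else "looks real")
```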

Los Alamos, home to one of the world's most powerful supercomputers, is pouring resources into this program not just because someone might frame Moore with a faked robbery. The laboratory's mission is "to solve national security problems through scientific excellence." At its core that means nuclear security: making sure bombs do not explode when they should not, and do explode when they should (please, never), as well as helping with non-proliferation. All of this calls for broad expertise in machine learning, because, as Moore puts it, it helps "draw big conclusions from small data sets."

But beyond all that, organizations like Los Alamos have to be able to believe their eyes, or to know when not to trust them. What if you see satellite images of a country mobilizing or testing nuclear weapons? What if someone forges the sensor readings?

That is the frightening future the work of Moore and Lyu should, ideally, help us avoid. But in a world where all is lost, seeing no longer means believing, and seemingly absolute measurements can be fabrications. Everything digital is in doubt.

But maybe "doubt" is the wrong word. Many people will take fakes at face value (remember the photo of the shark in Houston?), especially if the content matches what they already believe. "People will believe what they are inclined to believe," says Moore.

That is more likely among the general public following the news than in national security circles. And to curb the spread of misinformation among us simpletons, DARPA is offering to work with social networks, to help users determine that the credibility of that video of Kim Jong-un dancing the macarena is rather low. Turek points out that social networks can spread a story debunking a video just as quickly as the video itself.

But will they? Debunking is a troublesome business (though not as ineffective as rumor has it). And the facts have to actually reach people before they can change their minds about the fiction.

But even if no one can change the masses' minds about a video's truthfulness, it matters that the people making political and legal decisions, about who is moving rockets or murdering people, try to use machines to separate evident reality from an AI's dream.