The good, the bad and the future of deepfakes


The race between creating and detecting deepfakes is in full swing. The technology is becoming more and more accessible, and deepfakes are becoming increasingly difficult to distinguish from authentic footage.

  • What exactly are deepfakes? A refresher
  • How are deepfakes made?
  • The Good – An Optimistic View
  • The dangers of deepfakes
  • What can we do to distinguish fake from real?
  • The future of fake – and other considerations

Despite a huge increase in positive applications, the dangers of deepfakes continue to raise widespread concern as they become more widely known and better understood. We are inundated with content describing how rapidly this deep learning technology is being developed, that deepfake tech is becoming more sophisticated and easier to access, and what the risks are when this technology falls into the wrong hands. Like it or not, and as disturbing as the negative consequences of using deepfakes may be, they are and will remain a part of our lives. And even though deepfakes receive mostly negative publicity, there are also many reasons to be excited about this technology and its many positive applications. Deepfake technology, for example, makes it possible to create completely new types of content and democratize access to creation tools – which until recently were either too expensive or too complicated for the average person. The use of artificial intelligence to create realistic simulations could actually be a positive development for humanity.

What exactly are deepfakes? A refresher

Giving a comprehensive definition of deepfakes is not easy. The term combines ‘deep’ (from deep learning) and ‘fake’. Deepfakes are made possible by deep learning, a machine learning technique that allows computers to learn from examples. Deepfake technology uses a person’s traits – such as their voice, image and typical facial expressions or body movements – to create completely new content that is virtually indistinguishable from authentic content. It can be used to make people say or do things in videos that they never said or did, to replace someone in a video with another person, or to create video content featuring political figures, celebrities or even people who don’t exist at all. The manipulation of existing – or the creation of new – digital images is not new, however. AI-generated pornographic content first surfaced in late 2017. Creating this type of video material initially took at least a year and was done by experts in high-tech studios. But thanks to the rapid development of deepfake technology in recent years, it can now be done much faster and more easily, and the results are far more convincing. The term deepfakes was originally used for this specific pornographic content, but it is now applied much more broadly to describe many different types of AI-generated or synthetic video content.


How are deepfakes made?

To create a realistic deepfake video of an existing person, a neural network must be trained on video images of that person, covering an extensive range of facial expressions, under all kinds of lighting conditions and from every angle imaginable, so that the artificial intelligence gains a deep ‘understanding’ not only of the person’s appearance but also of their ‘essence’. The trained network is then combined with techniques such as advanced computer graphics to superimpose a synthesized version of the person onto the person in the original video. While this process is much faster than it was a few years ago, truly credible results are still quite time-consuming and complicated to achieve. However, cutting-edge technology, such as the Samsung AI technology developed in a Russian AI lab, makes it possible to create a deepfake video from just a handful of images – or even a single one.
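
To make this more concrete, below is a minimal, illustrative sketch of the classic face-swap architecture often associated with deepfakes: a shared encoder learns a common representation of faces, while a separate decoder per identity reconstructs that person’s appearance. Swapping happens by encoding person A’s face and decoding it with person B’s decoder. This is a simplification for illustration – the image sizes, network shapes and training details are assumptions, not taken from any specific tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 face crop to a 256-dim latent code shared across identities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random tensors stand in for batches of aligned 64x64 face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode person A's face, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The full pipeline then blends such swapped crops back into the original frames using the computer-graphics techniques mentioned above.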

The Good – An Optimistic View

While the not-so-kosher uses of deepfakes are quite frightening, this technology also offers many benefits, and we regularly find new, positive uses for deepfake tech. Think, for example, of editing video footage without having to do reshoots, or ‘bringing back to life’ artists who are no longer with us. Researchers at the Samsung AI lab in Moscow, for instance, recently succeeded in converting Da Vinci’s Mona Lisa into video: through deep learning technology, they managed to make this famous lady move her head, mouth and eyes. Deepfake technology was also used at the Dalí Museum in Florida to display a life-size deepfake of surrealist artist Salvador Dalí, featuring quotes he wrote or spoke during his career. With deepfake technology, we can experience things that never existed, or see all kinds of future possibilities before us. In addition to the many possible applications in art and entertainment, this technology can also do impressive things in education and healthcare. Below are a few more interesting examples of this groundbreaking technology.

Speech manipulator converts text to speech

Adobe’s VoCo software – still in the research and prototype phase – lets you convert text into speech and edit it, just as you would edit images in Photoshop. Suppose you want a film clip narrated by, say, David Attenborough or Morgan Freeman. With VoCo, this is possible without having to spend a fortune hiring the real voice actors. The software allows you to modify an existing audio recording of a person by adding words and phrases the original narrator never said. During a live demo in San Diego, an Adobe employee transformed a digitized recording of a man who had originally said “I kissed my dogs and my wife” into “I kissed Jordan three times.” A 20-minute speech recording was used to arrive at this result; its transcription was edited and converted into the new voice clip at the touch of a button. As impressive as this technology may be, such developments could further exacerbate the already problematic situation around fake news and further undermine public trust in journalism. However, Adobe has announced that it is taking action to address these potential challenges.
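
To illustrate the transcript-driven editing concept on a much smaller scale, the sketch below rearranges words in a recording by cutting and splicing at word boundaries, assuming word-level timestamps from a forced aligner are already available. Real systems like VoCo go much further and synthesize entirely new words in the speaker’s voice; that generative step is not shown here. The file name and timestamps are made up for the example.

```python
# Toy transcript-driven audio editing: splice existing words into a new order.
from pydub import AudioSegment  # pip install pydub

recording = AudioSegment.from_wav("speech.wav")  # placeholder recording

# (word, start_ms, end_ms) -- in practice produced by a forced aligner.
words = [
    ("I", 0, 180), ("kissed", 180, 560),
    ("my", 560, 700), ("dogs", 700, 1100),
    ("and", 1100, 1300), ("my", 1300, 1450), ("wife", 1450, 1900),
]

def splice(transcript):
    """Rebuild audio for an edited transcript by concatenating word clips."""
    # Duplicate words collapse to their last occurrence; fine for a sketch.
    index = {w: (s, e) for w, s, e in words}
    out = AudioSegment.empty()
    for w in transcript.split():
        start, end = index[w]
        out += recording[start:end]
    return out

# Reorder or drop words that already exist in the recording:
splice("I kissed my wife").export("edited.wav", format="wav")
```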


Convincing dubbing through automated facial reanimation

Synthesia, an AI software company founded by a team of researchers and entrepreneurs from Stanford, Cambridge, University College London and the Technical University of Munich, has introduced a new kind of media – facial reanimation software – that enables automated and highly convincing dubbing. The AI startup was put on the map with the release of a synthetic video in which David Beckham talks about the deadly disease malaria in nine different languages. This technology can be used in a variety of ways and offers creators around the world an enormous range of additional possibilities. Synthesia recently teamed up with the international news service Reuters to create the world’s first synthesized, newsreader-presented newscasts, using basic deepfake technology to generate new newscasts from pre-recorded clips of a newsreader. Most remarkably, this technology makes it possible to automatically generate news items that can be personalized for individual viewers. Synthesia’s technology can also be used for training purposes to develop video modules in more than 40 languages and to create or modify content easily and quickly. It can even turn text and slides into video presentations in minutes, without the need for video editing skills – useful for business communication, among other things.

With deepfakes anyone can dance like a pro

Tinghui Zhou, CEO and co-founder of Humen AI, a dance-deepfakes startup, has teamed up with his research colleagues at UC Berkeley to develop technology that lets anyone dance like a pro. Think, for example, of the impressive dance moves of Bruno Mars. For this, the researchers used a type of artificial intelligence called GANs (generative adversarial networks), which can ‘read’ someone’s dance steps, copy them, and ‘paste’ them onto a target body. The system can be used for all kinds of dance styles – such as ballet, jazz, modern or hip-hop. First, videos of the source dancer and the target dancer are recorded. Then the images of both dancers are reduced to stick figures. Finally, a neural network synthesizes video of the target dancer based on the stick-figure movements of the source dancer – et voilà! All you need are some video images and the right AI software. It’s impressive work: traditionally, this kind of video manipulation would take a whole team several days. Humen AI aims to turn the dance-video gimmick into an app and eventually develop a paid service for advertising agencies, video game developers, and even Hollywood studios. Ricky Wong, co-founder of Humen AI, says: “With three minutes of motion images and material from professionals, you can make anyone dance. We try to bring joy and fun to people’s lives.” Zhou adds: “The future we envision is one where anyone can create Hollywood-level content.”
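
The first stage of the pipeline described above – reducing each frame of the source dancer to a stick figure – can be sketched with off-the-shelf pose estimation, as below. The second stage, training a GAN that maps stick figures back to photorealistic frames of the target dancer, requires a full training setup and is only indicated in a closing comment. The video file name is a placeholder.

```python
import cv2                # pip install opencv-python
import mediapipe as mp    # pip install mediapipe
import numpy as np

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture("source_dancer.mp4")
stick_frames = []

with mp_pose.Pose(static_image_mode=False) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks is None:
            continue
        # Draw the skeleton on a blank canvas: the "stick figure"
        # intermediate representation that abstracts away appearance.
        canvas = np.zeros_like(frame)
        mp_draw.draw_landmarks(canvas, result.pose_landmarks,
                               mp_pose.POSE_CONNECTIONS)
        stick_frames.append(canvas)

cap.release()

# A pix2pix-style GAN trained on (stick figure, real frame) pairs of the
# *target* dancer would then translate stick_frames into new video of the
# target performing the source's moves.
```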

Smart assistants and virtual people

Smart assistants like Siri, Alexa and Cortana have been around for a while and have improved considerably in recent years. However, they still feel somewhat like a new kind of user interface that you have to give exact instructions, rather than a virtual being you can interact with naturally. One of the most important steps in creating credible virtual ‘human’ assistants is the ability to mimic facial expressions, body posture, gestures and voices. These so-called virtual persons are slowly but surely becoming mainstream – think of digital influencers, for example – and we communicate with them much as we do with real people. And while digital influencers don’t really respond to you in their own words, because their content is created by storytellers, they herald a future of ‘natural’ interaction with truly virtual beings. With deepfake technology trained on countless examples of human behavior, we could give smart assistants the ability to hold and understand high-quality conversations. And thanks to the same technology, even digital influencers could develop the ability to respond visually – in real time – in credible ways. Welcome to the future of virtual people.

Deep generative models offer new possibilities in healthcare

Deepfake technology can also offer many benefits in other sectors, such as healthcare. The tech can be used to synthesize realistic data that helps researchers develop new treatment methods without being dependent on patient data. Work in this area has already been conducted by a team of researchers from the Mayo Clinic, the MGH & BWH Center for Clinical Data Science and NVIDIA, who collaborated on using GANs (generative adversarial networks) to create synthetic brain MRI scans. The team trained its GAN with data from two brain MRI datasets: one contained about two hundred MRIs showing tumors, the other thousands of MRIs showing signs of Alzheimer’s. According to the researchers, algorithms trained on a combination of ‘fake’ medical images and just 10 percent real images became just as adept at detecting tumors as algorithms trained only on real images. In their paper the researchers say: “Data diversity is critical to success in training deep learning models. Medical imaging data sets are often unbalanced because pathological findings are generally rare, which poses quite a few challenges when training deep learning models. We propose a method to generate synthetic MRI images of brain tumors by training a GAN. This provides an automatable, low-cost source of diverse data that can be used to complement the training set.” Because the images are generated synthetically, privacy and patient-data challenges no longer apply: the generated data can easily be shared between medical institutions, creating an endless variety of combinations that can be used to improve and speed up the work. The team hopes the model will help scientists generate new data that can be used to detect anomalies more quickly and accurately.
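
The augmentation idea is straightforward to express in code. The sketch below mixes a small set of real scans with a much larger set of GAN-generated ones, roughly matching the 10-percent-real ratio the researchers mention; the data tensors and the generator are placeholders, since the actual GAN and datasets are not available in this form.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins: a small set of real scans and a placeholder for a trained
# GAN generator's sampling routine. Labels: 1 = tumor, 0 = no tumor.
real_scans  = torch.rand(200, 1, 64, 64)
real_labels = torch.randint(0, 2, (200,))

def gan_sample(n, tumor):
    """Placeholder for drawing n synthetic scans of the requested class."""
    return torch.rand(n, 1, 64, 64), torch.full((n,), int(tumor))

syn_pos, syn_pos_labels = gan_sample(900, tumor=True)
syn_neg, syn_neg_labels = gan_sample(900, tumor=False)

# ~10% real, ~90% synthetic -- the ratio reported by the researchers.
train_set = ConcatDataset([
    TensorDataset(real_scans, real_labels),
    TensorDataset(syn_pos, syn_pos_labels),
    TensorDataset(syn_neg, syn_neg_labels),
])
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# `loader` now yields a shuffled mix of real and synthetic scans and can
# feed any standard tumor-classifier training loop.
```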

The dangers of deepfakes

As exciting and promising as deepfake technology may be, these developments also pose several serious challenges. The most important of these is the distribution of pornographic material featuring persons who have not given their consent; according to a DeepTrace report, a whopping 96 percent of the deepfakes currently found online consist of this type of material. There have also been several reports of deepfake audio being used for identity theft and extortion. Deepfakes potentially pose a huge security and political-destabilization risk, as the technology can be used to spread fake news and can lead to an increase in cybercrime, revenge porn, harassment, abuse and (fake) scandals. There is also a good chance that video and audio recordings will soon no longer be admissible as evidence in court, as they become almost indistinguishable from the real thing. According to the Brookings Institution, the social and political dangers of deepfakes include “disrupting democratic discourse; rigging elections; decreased trust in institutions; declining journalistic quality; exacerbation of social divisions; undermining public security; and inflicting hard-to-repair damage to the reputation of prominent individuals.” Deepfakes can also cause serious financial harm. Examples include a British energy company that was tricked into making a fraudulent wire transfer of $243,000 and an audio deepfake used to defraud a US CEO out of $10 million. Below are some more notable examples of the dangers of deepfakes.

New Year’s video speech leads to attempted military coup

The fact that more and more – and increasingly sophisticated – deepfakes are circulating on the internet means that any video that seems slightly off can cause chaos. An example is the New Year’s video address given by Gabon’s President Ali Bongo in 2019. The president had not been seen in public for several months, and the government’s lack of answers had led to ever more speculation and doubt; the video only deepened suspicion among people in Gabon and international observers about the president’s well-being. Although the video was meant to dispel speculation about the president’s poor health, the plan failed because Bongo’s opponents were not convinced of its authenticity. The opposition believed there was something odd about the president’s movements in the footage. A week after the video’s release, Gabon’s military launched an attempted coup d’état, which ultimately failed. Hany Farid, a computer science professor who specializes in digital forensics, said: “I just watched several other videos of President Bongo and they don’t resemble the speech patterns in this video, and even his appearance doesn’t look the same.” Farid added that he could not give a definitive assessment but that he felt “something was not right”.

Deepfakes as blackmail material for cheerleaders

A Pennsylvania woman was recently arrested for creating deepfakes of underage cheerleaders. The victims were her daughter’s rivals on the local cheerleading squad. With the fake images, the 50-year-old mother tried to put the girls in a bad light. Using photos and videos the teens had shared on social media, she created fake photos and videos that appeared to show the girls naked, drinking alcohol and using drugs. The woman then sent these deepfakes to the coaches to get the teens disqualified. The fake material was also sent to the girls themselves, with a message urging them to commit suicide. According to American media, the daughter reportedly knew nothing about her mother’s actions. The mother has been charged with cyber harassment and related offenses. Regarding the first victim, Bucks County District Attorney Matt Weintraub said: “The suspect edited a real photo with some photoshop app to make it look like this teenage girl had no clothes on. But it was a social media screenshot showing the teen wearing a swimsuit.”

Deepfake bots on Telegram create nude photos of women and children

Last year, more than 100,000 fake nude photos were generated by an ecosystem of bots at the request of Telegram users. The foundation of this ecosystem is an AI-powered bot that allows users to ‘strip’ the clothing from images of women so that they appear naked. According to a report from the visual threat intelligence firm Sensity, “most of the original images appeared to have come from social media pages or directly from private communications, which the individuals in question probably didn’t know were being targeted. While this case mostly involved individuals, we also identified a significant number of social media influencers, game streamers and celebrities in the entertainment industry. In addition, a limited number of images appeared to be underage, suggesting that some were primarily using the bot to generate and distribute pedophile content.” The deepfakes have been shared on various social media platforms for the purpose of public shaming, revenge or extortion. Most deepfake bots use DeepNude technology, and more and more similar apps are popping up on the internet: all you have to do is upload a photo, and minutes later you get a manipulated image back. Unfortunately, since Telegram uses encrypted messaging, users can easily create anonymous accounts that are virtually impossible to trace. And while encryption technology is meant to protect users’ privacy and evade surveillance, it’s not hard to see how these features can also be used for shady ends.

What can we do to distinguish fake from real?

As it stands, the number of deepfake videos circulating online is estimated to be growing at an astonishing 900 percent per year. As technological advances make it ever easier to produce deepfake content, more and more experts are wondering how we can curb the malicious use of this technology. One way – as with cybercrime and phishing – is to raise public awareness and educate people about the dangers of deepfakes. Many companies have now launched technologies to recognize fake content, prevent its distribution, or verify authentic content through blockchain or watermarks. The downside is that these detection and authentication methods can also be used by those same malicious actors to create even more convincing deepfakes. Below, after a simple illustration of the verification idea, are some examples of technologies that have been developed to combat the misuse of deepfakes.
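
To make the verification approach concrete, here is a minimal sketch of hash-based content authentication: the publisher registers a cryptographic fingerprint of the original file (on a blockchain or a signed registry), and anyone can later check whether a copy is bit-identical. The in-memory dictionary stands in for the registry, and note that this flags any modification to the file, not deepfakes specifically.

```python
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

registry = {}  # stand-in for a public, tamper-evident ledger

# Publisher side: register the original file's fingerprint.
registry["press_briefing.mp4"] = fingerprint("press_briefing.mp4")

def is_authentic(path: str, name: str) -> bool:
    """Consumer side: does this copy match the registered original?"""
    return registry.get(name) == fingerprint(path)
```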

Deepfake policies of social media platforms

Social networks play the most important role in preventing deepfakes from being used for malicious purposes. Social media platforms currently treat deepfakes like any other content that is misleading or could lead to people being duped or otherwise harmed. Instagram and Facebook, for example, have a policy of removing ‘manipulated media’, with an exception for parodies. YouTube bans manipulated content that is misleading or poses serious risks, and TikTok removes “digital forgeries” – including false health information – that mislead and can cause harm. Reddit removes content that impersonates people or entities in a misleading or deceptive way, but makes an exception for satire and parody. However, as the number and quality of deepfakes continue to increase, it is unclear how social networks will be able to maintain these policies in the future. One thing they could do is automatically label deepfakes, harmful or not, so that at least more awareness is created.

Spotting super realistic deepfake images

Researchers at the University at Buffalo have developed an ingenious new tool that allows them to spot super-realistic deepfakes. In their paper, the researchers describe a method to distinguish authentic images from images generated by deepfake technology by carefully studying the eyes of the person in the picture. They found that in an authentic photo, the reflections in both eyes are usually identical, because both eyes see the same lighting environment. In manipulated images, however, this is usually not the case. So far, the tool has succeeded in recognizing deepfake-generated images in 94 percent of cases. It is most accurate on portrait-style photos, in which the face fills most of the frame.
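
A heavily simplified version of this corneal-highlight check might look like the sketch below: locate both eyes, isolate the bright specular reflection in each, and compare the two shapes. The actual Buffalo tool segments the cornea far more precisely; this sketch, with its Haar-cascade eye detection and crude thresholding, only illustrates the principle. The image file name is a placeholder.

```python
import cv2
import numpy as np

# OpenCV's bundled Haar cascade for eye detection.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def highlight_mask(eye_gray):
    """Binary mask of the brightest pixels: the specular reflection."""
    thresh = int(eye_gray.max()) - 10
    return (eye_gray >= thresh).astype(np.uint8)

def reflection_similarity(image_path):
    """IoU of the two corneal highlights; low values are suspicious."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # need both eyes visible
    masks = []
    for (x, y, w, h) in sorted(eyes, key=lambda e: e[0])[:2]:  # leftmost two
        mask = highlight_mask(gray[y:y + h, x:x + w])
        # Normalize patch size so the two masks are comparable.
        masks.append(cv2.resize(mask, (32, 32),
                                interpolation=cv2.INTER_NEAREST))
    inter = np.logical_and(masks[0], masks[1]).sum()
    union = np.logical_or(masks[0], masks[1]).sum()
    return inter / union if union else 0.0

score = reflection_similarity("portrait.jpg")
```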

Genuine presence assurance

In the fight against the abuse of deepfakes, it is critical to be able to verify that the person you think you are dealing with online is actually real – and this can be done with iProov Genuine Presence Assurance. The iProov system uses biometric scans to determine whether the person in question is indeed a living person and not a photo, video, mask, deepfake or other method of circumventing a (biometric) security system. The system works on mobile devices, computers and unattended kiosks and is used by organizations around the world, such as the National Health Service (NHS) in the UK. The NHS has opted for iProov biometric facial authentication to improve users’ onboarding experience. Thanks to iProov’s Flashmark facial authentication technology, remote users can securely log into the NHS app to make appointments, access medical records and request repeat prescriptions. The process consists of submitting an ID photo and positioning the face on the screen. After a short series of flashes, the user’s identity is verified and he or she can use the NHS app.

Deepfake antivirus

Sensity, an Amsterdam-based company developing deep learning technologies for monitoring and detecting deepfakes, has built a visual threat intelligence platform that applies the same deep learning processes used to create deepfakes. The system combines deepfake detection with advanced video forensics and monitoring capabilities. The platform works as a kind of antivirus for deepfakes: it monitors more than 500 sources on the open and dark web where malicious deepfakes are likely to be found, warns users when they view anything that may be AI-generated synthetic content, and provides detailed ratings and threat analysis. When you upload URLs or your own photo and video files, Sensity analyzes them to detect the latest AI-based media manipulation and synthesis techniques, including fake human faces in social media profiles, dating apps and online financial services accounts. Sensity also provides access to the world’s most comprehensive database of deepfakes and other visual media targeting public figures, including insights into the sectors and countries most affected by this technology.

The future of fake – and other considerations

Pandora’s box has been opened and it seems that the race between creating deepfakes and detecting and preventing them will intensify in the future. Deepfake technology is becoming more and more accessible and it is becoming easier for ‘the average person’ to create deepfakes themselves. In addition, it is also becoming increasingly difficult to distinguish deepfake content from authentic content. Deepfakes will continue to evolve and spread. And challenges like the lack of detail in the synthesis will no doubt be overcome in the short term. Furthermore, improvements in neural network structures and advances in hardware are expected to significantly reduce training and delivery times. There are already new algorithms that can generate increasingly realistic – and almost real-time – outputs.

And while the use of deepfakes for good is rapidly increasing in industries such as entertainment, news and education, these developments will simultaneously lead to even more serious threats. Think of increasing crime, the spread of fake information, synthetic identity fraud, election manipulation and political tensions. Another aspect to consider is that deepfakes can severely undermine our freedom of choice and control over our own identity: with nothing more than a photo, someone can be made to appear to do all kinds of things that never actually happened – without their permission, or without their even knowing about it.

It is clear that the misguided, deceptive use of deepfake technology needs to be curbed and tech experts, journalists and policy makers will play a crucial role in this. They are the right people to inform the public about the possibilities and dangers of synthetic media such as deepfakes. And if we teach ourselves to only trust content from solid, verified sources, we may discover that the good use of deepfakes outweighs the bad. With greater public awareness, we can mitigate the negative impact of deepfakes, find ways to deal with them, and in the future even see that we can also take advantage of the possibilities of deepfake technology.
