A Q&A with Ness’s CTO on the Evolution of Deep Fakes

Deep fakes are becoming a topic of discussion not only in the enterprise, but also among consumer audiences. In the following Q&A, we asked Moshe Kranc, our Chief Technology Officer, to provide his perspective on the computer-generated audio or video forgeries that depict a person saying or doing something that never actually happened.

Why do you think there are growing concerns around deep fakes?
Concerns are growing because the technology has matured to the point where it can produce high-quality fakes that are nearly indistinguishable from the real thing. There is also a low barrier to entry: many products are emerging that can produce convincing fakes at low cost and with minimal user effort.

What potential value might deep fake technology provide to enterprises, and what are some of the ethical considerations in using it responsibly?
There are many positive uses of deep fakes that can benefit enterprise organizations because they enable hyper-personalization. Consider the impact of a sales video in which a celebrity endorser addresses the customer by name, in the customer’s native language. Or a video of Jeff Bezos personally walking customers through their Amazon bills, down to the smallest detail. Hyper-personalization can also be valuable in onboarding, where training videos address the specific concerns of each new employee.

Of course, it’s important for all organizations using deep fake technology to be transparent about its use so they don’t mislead the very people the technology is meant to benefit.

What dangers of deep fakes do enterprises need to be aware of, and how might they protect themselves from potential problems?
As media consumers, enterprises will have to exercise a great deal of caution in reacting to any information they receive. Deep fake technology is the death of “seeing is believing.” From now on, an enterprise must relate to any video or audio content the way we relate to a magician’s sleight of hand: are they tricking me, and how did they do it?

An enterprise may also become the victim of a deep fake attack that threatens to destroy its reputation. These fakes may be of such high quality that their “fakeness” can’t be easily proven. This will present a major challenge, as the enterprise is forced to defend its reputation by convincing the public that the incriminating media is not real. In an era of social media “echo chambers,” persuading people to disbelieve what they have seen or heard will be very difficult.

Does any existing or proposed regulation apply to the creation or use of deep fakes?
Many experts believe that existing laws already provide adequate protection against deep fakes, e.g., laws covering harassment, “false light” defamation, and copyright infringement. In addition, several lawmakers have introduced bills specifically targeting anyone who knowingly creates or distributes a deep fake.

How do you expect the value chain of tools for creating deep fakes and detecting them to evolve over the next several years?
Expect a cat-and-mouse game between technologies that generate fakes and technologies that detect them. In the end, deep fake generation will win, and the fake will truly be indistinguishable from the authentic. At that point, trust no longer resides in the content itself, so an enterprise’s trust in a particular piece of content must be established some other way, such as through brand trust or some form of external authentication that is linked to the content.
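
To make that last idea concrete, here is a minimal sketch in Python of what external authentication linked to content could look like. It assumes a simple, hypothetical scheme in which a publisher signs a SHA-256 digest of a media file with an Ed25519 key and consumers verify the signature against the publisher’s known public key; it uses the third-party cryptography package, and the function names are illustrative rather than a reference to any specific product or standard.

```python
# Illustrative only: a publisher signs a digest of its media file, and a
# consumer verifies the signature against the publisher's known public key.
# Trust then attaches to the publisher's key (the brand), not to the pixels.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, media: bytes) -> bytes:
    """Publisher side: produce a signature to distribute alongside the media."""
    return private_key.sign(hashlib.sha256(media).digest())


def verify_content(public_key: Ed25519PublicKey, media: bytes,
                   signature: bytes) -> bool:
    """Consumer side: check the media against the publisher's signature."""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()
    video = b"raw bytes of a published video"

    signature = sign_content(publisher_key, video)
    print(verify_content(publisher_key.public_key(), video, signature))         # True
    print(verify_content(publisher_key.public_key(), video + b"x", signature))  # False: tampered
```

The point of the sketch is the trust model: a tampered or wholly fabricated video fails verification no matter how convincing it looks, so authenticity no longer has to be judged from the content itself.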