There is a common saying that technology is a double-edged sword, and recent advances in Generative Artificial Intelligence (GAI) drive the point home like nothing else. Deepfakes use GAI to create audio and video of such high quality that an untrained eye or ear cannot tell real from fake. Left unchecked, this can quickly mushroom into a societal menace.
Adori's platform uses GAI to create audio or video from an idea, or to convert blogs to podcasts, with just a few clicks. This is an excellent opportunity for anyone who aspires to publish their ideas or written material as an audio podcast or a YouTube video. However, it also poses a serious risk. When GAI enters video editing and distribution, it raises reasonable concerns: visual and audio features can be manipulated with ease.
This automation, although immensely beneficial, can also have profound negative implications, such as breaching copyright and causing severe financial harm to content creators and owners. Furthermore, deepfake video and audio can enable fraud, damage company reputations, defame individuals, and erode public trust in government. This has the potential to weaken journalism and endanger public safety, exposing the individual creator or the company to serious liability.
One way companies have been fighting deepfakes is to monitor for and detect them before any damage is done. These advanced detection technologies can be a useful short-term tool to help discerning users identify deepfakes. However, keeping pace with the crooks, who always seem to be one step ahead, is a major challenge. It is not practical to monitor every corner case and guarantee 100% deepfake detection, so we must understand and be ready to respond to deepfakes that slip through existing detection methods. In the longer term, we must seek stronger methods for certifying the authenticity of both written and visual multimedia content; today there are few tools that assure audiences that the media they see online came from trusted sources.
Instead of trying to round up every crook in the neighborhood, a better approach is to build an impenetrable lock into the content itself. Built on Adori's foundational, patented technology, the new audio and video authenticator is a powerful solution that analyzes any content and guarantees its authenticity and ownership.
The underlying technology in the authenticator has two parts. The first is an encoder that embeds metadata in the physical layer of the audio or video stream; this encoded metadata is inaudible to humans but machine readable. The second is a decoder, which can live in media players, browsers, and apps, or be accessed via an API. It validates the content creator by decoding the embedded metadata and matching it against the creator's registered metadata, authenticating content with a very high degree of accuracy.
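To make the flow concrete, here is a minimal sketch of how a player or publishing app might call such a decoder through an API. The endpoint URL, field names, and response shape below are illustrative assumptions for this sketch, not Adori's published interface:

```python
# Hypothetical sketch of a client verifying content through a decoder API.
# The base URL, route, and JSON fields are assumptions, not Adori's real API.
import requests

def verify_content(media_url: str, api_base: str = "https://api.example.com") -> bool:
    """Ask the decoder service to extract the metadata embedded in the media
    at media_url and match it against the registered creator's metadata."""
    resp = requests.post(
        f"{api_base}/v1/verify",          # assumed route
        json={"media_url": media_url},    # assumed request field
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()
    # Assume the service returns {"authentic": true/false}.
    return bool(result.get("authentic"))

if __name__ == "__main__":
    if verify_content("https://cdn.example.com/episode.mp3"):
        print("Verified: embedded metadata matches the registered creator.")
    else:
        print("Warning: no valid authentication metadata found.")
```

In practice the same check could run inside a media player or browser extension, flagging unverified content before it ever reaches the audience.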
An advanced error-coding system ensures that there are no false positives. Adori's encoding survives modulation, demodulation, transcoding, and speaker-to-microphone attacks on the media, providing a shield for authentic sources of content.
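The intuition behind error coding is simple: attach redundancy to the embedded payload so that corrupted or coincidental bit patterns are rejected rather than misread as a valid creator ID. The toy example below uses a plain CRC32 checksum from Python's standard library purely for illustration; Adori's actual scheme is not public and is presumably far more robust than this:

```python
# Illustration only: a checksum-guarded payload that rejects damaged data
# instead of producing a false positive. CRC32 stands in for a real
# error-coding scheme here.
import zlib

def pack_payload(creator_id: bytes) -> bytes:
    """Append a CRC32 checksum so the decoder can detect corruption."""
    checksum = zlib.crc32(creator_id).to_bytes(4, "big")
    return creator_id + checksum

def unpack_payload(payload: bytes) -> bytes | None:
    """Return the creator ID only if the checksum verifies; otherwise None."""
    if len(payload) < 4:
        return None
    creator_id, checksum = payload[:-4], payload[-4:]
    if zlib.crc32(creator_id).to_bytes(4, "big") != checksum:
        return None  # damaged in transit (e.g., by transcoding): reject
    return creator_id

if __name__ == "__main__":
    payload = pack_payload(b"creator-0042")
    assert unpack_payload(payload) == b"creator-0042"
    # Flip one bit to simulate channel damage; verification now fails.
    damaged = bytes([payload[0] ^ 0x01]) + payload[1:]
    assert unpack_payload(damaged) is None
    print("Checksum demo passed: damaged payloads are rejected.")
```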
To embed this revolutionary capability into your own multimedia platform, please contact Adori for a demo and more information.
References:
1) Interactive Entertainment System, U.S. Patent 11,133,883, issued Sep. 28, 2021
2) Audio Encoding For Functional Interactivity, U.S. Patent 10,839,853, issued Nov. 17, 2020