Facebook’s AI Can Tell Where Deepfakes Come From



AI is a remarkable technology, but as useful as it is, it can also be harmful. A prime example is deepfakes, in which AI is abused to create fake photos or videos of people that are convincing to the untrained eye. This is why companies like Adobe have developed tools that can detect these sorts of fakes.

However, detecting a manipulated photo or video only scratches the surface. Media files are shared so easily that it can be hard to trace them back to their source, but Facebook thinks it may have found the answer. The company, together with Michigan State University, has developed an AI that is not only capable of detecting deepfakes, but can also discover where a deepfake came from by reverse engineering it.

As the researchers explain: “We begin with image attribution and then work on discovering properties of the model that was used to generate the image. By generalizing image attribution to open-set recognition, we can infer more information about the generative model used to create a deepfake that goes beyond recognizing that it has not been seen before.”
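To make that idea concrete, here is a minimal Python sketch of open-set attribution. It is only an illustration of the concept, not Facebook and MSU's actual system: the Laplacian high-pass residual standing in for a learned generator fingerprint, the cosine-similarity matching, and the 0.5 threshold are all assumptions made for the sake of the example.

```python
import numpy as np
from scipy.ndimage import laplace

def fingerprint(image):
    """Toy generator fingerprint: the normalized high-frequency residual."""
    residual = laplace(image.astype(np.float64))  # simple high-pass filter
    flat = residual.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)  # unit-normalize for cosine

def attribute(image, known_fingerprints, threshold=0.5):
    """Match an image against known generators, with an open-set fallback.

    known_fingerprints: dict mapping generator name -> unit fingerprint vector
    (all images are assumed to share the same resolution).
    """
    f = fingerprint(image)
    scores = {name: float(np.dot(f, g)) for name, g in known_fingerprints.items()}
    best = max(scores, key=scores.get)
    # Open-set recognition: refuse to attribute when nothing matches well.
    label = best if scores[best] >= threshold else "unseen-model"
    return label, scores

# Tiny demo with random "images"; a real pipeline would use actual deepfakes.
rng = np.random.default_rng(0)
known = {"gan-a": fingerprint(rng.random((64, 64)))}
label, scores = attribute(rng.random((64, 64)), known)
print(label, scores)  # likely "unseen-model": random noise won't match
```

The open-set step is the fallback branch: rather than forcing every image onto a known generator, anything that doesn't match well enough is flagged as coming from a model the system has never seen.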

The system doesn’t just work on a single deepfake; it can also compare and trace similarities across a series of deepfakes. That means it could be used to trace groups of manipulated images back to a single generative source, potentially making it easier to track coordinated misinformation campaigns.
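Continuing the sketch above (and reusing its hypothetical `fingerprint()` helper), that grouping could look like a simple similarity clustering. Again, the greedy loop and the 0.8 threshold are illustrative assumptions rather than the researchers' actual method.

```python
def group_by_source(images, threshold=0.8):
    """Greedily cluster images whose fingerprints are highly similar."""
    groups = []  # each entry: (representative fingerprint, member indices)
    for i, img in enumerate(images):
        f = fingerprint(img)
        for rep, members in groups:
            if float(np.dot(f, rep)) >= threshold:  # same suspected source
                members.append(i)
                break
        else:  # no close match: start a new group for a new suspected source
            groups.append((f, [i]))
    return [members for _, members in groups]
```

Each returned group is a set of image indices sharing a near-identical fingerprint, which is the kind of signal that could link a batch of fakes back to one source.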

Filed in General. Source: Engadget
