As if convincing AI-faked audio and video weren’t concerning enough, the Computer Vision Lab at the University of Nottingham has developed an AI system capable of reconstructing fairly accurate 3D faces from single photographs. I uploaded a photo of myself to the demo tool, and a few seconds later it produced the following model (a GIF captured from screen video of me rotating the 3D model):
Imagine what AI can do with multiple images and videos of you (from your social media posts, your phone’s photo and video library, surveillance footage, etc.). One takeaway is the need for vigilance and skepticism: if you see or hear something in digital media (online, or sent to you via email, IM, etc.) that is too terrible, wonderful, or shocking to be true, it probably isn’t. For now, at least, it’s still possible to detect forged media (and fake news, though you probably don’t want to), but soon it will take AI tools to spot the work of other AI tools, and we’ll then have to decide which AIs to believe. The make-versus-detect forgery arms race is accelerating.
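One low-tech defence that still works, no AI required: checking a media file against a cryptographic hash published by its original source. If a "leaked" clip's hash doesn't match the original's, it has been altered somewhere along the way. A minimal sketch (the byte strings stand in for real media files):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a blob of media bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholder content standing in for a published video and a doctored copy.
original = b"original video bytes"
tampered = b"original video bytes, subtly edited"

# Matching digests mean the bytes are untouched; any edit changes the hash.
print(sha256_of(original) == sha256_of(original))  # True
print(sha256_of(original) == sha256_of(tampered))  # False
```

Of course, this only proves a file differs from a known original; it can't tell you whether a file with no trusted reference copy is genuine, which is exactly where the AI-versus-AI detection arms race comes in.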
Okay, still smarting from my suggestion that you may not want to detect when the news you enjoy and agree with is fake? Watch the following video, then exercise your media literacy by researching cognitive biases.
Related links (note the interesting examples of cognitive bias and trolling in many of the comments):