One of the things that caught my eye at Nvidia’s flagship event, the GPU Technology Conference (GTC), was Maxine, a platform that leverages artificial intelligence to improve the quality and experience of video-conferencing applications in real time. Maxine uses deep learning for resolution improvement, background noise reduction, video compression, face alignment, and real-time translation and transcription. In this post, which marks the first installment of our “deconstructing artificial intelligence” series, we will take a look at how some of these features work and how they tie in with AI research done at Nvidia. We’ll also explore the pending issues and the possible…