
See GANverse3D recreate KITT from Knight Rider in 3D

Nvidia has posted a fun demo showing how GANverse3D, its new deep learning engine, was used to recreate KITT from Knight Rider – the “first AI car” – in 3D.

The technology, due to be added to Nvidia’s Omniverse collaboration platform, can generate a textured 3D model of any type of real-world object on which it has been trained, from a single 2D image.

Uses StyleGAN as a synthetic data generator to train an inverse graphics framework
Nvidia’s method uses StyleGAN, its open-source image synthesis system, to generate multi-view images from real-world reference data: in this case, photos of cars available publicly online.

The synthetic images, which infer how the real cars would look when seen from a series of standard viewpoints, can then be used to train Nvidia’s DIB-R inverse graphics framework.

In its turn, the framework infers a 3D model from the 2D images.
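
To make that data flow concrete, here is a minimal PyTorch sketch of the idea – not Nvidia's actual code. A placeholder generator stands in for StyleGAN's multi-view output, a small image-to-mesh network stands in for the DIB-R-based framework, and a dummy differentiable renderer closes the loop so the predicted mesh can be compared against the synthetic views. All class and function names below are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshPredictor(nn.Module):
    """Hypothetical image-to-mesh network standing in for Nvidia's DIB-R framework."""
    def __init__(self, n_vertices=642):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.vertex_head = nn.Linear(64, n_vertices * 3)   # per-vertex 3D positions
        self.texture_head = nn.Linear(64, n_vertices * 3)  # per-vertex RGB colours

    def forward(self, image):
        feats = self.encoder(image)
        return self.vertex_head(feats), self.texture_head(feats)

def gan_multiview_batch(batch=8):
    """Stand-in for StyleGAN: returns synthetic car views plus camera parameters."""
    return torch.rand(batch, 3, 128, 128), torch.rand(batch, 6)

def differentiable_render(vertices, textures, cameras):
    """Stand-in for the DIB-R renderer. It ignores the cameras and just mixes the
    predictions into an image-shaped tensor so gradients can flow back to the model."""
    b = vertices.shape[0]
    mixed = (vertices.mean(dim=1, keepdim=True) + textures.mean(dim=1, keepdim=True)) / 2
    return mixed.view(b, 1, 1, 1).expand(b, 3, 128, 128)

model = MeshPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(10):  # the real training reportedly used ~55,000 GAN-generated images
    images, cameras = gan_multiview_batch()
    vertices, textures = model(images)          # predict a mesh from each synthetic view
    rendered = differentiable_render(vertices, textures, cameras)
    loss = F.l1_loss(rendered, images)          # compare re-rendered views with GAN views
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the real pipeline the renderer is DIB-R itself, so the reconstruction loss directly shapes the predicted geometry, texture and lighting; the toy renderer above only preserves the gradient path to keep the sketch runnable.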

In the case of the KITT demo, the inverse graphics framework was trained with 55,000 car images generated by the GAN: a process that took 120 hours on four V100 data center GPUs.

The trained framework took just 65ms to generate a 3D model from a source image of KITT on a single V100.
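
As a rough illustration of how a per-image timing like that would be measured, here is a hedged sketch that reuses the hypothetical MeshPredictor class from the example above – so the 65ms figure obviously does not apply to it.

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MeshPredictor().to(device).eval()          # hypothetical predictor from the sketch above
image = torch.rand(1, 3, 128, 128, device=device)  # stand-in for a single photo of KITT

with torch.no_grad():
    if device == "cuda":
        torch.cuda.synchronize()                   # make sure the GPU is idle before timing
    start = time.perf_counter()
    vertices, textures = model(image)
    if device == "cuda":
        torch.cuda.synchronize()                   # wait for the forward pass to finish
    elapsed_ms = (time.perf_counter() - start) * 1e3

print(f"Single-image mesh prediction took {elapsed_ms:.1f} ms")
```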

Nvidia then created a driving sequence using the model inside Omniverse, including converting the original predicted textures into higher-quality materials.

You can read a longer overview of the workflow in a post on the Nvidia Developer blog, along with a link to the research paper on which it is based.

Not ready for hero renders just yet, but could be used as background content for arch viz
For production work, the 3D models of KITT and the other cars shown in the video leave a lot to be desired.

They’re not particularly detailed, and they’re often distorted, particularly around the roof – a part of a car not often shown in online photos.

However, they were generated from single source images, without any human input, and provide a starting point that could be refined manually.

Nvidia suggests that they could even be used as-is as “ambient and background content” in situations where a higher-quality asset, like a stock 3D model, would be “overkill”: for example, to populate the large parking lots seen in the background of architectural visualisations.

Available soon as both source code and an Omniverse app
Nvidia aims to release the source code for GANverse3D publicly this summer. The firm also plans to add the technology to Omniverse as part of an ‘AI playground’ app.

Read more about the making of Nvidia’s Knight Rider demo on the firm’s developer blog
