NVIDIA Omniverse FAQ - Application

  By: Leadtek AI Expert

More and more users are exploring the NVIDIA Omniverse platform through the Open Beta trial and sharing the problems they face on the NVIDIA Developer Forum.

Let's look at the solutions to five frequently asked questions.

How do I simulate crowds in Omniverse?

There are two approaches you can try.

The Omniverse Unreal Connector can export level sequence and animation data. If your cars and pedestrians move via a Level Sequence, then the meshes, materials, keyframes, and skeletal-mesh animation should all export to USD, and you may have exactly what you're looking for. If you rely heavily on Blueprints or other Unreal systems to drive motion that doesn't export properly, you can use the Sequence Recorder (or Take Recorder) to capture the actors' motion into a level sequence. Once you have a level sequence, the export is typically smooth.

Another option is a tool like Atoms Crowd; at one point an NVIDIA engineer tested its crowd export to USD from the Unreal plugin, and it worked very well.


Is there currently any solution to take text, run it through TTS to generate a WAV file, and have a MetaHuman read the text with lip sync in real time in Unreal?

NVIDIA will be releasing a new version very soon that supports streaming TTS audio into Audio2Face.

Real-time animation playback on a MetaHuman is not supported yet. The closest solution is to export the blendshape animation (using blendshape conversion) and load it into a MetaHuman in Unreal.



When is emotion control coming to Audio2Face?

NVIDIA is currently testing emotion control and it is due for release in Q1 of 2022.


Is it possible to create an interface with Omniverse that can interact by voice in real time (TTS/STT) with an assistant built, for example, in IBM Watson Assistant?

The short answer is yes, you can do this in Omniverse. One additional piece that will make it easier is allowing streaming TTS to drive Audio2Face, so your digital avatar can be driven by TTS directly out of the box. This is a feature NVIDIA is adding now and will release in a later version of the product.

NVIDIA doesn't have a turnkey way to achieve exactly this, but it does offer most of the parts needed for "interacting in real time by voice (TTS/STT) with an assistant developed, for example, in IBM Watson Assistant." NVIDIA Riva covers the conversational AI part, and you can use Audio2Face to show the facial animation for the voice output that Riva generates.



There are a workstation launcher and an enterprise launcher. Is the workstation launcher OVI and the enterprise launcher OVE? What's the difference?

The workstation launcher is the Omniverse for Individuals (Open Beta).

The Open Beta NVIDIA provides today is Omniverse for Individuals, so those two things are currently the same.

An artist could use Omniverse for Individuals (Open Beta) for personal projects and connect to a licensed OVE Nucleus for work (using one of the paid application licenses of that contract).


The enterprise launcher is for Omniverse Enterprise (OVE).

OVE includes an enterprise Nucleus that is not part of Omniverse for Individuals. NVIDIA Omniverse Enterprise is a new platform that includes the Omniverse Nucleus server, which manages USD-based collaboration; Omniverse Connectors, plug-ins for industry-leading design applications; and the end-user applications Omniverse Create and Omniverse View.



Explore NVIDIA Omniverse on Leadtek website.
