Documentation / Unity Package

Getting Started

After recording in the Capture Suite, you are now ready to create an experience in Unity. Soar supports Unity versions 2020.2 and newer. If building on Windows 10 or Windows 11, ensure your system is x64; Soar does not support 32-bit Windows or any 32-bit build devices. If building on Mac, ensure you are on macOS 10.15 (Catalina) or newer.

Depending on the output resolution you set for your compressed capture in the Capture Suite and how many triangles it has, you may be able to run multiple instances in your scene. With the default 2048 x 1024 resolution and 200K triangles, you may find you can run 2-4 instances, depending on the specific client device and how much RAM it has. For the Oculus Quest and Meta Quest 2 (formerly known as the Oculus Quest 2), we recommend 150K triangles.

The video width and video height are set within the Output section in the Capture Suite. The default setting is 2048 x 1024.

You can view your capture's triangle count on the Capture Suite playback screen, in the preview window during recording, and in the preview window during raw capture import. In order to modify the triangle count, you can decrease the volumization resolution setting in the Capture Suite, found within the Volumization section. The default volumization resolution setting is 256.

Important to note: utilize the Setup Wizard when importing the package; it can save you a lot of troubleshooting time.

Compatibility

Unity provides the option of selecting from two render pipelines — HDRP and URP — as well as the standard renderer, Core. On Apple devices, we support all three of these. On Windows, we support Core and URP (not HDRP). On Android devices, including the Oculus Quest and Meta Quest 2, we support Core and URP (not HDRP). On the Oculus Quest and Meta Quest 2, URP render scale must be set to 1. Also, Low Overhead Mode MUST be disabled for the Oculus Quest and Meta Quest 2.

Settings Specific to Oculus Quest and Meta Quest 2 builds:

Meta Quest Specific Settings

There are multiple graphics APIs to choose from in Unity. At the moment, we support Metal (Apple), OpenGL Core (Windows), and OpenGL ES 3.2 (Android/Oculus Quest and Meta Quest 2) — not Direct3D or Vulkan. Since there is no Direct3D support, this precludes us from being able to support Quest devices tethered to Windows, as well as HDRP on Windows for the time being.

We require a modern NVIDIA GPU on Windows as outlined here. Please make sure your drivers are up-to-date, as older drivers have been reported to cause playback issues.

The compatible build devices for the Unity Package are as follows:

Apple

macOS 10.15 (Catalina) and newer

iPhone X and newer running iOS 13+

iPad Pro models running iPadOS 13+

Apple TV 4K models running tvOS 14+

Android

Android 10+ (API 29+)

Oculus Quest

Meta Quest 2

Windows

Windows 10

Windows 11

For deploying a scene with up to three or four simultaneously-playing captures to devices, we recommend the iPhone 12 Pro, iPhone 12 Pro Max, iPhone 13 Pro, iPhone 13 Pro Max, and (due to memory requirements) only iPad Pro models. All other supported iPhone and iPad models, as well as supported Apple TV models, support up to two simultaneously-playing captures. Android devices will vary by maker, but you should be able to run a similar number of instances as you can on iPhone devices. Windows machines and Macs can support up to three. We recommend only one instance for the Oculus Quest and Meta Quest 2.

The triangle count and the video width and video height of your capture will determine your performance and the number of instances you can play. The triangle count for a capture is visible within the Capture Suite on the playback screen, within the preview window during recording, and in the preview window during raw capture import. Soar recommends keeping your capture below 200K triangles (150K triangles for the Oculus Quest and Meta Quest 2). In order to reduce the triangle count, you can decrease the volumization resolution setting in the Capture Suite, found within the Volumization section. The video width and video height settings are set within the Output section in the Capture Suite prior to export. The default setting is 2048 x 1024.

To avoid script compilation errors, the Mathematics and High Definition RP packages MUST be included in your project.

For the Oculus Quest and Meta Quest 2, if you encounter issues such as crashing on start, ensure your headset is updated to the latest firmware.

For Apple Devices

Unity’s native Apple silicon build of the Editor is still in active development and contains bugs that can cause crashes. Because of this, we recommend using a Rosetta (Intel) build of the Unity Editor on M1 devices.

Some of our users have encountered an issue loading SoarUnityPluginMac.bundle on Apple devices, especially M1 devices. To fix this issue, open Terminal and run the following command (Xcode Command Line Tools must be installed on your machine):

xattr -d com.apple.quarantine <path to bundle>/SoarUnityPluginMac.bundle

Importing the Package (Apple)

To get started, import the Soar Unity Package into a new Unity Project:

  • Click Assets.

Click Assets

  • Scroll down to Import Package, then click Custom Package.

Import Package

  • Navigate to the Soar Unity Package and click Open.

Open Package

  • By default, all assets will be imported from the Unity package. Click Import.

Import Package

  • After importing the package, a popup should appear: this is our Setup Wizard, which configures rendering settings for your chosen render platform and downloads any package dependencies. Select your desired render platform and click "Configure Project."

NOTE: If the Setup Wizard does not appear after importing the package, it can be accessed from Window -> Soar -> Show Setup Wizard

Soar Wizard

  • Now you should see all the imported assets in your project's Assets folder.

Imported Package

Importing the Package (Windows)

To get started, import the Soar Unity Package into a new Unity Project:

  • Click Assets.

Assets Tab Windows

  • Scroll down to Import Package, then click Custom Package.

Custom Package Windows

  • Navigate to the Soar Unity Package and click Open.

Open Package Windows

  • By default, all assets will be imported from the Unity package. Click Import.

Import Assets Windows

  • After importing the package, a popup should appear: this is our Setup Wizard, which configures rendering settings for your chosen render platform and downloads any package dependencies. Select your desired render platform and click "Configure Project."

NOTE: If the Setup Wizard does not appear after importing the package, it can be accessed from Window -> Soar -> Show Setup Wizard

Setup Wizard Windows

NOTE: Our plugin is currently only compatible with the OpenGLCore Graphics API on Windows platforms. To configure your project to use OpenGLCore, go to Project Settings and open the Player Settings tab. Under the Other Settings section, navigate to the rendering section. This is how your graphics API settings should look on Windows.

NOTE: While in Player Settings, ensure that the color space setting is set to Gamma for every platform except when building for Quest devices (within the Android section); it should be set to Linear for ONLY Quest devices, or else you will render a black mesh.

Windows Graphics API

  • Now you should see all the imported assets in your project's Assets folder.

Assets Folder Windows

Now that we have our settings fully set, we can start using our Volumetric Captures!

Importing the Package (Looking Glass)

  • After importing the Looking Glass SDK, all you need to do is drop their prefab in the scene. You can now utilize a compressed capture on a Looking Glass display. Since there are no speakers on the Looking Glass display, audio will not be played.

Rendering Pipelines

NOTE: We recommend using the Soar Setup Wizard to set up your rendering pipeline, as it will take care of all package dependencies automatically; but if you would like to do so manually, here are the necessary steps.

We provide materials for each render pipeline provided by Unity. Depending on the pipeline you're using, make sure you use the corresponding material. Selecting the wrong material will result in the capture not displaying correctly, typically as a black or pink mesh.

Pipeline Materials

Setting up the Universal Render Pipeline (Apple)

Before we start, your Unity project needs the Universal RP package. To include that package, click Window > Package Manager.

From the Package Manager navigate to Universal RP.

URP Package Manager

Our package already includes a configured render pipeline asset to use with the Unity Universal Render Pipeline. To include our pipeline assets with URP, in the top navigation bar click Edit > Project Settings and navigate to the Graphics tab. In the Scriptable Render Pipeline Settings field, add the included Render Pipeline Asset.

URP Graphics Tab

Setting up the High Definition Render Pipeline (Apple)

Before we start, your Unity project needs the High Definition RP package. To include that package, click Window > Package Manager.

From the Package Manager navigate to High Definition RP.

HDRP Package Manager

After including the High Definition RP package, the Render Pipeline Wizard will pop up asking you to fix configuration settings; when it appears, click Fix All.

Fix All Render Pipeline Wizard

While fixing your configuration settings, the Render Pipeline Wizard will ask you to load or create an HDRenderPipelineAsset. The package already includes a configured asset, so click Load One and select the included HDRenderPipelineAsset.

When your project finishes updating, you should see nothing but green check marks.

HDRP Green Checks

Setting up Universal Render Pipeline (Windows)

Before we start, your Unity project needs the Universal RP package. To include that package, click Window > Package Manager.

From the Package Manager navigate to Universal RP.

URP Package Manager

Our package already includes a configured render pipeline asset to use with the Unity Universal Render Pipeline. To include our pipeline assets with URP, in the top navigation bar click Edit > Project Settings and navigate to the Graphics tab. In the Scriptable Render Pipeline Settings field, add the included Render Pipeline Asset.

URP Graphics Tab

Using the SDK

Offline Content

Now that the Soar Unity Package is imported, head to the Streaming Assets folder.

Streaming Assets

Inside the Streaming Assets folder, head to the VOD folder. Drag and drop all of the files you exported from the Capture Suite. When creating our volumetric content, we are most concerned with the .m3u8 playlist file.

BIN File

NOTE: If your capture includes audio, make sure to also include your .m4a audio file and the audio manifest file. Captures support Spatial/3D Audio, which is applied automatically when available; however, it MUST be enabled through the Capture Suite.

Audio Files

After importing the files, you are ready to add the content to your Unity scene. Within the Prefabs folder there are various prefabs we are going to use for our offline volumetric content. In any scene using volumetric content we will need to add a Volumetric Controller. The Volumetric Controller is used to keep track of each instance of volumetric content. The Volumetric Model and Volumetric Model Apple TV prefabs are very similar, but we will only use the Volumetric Model Apple TV prefab when building to Apple TV devices. Since we are going to be building to iOS in this scenario, we are going to add the Volumetric Model to our scene.

Volumetric Model

Inside the Prefab is a script titled Volumetric Render. Clicking the button to the right of the text box will open up a file selection window that will allow you to pick which volumetric capture you'd like to use from your Streaming Assets folder.

Canned Script

NOTE: You can add this file selection window to any public string or string array variable by adding the attribute shown above.

Volumetric Stream File

Within the Volumetric Render script, there are functions created to facilitate full playback functionality, which include Play, Pause, and Stop. There is also functionality to enable Auto Play and Auto Loop.

Autoplay/Autoloop

Playback Functions

To get access to and invoke these playback functions, we first need to get a reference to the Playback Instance component attached to the gameobject.

public PlaybackInstance instance;

void Start()
{
  instance = gameObject.GetComponent<PlaybackInstance>();
}

Now that we have a reference to the Playback Instance component, we have full access to the playback functions built into it.

  • Pause
    public void PlaybackPause()
    {
      instance.Pause();
    }
  • Stop
    public void PlaybackStop()
    {
      instance.Stop();
    }
  • Play
    public void PlaybackStart()
    {
      instance.Start();
    }
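As a usage sketch, the wrappers above can be wired to standard Unity UI Buttons. PlaybackInstance is the package component described above; the Button fields below are illustrative names you would assign yourself in the Inspector.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Usage sketch: hook the playback functions to UI Buttons.
// PlaybackInstance is the package component; the Button fields are
// illustrative and must be assigned in the Inspector.
public class PlaybackButtons : MonoBehaviour
{
    public Button playButton;
    public Button pauseButton;
    public Button stopButton;

    private PlaybackInstance instance;

    void Start()
    {
        instance = gameObject.GetComponent<PlaybackInstance>();
        playButton.onClick.AddListener(() => instance.Start());
        pauseButton.onClick.AddListener(() => instance.Pause());
        stopButton.onClick.AddListener(() => instance.Stop());
    }
}
```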

Volumetric Behavior

We have added a host of functions and capabilities to give you more control over the behavior of your volumetric content.

NOTE: For these behaviors to take effect, Auto Loop must be disabled on the Volumetric Render script.

Volumetric Behavior

  • Hide On Complete: When your volumetric content has finished playing through, the model will no longer be rendered, but will remain in the scene.

  • Dispose On Complete: When your volumetric content has finished playing through, the model will be destroyed and removed from your scene.

  • On Completed: This is an event hook where you set a function to be triggered when your volumetric content has finished playing through.

Here is an example use case of this behavior functionality: Dispose On Complete is enabled, and a function hooked into our On Completed event triggers a particle effect.
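A minimal sketch of such a handler, assuming you assign it to the On Completed event in the Inspector (the particle-system field name here is our own):

```csharp
using UnityEngine;

// Assigned to the On Completed event on the Volumetric Render script.
// With Dispose On Complete enabled, the model object is destroyed when
// playback ends, so this script lives on a separate gameobject.
public class CompletionEffect : MonoBehaviour
{
    public ParticleSystem burst; // illustrative field; assign in the Inspector

    public void OnCaptureCompleted()
    {
        burst.Play(); // fire the particle effect when the capture finishes
    }
}
```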

Conor Exploding

NOTE: Whenever you need to dispose of an instance programmatically, never call the Dispose method explicitly. Instead, simply use Unity's built-in Destroy method; we call Dispose automatically whenever an instance object is destroyed.

Rendering Settings

Within our Volumetric Render component we offer a number of options and settings to change the end result of your volumetric capture. These settings can help with optimization on lower-end machines or mobile devices.

Depth Scale

Our volumetric captures use the depth images provided by the capture cameras to reconstruct the subject as a volumetric capture. To help with processing time, we give you the ability to change the scale of that incoming depth texture.

Texture Bleed

As a developer, you have the option to toggle a feature called Texture Bleed Reduction.

Texture Bleed Reduction allows for a more refined application of textures to the volumetric mesh. This adds a greater level of detail to your model and removes jagged edges, at the expense of greater computing power.

Before Texture Bleed applied: Texture Bleed Before

After Texture Bleed applied: Texture Bleed After

PCF Sampling

Another way to control the quality of your capture is with our PCF Sample slider. This allows you to control the number of samples used per fragment by our fragment shader. The higher the value, the more computing power is needed.

Scrubbing

Our volumetric playback also facilitates playback scrubbing to give the user more control over their volumetric content. To set up playback scrubbing we first need two things: the length of the volumetric content playing, and the current time of the volumetric content playing. Luckily, we already have two properties that provide just that.

  • Get Duration of Content
    instance.FullDuration;
  • Get Current Timestamp
    instance.CursorPosition;

In our example SoarSampleScene we create a basic scrubbing UI using just the UI Slider object built into Unity.

Scrubbing UI GIF

We use the FullDuration property from our instance to set the maximum value of our scrubbing bar and the CursorPosition property to track the time value of the scrubber position.
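A sketch of that wiring, assuming the FullDuration and CursorPosition properties described above (the casts to float are an assumption about their underlying type):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Drives the scrubbing Slider from the playback instance each frame.
// FullDuration/CursorPosition are the properties described above; the
// "dragging" flag would be toggled by the Begin Drag / End Drag triggers.
public class ScrubberDisplay : MonoBehaviour
{
    public PlaybackInstance instance;
    public Slider scrubbingSlider;

    private bool dragging;

    public void OnBeginDrag() { dragging = true; }
    public void OnEndDrag() { dragging = false; }

    void Update()
    {
        scrubbingSlider.maxValue = (float)instance.FullDuration;
        if (!dragging)
            scrubbingSlider.value = (float)instance.CursorPosition;
    }
}
```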

Next, on the Slider object, we need to add an Event Trigger component so we can trigger functions when we grab the scrubber and let go of the scrubber. These triggers will be End Drag and Begin Drag.

Event Trigger Component

In the PlaybackControls.cs provided in the package, we have the functions written that handle the scrubbing functionality.

  • Seek To Timestamp
    public void SeekToTimestamp()
    {
      instance.SeekToCursor((ulong)scrubbingSlider.value);
      PlaybackStart();
    }
  • Get Slider Handle
    public void GetSliderHandle()
    {
      PlaybackStop();
    }

With these functions set we now have the ability to scrub to a certain time in our volumetric content.

Scrubbing UI In Use

Manipulation Controls are included in the package so you can rotate, translate, and scale the volumetric content.

The volumetric model is configured to appear directly in front of the phone in your AR scene.

ARKit (Apple) and ARCore (Google) will need to be integrated to view the content in Augmented Reality.

Streaming Locally

For local streaming, the workflow is extremely similar. We will be working with the same Volumetric Model prefab, but with one small difference.

Important to note: when you start the local stream, you will have to scrub ahead on the client device to reach the head of the stream. At first this will seem like latency, but it is the client device being "X" seconds behind, based on the time between the local stream starting and inputting the URL on the client device. You should notice only a few seconds of latency during streaming due to the HLS protocol.

Volumetric Model

Inside the Volumetric Render script we will replace our master file name with the URL that the volumetric model is being streamed from.

The firewall on your capture PC should be disabled to allow connections for a live stream on your local network.

Capture Suite: in order to stream to a device on your local network, you must select both Local Server and Compressed Capture within the Output section after you set a capture name and capture path. The port 8080 should suffice. Then select Preview and Record. On the device that you want to stream to, enter the streaming URL. The streaming URL starts with http://, followed by the local IP address of the capture PC (found by going to Network Connections on the PC, clicking into adapter settings, and viewing the IPv4 address), then :port/capture-name_seconds-since-epoch_master.m3u8.

The full capture name, appended with the seconds since epoch (1/1/2020 midnight GMT), is found within File Explorer at the capture path selected in the Output section as soon as you are recording/streaming. Ex: http://192.168.8.67:8080/Test_01642615614_master.m3u8. This is the stream URL you will input into Unity.
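As an illustration, the URL can be assembled in script from its parts; the values below are the placeholders from the example above, not real endpoints.

```csharp
// Illustrative only: build the HLS stream URL from its parts.
// Substitute your capture PC's IPv4 address, the port chosen in the
// Output section, and the actual capture file name found on disk.
string ip = "192.168.8.67";                           // local IPv4 of the capture PC
int port = 8080;                                      // port set in the Capture Suite
string captureFile = "Test_01642615614_master.m3u8";  // capture-name_seconds-since-epoch_master.m3u8
string streamUrl = $"http://{ip}:{port}/{captureFile}";
// streamUrl == "http://192.168.8.67:8080/Test_01642615614_master.m3u8"
```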

NOTE: All volumetric content streamed from a URL plays automatically.

Receiver Script

Loading Captures at Runtime

Along with full playback of captured files, we now offer the ability to load new capture clips at runtime with the click of a button. To load a new volumetric clip we need to get a reference to the Volumetric Render component to access the LoadNewClip function.

public VolumetricRender volumetricRender;

void Start()
{
  volumetricRender = gameObject.GetComponent<VolumetricRender>();
}

Now that we have a reference to the Volumetric Render component we can load up our new clip!

  • Load New Clip
    public void LoadNewClip()
    {
      volumetricRender.LoadNewClip(newClipFileName);
    }

The LoadNewClip function takes one argument, the file name or URL of the capture clip you want to load.

We updated our PlaybackControls.cs script to include a button to select the file you want to load from your Streaming Assets folder.

New Clip File Name

Now that our new clip is set, play the scene, click the Load Clip button, and you’ll see your volumetric video change immediately!

Load New Clip Sample

Instance Cloning

Our Instance Clone script allows users to create a large number of volumetric instances from one resource. This allows for crowd generation of volumetric content without using a large amount of resources.

Instance Clones

Our package contains a sample scene titled SoarVolumetricClones to show how to set up an army of instanced clones. In our sample scene we have a gameobject titled "CloneArmy". All of our instance clones live within this root gameobject.

Scene Hierarchy

Each clone gameobject has the Instance Clone script component attached to it. To generate the clone from your Volumetric Model gameobject, you need to set the Instance Renderer property to the gameobject in the scene that contains your Volumetric Render component.
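If you prefer to build a crowd from script rather than by hand, a hedged sketch might look like the following; the InstanceClone type and its renderer field are assumptions based on the Inspector labels, so check the package source for the exact names.

```csharp
using UnityEngine;

// Sketch: spawn a row of clones that all reference one Volumetric Render.
// "clonePrefab" is a prefab carrying the Instance Clone script; the line
// assigning the renderer reference is commented out because the exact
// field name on InstanceClone is an assumption.
public class CloneSpawner : MonoBehaviour
{
    public GameObject clonePrefab;      // prefab with the Instance Clone script
    public GameObject instanceRenderer; // gameobject with the Volumetric Render component
    public int count = 10;

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            GameObject clone = Instantiate(clonePrefab, transform);
            clone.transform.localPosition = new Vector3(i * 1.5f, 0f, 0f);
            // clone.GetComponent<InstanceClone>().InstanceRenderer = instanceRenderer;
        }
    }
}
```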

Instance Component

When playing the scene you can see the reference the clone object is using through our provided gizmos.

Clone Reference

Chroma Key

We have added a chroma key feature to our Volumetric Render component to let you key out specific colors on your volumetric capture.

Looking at the updated Volumetric Render component, we can see the four main controls of the chroma key feature: Chroma Color, Gain, Bias, and the Override Chroma checkbox.

Chroma Key Settings

Chroma Color is the color to be keyed out.

Gain controls how strongly a pixel is de-weighted based on its distance from the chroma color; it also controls de-saturation.

Bias is the base-level cut-off: how far a color is from the selected chroma key.

Chroma keying is disabled by default. If you would like chroma keying applied to your volumetric capture, you must check the Override Chroma checkbox.
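To drive these settings from script, a sketch might look like the following; the member names are guessed from the Inspector labels (Chroma Color, Gain, Bias, Override Chroma) and may differ, so check the Volumetric Render source for the exact API.

```csharp
using UnityEngine;

// Sketch: enable chroma keying at startup. The assignments are commented
// out because the field names are inferred from the Inspector labels and
// may not match the actual VolumetricRender members.
public class ChromaSetup : MonoBehaviour
{
    void Start()
    {
        VolumetricRender render = gameObject.GetComponent<VolumetricRender>();
        // render.OverrideChroma = true;        // enable keying (off by default)
        // render.ChromaColor = Color.green;    // color to key out
        // render.Gain = 1.0f;                  // de-weighting strength
        // render.Bias = 0.1f;                  // base-level cut-off
    }
}
```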

Included in the package is a capture that has chroma keying applied. Here we can see the model without chroma keying applied.

Capture No Chroma Applied

After checking the Override Chroma checkbox, we can now see the desired colors applied to our model!

Capture Chroma Applied

Volumetric Capture Preview

To help with faster iteration and development, we added a feature within our SDK to create an animated preview of your volumetric capture. All that’s needed is an extra script on your gameobject.

On the same gameobject that has the Volumetric Render component, click Add Component and search for Create Preview.

Create Preview Script

Looking at the Create Preview component we can see that it is very simple.

Create Preview Component

The Create Preview component will create a fully animated preview of whatever capture is set in your Volumetric Render script. All you have to do is set the Create Preview bool to true, and your capture will begin animating while the Unity Editor is in Edit Mode!

Create Preview Sample

We included full playback and scrubbing controls in our preview script to give you full control of your preview.

Create Preview Playback

The Create Preview component also gives you the ability to tie your preview to the Unity Timeline by including a Playable Director, to help speed up iteration. To attach a Playable Director, set the "Attach To Director" boolean to true. This will cause an object field to appear within the component, allowing you to set a Playable Director in the Create Preview component.

Attach Director to Preview

Dynamic Lighting

NOTE: On Apple products we support the Core, URP and HDRP pipelines. On Windows we currently support only Core and URP.

Dynamic lighting can be enabled through a checkbox on the Volumetric Render script. It cannot be enabled at runtime; to enable lighting, you must check the box before playing the scene. We offer standard dynamic lighting as well as per-pixel dynamic lighting.

Enable Lighting

As a developer, you have the ability to adjust the ambient lighting color/intensity as well as the color/intensity of the textures themselves, allowing for more customization of the look of your volumetric capture. These values are accessed through the Material applied to the ModelMesh gameobject.

Dynamic Lighting:

Dynamic Lighting

Ambient Color/Intensity:

Ambient Color Intensity

Texture Color/Intensity:

Texture Color Intensity

These three values can be used together to get some very unique looks for your model!

Colored Capture

Timeline Playback

Our volumetric captures can be integrated with the Unity Timeline, allowing for frame-specific sequencing to help make volumetric filmmaking even easier!

Included in the Unity Package is a sample scene titled SoarTimelineSample demonstrating our Timeline integration.

In our SoarTimelineSample scene all of our Timeline components and scripts are set up on the Director gameobject.

Timeline Scene

In our sample we have two Volumetric Models in the scene; the Timeline contains two Volumetric Render Tracks (one for each model), and each track has a Volumetric Render Clip for its Volumetric Model.

Each Volumetric Render Clip has a dropdown where you specify the capture file you intend the clip to use. This allows you to create multiple clips on one Volumetric Render Track, each loading a different capture clip!

Clip Attributes

NOTE: When set in the Timeline, Volumetric Models need the Auto Play and Auto Loop boxes unchecked to allow the Timeline to control model playback.

Unchecked Boxes

In our sample we set our Playable Director component to Play on Awake, so when you begin playing in the editor, you will see the cursor move through the Unity Timeline, and the Volumetric Model will play when the cursor reaches its Volumetric Render Clip.

Timeline Playback

We allow scrubbing in the Unity Timeline, letting you move between clips at runtime and step through the Volumetric Models frame by frame.

Timeline Scrubbing

To add your own Volumetric Render Tracks and Volumetric Render Clips go through the following steps.

  • Right click the Track area in the Timeline and add a new Volumetric Render Track.

Add New Track

  • With the Volumetric Render Track highlighted, right click the Clip area within the Timeline and click Add Volumetric Render Clip.

Add New Clip

  • Next, we need to add a third Volumetric Model and connect the Volumetric Render component with the third Volumetric Render Track.

Connect to Track

  • Set the clip to the end and play the editor; when the timeline cursor hits it, your new clip will play!

Play New Clip

Soar VFX Graph

NOTE: Unity's VFX Graph is currently only supported by the Universal Render Pipeline and High Definition Render Pipeline. VFX Graph will NOT work with Unity's Standard Render Pipeline (Core).

Before using the VFX assets included in the package, Unity's Visual Effect Graph and Mathematics packages must be included in your project.

We can download both of these right from the Unity package manager.

VFX Graph Package Manager

After downloading the Visual Effect Graph package, we need to set some preferences to get our shader graph working. To set these preferences, go to Unity -> Preferences -> Visual Effects.

VFX Preferences

The package contains a scene titled SoarSampleVFXScene. In order to see the VFX graph in action, open up that scene.

The VFX graph included in the package is responsible for re-creating the volumetric mesh using a particle system.

VFX Graph Demo

Let us take a closer look at the SoarVFXGraph.vfx asset to get a better understanding of how it is working.

There are four major components of the SoarVFXGraph, broken down into the "Spawn", "Initialize", "Update" and "Output" context blocks.

Spawn Initialize Node

In the VFX Spawn node, we pass our volumetric mesh to a vertex count operator, whose output tells the Spawn node to spawn exactly one particle for every vertex on our mesh.

In the VFX Initialize Particle node, we then place each particle at the position of its corresponding vertex, completely recreating our mesh using the VFX Graph and its particle system.

Now let us take a look at our Volumetric VFX block.

Output Node

This block takes the data from our SDK and uses custom HLSL code to set the color of each particle to match the color of the rendered mesh. Do not change the connected values or HLSL code in this block.

Now that we understand how the SoarVFXGraph works, we can start playing around with the update node.

Update Node

The Update node within the VFX Graph is where we can really start to experiment with the artistic look of our model. We have a Gravity node set in our Update node, but it is disabled. Enable it to see the effect gravity has on your particles.

VFX Graph Gravity

Now our particles fall as soon as they spawn! However, because the lifetime of our particles is set to a random value between 0.25 and 1 second, the particles respawn very quickly, and maintain the shape of our model.

However, we can add different force operators to our Update node to change how our particles move. Try replacing the Gravity operator with a Turbulence operator and see how that affects our particles.

Update Node Turbulence

Quite a different effect than the constant falling of the Gravity operator.

VFX Graph Turbulence

These built-in force operators can be heavily customized to suit your creative needs!

Oculus Quest and Meta Quest 2 Integration

Included in our package, we provide a sample scene showcasing how to use our volumetric content on an Oculus Quest or Meta Quest 2 device. Here we will go over the SoarSampleOculusScene and how to set up your project settings.

When opening the scene, you'll see it is set up very similarly to all the other samples. The main difference is that instead of a traditional camera we are using a customized Oculus Interaction Rig prefab and a separate Volumetric Model Oculus prefab.

Soar Oculus Prefabs

The Volumetric Model Oculus and SoarInteractionRigOVR prefabs will have script compilation errors if you are missing necessary plugins. Integrate the following plugins and asset packages to take care of these compilation errors.

  • Oculus XR Plugin Oculus XR Plugin
  • XR Plugin Management XR Plugin Management
  • Oculus Integration (Found in Unity Asset Store) Oculus Integration Assets

NOTE: This scene is currently only compatible with Oculus/Meta devices and can not be used with Unity's OpenXR plugin.

Now that all of these have been added and we have no more script compilation errors on our prefabs, we need to check out our project settings.

First, make sure that you are initializing the Oculus plugin on the Android build platform through XR Plugin Management.

Initialize Oculus

To build to Oculus devices, we need to have the color space set to Linear.

Linear Color Space

For our SoarSDK, we need to enable Require ES3.2.

Require ES3

For Quest devices, we need to set the minimum API level to Android 10 (API Level 29).

Set Android API

Lastly, we need to set our target architecture to support only devices with 64-bit architectures.

Set Target Architecture

Now that we have our project set up, we can go ahead and build out to our Oculus Quest or Meta Quest 2 device!

Oculus Built

Movement Controls:

  • Grab an object by pointing the cursor at it and holding down the left or right trigger.
  • Rotate an object by grabbing it with the right controller and moving the thumbstick left or right.
  • Move an object toward or away from you by grabbing it with the left controller and moving the thumbstick forward or backward.
  • Scale an object by grabbing it with both controllers: move the controllers apart to scale up, or together to scale down.
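The thumbstick rotate/translate behavior above can be sketched with the Oculus Integration `OVRInput` API. This is an illustrative stand-in, not the shipped MoveWithRay.cs; the class and field names are ours.

```csharp
using UnityEngine;

// Minimal sketch of thumbstick-driven rotate/translate for a grabbed object.
// Requires the Oculus Integration package (OVRInput). Names are illustrative.
public class GrabbedObjectMover : MonoBehaviour
{
    public float rotateSpeed = 90f;   // degrees per second
    public float moveSpeed = 1.5f;    // meters per second
    public Transform player;          // rig root, used as the "toward/away" axis

    void Update()
    {
        // Right thumbstick left/right rotates the object around the world Y axis.
        Vector2 right = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.RTouch);
        transform.Rotate(Vector3.up, -right.x * rotateSpeed * Time.deltaTime, Space.World);

        // Left thumbstick forward/back moves the object toward or away from the player.
        Vector2 left = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick, OVRInput.Controller.LTouch);
        Vector3 away = (transform.position - player.position).normalized;
        transform.position += away * left.y * moveSpeed * Time.deltaTime;
    }
}
```

In a full implementation you would gate this logic so it only runs while the object is actually grabbed (e.g., while the trigger is held).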

Included in the scene are simple interactive buttons that allow you to control playback of your volumetric model and a basic slider allowing you to scrub to certain times in playback.

The movement controls were scripted using Oculus Integration, and this functionality can be found in the MoveWithRay.cs file.

For more information on the Oculus Interaction SDK and how to build upon this core functionality go to the Oculus Developers website here.

Build Process

After you are satisfied with your application and want to test it on your iPhone/iPad, you can start the Build process to Xcode.

First we need to add the necessary shaders to our Graphics settings so they will always be included within the project when we build it. To do this click Edit, then Project Settings.

Project Settings

After opening the project settings, go to the Graphics tab.

Graphics Setting

Scroll down to Always Included Shaders. In here, we will add the SoarShader and the ColorCameraDepth shader.

Include Shaders
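If you prefer to automate this step, the Always Included Shaders list can be edited from an editor script by modifying the serialized GraphicsSettings asset. This is a generic Unity technique, not a Soar API; the shader name you pass must match the shader's full name as it appears in your imported package.

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

// Editor sketch that appends a shader to Graphics > Always Included Shaders
// by editing the serialized GraphicsSettings asset ("m_AlwaysIncludedShaders").
public static class AlwaysIncludedShaders
{
    public static void Add(string shaderName)
    {
        Shader shader = Shader.Find(shaderName);
        if (shader == null)
        {
            Debug.LogWarning($"Shader not found: {shaderName}");
            return;
        }

        var graphicsSettings = AssetDatabase.LoadAssetAtPath<Object>("ProjectSettings/GraphicsSettings.asset");
        var so = new SerializedObject(graphicsSettings);
        var list = so.FindProperty("m_AlwaysIncludedShaders");

        // Skip if the shader is already in the list.
        for (int i = 0; i < list.arraySize; i++)
            if (list.GetArrayElementAtIndex(i).objectReferenceValue == shader)
                return;

        list.InsertArrayElementAtIndex(list.arraySize);
        list.GetArrayElementAtIndex(list.arraySize - 1).objectReferenceValue = shader;
        so.ApplyModifiedProperties();
    }
}
#endif
```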

Now that we have the shaders included in the project, we can officially start the build process.

First, click File, then Build Settings.

Build Settings

Ensure that the correct scene is selected in Scenes in Build. If not, click Add Open Scenes.

Add Open Scenes

Make sure your platform is iOS. If it is not, click iOS, then click Switch Platform.

Switch Platform

Then, click either Build to just build the application or Build and Run to build the application and launch Xcode.

Build
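The build steps above can also be scripted with Unity's `BuildPipeline`, which is useful for CI setups. The scene path and output folder below are placeholders; substitute your own.

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Sketch of scripting the iOS build step above with BuildPipeline.
// Scene path and output folder are placeholders for your project.
public static class SoarIOSBuild
{
    [MenuItem("Soar/Build iOS")]
    public static void Build()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/MyVolumetricScene.unity" }, // placeholder
            target = BuildTarget.iOS,
            locationPathName = "Builds/iOS", // Xcode project output folder
            options = BuildOptions.None
        };
        BuildPipeline.BuildPlayer(options);
    }
}
#endif
```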

After Unity is finished building the application, Xcode will launch.

Xcode Build

Ensure you have added an Apple ID which can be used to build the application. If not, view the Apple documentation to set up a developer account. Add developer Account

Attach a compatible device and click the Play Button to build the application to the device. Start Build

Utilities

We've developed a suite of utilities and gizmos to help make project setup, debugging, and analysis of volumetric content a little bit easier for developers.

Debugging

On the CreateSDK script attached to the Volumetric Controller object, there is a dropdown menu that allows developers to select which type of logs are sent from our SDK to the Unity console.

Log Level

Below is a quick summation of the different log levels and what they output.

  • None: Displays no logs.
  • Warning (Default): Displays any warnings issued from the plugin.
  • Error: Displays all critical errors issued from the plugin.
  • Info: Displays information that may be useful to the user (e.g., when a file is opened).
  • Trace: Displays deep debugging information that may be useful to a more technical user.

Capture State

We have provided a small debug tool to help track the "state" of your volumetric capture instance. If you encounter any playback issues, the first thing to do is check the current state of your playback instance.

Here's a breakdown of each state and what they mean:

  • INVALID_DATA_ERROR: There is invalid data within the files, so processing can no longer proceed.
  • IO_ERROR: There was an IO error attempting to access the file or stream, so processing can no longer proceed.
  • READY: The instance is ready to play and provide data.
  • DECODE_CATCH_UP: The instance is behind on decoding and needs to catch up; this will pause playback. One common cause of this state is a mesh that is too dense, in which case you need to reduce triangles. This can be done prior to exporting in the Capture Suite by reducing the volumization resolution.
  • BUFFERING: The instance is fetching data from the file or stream; this will pause playback.
  • FETCHING_MAIN_MANIFEST: The instance is reading the primary manifest at startup.
  • INITIALIZING: The instance is performing initialization.
  • CLOSING: The instance is in the process of shutting down.
  • CLOSED: The instance has been closed.
  • SUSPENDING: The instance has an active suspend request and will suspend on the next update.
  • SUSPENDED: The instance is currently suspended, and playback is paused.
  • SUSPEND_RESUMING: The instance has requested a resume and will return to a valid state on the next update.

The state readout shown in the image above is simply a Unity canvas text element with a script attached that updates the text based on the volumetric model's current state.

Capture State Script
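A state readout like the one described above might be sketched as follows. Note that `SoarPlayback` and its `CurrentState` property here are hypothetical stand-ins (including the stub enum) for the actual Soar Playback Instance component; adapt the names to your SDK version.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical stand-ins for the real Soar playback component and state enum.
public enum CaptureState { READY, BUFFERING, DECODE_CATCH_UP /* ... */ }
public class SoarPlayback : MonoBehaviour { public CaptureState CurrentState; }

// Sketch of the canvas-text state readout described above.
public class CaptureStateLabel : MonoBehaviour
{
    public Text stateText;        // UI Text element on the canvas
    public SoarPlayback playback; // your Soar Playback Instance component

    void Update()
    {
        // Display the current state each frame, e.g. READY or BUFFERING.
        stateText.text = playback.CurrentState.ToString();
    }
}
```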

Gizmos

Within the Unity scene view, gizmos are displayed on the Volumetric Model GameObject to give you a visualization of the capture setup.

Gizmos

These gizmos display the bounding box of the capture setup and the location of all the cameras used during recording, labeled by index.

From the Playback Instance component, you have the ability to change which cameras are displayed.

Edit Gizmos

Soar Setup Window

We offer a wizard to help developers set up their project quickly and easily. On package import, a setup window will pop up asking which render pipeline you plan to use.

Setup Window

After selecting your target render pipeline, click Configure, and our wizard will download any package dependencies and configure all options automatically.

You can also access this setup wizard manually by going to Window -> Soar -> Show Setup Wizard.