Documentation / Capture Suite

Getting Started

Prior to reading this documentation, ensure you have read through and adhered to the Hardware Requirements found here. Also, if you haven't done so already, read through the Environment Setup documentation here. Many issues that affect calibration and capture quality stem from the capture room, so it's important that your environment adheres to the guidelines outlined in that documentation.

Important to note: use the Key Settings section found in this documentation here as a reference for how to modify the quality of your capture. This section also outlines a handful of the core components found within the Capture Suite.

Installation Requirements

The Capture Suite only supports x64 Windows 10 and 11.

Your NVIDIA GPU should have the Studio driver installed, not the Game Ready driver.

Capture Suite requires the Microsoft Visual C++ Redistributable, which is not bundled with its installer. Download the x64 version here.

We also require the Azure Kinect SDK, available here.

Ensure your Azure Kinects are up-to-date with the latest firmware. Instructions on how to do this can be found here.

If for any reason you need to reset the Azure Kinect firmware, instructions can be found here.

If you have an issue with recording a compressed capture, you will have to update your NVIDIA driver by going here. As noted above, we recommend installing the Studio driver, not the Game Ready driver.

On a fresh installation, the Capture Suite installer will enable the following settings automatically to ensure maximum performance (computer restart required):

  • Graphics Settings -> Hardware-accelerated GPU Scheduling.
  • Graphics Settings -> Graphics Performance Preference -> Browse and Select Soar Capture Suite -> High Performance.
  • Power & Sleep -> Additional Power Settings -> Power Mode is set to High Performance.

You will need to restart your computer for these changes to go into effect. Confirm these settings are set correctly after restarting your computer.

If you encounter rendering speed issues (especially with hardware sync enabled), you may have an issue with your RAM clock speeds. You can use CPU-Z to view this, which can be downloaded here. Within the Memory tab, make sure that you are not running in single channel mode (you want dual or quad channel, depending on your memory configuration) and that your DRAM Frequency is the maximum supported by your RAM. For example, if you're running in dual channel with 32GB of RAM at 3600 MHz, your DRAM Frequency should read about 1800 MHz (CPU-Z will typically show 1799 MHz). Timings must be modified in the BIOS: enabling the XMP profile (DOCP/EOCP equivalent) will set the ideal settings. Consult your motherboard manual to make sure your RAM is in the correct slots for dual or quad channel mode. The image below shows CPU-Z with correct settings for a machine with 32GB of RAM at 3600 MHz. CPU-Z
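As a quick sanity check on the numbers above: DDR memory transfers data twice per clock, so the DRAM Frequency that CPU-Z reports is simply the module's rated transfer rate divided by two. The helper below is our own illustration of that arithmetic, not part of any Soar tooling.

    def expected_dram_frequency_mhz(rated_transfer_rate_mt_s):
        """DDR memory transfers data twice per clock cycle, so the real
        clock (what CPU-Z reports as DRAM Frequency) is half the rated
        transfer rate printed on the module."""
        return rated_transfer_rate_mt_s / 2

    # A DDR4-3600 kit should read roughly 1800 MHz in CPU-Z
    # (1799.x MHz in practice, due to clock rounding).
    print(expected_dram_frequency_mhz(3600))  # 1800.0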

If you find you are getting corrupt JPEG errors (that are not due to faulty cables or hardware) and have rendering issues after enabling these settings (especially after enabling hardware sync), you might have to modify a setting or two in your BIOS. Enable the XMP profile (DOCP/EOCP equivalent) so that you are running the optimal settings for your RAM. Also, force PCIe to Gen 3, as Gen 1 will not run well - this setting may be named differently if you are not using a motherboard found on Soar's required hardware page.

Azure Kinect Power Light Indicators:

  • Not Lit: The device is not powered and not connected to the PC. Make sure that the round power connector cable is connected to the device and to the USB power adapter. Also, ensure that the USB-C cable is connected to the device and to a USB 3.0 port on your PC.
  • Solid White: The device is powered on and working correctly.
  • Flashing White: The device is powered on but doesn't have a USB 3.0 data connection. Make sure that the round power connector cable is connected to the device and to the USB power adapter. Make sure that the USB-C cable is connected to the device and to a USB 3.0 port on your PC. Connect the device to a different USB 3.0 port on the PC. On your PC, open Device Manager (Start > Control Panel > Device Manager), and verify that your PC has a supported USB 3.0 host controller.
  • Flashing Amber: The device doesn't have enough power to operate. Make sure that the round power connector cable is connected to the device and to the USB power adapter. Make sure that the USB-C cable is connected to the device and to a USB 3.0 port on your PC.
  • Amber, then Flashing White: The device is powered on and is receiving a firmware update, or the device is restoring the factory settings. Wait for the power indicator light to become solid white.

More Azure Kinect troubleshooting instructions can be found on Microsoft's website here.

The Capture Suite can support up to 10 cameras, though performance may vary depending on your hardware specs. A 10-camera setup requires 3 StarTech USB 3.0 PCIe cards for USB bandwidth (no cameras plugged directly into the motherboard) and hardware on the stronger side of the hardware requirements page: a Threadripper Pro 5000 with 32+ cores, an RTX 3090, and 64GB of DDR4 RAM - this is also necessary to utilize color resolutions above 1080p for all 10 cameras. The maximum value for the "delay from primary" setting found within each camera tab (within Color Controls) is 1450 for most depth modes (320 x 288, 640 x 576, 512 x 512) and 2390 for depth mode 1024 x 1024.

Starting the Capture Suite

Licensing

Capture Suite requires a license key. We support both offline and online activations.

For online activation, paste the key into the license entry field when prompted.

For an offline activation, launch the Capture Suite, enter the license key you were given, and select "Offline Activation Request". You will be prompted to save a TXT file. Send the TXT file to Soar at licensing@streamsoar.com. We will send back a DAT file. Return to Capture Suite and click "Activate Offline License". You will be prompted to load the DAT file and the license will activate.

Main Screen

When launching the Capture Suite, all of your connected cameras should show up on the camera tab bar at the very top of the window. Upon initial launch, these tabs will be grey. A grey tab signals that a camera has both its color and depth modules disabled. To enable the cameras, click into any camera tab and open the color and depth sections. If you would like to enable all cameras at the selected resolution, select apply to all next to the resolution setting, then click enabled. Do this for both color and depth. 1080p and 640 x 576 are the recommended color and depth resolutions, respectively. The camera tabs are color coded:

  • Green: Calibrated
  • Yellow: Calibrating
  • Red: Not Calibrated

These tabs will also show notifications if a color or depth camera is disabled or if the camera's temperature is out of the acceptable range, which may diminish quality. Tabs can be reordered, but the order does not persist across Capture Suite launches. If you are missing a camera that is connected, click the "Refresh" button found on the capture tab. If the camera is plugged in but still does not appear, it is not communicating properly with your computer. You may have a USB bandwidth issue or a faulty USB extension cable. Consult the troubleshooting guide.

Clicking "Log", found on the capture tab, will open a window flagging potential issues as you use the Capture Suite.

If you modify any setting in the Capture Suite, you must save it via the Profile section, or by pressing Ctrl + S. You can save to the default path, which will be loaded at every Capture Suite launch. You can also click "Save Named", which allows you to save multiple profiles for future loading.

Also, to manually input a value for a setting, Ctrl + click on the setting.

Calibration

Before you can record content, you need to calibrate your cameras. Ideally, this process is done each time you use the Capture Suite. Even if the cameras do not move, calibration can drift a little after people repeatedly walk through the area. Ensure your cameras are set up as recommended in the environment setup guide.

Calibration in this context refers to the process of computing the camera extrinsics: where each camera is in space relative to the calibration cube, and how it is oriented.

Ensure the "P" side panel (2 squares over 3 squres) is facing your front high camera. This is how your subject should orient themselves. Front Face

Things to Check Before Calibrating

  • Are lens flares visible on the color camera feed? If so, adjust the cameras and lights so that they are not.
  • Are the cameras more or less vertical? Straighten them if not.
  • Are you in a room with a lot of fluorescent lighting? If so, turn the lights off and use LEDs. If that's not possible, turn as many off as you can, as calibrating in dim lighting is okay provided you have enabled adaptive thresholding.

Performing the Calibration

Enabling Hardware Sync and Setting Delay

We strongly recommend using hardware sync, which dramatically lowers depth noise, resulting in a more accurate calibration and higher quality content.

To use hardware sync, first ensure that your hardware sync cables are wired correctly. Your primary camera should have a 3.5mm sync cable inserted into only the sync out port. Your last subordinate camera should have a 3.5mm sync cable inserted into only the sync in port. Every other camera in your sync chain should have a 3.5mm sync cable inserted into both the sync in and sync out ports. If there is an issue with sync when you go to enable the setting, a pop-up will explain the issue. Alternatively, if there is no pop-up but your rendering stops and your camera feeds freeze, confirm your sync chain is wired correctly and fully plugged in.

If you're using hardware sync, you must also set the "delay from primary" setting for each camera which offsets the emissions and depth exposure to avoid cameras interfering with one another. "Delay from primary" is found in the color controls section (within the color section) inside each camera tab. After setting the delay from primary setting, ensure exposure and white balance are set to manual. If you enable hardware sync before adjusting these settings, you may have to restart the Capture Suite to have hardware sync work properly.

Delay From Primary

Soar recommends an offset of 160 microseconds. Your primary camera should be set to 0, the next camera in your sync chain to 160, the following camera to 320, and so on. The above picture references the seventh camera in a sync chain of 8, so it is set to 960 (6 x 160). After setting these values, save your settings in the profile section on the capture tab.

Important to note, if you decide to remove a camera from your sync chain, you must unplug this camera from your computer and restart the Capture Suite. The camera should not appear in the tab row at the top. Update your sync chain and delay from primary settings accordingly.
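To make the arithmetic concrete, here is a small sketch (our own illustration, not part of the Capture Suite) that generates the per-camera offsets and checks them against the documented maximums:

    # Maximum "delay from primary" values documented above, per depth mode.
    MAX_DELAY_US = {
        "320x288": 1450,
        "640x576": 1450,
        "512x512": 1450,
        "1024x1024": 2390,
    }

    def delay_offsets(camera_count, step_us=160):
        """Primary camera is 0; each subsequent camera adds step_us."""
        return [i * step_us for i in range(camera_count)]

    def validate_offsets(offsets, depth_mode):
        limit = MAX_DELAY_US[depth_mode]
        for index, delay in enumerate(offsets):
            if delay > limit:
                raise ValueError(
                    f"camera {index}: delay {delay} us exceeds the "
                    f"{limit} us maximum for depth mode {depth_mode}"
                )

    offsets = delay_offsets(8)            # [0, 160, 320, ..., 1120]
    validate_offsets(offsets, "640x576")  # a chain of 8 fits comfortably

Note that ten cameras at a 160-microsecond step top out at 1440, just under the 1450 maximum for the standard depth modes.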

Hardware Sync

Use the "Enable hardware sync" checkbox on the capture tab to turn it on. Confirm hardware sync is running correctly by inspecting the infrared view for each camera tab. You should not see large pulses of light from any camera - these indicate interference (see image below). Performance should be a consistent 30 FPS, provided your volumization resolution setting is not set too high (hardware dependent); a resolution of 256 should run at 30 FPS on most hardware configurations.

In order to diagnose other potential hardware sync and hardware issues, check out the troubleshooting guide.

Interference

Now you are ready to start calibrating. Head to the calibration section on the capture tab. You must accurately measure your calibration cube and input the marker width, preferably in millimeters. Measure the width by going from edge to edge on any side. Measure the top marker offset by going from the center of a side marker face straight upwards to the center of the top marker face. If you are using a Soar calibration cube, the default values will work, but you will want to measure the cube to confirm: the default width is 216.1mm and the default top marker offset is 148.3mm. Adaptive thresholding is useful if you are calibrating in a brightly or dimly lit environment. Keeping the adaptive threshold value at 0 is ideal for most environments and lighting conditions. In brighter environments, you may want to use a negative value if your calibration is not yielding quality results; in darker environments, you may want to use a positive value.

Calibration

Ensure the calibration cube is nearly centered in the depth view of each camera - you want ample headroom between the calibration cube and the top of the depth view. The default cube width when utilizing a Soar cube is 216.1mm, but confirm this by measuring the width of your calibration cube, edge to edge on any side (white side to white side, not just the marker). The top marker offset is 148.3mm; confirm this by measuring from the center of a side marker face straight upwards to the center of the top marker face.

If hardware sync is enabled and you're content with the camera setup, click "calibrate all cameras" to start calibrating. This process takes about one minute if your "required samples" setting is set to 20; increasing this value adds time to calibration but may improve calibration results. A yellow camera tab means that camera is currently calibrating. If a camera tab stays yellow for more than one minute, click into the camera tab and inspect the color camera feed. You should see a colored outline around the marker face; it's important that this outline remains straight and does not bounce around. If the marker outline does not appear, lighting may need to be added so that the camera can get a better view. Also, a lower corner camera can see two markers; the marker it uses will have a colored dot on its face in the color feed. Once all tabs turn green, you can click the preview button to see your calibration.

After calibrating, have your subject stand in the space of the calibration cube. Your result should look something like the image that follows. Good Capture

Accessing the Calibration Data and other Capture Suite Settings

Json

Settings for the Capture Suite are stored as JSON in C:\Users\<username>\AppData\Roaming\Soar. AppData is a hidden folder; to access it easily, type %appdata%\Soar into the address bar (not the search bar) in Windows File Explorer. Here you will find Capture Suite settings, calibration extrinsics, calibration pinhole intrinsics, world view projections, the color info structure, and the depth info structure.
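Because these files are plain JSON, they are easy to inspect or post-process with a script. The sketch below lists the folder and loads one file; the file name "extrinsics.json" is illustrative, so check your own folder for the actual names your Capture Suite version writes.

    import json
    import os

    # %APPDATA% resolves to C:\Users\<you>\AppData\Roaming on Windows.
    soar_dir = os.path.join(os.environ["APPDATA"], "Soar")

    # File names vary by Capture Suite version, so list what is there.
    for name in os.listdir(soar_dir):
        print(name)

    # Then load whichever JSON file you are interested in, e.g. the
    # calibration extrinsics (illustrative file name):
    with open(os.path.join(soar_dir, "extrinsics.json")) as f:
        extrinsics = json.load(f)
    print(json.dumps(extrinsics, indent=2))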

Key Settings

If adhering to Soar's recommended camera configuration and environment guidelines, the default settings should suffice. To manually input a value for a setting, Ctrl + click on the setting. Right click on any setting to reset it to its default value. To save settings, head to the Profile section and save to the default path, which will be loaded on every Capture Suite launch; Ctrl + S is a shortcut. Alternatively, you can click "Save Named", which allows you to manually save a profile that can be manually loaded later via "Load Named".

Capture

Refresh - clicking this button should make camera tabs appear; if they do not, you have a hardware issue and should consult the performance/hardware section within the troubleshooting guide. The most likely culprit is a USB bandwidth issue, which can often be traced to the StarTech card.

Preview - allows you to see real-time rendering of content; also shows your triangle count. A countdown bar for recording is shown when preview is enabled.

Load Recording - can load up a compressed capture to view in playback.

Enable Hardware Sync - strongly recommended when capturing; ensure the delay from primary setting is set correctly for each camera and that no interference (bright spots) is seen in the infrared views, as interference can make arms and legs disappear. Exposure and White Balance must be set to manual.

Processing (Experimental Features)

Can be used after enabling Experimental Features within the UI Settings section. Effects are best seen when in Point Cloud mode.

Temporal Depth Filter

Smooths out temporal noise. This helps reduce jitter along the silhouette when the subject is standing still. Can be modified on a raw capture.

Favor New Data - 0 keeps old depth data as long as possible, 1 throws old depth data away. The faster the movements, the closer to 1 this setting should be. Default setting is 0.5.

Maximum Change - when a depth value changes by more than this amount, old data is thrown away. Default setting is 2. This value is in centimeters.

Clear Filter - clears out previous settings.
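Conceptually, Favor New Data and Maximum Change behave like an exponential blend with a reset rule. The sketch below is our own illustration of that idea, not Soar's implementation:

    def temporal_filter(old_depth_cm, new_depth_cm,
                        favor_new_data=0.5, maximum_change_cm=2.0):
        """Blend old and new depth for one pixel. favor_new_data = 0
        keeps the old value as long as possible, 1 uses only the new
        one. If depth jumps by more than maximum_change_cm, the old
        data is discarded entirely."""
        if abs(new_depth_cm - old_depth_cm) > maximum_change_cm:
            return new_depth_cm  # large change: throw old data away
        return old_depth_cm + favor_new_data * (new_depth_cm - old_depth_cm)

    print(temporal_filter(150.0, 150.8))  # jitter smoothed: 150.4
    print(temporal_filter(150.0, 160.0))  # real motion passes: 160.0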

Background Removal

This setting removes the background and floor from your scene. After clicking Calibrate Background, anything within the bounding box at that moment is deemed "empty" - conduct this after removing the calibration cube from the scene and ensuring no one is standing in the volume. This will NOT work for removing items from the volume: the space they occupy will be marked "empty", but they will still take up space. This must be done prior to calibration. You can save and export the background file. The effect can be seen in the depth view of individual cameras.

The background file can be imported for a raw capture, and you can also run the background calibration on a raw capture. To do this, you need about 20 frames: calibrate the background (in Display Only mode), click play, and then save your background calibration. It is recommended to have about a second of "empty volume" at the beginning of the scene so that you can calibrate the background, but if you forgot to do this and want to salvage the clip and remove the background, you can calibrate the background with someone in the scene - just make sure "Only Floor" is enabled.

Note 1: Azure Depth Engine settings must stay at default values for this feature to work effectively.

Note 2: Saving/Loading Background - after calibrating the background and saving the file, you can load a raw capture (paused/frame export disabled) and load the background data. If you are not in raw import mode, you can load the background data within the Import Raw Capture section and it will also apply to the preview window. After exiting raw import mode or compressed playback mode, you will need to re-load the background calibration data from within the Import Raw Capture section or re-calibrate the background.

Note 3: Make sure your calibration and your background calibration have the SAME extrinsics - meaning, if you conduct a new camera calibration, you will need to conduct a new background calibration.

Only Floor - will only remove the floor. If disabled, anything in the scene at time of background calibration (such as the calibration cube) will be deemed "empty" and will not render. This can be seen in the depth view of a camera.

The image below shows the depth view both without floor calibration and with floor calibration (Only Floor) enabled - this concept extends to the background as well.

Filtering Aggressiveness - threshold to decide what is background. Default setting is 10.
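Conceptually, background removal compares each depth pixel against the depth recorded during background calibration, with Filtering Aggressiveness acting as the tolerance. The test below is a rough illustration of that idea only; Soar's actual logic and units are not documented here:

    def is_background(depth_mm, calibrated_background_mm, aggressiveness=10.0):
        """Rough illustration: treat a pixel as background when it sits
        at (or beyond) the depth captured during background calibration,
        within a tolerance scaled by aggressiveness."""
        return depth_mm >= calibrated_background_mm - aggressiveness

    # A pixel 5 mm in front of the calibrated backdrop is still filtered
    # at the default aggressiveness of 10:
    print(is_background(2995.0, 3000.0))  # True
    print(is_background(2900.0, 3000.0))  # False (clearly foreground)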

Smart Hole Filling

This is a new hole filling feature meant to be an improvement over the original hole filling method. Note that these settings can also increase noise along the silhouette and in other areas. Can be modified on a raw capture.

Same Surface Threshold - how far apart two depth values can be while still being treated as the same surface. Default setting is 0.050. This value is in meters.

Maximum Smart Fill Length - the maximum length of a row of pixels that smart hole filling will fill; beyond this, it reverts to the original hole filling. Default setting is 0.015. This value is in meters. This setting can be helpful in bringing back hair on subjects where it is invalid in the depth view.

Smooth Gradient Estimation - smoothing factor; values closer to 0 average over the surrounding area, while values closer to 1 take values right at the edge. Default setting is 0.50.

Estimate Curve - when disabled, regions are filled with straight lines; when enabled, the fill lines are curved.
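Taken together, these settings describe a row-based fill that only bridges short gaps between depths belonging to the same surface. The sketch below is our own conceptual illustration, not Soar's code; in particular, the per-pixel metric scale is an assumption made purely so the maximum fill length can be expressed in meters:

    def smart_fill_row(depths_m, same_surface_threshold_m=0.050,
                       max_fill_length_m=0.015, pixel_pitch_m=0.003):
        """Conceptual sketch: fill runs of invalid pixels (None) by
        linear interpolation, but only when the bordering depths belong
        to the same surface and the gap is short enough. pixel_pitch_m
        (assumed) converts the gap length in pixels to meters."""
        out = depths_m[:]
        i = 0
        while i < len(out):
            if out[i] is None:
                start = i
                while i < len(out) and out[i] is None:
                    i += 1
                left = out[start - 1] if start > 0 else None
                right = out[i] if i < len(out) else None
                gap_len_m = (i - start) * pixel_pitch_m
                if (left is not None and right is not None
                        and abs(left - right) <= same_surface_threshold_m
                        and gap_len_m <= max_fill_length_m):
                    for k in range(start, i):
                        t = (k - start + 1) / (i - start + 1)
                        out[k] = left + t * (right - left)
            else:
                i += 1
        return out

    row = [1.20, 1.21, None, None, 1.22, None, 2.50]
    # The small same-surface gap is filled; the last hole borders a
    # different surface (1.22 vs 2.50) and is left for basic filling.
    print(smart_fill_row(row))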

Volumization

Volumization Confidence - how confident do we have to be that a point is inside the volume; modifying this value (typically decreasing), along with potentially increasing distance falloff, may help with noise around arms and legs. Lowering this value may bring some volume back to your content if it appears too skinny or chopped off. Can be modified on a raw capture. Default setting is 0.525.

Camera Facing Confidence Gain - how much do we boost confidence based on the surface facing directly to the camera; modifying this value (typically increasing) may help with noise around arms and legs. Can be modified on a raw capture. Default setting is 10.000.

Distance Falloff - how aggressively do we drop confidence away from depth samples; modifying this value (typically increasing), along with potentially decreasing the volumization confidence, may help with noise around arms and legs. This setting also impacts the smoothness of the silhouette edges. Can be modified on a raw capture. If utilizing smoothing, increasing this value a bit may help reduce shrinking and background bleed. Default setting is 0.650.

Volumization Resolution - increases the triangle and vertex count so that visual fidelity increases; quality will increase, but so will bandwidth and performance will take a hit depending on your hardware. As this setting increases, the maximum vertices setting needs to be increased. Your triangle count, found in both the preview window (and in raw capture mode) and the playback screen, will show how many triangles are in your capture. Soar recommends no more than 200K triangles (150K triangles for the Oculus Quest and Meta Quest 2) for playback on mobile devices - this triangle count will impact your performance and the amount of instances you can have up at once. Can be modified on a raw capture. Default setting is 256.

Maximum Vertices - maximum number of vertices allowed to be generated during volumization; needs to be increased as the volumization resolution setting is increased. If set too high, the Capture Suite may crash. Can be modified on a raw capture. Default setting is 262144.

Minimum Coverage - ensures geometry won't be generated in areas where fewer than the selected number of cameras can see in their field of view; if artifacts are present within the bounding box, modify this value - also ensure that the subject is not clipping out of the depth view of any camera. Can be modified on a raw capture. Default setting is 3.

Distance Clamp - clamps the distance falloff related to surface generation for confident input; decrease if there's a large variance of distances in the scene. Can be modified on a raw capture. Default setting is 0.250.

Unconfident Distance Clamp - clamps the distance used for distance falloff related to surface generation for unconfident input; should be kept around default if there are lots of invalid areas (black pixels) in the depth image. If depth quality is good and there are not many invalid silhouette areas, consider raising a bit. Can be modified on a raw capture. Default setting is 0.005.

Smoothing Mode - reduces silhouette flicker. As you go from 3 x 3 x 3 to 5 x 5 x 5 to 7 x 7 x 7, the size of the smoothing kernel increases, but so does the performance cost. If you are experiencing shrinking and background bleed, try increasing the distance falloff a bit.

Bounding Box - crops out part of the scene, in meters; enlarging this box decreases resolution and the mesh may appear blocky. This is because the resolution is divided along the longest side of the bounding box, which then ensures the resolution is kept equal in the other dimensions so that it's all uniform. Can be modified on a raw capture.

Cap at Bounding Box - will cap the volumized mesh where it passes through the bounding box; fills in areas such as bottom of shoes. This should be enabled if you intend to use an OBJ sequence in Arcturus' HoloEdit.
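For reference, here are the documented defaults from this subsection collected in one place. The key names below are our own guesses at a profile schema; inspect the JSON files in %appdata%\Soar for the real structure:

    # Documented volumization defaults; key names are illustrative only.
    volumization_defaults = {
        "volumization_confidence": 0.525,
        "camera_facing_confidence_gain": 10.000,
        "distance_falloff": 0.650,
        "volumization_resolution": 256,
        "maximum_vertices": 262144,
        "minimum_coverage": 3,
        "distance_clamp": 0.250,
        "unconfident_distance_clamp": 0.005,
    }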

Vertex Solving

Max Iterations - higher values increase the smoothing of volumetric content. Default setting is 20.

Reference Confidence - takes into account the areas that have point cloud information so that the software smooths them less. Lowering this value can help reduce noise along the silhouette. Default setting is 0.950.

Volumization Rendering

Fade Distance - modifies the blending between color textures. Default setting is 0.011.

Color Camera Confidence - how strongly texturing favors the most confident color camera; lowering this value after calibration helps in assessing calibration quality. Default setting is 14.000.

Chroma Key - helps to reduce green spill when utilizing a green screen; only for texture, not geometry.

Gain - how severely it deweights against the chroma distance; also controls desaturation. Default setting is 1.500.

Bias - base level cutoff; how far is this color away from the selected chroma key. Default setting is 0.100.
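One plausible reading of Gain and Bias is a clamped linear ramp on the distance between a pixel's color and the selected key color; the sketch below illustrates that interpretation only and is not Soar's actual formula:

    def chroma_key_weight(chroma_distance, gain=1.5, bias=0.1):
        """Illustrative only: colors within the bias cutoff of the key
        color are fully deweighted, and gain controls how quickly the
        weight ramps back up as the distance from the key grows."""
        return max(0.0, min(1.0, gain * (chroma_distance - bias)))

    print(chroma_key_weight(0.05))  # 0.0 - near the key, fully deweighted
    print(chroma_key_weight(0.50))  # 0.6 - partially trusted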

Show Bounding Box - enabling this shows the bounding box within the preview window.

Point Clouds - shows point clouds and is helpful in diagnosing calibration issues.

Texture Bleed Removal - helps reduce texture bleed and improves quality of content around hands and torso.

Calibration

Marker Width - width of calibration cube. Default setting for a Soar Calibration Cube is 216.100mm.

Top Marker Offset - when utilizing a camera facing the top marker. Default setting for a Soar Calibration Cube is 148.300mm.

Required Samples - samples the software takes when calibrating. Default setting is 20.

Adaptive Thresholding - helps calibration in varying lighting conditions; in brighter environments, you may want to use a negative value if your calibration is not yielding quality results. In darker environments, you may want to use a positive value. Default setting is 0.

Manual Refinement - lets you manually modify the translation and rotation extrinsics of each camera's calibration. You can save (export) and load (import) manual refinements. You can also use the extrinsics visual editor to modify the calibration with on-screen controls. Reset reverts the extrinsic values to their pre-modification values, while Undo undoes the last change you made. Can be modified on a raw capture.

Reset - resets extrinsics values to pre-modification translation and rotation values.

Undo - reverts the last change you made to a translation or rotation value.

Audio

Input Gain - modifies recorded volume, unlike the preview gain which is strictly meant as a preview slider.

Start Monitoring - preview audio; must be disabled for recording.

Import Raw Capture

Process with Display - confirm you have a capture name and capture path set, then select your output format (only one) within the output section.

Display Only - will not export to your output format; meant as a way to view content and modify settings.

After modifying a setting within raw import mode, you need to resubmit the frame to view the change - click resubmit or use hotkey 'r'.

Output

Local Server - in order to LAN stream, this setting must be enabled, along with compressed capture; view the local stream section within this documentation to understand the full flow for local streaming.

Compressed Capture - for recording a file to use in Unity.

Video Width and Video Height - Default setting is 2048 x 1024.

Vertex Quantization - controls the geometry quality/bandwidth ratio. Default setting is 3.

Max Bitrate - maximum bitrate of video part of capture. Default setting is 10000Kbps. This can be increased to improve texture resolution, but file size will also increase.

Quality - quality of video recording; the lower the value, the higher the quality. Default setting is 23.

Raw Capture - after recording, can import into Capture Suite to tweak settings and output to a compressed capture or mesh sequence.

Mesh Output - after recording a raw capture, you can export to an ASCII OBJ, ASCII/Binary PLY, or GLB/GLTF sequence, as well as export the raw color feeds; you can also export to MVE and per-camera point clouds. Unwrapped textures will generate the UV map alongside your mesh export - this process can be time-consuming depending on the content you are exporting. OBJs can utilize JPEG texture compression and technically BasisU, but this is rare as most pipelines don't utilize OBJs with BasisU. GLB/GLTF can utilize texture compression for both JPEG and BasisU.

Texture Width - the width, in pixels, of the output texture; the default of 4096 x 4096 should result in no loss of detail. For VFX work you want as much texture detail as possible, and you can always downscale after the export. If you notice resolution issues, this value may need to be increased. It is best practice to modify this value by a power of 2. Default setting is 4096.

Texture Height - the height, in pixels, of the output texture; the default of 4096 x 4096 should result in no loss of detail. For VFX work you want as much texture detail as possible, and you can always downscale after the export. If you notice resolution issues, this value may need to be increased. It is best practice to modify this value by a power of 2. Default setting is 4096.

Gutter - the border, in pixels, between different mesh segments in the texture, for the purpose of mipmapping; lowering this value too far might cause issues when zoomed out on a model, whereas with a high-res texture you may want to increase this setting, since you may be viewing from very far away and still want no visual issues. Default setting is 16.

UV Unwrapper:

  • Camera Clustering - new and improved method; much faster and higher quality. This is the default.
  • Legacy - old method; slower and lower quality. Use this if you encounter issues with Camera Clustering.

Flattener:

  • Conformal - reasonable quality, but fast. This is the default.
  • Dirichlet - can produce higher quality parameterizations, but may require more iterations to converge and is slower; you may need to turn Maximum Solver Iterations up.
  • Projective - very fast draft mode that uses the camera perspective, but may have artifacts.
  • Legacy Hybrid - uses the isomap flattening method, is slower than the other flatteners, but could potentially be more robust in some circumstances (may cause extra chart splits).

Maximum Solver Iterations - maximum number of iterations for the flattener to use per chart; if you see circles in the UV map, increase this value. If you notice resolution issues, this value may need to be increased to at least 3x its default setting. Default setting is 16.

Convergence Delta - when the change between iterations falls below this value, the flattener is considered 'converged' and exits early; smaller values trade performance for more guarantee of convergence and potentially some quality. Default setting is 0.010.

Convergence Energy - when the flattener 'energy' falls below this value, the flattener is considered 'converged' and exits early; smaller values trade performance for more guarantee of convergence and potentially some quality. Default setting is 0.010.

No Coverage Size Ratio - how much smaller should charts that we think have no coverage by any camera be compared to other charts. Default setting is 0.250.

Very Small Chart - charts at or below this size are too small to be used as initial seeds, and charts near this size are much more likely to be merged. Default setting is 16.

Max Initial Charts Per Mesh Island - for each individual mesh island (i.e. groups of triangles that are continuously edge-connected), the maximum number of seed charts to start with. Default setting is 32.

Chart Clustering Relaxation Iterations - the number of iterations spent trying to create similar clusters for grouping triangles on a mesh island into charts by camera features. Default setting is 64.
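Maximum Solver Iterations, Convergence Delta, and Convergence Energy interact in the standard iterative-solver pattern sketched below (a generic illustration, not the actual flattener):

    def run_flattener(step, max_iterations=16,
                      convergence_delta=0.010, convergence_energy=0.010):
        """Generic solver loop: step() performs one relaxation iteration
        on a chart and returns (energy, change_since_last_iteration).
        The loop exits early once either convergence criterion is met."""
        for iteration in range(max_iterations):
            energy, delta = step()
            if delta < convergence_delta or energy < convergence_energy:
                return iteration + 1  # converged early
        return max_iterations  # hit the cap; consider raising it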

GLB/GLTF - web-ready format; can utilize Draco compression, as well as Texture compression (JPEG and BasisU).

Draco Compression:

  • Compression Level - lower numbers provide less compression, but faster encode/decode times.
  • Texture Quantization Bits - lower numbers provide more compression, but lower visual texture fidelity.
  • Position Quantization Bits - lower numbers provide more compression, but lower visual geometry fidelity.
  • Generic Quantization Bits - lower numbers provide more compression, but lower visual overall fidelity.

Texture Compression - JPEG (OBJ and GLB/GLTF) and BasisU (GLB/GLTF, can technically use for OBJ but pipelines do not really support this).

JPEG - can modify JPEG quality.

BasisU - can modify BasisU quality and KTX2 supercompression level; only works for GLB/GLTF.

UASTC - encoding produces much higher quality output at the cost of larger file sizes.

Screenshot

This works when the preview window is enabled, during a raw capture import, or during compressed capture playback. For raw capture import and compressed capture playback, if you are recording a GIF or video, content must be playing.

Width/Height - width and height, in pixels, of the output GIF or video.

Capture - if GIF or Video are not selected, will capture screenshot.

Export GIF - export GIF of preview window; must set duration.

Export Video - export video of preview window; must set duration - can play the h264 file in VLC Media Player.

Delay - delay prior to recording GIF or video.

Profile

All settings need to be saved after modifying values; Ctrl + S is a shortcut.

Load/Save Named - can save profiles outside of default section so that they can be loaded at any time.

UI Settings

Pop-up log on all errors - ensures that all errors, such as corrupt JPEGs, pop up immediately so that you are alerted at the time they happen.

Experimental Settings - enables access to preliminary feature sets which may be shipped as main features or removed in the future.

Camera Tab

If a temperature icon appears on a camera tab, this means the camera is out of its operating range - quality may diminish.

View

Depth - black pixels mean invalid information; you never want to see this on or behind the cube, or on the floor - a rug may be needed.

Infrared - bright pulses of light emitting from a camera mean hardware sync is not working; you do not want to see bright spots of light anywhere in the room, as this may signal multi-path interference, which will impact the quality of calibration and volumetric content - try re-launching the Capture Suite or modifying the delay from primary setting.

Color

Color Controls

Exposure Time - recommended to be set to manual; may need to be decreased if capturing faster motion.

White Balance - recommended to be set to manual.

Color Post-Process Controls - includes brightness, contrast, saturation, and gain. Can be modified on a raw capture.

Delay from Primary - must be set in accordance with your sync chain; primary camera is set to 0 then subsequent cameras are set to an offset of 160 (0, 160, 320, 480, etc.). Check the calibration section for more information. This setting may have to be modified if interference persists.

Depth

Azure Depth Settings - settings which impact how the Azure Kinect depth engine operates; changes seen within the depth view. You can select Apply to All in the depth section to apply all settings changes (even when clicking Reset to Default Settings) to all connected cameras. These settings do not apply to previously captured raw captures.

Max Confidence - filters depth pixels based on confidence; lower values filter more depth pixels. Default setting is 20.

Infrared Min Threshold - modifies the minimum infrared pixel value required to display a depth pixel. Default setting is 50.

Infrared Max Threshold - modifies the maximum infrared pixel value required to display a depth pixel. Default setting is 14000.

Reflectivity Min Threshold - modifies the minimum reflectivity threshold for a valid pixel; setting to 0 may bring back features such as dark hair. Default setting is 0.

Hole Filling Direction - allows you to fine-tune your hole filling. Selecting near camera will have the foreground spread out, whereas selecting far camera will have the background spread out; far camera is the default setting. If you are experiencing large areas of invalid information in your background, switching to near camera can potentially help reduce issues. Other cameras with a good view may be able to "correct" the bad information.

Camera Configuration

Top Of Camera Screen

Within a camera tab, you will be able to access and modify a variety of controls. To start off, you can give a camera a different name other than its serial number. The serial number is always visible next to the camera name.

You can switch between the color, depth, and infrared views. Within the depth view, provided preview is enabled, you can also see the unfilled view, the filled view (which shows hole-filled depth), and the confidence map. The confidence map shows a different view of the room - lighter, yellow areas will have higher depth quality compared to the darker, blue areas. This view is also useful for spotting noisy areas of your environment.

You can also choose to calibrate a specific camera individually rather than calibrating all cameras simultaneously. This may be useful if only one camera is giving you an issue during the calibration process. Alternatively, you can select live calibration if you would like to continuously calibrate a camera while moving its placement. More on this feature is detailed in the calibration section within the "Calibration Troubleshooting" area. When calibrating, a button appears next to the calibrate camera button so that you can cancel calibration at any time.

Color

Color

Within the color section you can enable and disable the camera as well as change the camera resolution. While each camera can have a unique resolution, if one camera utilizes 3072p (which is capped at 15 FPS), all cameras will be capped to 15 FPS. Prior to making a selection, you can select apply to all, which applies the resolution setting to all connected cameras. You can also do this for enabling/disabling multiple cameras. Ensure Hardware JPEG Decoding is always enabled.

Color Controls

Color Controls

Inside the color section live the color controls and color post-process controls. The post-process controls can impact your content in a slightly different manner compared to the Azure Kinect color controls. These four post-process controls can also be applied to raw captures in post by accessing the camera tab in raw import mode. The regular color control settings can be applied to all connected cameras provided you select apply to all prior to modifying settings. Soar recommends using manual exposure and manual white balance to help match the colors between cameras and ensure hardware sync works properly. These settings should be set prior to enabling hardware sync; otherwise, the Capture Suite should be restarted. Unless otherwise noted, the default settings will more or less be suitable for most environments. For details on the "Delay from Primary" setting, see the section "Enabling Hardware Sync and Setting Delay" above.

  • Exposure (with auto option): sets exposure time and gain based on the lighting conditions of the environment. When auto is enabled, the Exposure Time and Gain values will be set for you. The unit of measurement is microseconds. Soar recommends this be set to manual (prior to hardware sync and calibration) and potentially fine-tuned based on recording requirements; if recording faster motion, you may want to lower this value.
  • Gain: in combination with exposure, gain will increase the exposure of the sensor color video. This setting is hidden when auto exposure is enabled.
  • White Balance (with auto option): sets the recommended color temperature based on the lighting conditions of the environment. Soar recommends this be set to manual (prior to hardware sync and calibration) and fine-tuned for your environment.
  • Brightness: adjusts overall brightness of sensor color video.
  • Contrast: adjusts overall contrast of sensor color video.
  • Saturation: adjusts overall saturation of sensor color video.
  • Sharpness: accentuates fine color detail represented in sensor color video.
  • Backlight Compensation: can be enabled if you are shooting in a low or inconsistently lit environment.
  • Powerline Frequency: setting to prevent flickering or banding seen in video that is not compatible with the AC frequencies of the capture space; 60Hz is most common in North America, whereas most other countries have an AC frequency of 50Hz.
  • Delay from Primary: sets the emission and depth exposure delay for each subordinate camera; unit of measurement is microseconds. This is used to prevent interference between the cameras. Proper configuration is required to use hardware sync. Soar recommends a delay multiple of 160 for your sync chain (primary camera is 0, second camera is 160, third camera is 320, etc.). The maximum delay from primary setting for most depth modes (320 x 288, 640 x 576, and 512 x 512) is 1450 microseconds, while the maximum setting for depth mode 1024 x 1024 is 2390 microseconds.
  • Color Post-Process Controls: a separate pass that comes after the color is set on the Azure Kinects which is helpful in grading captures after they've been recorded - can be accessed via the camera tab in raw import mode.

Depth

Depth

The depth section, much like the color section, allows you to enable/disable a camera and modify its resolution. While each camera can have a unique resolution, if one camera utilizes the 1024 x 1024 depth mode (which is capped at 15 FPS), all cameras will be capped to 15 FPS. These settings can be applied to all connected cameras, provided you select apply to all prior to making your change.

Azure Depth Settings - settings which impact how the Azure Kinect depth engine operates; changes seen within the depth view. You can select Apply to All in the depth section to apply all settings changes (even when clicking Reset to Default Settings) to all connected cameras. These settings do not apply to previously captured raw captures.

Max Confidence - filters depth pixels based on confidence; lower values filter more depth pixels. Default setting is 20.

Infrared Min Threshold - modifies the minimum infrared pixel value required to display a depth pixel. Default setting is 50.

Infrared Max Threshold - modifies the maximum infrared pixel value required to display a depth pixel. Default setting is 14000.

Reflectivity Min Threshold - modifies the minimum reflectivity threshold for a valid pixel; setting to 0 may bring back features such as dark hair. Default setting is 0.

The Capture Suite has a hole-filling feature which allows you to improve your capture quality. Basic is the default option. This setting can be applied to all cameras if you select apply to all prior to selecting the option.

"Direction" allows you to fine-tune your hole filling. Selecting near camera will have the foreground spread out, whereas selecting far camera will have the background spread out; far camera is the default setting. If you are experiencing large invalid information in your background, switching to near camera can potentially help reduce issues. Other cameras with a good view may be able to "correct" the bad information.

Camera Geometry

Camera Geometry

Inside the camera geometry section you will be able to modify the near plane and far plane, which are measured in meters. These settings are purely for visual purposes in the depth view. You should not have to modify these settings.

The near plane is the minimum distance which the camera will capture; anything closer than this value will be discarded. The default value is 0.050.

The far plane is the maximum distance which the camera will capture; anything past this value will be discarded. The default value is 10.000.
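In other words, a depth sample is shown only if it falls between the two planes. A minimal sketch of that test, using the defaults above:

    def in_camera_geometry(depth_m, near_plane_m=0.050, far_plane_m=10.000):
        """Keep a depth sample only if it lies between the near and far
        planes; anything closer than near or farther than far is
        discarded (visual effect in the depth view only)."""
        return near_plane_m <= depth_m <= far_plane_m

    print(in_camera_geometry(1.5))   # True - typical subject distance
    print(in_camera_geometry(12.0))  # False - beyond the far plane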

Device Statistics

Device Statistics

The device statistics section has pertinent information about your connected camera. The camera FPS is displayed here, as well as its calibration state. The most important bit of information in this section is regarding hardware sync. Your sync input/output, sync status, and sync state for the camera will be shown here.

Workflows

Audio

Audio

In order to capture audio, you must connect a microphone to the PC via a USB audio interface; the microphone can be wireless or wired. Within the audio section, select enable audio capture. Then, select the capture device that corresponds to the USB audio interface. Once you have headphones connected, ensure they are selected in the playback device drop down and select start monitoring. You should be able to hear the audio. In order to record, you must disable audio monitoring. The input gain slider allows you to modify the audio level for the recording. The preview gain slider acts only as a volume slider when previewing your audio; it does not impact the actual recorded audio.

If recording raw, confirm you have selected raw audio in the output section. You can select a raw capture format within the audio section - IEEE Float or PCM (16 or 32 bit). When importing the raw file after recording, import your SRD file normally and the corresponding WAV file will be brought in. You can also import custom WAV files recorded externally. After recording a raw capture with audio, you will notice a TML file. This is the Capture Timeline file and will automatically import and sync the previously recorded WAV file.

Checking "enable 3D audio playback" sets a flag to default to using spatialized audio features when playing back with the Soar Unity SDK.

In order to route audio from other mixing programs into the Capture Suite, you must install Jack Audio. The Jack Audio workflow is as follows:

  • Launch Jack Audio Connection Kit.
  • Start Jack (should say started).
  • Open settings and ensure sample rate is 44100.
  • Ensure interface is default.
  • Confirm Jack Audio is still running.
  • Setup Jack Audio in your mixing program then launch Capture Suite.
  • Within the Capture Suite, enable audio capture, select Jack Audio as capture device, and start monitoring. You might have to re-launch Capture Suite.
  • Return to Jack Audio.
  • Open patch bay.
  • Add input.
  • Select the Capture Suite as Client.
  • Add plug.
  • Add output (your mixing program).
  • Add plug.
  • Save.
  • Open up graph.
  • Ensure your mixing program has its output linked to Capture Suite Jack Audio input.
  • Return to Capture Suite.
  • Play your mixing app or get ready to record - you should hear audio in your headphones if you are monitoring.
  • Select preview and then you are ready to record.

Usage with Green Screen

Chroma Key

Soar does not require or recommend a green screen in order to capture volumetric content. In fact, it could potentially hinder your output quality due to green spill. If you are using a green screen for other reasons, Capture Suite has a chroma key setting found within the volumization rendering section to aid in removing any green spill on the mesh. This setting will allow you to choose a color that best represents your green screen.

  • Gain: how severely it deweights against the chroma distance; also controls desaturation.
  • Bias: base level cutoff; how far is this color away from the selected chroma key.

These settings can be set both prior to recording and on playback using our Unity SDK. If recording raw, this setting can be set after capture.

Saving a Compressed Capture

Compressed Capture

If you are happy with your preview, you're ready to start recording. Make sure texture bleed removal (found within the volumization rendering section) is enabled as this will improve quality. In order to record a compressed capture, head to the output section. Type in a capture name, select a capture path, then check only the "compressed capture" checkbox. If your capture name has more than one word, you can separate words by spaces, underscores, or dashes. You cannot modify the capture name after recording.

Important to note - After every fresh launch of the Capture Suite, always do a test recording to make sure everything is running as expected prior to your first real take.

The default settings within the output section should suffice in most cases. You can lower the video width and height to potentially increase instances on the Unity side, but quality may diminish. This setting should be modified by a power of 2 with the width being the larger of the numbers. Decreasing the vertex quantization value will also reduce file size, but may impact quality. Click "preview" then "record". When you are ready to finish recording, click "stop". Your compressed capture will be saved in the specified directory, ready for playback in Capture Suite or in Unity utilizing the Soar Unity SDK.
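One way to read the sizing guidance (power-of-two dimensions with the width the larger of the two, as in the 2048 x 1024 default) is the quick check below; this is our own interpretation, not a rule enforced by the Capture Suite UI:

    def valid_video_size(width, height):
        """Check that both dimensions are powers of two and the width is
        the larger of the two (the default is 2048 x 1024)."""
        def is_pow2(n):
            return n > 0 and (n & (n - 1)) == 0
        return is_pow2(width) and is_pow2(height) and width >= height

    print(valid_video_size(2048, 1024))  # True - the default
    print(valid_video_size(1920, 1080))  # False - not powers of two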

Confirm your video plays back within the Capture Suite prior to importing into Unity. Your triangle count, found in both the preview window (and in raw capture mode) and the playback screen, will show how many triangles are in your capture. Soar recommends no more than 200K triangles (150K triangles for the Oculus Quest and Meta Quest 2) for playback on mobile devices - this triangle count will impact your performance and the amount of instances you can have up at once.

Saving a Raw Capture

Raw Capture

If you would like to record a raw capture, head to the output section and select only the "raw capture" checkbox. Type in a capture name and select a capture path. If your capture name has more than one word, you can separate words by spaces, underscores, or dashes. You cannot modify the capture name after recording. You can also record audio alongside your raw capture; if you aim to export a mesh sequence, you can use this raw audio file with it. Note that a raw capture has a very large file size, but provides the flexibility to re-process after the content has been shot. Prior to recording, make sure you have adequate storage space. From raw, you can export to compressed as well as to a variety of mesh sequences.

Important to note - After every fresh launch of the Capture Suite, always do a test recording to make sure everything is running as expected prior to your first real take. Click "preview" then "record". When you are ready to finish recording, click "stop".

Saving a Textured Mesh

To generate a mesh export (OBJ, PLY, GLB/GLTF), you must first record a raw capture, then select "mesh output" and unwrapped textures (raw color feed is optional) before loading the file back in; ensure only one output format is selected. Prior to loading your raw capture, type in a capture name and select a capture path. If your capture name has more than one word, you can separate words by spaces, underscores, or dashes. You cannot modify the capture name after exporting. Also, prior to exporting, if you would like to enable texture bleed removal, do so now. You can also modify any other settings just as you would when exporting to a compressed capture from a raw capture.

If exporting an OBJ sequence, you can enable the unwrapped textures. This will also output the PNG and MTL files. The OBJ file will reference the MTL file when it comes to texturing. This process can be rather time-consuming. If utilizing Arcturus' HoloEdit, do not include the MTL file.

If exporting a PLY sequence, you can also enable the unwrapped textures. This will output just the PNG files, so the PLY files will not be textured upon export.

If exporting a GLB or GLTF sequence, you can apply Draco and Texture compression. More information found within the Key Settings section in this documentation.

There is an accompanying JSON file exported alongside your OBJ and PLY sequences. This JSON file is our metadata description and includes information such as timecodes for frames. This file is necessary for other consumers of our data. If utilizing Arcturus' HoloEdit, do not include the JSON file.

If exporting MVE, also known as Multi-View Environment, you can select either OBJ or PLY. This will export each camera's color feed and the geometry (OBJ or PLY) along with a JSON which includes information such as timecodes for frames. The color feeds also have a file with the matrices in them to project back onto the mesh. With MVE, you can use the textures in a VFX capacity for effects.

If you recorded audio at the time of your raw capture, you can use the initially recorded WAV file with these mesh sequences. There is no need to click "include audio" prior to export.

If you are planning to use your OBJ sequence in Arcturus' HoloEdit, make sure you have enabled Cap at Bounding Box prior to export. Also, as mentioned above, you should not include the accompanying JSON or MTL files.

There are a slew of settings that will affect export time and mesh quality. Head to the Output section within Key Settings in this guide to read more.

Playing Back a Compressed Capture

Compressed Capture Files

Head to the folder that has your recording. You will note a variety of files, including an MP4, SGV, M4A (if you are recording audio), and a handful of M3U8 files. If you want to load a recording and play it back, select "load recording" inside the Capture Suite. Select the desired master M3U8 file and click the play button when the playback screen loads up.

Capture Suite appends the number of seconds since epoch (1/1/2020 midnight GMT) to the filename. This is done so that captures will never be overwritten.

In the preview window (and raw capture mode) and on the playback screen, you will notice the triangle count. The triangle count of your capture will determine your performance and the number of instances you can play. Soar recommends keeping your capture below 200K triangles (150K triangles for the Oculus Quest and Meta Quest 2).

Playback

Your captures are ready to be imported into Unity for playback on a variety of devices. Check out the Unity Package documentation to find out more. Remember, the triangle count of your capture will determine your performance and the number of instances you can play. Soar recommends keeping your capture below 200K triangles (150K triangles for the Oculus Quest and Meta Quest 2).

Playing Back a Raw Capture

Raw Capture File

Head to the folder that has your recording. You will note one SRD file, as well as a WAV file if you recorded audio. You will also notice a TML file which will essentially sync up the raw capture and raw audio file.

Import Raw Capture

In the Capture Suite, click into the "import raw capture section". You are able to load the SRD file here. If you select "include audio" prior to loading the SRD, the recorded audio file will be imported as well. You can also load a WAV file that was not recorded within the Capture Suite by clicking "load custom WAV file". There's an audio offset slider (seconds) which will allow you to help sync up that custom WAV file. Prior to loading your capture, ensure you have storage space for your export. Also, note the settings in this area.

Select "display only" if you want to just watch your raw content. "Process with display" will process the capture per your export settings upon load. If using "process with display", be sure to select the desired file output within the output section - either compressed capture, mesh (OBJ/PLY sequence) + unwrapped textures (only select unwrapped textures if you want texturing on your mesh), MVE, or per-camera point cloud PLY. Check "start paused" if you would like to adjust your capture prior to export. You can also disable a camera by clicking into a camera tab - you will also be able to modify some color controls within each camera tab and see the color/depth/infrared views at time of capture. Ensure you resubmit the frame, or click 'r', after every change to view your changes.

Raw import mode allows you to adjust settings similarly to when you originally made the capture, including setting the capture's bounding box and choosing start and end points. You can also disable a camera and modify color controls all within the camera tab. Remember to resubmit your frame whenever you make a change to see the difference visually by clicking the resubmit button or hotkey 'r'.

The processing for both meshes (OBJ/PLY sequences) with unwrapped textures and MVE format may take a little bit of time. Capture Suite may look like it is frozen, but don’t worry - it's churning; if your progress bar hasn't moved in quite some time, you may have run out of storage or encountered a hardware issue.

After importing, you will notice the triangle count in the preview window. This is important to note as you want to keep this value below 200K (150K triangles for the Oculus Quest and Meta Quest 2) for consistent, smooth playback on your client devices.

Streaming Locally

Local Stream

The firewall on your capture PC should be disabled to allow connections for a live stream on your local network, and the PC should be hardwired to the router. As this stream happens locally, your internet upload/download speeds will not impact the stream; however, routers and PCs have their own bandwidth throughput limitations.

In order to stream on your local network to a device, you must select both local server and compressed capture within the output section, after you set a capture name and capture path. Port 8080 should suffice. Then select "preview" and "record". On the device that you want to stream to, enter the streaming URL: it starts with http://, followed by the local IP address of the capture PC (found by going to network connections on the PC, clicking into adapter settings, and viewing the IPv4 address), then :port/capture-name_seconds-since-epoch_master.m3u8.

The full capture name, appended by seconds since epoch 1/1/2020 midnight GMT, is found within file explorer at the capture path selected in the output section as soon as you are recording/streaming. Ex: http://192.168.8.67:8080/Test_01642615614_master.m3u8.
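As a sketch, the URL can be assembled as follows. Note that the timestamp in the real file name comes from the moment recording starts, so read it off the file in your capture folder rather than computing it; the zero-padding shown in the example above is our assumption:

    from datetime import datetime, timezone

    def streaming_url(host_ip, port, capture_name):
        """Builds an HLS master playlist URL in the format shown above,
        appending seconds since 1/1/2020 midnight GMT to the capture
        name (zero-padded to 11 digits, as in the example)."""
        epoch_2020 = datetime(2020, 1, 1, tzinfo=timezone.utc)
        seconds = int((datetime.now(timezone.utc) - epoch_2020).total_seconds())
        return f"http://{host_ip}:{port}/{capture_name}_{seconds:011d}_master.m3u8"

    print(streaming_url("192.168.8.67", 8080, "Test"))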

Important to note - when you start the local stream, you will have to scrub ahead on the client device to get to the head of the stream. At first this will seem like latency, but it is the client device being "X" seconds behind based on the time between the local stream starting and the URL being entered on the client device. You should notice only a few seconds of latency during streaming due to the HLS protocol.

You can also locally stream previously recorded files, so long as they are present in the capture path folder selected in the Output section. Local Server must be enabled for VOD streaming. Instead of actively recording, select the master file and input that on your client device.