Documentation / Capture Suite

Table of Contents

Getting Started

Overview

This documentation is meant to get you up and running as quickly as possible — meaning by the end of it you'll understand the setup and core workflow for recording content. Our list of hardware requirements can be found here. For a more in-depth understanding of Capture Suite's functionality, head over to the UI Guide. For how to use your content inside Unity, see the Unity Package documentation.

Hardware Requirements

Please review our hardware requirements page to ensure you have everything you need to set up your capture stage and that your computer meets all of the outlined specifications. We cannot guarantee performance on hardware that does not match the specifications on that page.

Capture Area Setup

Capture Area

The Azure Kinect cameras come with a white cover that obscures the sync in and out ports. To remove the cover, use the included hex wrench to remove the two screws on the rear of each camera and slide the cover off.

The optimal setup for a full-body, 8 camera capture is 4 "high" cameras, mounted slightly above the height of your subject, and 4 "low" cameras approximately 3-4 feet above the ground. There should be 10 feet between each high camera and the camera directly in front of it. High cameras should be tilted slightly downwards. Low cameras should be placed in-between high cameras. All cameras should be placed in a vertical orientation with their color lens down.

We strongly suggest a room with minimal fluorescent and outdoor lighting, as it can interfere with the depth sensors. Shiny floors can also cause problems, so we suggest using a rug if you have an especially glossy floor. We recommend using freestanding LEDs to light your subject.

Warm temperatures can negatively impact the quality of your calibration and captures. For this reason, use a temperature-controlled room with quality airflow, with the temperature set between 50 and 75 degrees Fahrenheit.

If you elect to record audio and choose to use a microphone on a stand, position the microphone so that it is near the subject being captured but does not fall on the mesh when viewing the preview in the Capture Suite.

The spot at which you place the calibration cube is where the subject should stand when being captured. This should be roughly in the center of your capture area. It's a good idea to mark the stand's position with a piece of tape so that you know the calibration spot. The marker face which has 2 squares on top of 3 squares should face your front high camera.

To help with synchronizing the cameras, daisy-chain the hardware sync cables (3.5mm audio cables), starting from the front camera's rear port marked "Out" and going into an adjacent camera's port marked "In". With an 8 camera setup, you will use 7 cables total, as the chain does not need to loop back to the first camera.

Each included USB-C cable can be used with an extender as needed and should be plugged into a port on a USB PCIe expansion card, avoiding the USB ports on your motherboard. Note that for every 4 cameras, we recommend a StarTech USB 3.0 PCIe card with at least 4 dedicated 5 Gbps channels.
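To make the bandwidth constraint concrete, here is a rough, back-of-envelope sketch. The depth-mode numbers follow the Azure Kinect spec; the MJPEG color figure is an assumption, since compressed sizes vary with scene content.

    # Back-of-envelope USB bandwidth per Azure Kinect (approximate).
    DEPTH_W, DEPTH_H = 640, 576   # NFOV unbinned depth mode
    DEPTH_BYTES_PER_PX = 2        # 16-bit depth
    FPS = 30

    depth_mb_s = DEPTH_W * DEPTH_H * DEPTH_BYTES_PER_PX * FPS / 1e6  # ~22 MB/s
    mjpeg_mb_s = 25.0             # assumed average compressed color stream

    per_camera_mb_s = depth_mb_s + mjpeg_mb_s
    channel_mb_s = 5e9 / 8 / 1e6  # one dedicated 5 Gbps channel, before protocol overhead

    print(f"~{per_camera_mb_s:.0f} MB/s per camera vs ~{channel_mb_s:.0f} MB/s per channel")
    # One camera fits comfortably in a dedicated channel; problems start when
    # several cameras share a single motherboard USB controller.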

The Azure Kinects should be used with their external power adapters; use extension cords as needed.

Computer Setup

Confirm your hardware meets the required specifications listed here.

Capture Suite requires the Microsoft Visual C++ Redistributable, which is not bundled with its installer. Download it here.

We also require the Azure Kinect SDK, available here.

Ensure your Azure Kinects are up-to-date with the latest firmware. Instructions on how to do this can be found here.

Wardrobe

When selecting a wardrobe for your subject, avoid shiny, plastic, or reflective items, as well as black cloth that absorbs infrared light. These materials will not appear properly in the capture.

Starting the Capture Suite

Licensing

Capture Suite requires a license key. We support online and offline activation. For online activation, paste the key into the license entry field when prompted. For offline activation, launch the Capture Suite, enter the license key you were given, and select "Offline Activation Request". You will be prompted to save a .txt file. Send that file to Soar at licensing@streamsoar.com, and we will send back a .dat file. Return to Capture Suite and click "Activate Offline License"; you will be prompted to load the .dat file, and the license will activate.

Main Screen

When launching the Capture Suite, all of your connected cameras should show up on the camera tab bar at the very top of the window. The camera tabs are color coded.

  • Green: Calibrated
  • Yellow: Calibrating
  • Red: Not Calibrated

These tabs will also show notifications if a color or depth camera is disabled or if the camera's temperature is out of the acceptable range. They can be reordered, but the order does not persist across Capture Suite launches. If you are missing a camera that is connected, click the button marked "Refresh" found on the capture tab. If the camera is plugged in but still does not appear, it is not communicating properly with your computer; you may have a USB bandwidth issue or a faulty USB extension cable.

Clicking "Log", found on the capture tab, will open a window flagging potential issues as you use the Capture Suite.

Calibration

Before you can record content, you need to calibrate your cameras. Ideally, this process is done each time you use the Capture Suite. Even if cameras do not move, you might find calibration can drift a little after people repeatedly walk in the area.

Calibration in this context refers to the process of computing the camera extrinsics, or where they are in space relative to the calibration cube — including how they're oriented.

Things to Check Before Calibrating

  • Are lens flares visible on the color camera feed? If so, adjust the cameras and lights so that they are not.
  • Are the cameras more or less vertical? Straighten them if not.
  • Are you in a room with a lot of fluorescent lighting? If so, turn the lights off and use LEDs. If that's not possible, turn off as many as you can; calibrating in dim lighting is OK.

Performing the Calibration

Enabling Hardware Sync and Setting Delay

We recommend using hardware sync, which dramatically lowers depth noise and results in a more accurate calibration.

To use hardware sync, first ensure that your hardware sync cables are wired correctly. Your primary camera should have a 3.5mm sync cable inserted into only the sync out port. Your last subordinate camera should have a cable inserted into only the sync in port. Every other camera in the chain should have a 3.5mm sync cable inserted into both the sync in and sync out ports. To diagnose potential sync issues, check out the tips and tricks section at the bottom of this documentation.
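If it helps to sanity-check your wiring, here is a minimal sketch (camera names are hypothetical) that prints which sync ports should be cabled for each position in the chain:

    def sync_wiring(cameras):
        """Print the expected 3.5mm sync-cable wiring for a daisy chain.

        cameras: list of camera names, ordered from primary to last subordinate.
        """
        for i, cam in enumerate(cameras):
            uses_in = i > 0                    # every camera except the primary
            uses_out = i < len(cameras) - 1    # every camera except the last
            ports = [p for p, used in (("In", uses_in), ("Out", uses_out)) if used]
            print(f"{cam}: {' + '.join(ports)}")
        print(f"Total cables: {len(cameras) - 1}")

    # Hypothetical 8-camera chain: 7 cables; the primary uses Out only,
    # the last subordinate uses In only, and every other camera uses both.
    sync_wiring([f"cam{i}" for i in range(1, 9)])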

If you're using hardware sync, you must also set the "delay from primary" setting for each camera which offsets the emissions and depth exposure to avoid cameras interfering with one another. "Delay from primary" is found in the color controls section (within the color section) inside each camera tab.

Delay From Primary

Soar recommends an offset of 160 microseconds. Your primary camera should be set to 0, the next camera in your sync chain to 160, the following camera to 320, and so on. The picture above shows the second camera in the sync chain (the first camera after the primary); as you can see, it is set to 160. After setting these values, save your settings in the profile section on the capture tab.
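As a quick sketch of that arithmetic, the snippet below generates the recommended schedule and checks it against the per-mode limits described later in the Color Controls section:

    # Recommended "delay from primary" schedule: each camera in the sync
    # chain is offset 160 microseconds from the previous one.
    STEP_US = 160
    MAX_DELAY_US = 1450        # most modes; 2390 for the 1024 x 1024 depth mode

    def delay_schedule(num_cameras, step=STEP_US, max_delay=MAX_DELAY_US):
        delays = [i * step for i in range(num_cameras)]
        if delays[-1] > max_delay:
            raise ValueError(
                f"Last camera needs {delays[-1]} us, above the {max_delay} us limit; "
                "use a smaller step or fewer cameras in the chain."
            )
        return delays

    print(delay_schedule(8))   # [0, 160, 320, 480, 640, 800, 960, 1120]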

Hardware Sync

Use the "Enable hardware sync" checkbox on the capture tab to turn it on. Confirm hardware sync is running correctly by inspecting the infrared view for each camera tab. You should not see large pulses of light from any camera; such pulses indicate interference (see image below).

Interference

Now you are ready to start calibrating. Head to the calibration section on the capture tab. You must accurately measure your calibration cube and input the marker width, preferably in millimeters. Adaptive thresholding can be useful if you are calibrating in a brightly lit or darker environment. Keeping the adaptive threshold value at 0 is ideal for most use cases; in brighter environments you may want a negative value if your calibration is not yielding quality results, and in darker environments a positive value. The "only take depth from light areas" setting may provide a boost in calibration results as well.
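Capture Suite's adaptive threshold is internal to the app, but if the sign convention is unclear, the general idea can be illustrated with OpenCV's adaptiveThreshold, where a constant offset shifts the locally computed threshold. The mapping to Capture Suite's setting is an assumption, and the image file name below is hypothetical.

    # Illustration only: a constant offset shifts the local threshold.
    # A positive offset lowers the bar, so dimly lit marker squares still
    # register as light; a negative offset raises it, so only strongly lit
    # areas pass.
    import cv2

    gray = cv2.imread("marker_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    assert gray is not None, "replace with a real image path"

    for offset in (-10, 0, 10):
        binary = cv2.adaptiveThreshold(
            gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, offset
        )
        cv2.imwrite(f"threshold_{offset}.png", binary)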

Calibration

Click "calibrate all cameras" to start calibrating. This process will take about one minute if your "required samples" setting is set to 20. Increasing this value may result in a slightly more accurate calibration, but will take longer. If a camera tab is yellow, this means that it is currently calibrating. Once all tabs turn green, you can click the preview button to see your calibration. During calibration, it may be beneficial to view the color feeds of each camera to confirm our software is detecting the marker by noting an outline around it.

After calibrating, click the preview button to see your calibrated content.

Accessing the Settings and Calibration Data

Settings for Capture Suite are stored in %appdata%/Soar as JSON. AppData is a hidden folder, so type that path (%appdata%/Soar) in Windows File Explorer to easily access it. Here, you will find Capture Suite settings, calibration extrinsics, calibration pinhole intrinsics, world view projections, the color info structure, and the depth info structure.
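For example, here is a small sketch for poking around that folder. The individual file names inside it vary, so this just lists whatever JSON it finds:

    # List the Capture Suite JSON files stored under %appdata%/Soar.
    import json
    import os
    from pathlib import Path

    settings_dir = Path(os.environ["APPDATA"]) / "Soar"   # %appdata%/Soar

    for path in sorted(settings_dir.glob("*.json")):
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        print(f"{path.name}: {len(data)} top-level entries")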

Camera Configuration

Top Of Camera Screen

Within a camera tab, you can access and modify a variety of controls. To start, you can give a camera a name other than its serial number; the serial number remains visible next to the camera name.

You can switch between the color, depth, and infrared views. You can also calibrate a specific camera individually rather than calibrating all cameras simultaneously from the calibration section on the capture tab. While a camera is calibrating, a cancel button appears next to the calibrate camera button so that you can stop the calibration at any time.

Color


Within the color section you can enable and disable the camera as well as change its resolution. While each camera can have a unique resolution, if one camera's resolution is capped at a certain FPS, all cameras will be capped to that FPS. Select "apply to all" prior to making a selection to apply the resolution setting to all connected cameras; the same applies when enabling or disabling cameras. Ensure Hardware JPEG Decoding is enabled.
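In other words, the effective capture rate is the minimum FPS cap across all cameras. A minimal sketch, using the Azure Kinect color-mode caps (per the Azure Kinect spec, the 4096 x 3072 mode tops out at 15 FPS while the other color modes run at 30 FPS):

    # Effective capture rate = the lowest FPS cap among connected cameras.
    COLOR_FPS_CAP = {
        "1280x720": 30, "1920x1080": 30, "2560x1440": 30,
        "2048x1536": 30, "3840x2160": 30, "4096x3072": 15,
    }

    def effective_fps(resolutions):
        return min(COLOR_FPS_CAP[r] for r in resolutions)

    # One camera at 4096x3072 drags every camera down to 15 FPS:
    print(effective_fps(["1920x1080"] * 7 + ["4096x3072"]))  # 15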

Color Controls


The color controls live inside the color section. These settings can be applied to all connected cameras, provided you select apply to all prior to modifying them. Soar recommends using manual exposure and manual white balance to help match the colors between cameras. Unless otherwise noted, the default settings will be fine for most environments. For details on the "Delay from Primary" setting, see the section "Enabling Hardware Sync and Setting Delay" above.

  • Post-processing: a separate pass applied after the color is set on the Azure Kinects, helpful for grading captures after they've been recorded.
  • Exposure (with auto option): sets exposure time and gain based on the lighting conditions of the environment. When auto is enabled, the exposure time and gain values are set for you. The unit of measurement is microseconds. Soar recommends setting this to manual (prior to hardware sync and calibration) and potentially fine-tuning it based on recording requirements.
  • Gain: in combination with exposure, gain increases the exposure of the sensor color video. This setting is hidden when auto exposure is enabled.
  • White Balance (with auto option): sets the color temperature based on the lighting conditions of the environment. Soar recommends setting this to manual (prior to hardware sync and calibration) and fine-tuning it for your environment.
  • Brightness: adjusts the overall brightness of the sensor color video.
  • Contrast: adjusts the overall contrast of the sensor color video.
  • Saturation: adjusts the overall saturation of the sensor color video.
  • Sharpness: accentuates fine color detail in the sensor color video.
  • Backlight Compensation: can be enabled if you are shooting in a low or inconsistently lit environment.
  • Powerline Frequency: prevents the flickering or banding seen when the video is not compatible with the AC frequency of the capture space; 60Hz is most common in North America, while most other countries use 50Hz.
  • Delay from Primary: sets the emission and depth exposure delay for each subordinate camera; the unit of measurement is microseconds. This is used to prevent interference between the cameras, and proper configuration is required to use hardware sync. Soar recommends delay multiples of 160 for your sync chain (primary camera is 0, second camera is 160, third camera is 320, etc.). The maximum delay from primary setting is 1450 microseconds for most modes and 2390 microseconds for the 1024 x 1024 depth mode.

Depth


The depth section, much like the color section, allows you to enable/disable a camera and modify its resolution. While each camera can have a unique resolution, if one camera has a resolution that is capped at a certain FPS, all cameras will be capped to that FPS. These settings can be applied to all connected cameras, provided you select apply to all prior to making your change.

The Capture Suite has a hole-filling feature which allows you to improve your capture quality. Basic is the default option. This setting can be applied to all cameras if you select apply to all prior to selecting the option.

"Direction" allows you to fine-tune your hole filling. Selecting near camera will have the foreground spread out, whereas selecting far camera will have the background spread out; far camera is the default setting.

Camera Geometry


Inside the camera geometry section you will be able to modify the near plane and far plane, which are measured in meters.

The near plane is the minimum distance at which the camera will capture; anything closer than this value will be discarded. The default value is 0.050.

The far plane is the maximum distance at which the camera will capture; anything past this value will be discarded. The default value is 10.000.
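Conceptually, the two planes define the interval of depths that survive; a minimal sketch:

    # Depth samples outside [near, far] (in meters) are discarded.
    NEAR_PLANE = 0.050   # default near plane
    FAR_PLANE = 10.000   # default far plane

    def keep_sample(depth_m, near=NEAR_PLANE, far=FAR_PLANE):
        return near <= depth_m <= far

    print(keep_sample(0.02))   # False: closer than the near plane
    print(keep_sample(1.80))   # True: inside the capture volume
    print(keep_sample(12.0))   # False: beyond the far plane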

Device Statistics


The device statistics section has pertinent information about your connected camera. The most important bit of information in this section is regarding hardware sync. Your sync input/output, sync status, and sync state for the camera will be shown here.

Workflows

Audio


In order to capture audio, you must connect a USB interface and microphone to the PC. Within the audio section, select the capture device that corresponds to the USB interface. Then, select enable audio capture. Once you have headphones connected, ensure they are selected in the playback device drop down and select start monitoring. You should be able to hear the audio. The input gain slider allows you to modify the audio level for the recording. The preview gain slider acts only as a volume slider when previewing your audio — it does not impact the actual recorded audio.

If recording raw, confirm you have selected raw audio in the output section. You can select a raw capture format within the audio section: IEEE Float or PCM (16 or 32 bit). When importing the raw file after recording, import your SRD file normally and the corresponding WAV file will be brought in. You can also import custom WAV files recorded externally.
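If you want to confirm which format was actually written, a quick sketch using the third-party soundfile package (pip install soundfile; the file name here is hypothetical):

    # Inspect the WAV written alongside a raw capture.
    import soundfile as sf

    info = sf.info("capture_01642615614.wav")
    print(info.samplerate, info.channels, info.subtype)
    # Expect subtype 'FLOAT' for IEEE Float, or 'PCM_16' / 'PCM_32' for PCM.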

Checking "enable 3D audio playback" sets a flag to default to using spacialized audio features when playing back with our Unity SDK.

In order to route audio from other mixing programs into the Capture Suite, you must install Jack Audio. The Jack Audio workflow is as follows:

  • Launch Jack Audio Connection Kit.
  • Start Jack (should say started).
  • Open settings and ensure sample rate is 44100.
  • Ensure interface is default.
  • Confirm Jack Audio is still running.
  • Set up Jack Audio in your mixing program, then launch Capture Suite.
  • Within the Capture Suite, enable audio capture, select Jack Audio as capture device, and start monitoring.
  • Return to Jack Audio.
  • Open patch bay.
  • Add input.
  • Select the Capture Suite as Client.
  • Add plug.
  • Add output (your mixing program).
  • Add plug.
  • Save.
  • Open up graph.
  • Ensure your mixing program has its output linked to Capture Suite Jack Audio input.
  • Return to Capture Suite.
  • Play your mixing app or get ready to record - you should hear audio in your headphones if you are monitoring.
  • Select preview and then you are ready to record.

Usage with Green Screen

Chroma Key

Soar does not require or recommend a green screen in order to capture volumetric content. In fact, it could potentially hinder your output quality. If you are using a green screen for other reasons, Capture Suite has a chroma key setting found within the volumization rendering section to aid in removing any green spill on the mesh. This setting will allow you to choose a color that best represents your green screen.

  • Gain: how severely the key de-weights colors by their chroma distance; also controls de-saturation.
  • Bias: the base cut-off level; how far a color must be from the selected chroma key before it is kept.

These settings can be set both prior to recording and on playback using our Unity SDK. If recording raw, they can also be set after capture. The sketch below illustrates how a gain/bias pair can interact.
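As a toy illustration only (not Soar's actual keying math), here is one common way a gain/bias pair maps chroma distance to a keep/cut weight: bias sets the distance below which a pixel is treated as green screen, and gain controls how quickly pixels past that distance are kept.

    # Toy model, not Soar's actual formula: map a pixel's chroma distance
    # from the key color to a 0..1 "keep" weight.
    def key_weight(chroma_distance, gain, bias):
        """0.0 = fully keyed out (matches the screen), 1.0 = fully kept."""
        return max(0.0, min(1.0, (chroma_distance - bias) * gain))

    for distance in (0.05, 0.15, 0.40):
        print(distance, key_weight(distance, gain=4.0, bias=0.10))
    # Raising bias requires a pixel to sit farther from the key color before
    # it is kept; raising gain sharpens the transition between keyed and kept.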

Saving a Compressed Capture

Compressed Capture

If you are happy with your setup and you want to record a compressed capture, head to the output section. Type in a capture name, select a capture path, then check only the "compressed capture" checkbox. The default settings here should suffice in most cases. Click "preview" then "record". When you are ready to finish recording, click "stop". Your compressed capture will be saved in the specified directory, ready for playback in Capture Suite or our Unity SDK.

Saving a Raw Capture

Raw Capture

If you would like to record a raw capture, head to the output section and select only the "raw capture" checkbox. Note that this raw capture will have a very large file size but provide flexibility to re-process after the content has been shot. You can also export to compressed from raw. Click "preview" then "record". When you are ready to finish recording, click "stop".

Saving a Textured Mesh

In order to generate a mesh export (OBJ & PLY), you must first record a raw capture, then select "mesh output" before loading the file back in.

Playing Back a Compressed Capture

Compressed Capture Files

Head to the folder that has your recording. You will note a variety of files, including an mp4, an sgv, an m4a (if you recorded audio), and a handful of m3u8 files. If you want to load a recording and play it back, select "load recording" inside the Capture Suite. Select the desired master m3u8 file, then click "play".

Capture Suite will append seconds since epoch 1/1/2020 midnight GMT to the filename. This is done so that captures will never be overwritten.
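If you need to compute or parse that suffix yourself, a minimal sketch (note that file names on disk may show the value zero-padded, as in the streaming example later in this documentation):

    # Seconds since midnight GMT on 1/1/2020, the suffix appended to capture names.
    from datetime import datetime, timezone

    EPOCH_2020 = datetime(2020, 1, 1, tzinfo=timezone.utc)

    def capture_suffix(now=None):
        now = now or datetime.now(timezone.utc)
        return int((now - EPOCH_2020).total_seconds())

    print(f"mycapture_{capture_suffix()}_master.m3u8")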

Playback

Your captures are ready to be imported into Unity for playback on a variety of devices. Check out the Unity Package documentation to find out more.

Playing Back a Raw Capture

Raw Capture File

Head to the folder that has your recording. You will note one .srd file.

Import Raw Capture

In the Capture Suite, click into the "import raw capture" section. You can load the .srd file here. Prior to loading your capture, note the settings in this area. Select "display only" if you just want to watch your raw content. "Process with display" will process the capture per your export settings upon load. If using "process with display", be sure to select the desired file output within the output section: compressed capture, mesh (OBJ/PLY sequence), or MVE. Check "start paused" if you would like to adjust your capture prior to export.

Raw playback mode allows you to adjust settings similarly to when you originally made the capture, including setting the capture's bounding box and choosing start and end points. Ensure you resubmit your frame whenever you make a change to see the difference visually.

The processing for both meshes (OBJ/PLY sequences) and MVE may take a little bit of time. Capture Suite may look like it is frozen, but don’t worry - it's churning.

Streaming Locally

Local Stream

The firewall on your capture PC should be disabled to allow connections for a live stream on your local network.

In order to stream on your local network to a device, set a capture name and capture path, then select both local server and compressed capture within the output section. Then select "preview and record". On the device that you want to stream to, enter the streaming URL. The streaming URL starts with http://, followed by the local IP address of the capture PC (found by going to network connections on the PC, clicking into adapter settings, and viewing the IPv4 address), and then :port/capture-name_seconds-since-epoch_master.m3u8.

The full capture name, appended by seconds since epoch 1/1/2020 midnight GMT, is found within file explorer at the capture path selected in the output section as soon as you are recording/streaming. Ex: http://192.168.8.67:8080/localstream1_01642615614_master.m3u8.
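As a quick reachability check from another machine on the network (using the documentation's example URL), you can fetch the master playlist; a valid HLS master playlist begins with #EXTM3U:

    # Fetch the master playlist to confirm the local stream is reachable.
    import urllib.request

    url = "http://192.168.8.67:8080/localstream1_01642615614_master.m3u8"
    with urllib.request.urlopen(url, timeout=5) as resp:
        playlist = resp.read().decode("utf-8")
    print(playlist.splitlines()[0])   # should print: #EXTM3U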


Tips and Tricks

Artifacts

  • If you notice artifacts such as the ceiling within the bounding box after adjusting your bounding box settings, you may need to increase your minimum coverage setting.

Audio

  • "Audio Failed to Open WMF Audio Device" - this issue is most likely caused by other software taking control of your connected audio device. Ensure no other program is using it, or disable exclusive mode: right-click the Speaker icon on the Windows taskbar and select Open Sound settings. Click Device properties underneath your chosen output device, then click Additional device properties under Related settings. In the properties window, open the Advanced tab and uncheck "Allow applications to take exclusive control of this device". Click Apply, then OK.

  • Be sure to test and verify your audio path outside of the Capture Suite.

Calibration

  • Calibrate Floor - if there is an error with floor calibration, try lowering the floor (increasing the min y) then trying again.

  • Environment

    • There should be no IR reflections.

      • Check infrared views of all cameras and ensure you do not see pulses of light coming off of areas such as glass, television screens, and picture frames.
    • If you have multiple calibration cubes, ensure that only one can be seen in the camera views. Another cube in the scene, even if it is 20+ feet away, can potentially be picked up and will yield an incorrect calibration.

    • Be wary of cameras pointing at room corners and at shiny objects as this can lead to multi-path interference.

    • Watch out for lights in the direct background of the cube, either in the ceiling or on the ground.

    • Keep an eye out for invalid pixels in your environment - black pixels in the depth view of a camera. Pixels become invalid due to insufficient dynamic range, multi-path interference, or fast motion, and they cause bad depth and depth noise. If your floor suffers from invalid pixels, consider putting down a carpet that is light and matte; a floor that reads too dark in IR will crush the dynamic range and add noise.

  • Lowering the stand on which the calibration cube is placed may yield better results.

  • Software

    • If your content looks unnaturally skinny, one of the cameras may be mis-calibrated. Enable the point cloud view to find the camera that is the culprit.
    • Ensure the marker outlines (viewable during calibration in the color camera feed for each camera) during calibration do not jump around or extend past the marker.

Capture Room

  • Temperature - we recommend a temperature-controlled room, with quality airflow, where the temperature is set between 50 and 75 degrees Fahrenheit. Temperatures toward the upper bound, and higher, have been shown to reduce calibration and capture quality.

  • If you see a lot of invalid pixels in your depth view in the background, this may cause worse calibrations and captures. Enshrouding your capture area with curtains should mitigate this issue.

Corrupt Video

  • Reboot the Camera - if this fixes your corruption issue, then the camera was in a bad state.

  • Reboot the PC - if this fixes your corruption issue, then it was caused by a random Windows issue.

  • Isolate the camera having issues: connect only one camera to the PC and verify whether you still receive the error. If this fixes your corruption issue, you may have a USB bandwidth issue, which is solved by using an external USB PCIe card as referenced in the hardware recommendations.

  • Remove the USB extension cable and test with a direct connection - if this fixes your corruption issue, then it was caused by a faulty USB extension cable.

  • Ensure your external USB PCIe cards are powered by SATA cables.

  • Confirm your PC has an appropriately rated power supply and you are not under-powering the computer.

  • Confirm your hardware specs are included in the recommended list from Soar.

Export

  • If exporting to a mesh is very slow, ensure there is no floor in your scene. Using a higher volumization resolution will also result in slower exports.

Hardware

  • If you encounter rendering speed issues, you may have an issue with your RAM clock speeds. Use CPU-Z to look at memory settings; if you're running in single channel with the lowest possible memory clock, you're going to encounter issues. To change this, you must do so within the BIOS.

  • If there is a flicker when viewing content in preview or a flicker in the color camera feed, open the log. If there is a corrupt JPEG error, this might mean the USB extension cable for that camera needs to be replaced. This can also mean your powerline frequency setting is not correct; 60Hz for North America, 50Hz for most other countries.

  • USB Extension Cables

    • Lack of strain relief, a carpeted room, or an exposed cable connector can damage a cable and yield corrupt JPEG errors in the log.
    • Be mindful of electrostatic discharge, especially if you are handling metal microphone stands while pulling out, or putting your hand near, cable plugs.
  • If not all of your connected cameras appear in the Capture Suite, and this persists after clicking the refresh button, you most likely have a USB bandwidth issue. Consult the recommended hardware and ensure you are using a USB 3.0 PCIe card.

Hardware Sync

  • Check infrared views to ensure there are no pulses of light coming from any cameras.

  • Ensure that delay from primary is set for each camera in accordance with documentation.

  • If camera feeds stop when enabling hardware sync, it could be one of a few things. First, ensure hardware sync is wired correctly for each camera and that the cameras are all updated to the latest Azure Kinect firmware. There could be a bad USB extender, which is typically noted by corrupt JPEGs in the log. It can also mean that there is a USB bandwidth issue. Finally, try unplugging your cameras and re-plugging them back into the computer one by one. Sometimes a PC shutdown will help as well.

  • If hardware sync is enabled and you still notice pulses of light from cameras in the infrared view, increase your delay from primary setting until the pulses of light cease.

Lighting

  • Don’t use fluorescent lighting - use LED lighting instead.

  • Confirm there are no lens flares on any of the cameras.

  • Check the camera stands in infrared mode and ensure there are no bright spots on them which can lead to multi-path interference.

  • If calibrating in a very bright space or a very dark space, consider using adaptive thresholding.

    • Just enabling this setting, and keeping it at 0, may do the trick.

      • In dark areas, you might want to use a positive value to make the light areas come through.
      • In light areas, you might want to use a negative value to make the darker areas come through.

Preview Window

  • If your content does not appear, and you are sure it's not a hardware issue (your camera feeds are not frozen), check whether your volumization resolution is set to 256 and the maximum vertices to 262144. If your volumization resolution is higher, you will need to increase your maximum vertices until your content reappears (see the sketch after this list).

  • If you notice lots of noise (ripples) on the calibration cube, this is due to camera placement. Odds are that one or more of your high cameras can see another marker face at a glancing angle; this may be very slight. Ensuring each high camera faces its marker face head-on should eliminate this issue.
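For the maximum-vertices adjustment mentioned in the first tip above, a rough heuristic (an assumption, not a documented formula) is to scale the default 262144 quadratically with resolution, since the extracted surface grows roughly with the square of the grid resolution:

    # Heuristic starting point only: scale the default max-vertices budget
    # quadratically with volumization resolution. Not a documented formula.
    DEFAULT_RES, DEFAULT_MAX_VERTS = 256, 262_144

    def suggested_max_vertices(resolution):
        return int(DEFAULT_MAX_VERTS * (resolution / DEFAULT_RES) ** 2)

    print(suggested_max_vertices(512))   # 1048576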

Rendering Speed

  • If rendering is slow, try decreasing the maximum queued frames down to 2 or 3.