MeTACAST: Target- and Context-aware Spatial Selection in VR
Description:
We propose three novel spatial data selection techniques for particle data in VR visualization environments. They are designed to be target- and context-aware and to be suitable for a wide range of data features and complex scenarios. Each technique addresses a particular selection intent: the selection of consecutive dense regions, the selection of filament-like structures, and the selection of clusters; all three support post-selection threshold adjustment. The techniques allow users to precisely select regions of space for further exploration using simple and approximate 3D pointing, brushing, or drawing input, i.e., flexible point- or path-based input, without being limited by 3D occlusion, non-homogeneous feature density, or complex data shapes. We evaluate the new techniques in a controlled experiment and compare them with a baseline method, a region-based 3D painting selection. Our results indicate that our techniques are effective across a wide range of scenarios and allow users to select data based on their comprehension of crucial features. Furthermore, we analyze the attributes, requirements, and strategies of our spatial selection methods and compare them with existing state-of-the-art selection methods for handling diverse data features and situations. Based on this analysis, we provide guidelines for choosing the most suitable 3D spatial selection technique based on the interaction environment, the given data characteristics, or the need for interactive post-selection threshold adjustment.
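To illustrate the general idea only, and not the actual MeTACAST algorithms from the paper or the code in the repository linked below, the following Python sketch grows a selection from an approximate 3D pointing position through neighboring particles whose estimated local density exceeds a threshold; re-running it with a different threshold value mimics post-selection threshold adjustment. All names and parameters (select_dense_region, radius, density_threshold) are hypothetical.

# Illustrative sketch only -- NOT the paper's MeTACAST algorithm. It shows the
# general idea behind density- and context-aware point-based selection: grow a
# selection from an approximate 3D pointing location through neighboring
# particles whose local density exceeds an adjustable threshold.
# All names (select_dense_region, radius, density_threshold) are hypothetical.
import numpy as np
from scipy.spatial import cKDTree


def select_dense_region(points, seed_pos, radius=0.05, density_threshold=10):
    """Return indices of particles in the connected dense region around seed_pos.

    points            -- (N, 3) array of particle positions
    seed_pos          -- approximate 3D pointing position in data space
    radius            -- neighborhood radius used to estimate local density
    density_threshold -- minimum neighbor count for a particle to count as dense;
                         re-running with a new value mimics post-selection adjustment
    """
    tree = cKDTree(points)
    # Local density estimate: number of neighbors within `radius`.
    neighbor_lists = tree.query_ball_point(points, r=radius)
    density = np.array([len(n) for n in neighbor_lists])
    dense = density >= density_threshold

    # Start from the particle closest to the pointing position.
    _, seed_idx = tree.query(seed_pos)
    if not dense[seed_idx]:
        return np.array([], dtype=int)  # pointed at a sparse area: empty selection

    # Flood fill (region growing) through dense particles only.
    selected = np.zeros(len(points), dtype=bool)
    stack = [seed_idx]
    selected[seed_idx] = True
    while stack:
        i = stack.pop()
        for j in neighbor_lists[i]:
            if dense[j] and not selected[j]:
                selected[j] = True
                stack.append(j)
    return np.flatnonzero(selected)


# Example: adjusting the threshold after the selection gesture simply re-runs
# the region growing with the stored seed position and the new threshold.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(5000, 3)) * 0.2          # one dense blob
    noise = rng.uniform(-1, 1, size=(1000, 3))        # sparse background
    data = np.vstack([cloud, noise])
    sel = select_dense_region(data, seed_pos=np.zeros(3), radius=0.05, density_threshold=8)
    print(f"selected {len(sel)} particles")

Because such a selection propagates only through connected dense neighborhoods, approximate pointing input does not need to outline the target exactly; this is the general principle behind context-aware selection, of which the paper's techniques are far more elaborate instances.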
Paper download: (33.2 MB)
Study materials and data:
Our study materials and data can be found in the following OSF repository: osf.io/dvj9n.
Software:
The software is available at github.com/LixiangZhao98/MeTACAST.
Videos:
- paper video
- pre-conference presentation video for IEEE
- 25-second paper preview for IEEE
- alternative, more “traditional” paper fast-forward for IEEE
Get the videos:
- watch the paper video on YouTube
- download the paper video (MPEG4, 152 MB)
- watch the pre-conference presentation video on YouTube
- download the pre-conference presentation video (MPEG4, 391 MB)
- watch the preview video on YouTube
- download the preview video (MPEG4, 26.6 MB)
- watch the fast-forward video on YouTube
- download the fast-forward video (MPEG4, 41.6 MB)
Pictures:
The images from the paper are available under a CC-BY 4.0 license; see the license statement at the end of the paper.
Cross-references:
This paper relates to several of our previous publications:
- our Cloud-Lasso selection technique on 2D projections
- our CAST selection techniques on 2D projections
Reference:
This work was done at, and in collaboration with, the Department of Computing of Xi’an Jiaotong-Liverpool University, China.