MeTACAST: Target- and Context-aware Spatial Selection in VR

Description:

 

We propose three novel spatial data selection techniques for particle data in VR visualization environments. They are designed to be target- and context-aware and to suit a wide range of data features and complex scenarios. Each technique addresses a particular selection intent: the selection of consecutive dense regions, the selection of filament-like structures, and the selection of clusters; all of them facilitate post-selection threshold adjustment. Using flexible point- or path-based input (simple, approximate 3D pointing, brushing, or drawing), these techniques let users precisely select regions of space for further exploration, without being limited by 3D occlusion, non-homogeneous feature density, or complex data shapes. We evaluate the new techniques in a controlled experiment against the Baseline method, a region-based 3D painting selection. Our results indicate that our techniques handle a wide range of scenarios effectively and allow users to select data based on their comprehension of the crucial features. Furthermore, we analyze the attributes, requirements, and strategies of our spatial selection methods and compare them with existing state-of-the-art selection methods for handling diverse data features and situations. Based on this analysis, we provide guidelines for choosing the most suitable 3D spatial selection technique depending on the interaction environment, the given data characteristics, and the need for interactive post-selection threshold adjustment.
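
To illustrate the general flavor of such target- and context-aware selection, here is a minimal, hypothetical Python sketch (using NumPy and SciPy, and entirely independent of the actual implementation in the repository linked below): it gathers the particles near a user-drawn 3D stroke, estimates each candidate's local density as context, and exposes an inexpensive post-selection threshold step that can be re-run interactively. All names and parameters here (select_along_stroke, refine, radius, quantile, and so on) are our own assumptions, not the paper's API.

import numpy as np
from scipy.spatial import cKDTree

def select_along_stroke(particles, stroke, radius=0.05, k=16):
    # Target term: gather every particle within `radius` of any stroke sample.
    tree = cKDTree(particles)
    candidates = set()
    for p in stroke:
        candidates.update(tree.query_ball_point(p, radius))
    idx = np.array(sorted(candidates), dtype=int)
    # Context term: estimate each candidate's local density as the inverse of
    # its mean distance to its k nearest neighbors (the first neighbor returned
    # by the query is the particle itself, so it is skipped).
    dist, _ = tree.query(particles[idx], k=k + 1)
    density = 1.0 / dist[:, 1:].mean(axis=1)
    return idx, density

def refine(idx, density, quantile=0.5):
    # Post-selection threshold adjustment: only this cheap step needs to be
    # re-run when the user tightens or loosens the selection.
    return idx[density >= np.quantile(density, quantile)]

# Hypothetical usage: a straight diagonal stroke through a random point cloud.
particles = np.random.rand(100_000, 3)
stroke = np.linspace([0.1, 0.1, 0.1], [0.9, 0.9, 0.9], num=50)
idx, density = select_along_stroke(particles, stroke)
selected = refine(idx, density, quantile=0.7)  # keep the densest 30% of candidates

Separating the density estimate from the threshold step reflects the abstract's point that users first cast an approximate selection and then adjust it interactively; the paper's actual techniques use considerably more elaborate target and context models for each selection intent.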

Paper download: (33.2 MB)

 

Study materials and data:

Our study materials and data can be found in the following OSF repository: osf.io/dvj9n.

 

Software:

The software is available at github.com/LixiangZhao98/MeTACAST.

Videos:

paper video:
pre-conference presentation video for IEEE VIS:
25-second paper preview for IEEE VIS:
alternative, more “traditional” paper fast-forward for IEEE VIS:

Pictures:

The images from the paper are available under a CC-BY 4.0 license; see the license statement at the end of the paper.

Cross-references:

This paper relates to several of our previous publications:

Reference:

Lixiang Zhao, Tobias Isenberg, Fuqi Xie, Hai-Ning Liang, and Lingyun Yu (2024) MeTACAST: Target- and Context-aware Spatial Selection in VR. IEEE Transactions on Visualization and Computer Graphics, 30(1):480–494, January 2024.

BibTeX entry:


@ARTICLE{Zhao:2024:MTC,
  author      = {Lixiang Zhao and Tobias Isenberg and Fuqi Xie and Hai-Ning Liang and Lingyun Yu},
  title       = {{MeTACAST}: Target- and Context-aware Spatial Selection in {VR}},
  journal     = {IEEE Transactions on Visualization and Computer Graphics},
  year        = {2024},
  volume      = {30},
  number      = {1},
  month       = jan,
  pages       = {480--494},
  doi         = {10.1109/TVCG.2023.3326517},
  doi_url     = {https://doi.org/10.1109/TVCG.2023.3326517},
  oa_hal_url  = {https://hal.science/hal-04196163},
  preprint    = {https://doi.org/10.48550/arXiv.2308.03616},
  osf_url     = {https://osf.io/dvj9n/},
  github_url  = {https://github.com/LixiangZhao98/MeTACAST},
  github_url2 = {https://github.com/LixiangZhao98/PointCloud-Visualization-Tool},
  github_url3 = {https://github.com/LixiangZhao98/MeTACAST-study},
  url         = {https://tobias.isenberg.cc/p/Zhao2024MTC},
  pdf         = {https://tobias.isenberg.cc/personal/papers/Zhao_2024_MTC.pdf},
}

This work was done at and in collaboration with the Department of Computing of Xi’an Jiaotong-Liverpool University, China.