..still, I'm not seeing anything on how the interaction (selection) is achieved.
A combination of tracking the focus and movement of the eyes?
Maybe? Still, I interact with my world through a bit more than moving my eyeballs around - and I'd expect the same of augmented reality (..as thankfully I'm not a mute quadriplegic).
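
If it really is gaze-driven, the usual trick is dwell selection: rest your gaze on a target long enough and it "clicks". A minimal sketch of that idea, assuming a hypothetical eye-tracker API (get_gazed_element and on_select are illustrative names here, not any real headset SDK):

    import time

    DWELL_SECONDS = 0.8  # how long the gaze must rest on one element to "click"

    def run_dwell_loop(get_gazed_element, on_select, poll_hz=60):
        """Fire on_select(element) once the gaze has rested on the same
        element for DWELL_SECONDS without wandering off."""
        current = None
        dwell_start = 0.0
        fired = False
        while True:
            element = get_gazed_element()  # None if gaze hits nothing selectable
            now = time.monotonic()
            if element is not current:
                # Gaze moved to a new target: restart the dwell timer.
                current, dwell_start, fired = element, now, False
            elif element is not None and not fired and now - dwell_start >= DWELL_SECONDS:
                on_select(element)
                fired = True  # don't re-trigger until the gaze leaves and returns
            time.sleep(1.0 / poll_hz)
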
Their demo (IMO) should have shown both voice and hand selection: perhaps voice for most of the menu choices, and hands to rotate the models as desired. I might even have "thrown in" moving around the models (..like the paint program for the Vive), and perhaps had the menus "follow" the user by rotating horizontally on their axis to face them (see the sketch below).
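
That menu-following behavior is basically yaw-only billboarding: rotate the menu about the vertical axis toward the user, and ignore pitch/roll so it stays upright rather than tilting to track head height. A minimal sketch of the math, assuming +Y-up coordinates (the function name and tuple layout are mine, not from any particular engine):

    import math

    def menu_yaw_toward_user(menu_pos, user_pos):
        """Return the yaw (rotation about the vertical Y axis, in radians)
        that turns a menu at menu_pos to face a user at user_pos.
        Positions are (x, y, z) tuples; yaw is measured from +Z toward +X."""
        dx = user_pos[0] - menu_pos[0]
        dz = user_pos[2] - menu_pos[2]
        # atan2 on the horizontal (x, z) components gives the heading toward
        # the user; the vertical (y) difference is deliberately ignored so
        # the menu never pitches up or down.
        return math.atan2(dx, dz)

    # e.g. a menu at the origin with the user at (1, 0, 1) yaws 45 degrees:
    print(math.degrees(menu_yaw_toward_user((0, 0, 0), (1, 0, 1))))  # -> 45.0

Apply that yaw to the menu every frame (or ease toward it, to avoid jitter) and the menu always faces you as you walk around the model.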