MRL VR + Spatial Computing, Day 2

 

The path to componentization is becoming clearer.  I can see this becoming a Unity asset to kickstart Vive interactivity.  Just snap some components onto a few things and you'll be able to plug into a simple interaction pattern.
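As a rough sketch of what that "snap-on" pattern might look like (every name and the trigger setup here are my own guesses, not the actual asset): a marker component on anything interactable, plus a controller-side sensor that notices it.

```csharp
using UnityEngine;

// Drop this on any object that should respond to the focal point.
public class FocalInteractable : MonoBehaviour { }

// Drop this on the controller's focal point object
// (it needs a trigger collider and a kinematic Rigidbody for OnTrigger callbacks).
public class FocalPointSensor : MonoBehaviour
{
    // The interactable currently under the focal point, if any.
    public FocalInteractable Hovered { get; private set; }

    void OnTriggerEnter(Collider other)
    {
        var interactable = other.GetComponentInParent<FocalInteractable>();
        if (interactable != null) Hovered = interactable;
    }

    void OnTriggerExit(Collider other)
    {
        if (Hovered != null && other.GetComponentInParent<FocalInteractable>() == Hovered)
            Hovered = null;
    }
}
```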

Some observations:

I'm liking this "focal point" concept more and more as time goes on.  It's a powerful, simple idea that's easy to work with as both a developer and a user.  Conceptually it's rock-solid (so far), and the more I lean into it, the more I discover solutions that feel obvious in hindsight yet are still innovative.
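A minimal sketch of the single-focal-point idea as I read it, assuming the focal point is just a Transform parked at the controller tip and a grabbed object simply holds its pose relative to it (class and method names are placeholders, not the project's actual code):

```csharp
using UnityEngine;

public class FocalPointFollower : MonoBehaviour
{
    public Transform focalPoint;   // e.g. an empty child of the Vive controller

    Vector3 grabOffset;
    Quaternion grabRotation;

    public void Grab(Transform point)
    {
        focalPoint = point;
        // Cache this object's pose in the focal point's local space at grab time.
        grabOffset   = focalPoint.InverseTransformPoint(transform.position);
        grabRotation = Quaternion.Inverse(focalPoint.rotation) * transform.rotation;
    }

    public void Release() => focalPoint = null;

    void LateUpdate()
    {
        if (focalPoint == null) return;
        // Re-apply the cached relative pose every frame, so the object
        // rides along with the focal point.
        transform.position = focalPoint.TransformPoint(grabOffset);
        transform.rotation = focalPoint.rotation * grabRotation;
    }
}
```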

The ability to glide an object around against the normal of your focal point was totally accidental in the above code.  This effect will behave differently on non-cube geometry.  Again, though, even where the principles behind this interaction aren't coded out yet, it's pretty clear how they're supposed to work.
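One way that glide could be made deliberate, as a hedged sketch: constrain the grabbed object's motion to the plane whose normal is the normal of the face under the focal point (a cube face in my case). This version only handles flat faces; curved geometry would need a per-frame normal. All names are illustrative.

```csharp
using UnityEngine;

public class FocalPlaneGlide : MonoBehaviour
{
    public Transform focalPoint;               // assumed: driven by the controller
    public Vector3 planeNormal = Vector3.up;   // normal of the grabbed face

    Vector3 lastFocalPosition;

    void OnEnable()
    {
        if (focalPoint != null) lastFocalPosition = focalPoint.position;
    }

    void Update()
    {
        if (focalPoint == null) return;

        // Motion of the focal point since last frame...
        Vector3 delta = focalPoint.position - lastFocalPosition;
        lastFocalPosition = focalPoint.position;

        // ...flattened onto the plane perpendicular to the face normal,
        // so the object slides along the face instead of lifting off it.
        transform.position += Vector3.ProjectOnPlane(delta, planeNormal);
    }
}
```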

Conceptually, whipping the world around the user (instead of pretending to move the body around a world) is much less taxing on the proprioceptors.  No perceptual dissonance, which is nice.

Environment navigation with this method feels way more natural than the "teleportation" pattern of navigation.  After all, in the real world we always move through spaces by translation, not teleportation.  This method is hardly disorienting at all.
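Here's roughly what "move the world, not the body" could look like in code, assuming the whole environment lives under one root transform and a grip button drives the drag (the input bool is faked here; the real project would read the controller's grip state):

```csharp
using UnityEngine;

public class WorldDragLocomotion : MonoBehaviour
{
    public Transform worldRoot;     // parent of the whole environment
    public Transform controller;    // tracked controller transform
    public bool gripHeld;           // assumption: set from your controller input

    Vector3 lastControllerPosition;
    bool wasHeld;

    void Update()
    {
        if (controller == null || worldRoot == null) return;

        if (gripHeld)
        {
            // Drag the world by the controller's motion: pulling your hand back
            // slides the world past you, which reads as walking forward.
            if (wasHeld)
                worldRoot.position += controller.position - lastControllerPosition;
            lastControllerPosition = controller.position;
        }
        wasHeld = gripHeld;
    }
}
```

Because the user's rig never moves, there's no teleport cut; the space just translates continuously past them.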

Running into issues with rotation against two focal points.  I'm just doing a Unity Quaternion.LookRotation, which produces some bad results.  The problem is that two points leave one rotational degree of freedom unconstrained: the twist about the axis between them.  I plan on working around this by planting multiple focal points per controller.
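To make the problem concrete: the vector between the two focal points pins down only the "forward" direction, so a bare Quaternion.LookRotation picks an effectively arbitrary roll. The sketch below shows one common fix, feeding LookRotation an up vector from a third reference, which is roughly in the spirit of planting extra focal points per controller. Names are illustrative, not from the actual project.

```csharp
using UnityEngine;

public class TwoPointRotation : MonoBehaviour
{
    public Transform focalPointA;
    public Transform focalPointB;
    public Transform rollReference;   // e.g. the controller, or a third focal point

    void LateUpdate()
    {
        if (focalPointA == null || focalPointB == null) return;

        Vector3 forward = focalPointB.position - focalPointA.position;
        if (forward.sqrMagnitude < 1e-6f) return;   // points coincide; no direction

        // With the default world-up, the roll about the A->B axis is arbitrary
        // relative to the hand, which is what produces the "bad results".
        // Supplying an up vector from a third reference pins that last
        // degree of freedom.
        Vector3 up = rollReference != null ? rollReference.up : Vector3.up;
        transform.rotation = Quaternion.LookRotation(forward, up);
    }
}
```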

Thanks again to MRL, and also to Dave Tennent for swinging by (and helping w/ a much-needed refactor).

I'm pretty new to Unity with regard to source control. Next time I'm in the lab I'll probably set up a proper GitHub repo. For now, though, here's the main portion of the code.