This post is about the recent trend of projection in driving HMI. I want to start with NAVDY, a start-up that is challenging distracted driving with an add-on product. The idea centers on projecting information onto a transparent screen within the driver's view of the road. In doing so, Navdy hopes to keep the driver's attention on the road, using audio and gestural commands as substitutes for texting and touch screens. As you can see in the video, in theory this works effortlessly.
But if we are talking about people, and all the moments in-between, the extreme cases, I wonder if this solution will create more problems in moments of uncertainty. For example, using gestures still removes the driver's hands from the wheel in order to operate some of the functions. On top of this, there is a technological constraint on detecting movement, and a defined space in which the action must be completed. When turning a knob or pushing down a button, there is a precise measurement, both mental and technical, needed to complete the action. That precision becomes less apparent with gestural technology.
This also raises the question of whether having everything in front of you is less distracting than having it off to the side. Although this could be addressed with technology that becomes situationally aware, it is a great study in how much information should surface, and when.
BMW is also experimenting with this, and I have heard from drivers that the solution is actually quite nice. But there is much to be fleshed out in terms of fidelity and how we interact with this new place of feedback.