In this project, I worked in a team of four people. We aimed to resolve two common problems in automotive navigation systems and demonstrated the proposed solution via a low-fidelity video prototype. Afterwards, I continued exploring the concept independently, extending the work to adapt it for autonomous driving systems.
Current onboard navigation systems are largely based on GPS and inertial sensors, and display routes either in 2D or in a bird’s-eye 3D view, which leads to a lack of precision. The concept focused on two frequent problems in such navigation systems:
- Since the maps in navigation systems are often not updated in real time, these systems frequently fail to know about recent roadblocks or dead ends (imposed by temporary construction work, for instance). Once a car arrives at a point where progress is blocked by such an obstruction, one the GPS system is unaware of, turning around or taking a different route often just prompts the system to ask the driver to make a U-turn and retake the same route. In the absence of real-time updates, this calls for a way to let the system know that the driver is actively choosing not to take the prescribed route.
- The directions presented in 2D or 3D can be confusing because of how they are placed on the device’s representation of road features. For example, if two exits on a road are very close together, but landmark placement makes them hard to distinguish in real life, it is all too common to take the wrong exit. This can happen at any kind of road intersection where a turning maneuver needs to take place. This calls for a way to address the confusion arising from the gap in alignment between the real and the virtual world.
The problem was approached in two separate steps.
Communicating with the device
The proposed solution was a rotary dial placed at a convenient location in the vehicle, within the driver’s reach. We didn’t explore the ergonomics of the device, focusing on the functionality instead. When the driver encounters a roadblock the navigation device is unaware of, deviating from the route causes the device to prompt a U-turn, and the driver often needs to drive quite a long way before the system chooses an alternate route. There is hence a need to explicitly tell the device to take a different route. This can be done in advance (even before reaching the blocked junction) so that the system can recalculate the route without having to wait.
The dial was the input device: turning it clockwise or anticlockwise indicated a proactive choice to take a different route. Each notch of the dial meant one exit in that direction, and a direct press of the dial indicated a U-turn.
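The dial’s interaction model can be summarized as a small event-to-command mapping. The sketch below is illustrative only; the function and command names are assumptions for clarity, not part of the prototype.

```python
def interpret_dial(notches: int, pressed: bool) -> str:
    """Map raw dial input to a route-preference command.

    notches: signed notch count since the last command
             (positive = clockwise, negative = anticlockwise),
             where each notch means one exit in that direction
    pressed: True if the dial was pressed directly
    """
    if pressed:
        return "U_TURN"                           # direct press requests a U-turn
    if notches > 0:
        return f"EXIT_{notches}_CLOCKWISE"        # n-th exit, clockwise direction
    if notches < 0:
        return f"EXIT_{-notches}_ANTICLOCKWISE"   # n-th exit, anticlockwise direction
    return "NO_CHANGE"                            # no input since last command
```

The navigation system would receive such a command ahead of the blocked junction and recalculate the route immediately, instead of waiting for the driver to deviate.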
This was implemented as a cardboard model of the dial, and the system’s behavior was simulated in a low-fidelity video prototype to explain the functionality.
Lucid navigation directions
Current navigation systems are largely based on a distinct device that presents information either on an LCD screen or via a head-up display (HUD). The HUDs currently in place can only display information in 2D. Both forms therefore communicate navigation directions through a virtual representation of the world, rather than through augmented reality (AR) overlaid on the real one.
We created a video prototype for a navigation system based on the assumption of an advanced HUD that can project information in 3D and can compensate for the driver’s eye movement by adjusting for the parallax effect. (Devices such as the Microsoft Kinect have already shown that this is possible.)
Current GPS map data is already detailed enough to be able to guide drivers into specific lanes to prepare them to take an exit. This information can be utilized and integrated within an AR environment where navigational ‘arrows’ are projected on the HUD in a way that is in alignment with the driver’s view of the world ahead of them.
The arrows grow, shrink, and change perspective as the car moves through the environment, and adjust for parallax as the driver moves their head. If there is an obstruction ahead, the arrow changes color to indicate the change of scenario and alert the driver to take evasive or precautionary action. Because the directions are 3D-mapped to the road ahead, rather than shown as a static direction with the usual countdown to the next significant maneuver, the system can precisely show the driver which maneuver to execute next, and when. On arriving at the destination, the system is conceptualized to pinpoint the exact location of the object of interest in the driver’s field of vision.
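The growing, shrinking, and parallax behavior described above follows from a simple perspective projection: the arrow is anchored at a point on the road in 3D, and its on-screen position is computed relative to the driver’s viewpoint. The minimal pinhole-camera sketch below is an assumption for illustration; a real HUD would use a calibrated camera model and head tracking.

```python
def project_to_hud(point, head, focal_len=1.0):
    """Project a 3D point (x right, y up, z forward, in metres),
    given relative to the car, onto normalized HUD plane coordinates.

    Subtracting the driver's head position shifts the effective
    viewpoint, which is what corrects for parallax as the head moves.
    """
    x = point[0] - head[0]
    y = point[1] - head[1]
    z = point[2] - head[2]
    if z <= 0:
        return None  # point is behind the viewer; nothing to draw
    # Pinhole projection: dividing by depth makes nearer points sit
    # further from the screen centre, so the arrow grows on approach.
    return (focal_len * x / z, focal_len * y / z)

# An arrow tip 1 m to the right appears further from centre at 10 m
# ahead than at 40 m ahead, so it "grows" as the car approaches it.
near = project_to_hud((1.0, 0.0, 10.0), (0.0, 0.0, 0.0))
far = project_to_hud((1.0, 0.0, 40.0), (0.0, 0.0, 0.0))
```

Moving the `head` position slightly to the right shifts the projected point left, which is exactly the parallax correction the concept assumes the HUD can perform.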
We performed research and analysis on the quality and positioning of these navigational arrows and created a video prototype for it.
We procured a driver’s perspective from video footage of a GoPro camera mounted on a driver’s head as he drove through city and highway traffic. We augmented this footage in Adobe After Effects and Adobe Premiere to create a video prototype of the concept. Because of the difficulty of expressing the complex nature of the concept, we deliberately kept the prototype evidently low-fidelity, so that it adequately expresses the thought and functionality behind the concept while making it clear that it doesn’t represent how the final product would look and feel.
CONTINUED INDEPENDENT WORK
While most people are willing to be driven to their destination under the right circumstances by a fully autonomous (NHTSA Level 4) vehicle, lack of trust in the car is one of the major factors behind people’s hesitance to accept this technology. In this context, it is important for the car to give passengers the sense that they are aware of the car’s intentions, can take control at any time, and can even tell it “how” to drive within their comfort zone. My ongoing research explores concepts through which an autonomous vehicle can communicate subtle behavioral aspects of driving, such as intent, above and beyond simple navigational directions.
Different automakers have different ideas when it comes to self-driving cars. While Google’s design concept for an autonomous vehicle does away with the steering wheel and aims for complete autonomy, most automobile manufacturers retain the ability to drive the car manually, and maintain that autonomous driving for passenger cars will be relevant in adverse or tedious conditions, or when the driver is willing to delegate the task to the car because they prefer to use their time in another way. Additionally, as autonomous passenger vehicle technologies become more common, they will be capable of autonomous driving only under certain conditions and on certain stretches of road, requiring the driver to take manual control of the car at certain times. This calls for highly effective communication between the car and the driver, especially during the handoff of driving responsibility.
There can be several situations in which expression of intent is crucial for the car to attain the passenger’s trust or confidence. One example is a scenario in which the car attempts an overtaking maneuver on a narrow two-lane road next to a semi-trailer truck. Even if the car has deemed the maneuver safe, the passenger might feel uncomfortable and prefer to wait until the road widens. Another is when the car has chosen a “shortest route” path to a destination that takes it through a neighborhood or part of town the passenger would like to avoid.
In such situations, the passenger needs to be able to “tell” the car effectively how they want to be driven in order to maintain a high level of trust and comfort in the car. This communication can be established in three ways: visual, auditory, and haptic. The nuances of the many possible scenarios and the action the car intends to take can be too detailed and information-heavy to adequately express via visual aids. While auditory communication in such situations can augment the feedback, care must be taken that this doesn’t lead to information overload, and that the passenger is not overwhelmed, as it takes away from the conveniences of autonomous driving.
Car to human communication
A simple way of informing the passenger of the car’s intention is to show the car’s navigation path well in advance, so that the passenger can take adequate action if they disagree. The concept of the augmented reality 3D navigation system explained earlier is a potential way to do this. In a non-autonomous vehicle, it acts as an augmented navigation system that portrays directions and turns mapped to the road from the driver’s perspective. In a fully autonomous car, the same system can be adapted to communicate any future action the car may take. In the case of an overtaking maneuver, the arrow may change color and animate forward to indicate this. This is of particular relevance because directional indicators and arrows mapped to the real world in context, in a natural way, will aid perception of intent. This can be adapted further for more complicated forms of communication with the passenger. A suitable topic for exploration and discussion in the workshop might be which conditions are crucial for the car to communicate its intent, and how it can be done most effectively.
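One way to think about adapting the arrow for intent communication is as a lookup from the car’s planned action to the arrow’s visual state. The mapping below is a hypothetical sketch; the intent names, colors, and animations are placeholders for discussion, not a proposed standard.

```python
# Illustrative mapping from a planned car action to the visual state of the
# projected navigation arrow. All states and values are assumed examples.
ARROW_STATE = {
    "FOLLOW_ROUTE": {"color": "green", "animation": "none"},
    "OVERTAKE":     {"color": "amber", "animation": "sweep_forward"},
    "OBSTRUCTION":  {"color": "red",   "animation": "pulse"},
    "HANDOFF":      {"color": "blue",  "animation": "blink"},
}

def arrow_state_for(intent: str) -> dict:
    # Unknown intents fall back to the ordinary route arrow, so the
    # display degrades gracefully rather than showing nothing.
    return ARROW_STATE.get(intent, ARROW_STATE["FOLLOW_ROUTE"])
```

A table like this also makes the workshop question concrete: deciding which intents deserve an entry is deciding which conditions are crucial for the car to communicate.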
Surveys and research have shown that visual and auditory feedback are the ways humans most prefer to receive communication from self-driving cars. However, haptic feedback as a means of communication has not been explored extensively, and ways to achieve this with modalities other than various forms of vibration are an interesting area to investigate.
Human to car communication
The other side of this coin is investigating how a passenger can convey their intentions to the car. This is comparatively more complicated. One way it can be accomplished is with control surfaces that allow the passenger to communicate discrete direction and motion preferences to the car (e.g. a joystick, as opposed to a steering wheel and accelerator/brake pedals). When presented with a specific set of options, the passenger can notify the car of their preference through such a medium. However, the effectiveness of such devices for this purpose remains to be investigated.
“When a human is rarely required to respond, he will rarely respond when required”. – Peter Hancock
Keeping this in mind, it is important to strike a balance between keeping the passenger engaged and relieving them of driving duties so they can do other things. This factor will become less important as cars reliably move toward full autonomy without requiring human assistance; however, at present it is of considerable importance, as partially autonomous vehicles that depend on human intervention are likely to be widely available in the near future.
Car-Road User Communication
Communication to the outside world
As autonomy becomes more common and such vehicles share the road with other vehicles, pedestrians, and cyclists, it will become important for them to convey intent to the outside world. In slow-speed situations, drivers often rely on eye contact and gestures to communicate with each other. In a non-fully-autonomous, non-connected-car environment, autonomous vehicles will need to achieve this communication by other explicit means.
Some auto manufacturers, such as Audi, Mercedes-Benz, BMW, and Nissan, have already explored this kind of communication via laser and LED lights and projections on the road. However, road surfaces can be unpredictable, and are often poorly suited for such important communication. An alternative idea is to display the information on the windscreen of the autonomous vehicle itself. Comprehensive research is needed on what kind of information an autonomous vehicle might need to communicate to the outside world, and what standardized way of doing so might be best suited from a human-computer interaction perspective.
A continued effort on this research is going to be undertaken in the CHI 2016 Workshop on HCI and Autonomous Vehicles: Contextual Experience Informs Design on May 8, 2016. Please check back for more as this post is updated.
Disclaimer: Video prototype available on request