Part I
TomTom, synonymous with satellite navigation systems, is making a big push into the autonomous vehicle area, where it can leverage its mapping and UX expertise. At the 2018 CES in Las Vegas, just-auto's Calum MacRae met with Willem Strijbosch, head of autonomous driving at TomTom, to find out about the company's latest developments.
just-auto: So you've just announced a whole raft of developments in autonomous and what TomTom's going to be doing in the area. Can you take me through each of them and why they are game changing?
WS: There were six announcements, five of them related to autonomous driving. I'll try and recall all five! One of them was about AutoStream, which is a product to get high definition map data into the vehicle in a timely way, and only the relevant data. Timeliness and relevance are key. Data needs to be up to date so that it is as close to reality as it can be.
j-a: How do you ensure it is up to date?
WS: We have several sources for our HD map database that we then stream with AutoStream. We have our own survey vehicles, and we have sources coming in from cars on the road - crowdsourcing - that bit is small now but growing fast.
j-a: So anyone who's got a TomTom is contributing to the real time data that you have?
WS: Yes, but not necessarily a TomTom, because we need it from cameras, radars and LiDAR in the vehicles. That data is partially processed in the vehicles, but we then process it further and translate it into maps.
j-a: So you announced that you've partnered with two companies for that, Baidu and Zenuity. Are there going to be more partners, and why those two first?
WS: There will be more partners, and we're first with Baidu and Zenuity because they are long-standing partners of ours. We pitched the concept of AutoStream to many OEMs and many other suppliers and there's been a lot of positive reception for it. So undoubtedly there will be more.
j-a: And the other announcement was the driver sensing, or the driver-in-the-loop inputs, wasn't it?
WS: Yes, MotionQ. It's a concept about problems that many people don't think about but that do exist - in this case, motion sickness in self-driving vehicles and how you can deal with that. And we have a long heritage, of course, in all sorts of user interface thinking in our products. Say we have three passengers in a vehicle having an autonomous experience that doesn't take into account that they may be facing backwards, standing up, or without a view out of the windows.
MotionQ is a set of visual cues that enable passengers to anticipate an autonomous vehicle's motion, leading to a more comfortable experience. It does this with intuitive overlays on the central display, communicating the vehicle's intended motion. It's here at CES in the Rinspeed Snap concept. The key is that we're exploring the concept; we're not really saying it's working. We're just saying this is a problem, this is a concept that solves it to some extent, and it's one step in that direction.
j-a: How does the autonomous opportunity compare with your traditional business of embedded navigation systems?
WS: What we do in autonomous driving is provide an HD map service - MotionQ is a concept at this stage - including AutoStream and receiving all sorts of data from the vehicle. I'm not going to put any numbers on it, but analysts have, and some of them are talking of a US$20 billion market. They're not my numbers, but if we compare that to the market for standard navigation maps, which is about US$2 billion globally, we're talking about a different order of magnitude.
j-a: Do you think the need for good mapping is often disregarded in the race to autonomous because everyone tends to think it's about cameras, LiDAR and radar?
WS: That was probably the case a few years back. You'd have heard people say that humans drive with just their eyes, so we can make a computer do that as well. But that's a false analogy, because we humans actually make maps in our heads. As soon as you have a commute by car, as soon as you've driven it once, you start to build a map in your head. You use point objects to localise yourself. If you, for example, take the highway exit to your home and there are multiple lanes at the bottom of the exit, it doesn't matter if it's raining, snowing or full of other cars - you know where you should put your vehicle and which lane you need to take, because you've built that map. You're not reprocessing all the visual information every time. And that all translates to computing as well: it's very efficient, and it leads to more safety and comfort in the vehicle if you have that map.
j-a: How far do you think you can go with your mapping technology in autonomous? Will there be a point when that's it and you can't do any more development on it?
WS: Nothing is infinite, but we are still developing our standard definition map technology, and that dates back 20 years; we've done a major overhaul of our whole map-making platform. We will continue to improve there.