Picture from Drive.ai
Most researchers working on self-driving cars focus on avoiding collisions, and this is indeed where autonomous cars excel.
Drive.ai is one of the few startups driving the commercialization of autonomous driving technology. It now sells retrofit kits to large businesses that turn existing cars into self-driving cars. Notably, the kit also includes human-robot interaction (HRI) components that allow the car to communicate directly with people.
Why is it so important for self-driving cars to be able to communicate with people?
Consider this problem: what happens when you step onto a crosswalk with no traffic lights? A car approaching you may slow down. Before you walk in front of it, you make eye contact with the driver to make sure they have seen you, and the driver stops. Now imagine a driverless car in the same situation. With nobody at the wheel, how do you know whether this car:
Has detected you? Understands what you want to do? Has decided to stop for you?
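The three questions above amount to a detect → infer intent → decide-and-signal pipeline. Here is a minimal, hypothetical sketch of that flow; the class, names, and threshold are illustrative assumptions, not any real vehicle stack:

```python
# Hypothetical sketch of the three-stage pedestrian check:
# detect -> infer intent -> decide and signal.
# All names and thresholds here are illustrative only.

from dataclasses import dataclass

@dataclass
class Pedestrian:
    detected: bool          # did perception see the person?
    crossing_prob: float    # inferred probability they intend to cross

def decide_and_signal(ped: Pedestrian, intent_threshold: float = 0.5) -> str:
    """Return the action (and implied external signal) for one pedestrian."""
    if not ped.detected:
        return "proceed"                 # nothing detected to yield to
    if ped.crossing_prob < intent_threshold:
        return "slow_and_monitor"        # seen, but intent still unclear
    return "stop_and_signal_yield"       # seen, intent understood, yielding

print(decide_and_signal(Pedestrian(detected=True, crossing_prob=0.9)))
# -> stop_and_signal_yield
```

The point of the sketch is the last step: unlike a human driver, the car has no eye contact, so the "decided to stop" state must be explicitly signaled to the pedestrian.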
Interactions like this happen more often than you might think, and they can involve pedestrians, cyclists, or other drivers. Self-driving cars need even richer communication, including conveying messages like: "There is an accident ahead, please slow down."
The first generation of self-driving cars is being equipped with human-vehicle communication components because the transition from conventional cars to self-driving cars will be long. Once the roads are dominated by self-driving cars, car-to-car communication will no longer be a big problem.
During the transition period, however, a one-size-fits-all solution falls short. Returning to the crosswalk example: rather than helping a person cross safely, the generic solution merely ensures that the self-driving car does not hit anyone on the crosswalk.
Picture from Drive.ai
IEEE Spectrum recently interviewed Dr. Carol Reiley, co-founder and president of Drive.ai, about how driverless-vehicle technology applies to human-vehicle interaction and about Drive.ai's end-to-end deep-learning approach to autonomous driving. Excerpts follow:
IEEE: What is unique about Drive.ai's approach to self-driving cars?
Carol Reiley: I see the self-driving car as the first social robot that most people will interact with. It does not look like a human, but it is an intelligent machine powered by artificial intelligence.
[We must ask ourselves:] once you have solved getting from point A to point B, how does a driverless car communicate with other people on the road? What does that relationship look like?
What about nonverbal communication at crosswalks, at intersections, or during negotiation between road users? When the driver is replaced, how should a self-driving car express itself to the outside world, and how should it communicate so that everyone feels safe and trusts it? We believe this is a form of communication that has not yet received enough attention.
IEEE: Speaking of intelligent machines powered by artificial intelligence, how has AI developed in the field of automated driving? How do today's AI-driven driverless cars differ from the cars that competed in the 2007 driverless-car competition?
Carol Reiley: We built our company on deep learning. Before Google developed its driverless car, Sebastian Thrun (the father of Google's self-driving car) said: "Computer vision doesn't work. I bet that HD maps and lidar are what works." The Google driverless-car project was founded on that premise: computer vision does not work.
In 2012, deep learning revolutionized computer vision and perception AI, and the industry is now powered by it. Google invested in its approach years ago and can switch between different modes, but it is hard to fundamentally change that approach. This is one of the advantages of a startup: we are building a driverless-car company around deep learning from the ground up. And we are not limited to perception; we also apply it to decision-making. It is not simply an end-to-end method, but it shows how AI has changed since the 2007 driverless-car competition.
IEEE: What sensors does your driverless car use? How do you weigh cameras against lidar?
Carol Reiley: Before we built our deep-learning pipeline, we faced these questions: what sensors should go on the car, how much data should I collect, and how many miles do I need to drive? We use deep learning because we want to push low-cost sensors further. The cheapest sensor is the camera, and combined with deep learning it is enough to analyze the scene. We also have other sensors in case we need them, but we rely on cameras far more than most other teams, and deep learning makes that possible.
Our team uses a variety of low-cost sensors; if Quanergy could deliver a $100 lidar, that would be great and we would use it. We are not showing off what we can do with a camera alone; we are simply trying to build a safe, inexpensive system that people can actually use.
IEEE: Why is human-vehicle communication so important for driverless cars?
Carol Reiley: When humans drive, they look for all kinds of social cues. For example, if you look at the car in front of you and its wheels are turning right, you can infer its next move: it may turn right. People also look for other subtle cues to help them drive. These cues make our car seem socially intelligent, because you can predict what the car will do before its behavior changes.
We are actually building social ability into driverless cars. Even between human drivers, people sometimes feel confused when there is no verbal exchange. When you take the human away, these cars need to drive intelligently, be understandable to other people on the road, and remain safe. So what should happen between a car and a pedestrian at an intersection?
We are exploring how our cars can express themselves: LED displays, R2-D2-style sounds, and turn signals. We are also considering how to make our car communicate with everyone.
Driving is dynamic: there are many people around, and where they will go is unpredictable. A self-driving car has to make instant decisions, so it needs to be very understandable when it switches modes. We need to show the outside world that the car is a self-driving car and indicate its intentions.
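The idea of expressing intent through LED displays, sounds, and turn signals can be sketched as a simple lookup from internal intent to external signals. The intents, messages, and signal names below are illustrative assumptions, not Drive.ai's actual design:

```python
# Hypothetical mapping from a car's internal intent to external signals
# (LED text, sound, turn signal), in the spirit of the interview.
# All intents and messages here are made up for illustration.

SIGNALS = {
    "yielding":      {"led": "WAITING FOR YOU", "sound": "soft_chime", "blinker": None},
    "turning_right": {"led": "TURNING RIGHT",   "sound": None,         "blinker": "right"},
    "proceeding":    {"led": "GOING",           "sound": None,         "blinker": None},
}

def announce(intent: str) -> dict:
    """Look up how the car should express a given intent to people outside."""
    if intent not in SIGNALS:
        raise ValueError(f"unknown intent: {intent!r}")
    return SIGNALS[intent]

print(announce("yielding")["led"])  # -> WAITING FOR YOU
```

Keeping the mapping explicit and small mirrors the article's point: every mode switch the car makes should have a single, predictable outward expression.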
IEEE: Does focusing on human-vehicle communication mean that the driving part of the self-driving problem has largely been solved?
Carol Reiley: I think most of the energy in this industry goes into the driving technology itself. That is not to say human-vehicle communication is completely separate from it; I think they are closely related, and both need to progress together. This is not a robot in a lab; we need to consider many human-related issues.
I think the automotive industry takes a modular approach, but driverlessness is not a modular problem.
IEEE: What is Drive.ai's plan?
Carol Reiley: We are not producing cars; we are providing retrofit kits to businesses. So we will choose partners who are interested in delivery or ride-hailing. When a car arrives at the Drive.ai facility, we add a roof rack containing the sensors, the human-vehicle interaction components, and the software.
Logically, we see this as the first step toward the safe deployment of self-driving cars. I think the worldwide rollout of self-driving cars will cause tremendous confusion if the people around the car are not considered at all. Even if we solve the self-driving problem itself, the bigger problem will be humans. Carmakers must really design autonomous vehicles for the humans who use them.
We want to roll out this technology quickly and safely, and we regard the deployment strategy as the first step for us and our partners. We are also focused on developing L4 (fully automated driving), because L3 (where a human must sometimes take over) can cause confusion.
Picture from Drive.ai
Drive.ai has its own fleet and will be testing in California. Eventually, Drive.ai plans to expand from goods delivery to ride sharing, serving both public transportation and private cars.