Gasgoo, a leading global automotive industry information service platform, has kicked off the Gasgoo Awards 2022. The program's Top 100 Players of China's New Automotive Supply Chain will cover ten core segments: autonomous driving, smart cockpit, software, automotive chips, electrification, thermal management, body and chassis, interiors and exteriors, lightweighting and new materials, and service providers.
In the autonomous driving segment, the Gasgoo Awards 2022 attracted 66 companies with 68 technologies applying for the Top 100 Players of China's New Automotive Supply Chain. Here are some details about them.
Product: Falcon LiDAR
Falcon is an industry-leading automotive-grade LiDAR developed in-house by Innovusion. It can detect objects as far as 500 meters, and dark objects with 10% reflectivity up to 250 meters. Falcon can maximize point density in an adjustable region of interest (ROI), focusing where it matters most to better track objects on the road ahead. High-performance LiDAR like Falcon is key to safe L2+ autonomy.
• 500m ultra-long detection range, image-grade ultra-high resolution
• Flexible and adjustable ROI
• 1550nm laser wavelength enables better eye-safety
• Automotive-grade, robust design ready for mass production
Product: ADAS/AD High-Bandwidth In-Vehicle Data Logging offering
Advanced driver-assistance system (ADAS) test engineers record sensor and ground truth data during road testing to verify sensor capabilities and train ADAS and autonomous vehicle (AV) algorithms. Autonomous driving (AD) software demands multiple high-bandwidth sensors, driving exponential data volume and movement growth. To cost-effectively keep up with technology, today’s data-recording solutions must be simultaneously high-performance, forward-thinking, and adaptable.
• Future-Proof Systems—Hardware and software customization, flexibility, and third-party openness
• More than just a Logger—A single unified toolchain for data recording, digital twin creation, data replay, software-in-the-loop (SIL), and hardware-in-the-loop (HIL) testing
• Increased Data Quality—Instrument-grade I/O, throughput, timing and synchronization, and edge computing capabilities for smart data reduction
• Maximum Data Security and Reduced Cost of Data—Fully encrypted, enterprise grade storage solution and cost-efficient Storage as a Service (STaaS) model
• Minimum System Complexity—One system for a reduced footprint and power consumption
Product: Floatable HD Camera Connection Solution
Thanks to an ingenious internal structural design, the floating solution can tolerate misaligned mating. It resolves the cumulative tolerance issue that arises mainly from the customer's assembly process and component tolerances, avoiding assembly problems and potential lens imaging risks on the customer's side.
The floating solution accommodates ±0.50 mm of misalignment along the X, Y, and Z axes while maintaining high bandwidth (at least 6 GHz) and high reliability.
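As a rough illustration of why a ±0.50 mm floating range matters, the cumulative tolerance described above can be estimated with a worst-case or root-sum-square (RSS) stack-up. The per-component tolerances below are hypothetical, not the vendor's actual figures:

```python
# Illustrative tolerance stack-up: worst-case vs. root-sum-square (RSS).
# The per-component tolerances below are hypothetical, not vendor figures.
import math

# Hypothetical contributors in mm (housing, PCB, lens mount, bracket)
tolerances = [0.15, 0.20, 0.10, 0.12]

worst_case = sum(tolerances)                      # every tolerance at its limit
rss = math.sqrt(sum(t ** 2 for t in tolerances))  # statistical stack-up

print(f"worst-case stack: +/-{worst_case:.2f} mm")  # +/-0.57 mm
print(f"RSS stack:        +/-{rss:.2f} mm")         # +/-0.29 mm
print("covered by +/-0.50 mm float:", rss <= 0.50)  # True
```

Even when the worst-case sum exceeds the float range, the statistically realistic RSS stack often falls well inside it, which is the scenario a floating connector is designed to absorb.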
Product: Smart Electric Center
Smart electrical centers are power distribution devices in a vehicle that replace traditional melting fuses with smart fusing and electromechanical relays with solid-state switches. Taking this step can help OEMs optimize cable sizes; reduce system cost, weight, and packaging size; and enable intelligent power management features and diagnostics. Solid-state switching also is quieter and consumes less energy than electromechanical relays, and remains robust for millions of duty cycles.
As consumers demand increasingly advanced functionality from their vehicles, the automotive industry is undertaking the biggest transformation in electrical and electronic architecture in its history. Aptiv’s Smart Vehicle Architecture™ shows us where that journey is headed, and every journey begins with a single step. For some OEMs, that first step may be smart electrical centers.
They also allow for savings in cabling. In the past, wires had to be designed with a larger diameter than physically needed: typically large enough to comfortably carry 30 percent more current than the load requires, to allow enough tolerance for the fuse element at peak load. Smart fusing lets engineers specify cabling to the physical limit of the load over a specified period of time, which often reduces cable size and saves cost, weight and space.
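The cable-sizing arithmetic above can be sketched as follows. The 20 A load is a hypothetical example, and real wire gauge selection also depends on temperature ratings and derating standards:

```python
# Sketch of the cable-sizing arithmetic described above. The 20 A load is a
# hypothetical example; real gauge selection also involves derating standards.
load_current = 20.0               # amps the load actually draws
fuse_margin = 0.30                # classic ~30% headroom for a melting fuse

conventional_rating = load_current * (1 + fuse_margin)  # wire sized for 26 A
smart_fuse_rating = load_current                        # wire sized for 20 A

# Required copper cross-section scales roughly with rated current for a given
# temperature rise, so the rating reduction carries over to size and weight.
saving = 1 - smart_fuse_rating / conventional_rating
print(f"conventional rating: {conventional_rating:.0f} A")
print(f"smart-fused rating:  {smart_fuse_rating:.0f} A")
print(f"approx. reduction:   {saving:.0%}")  # ~23%
```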
Just as importantly, smart electrical centers lead to a much more complete diagnostic picture. A smart electrical center can detect when the wires attached to it are close to failing. It can isolate fault conditions; detect open and short circuits, circuit overloads and underloads; and report all of that diagnostic information back to a central controller — which can in turn communicate that data to the consumer or a dealer for service.
The technology in smart electrical centers is key to ensuring that vehicles meet functional safety requirements. Typically, a smart electrical center is rated ASIL-B as a component, and it can be utilized in conjunction with another Automotive Safety Integrity Level B (ASIL-B)-rated component to form an ASIL-D-rated system. The ASIL-B component rating is accomplished by the improved robustness of solid state components, diagnostics and software implementations.
Product: high resolution automotive image sensor AR0820AT
The AR0820AT is an 8.3-megapixel, 2.1-micron (µm) DR-Pix wafer-stacked automotive sensor, claimed to offer 100 times the perception capability of the human eye under all conditions. Wafer-stacking technology enables compact camera designs. It is optimized for both low light and challenging high dynamic range (HDR) scenes, with a 2.1 µm DR-Pix BSI pixel and on-chip 140 dB HDR capture capability.
The sensor includes advanced functions such as in-pixel binning, windowing, and both video and single frame modes to provide flexible Region of Interest (ROI). The high resolution of this sensor, coupled with the ability to operate effectively in poor lighting conditions, enhances the capabilities of ADAS systems. Higher resolution allows smaller objects and other hazards to be detected and identified by the system at a greater distance, allowing the vehicle to alert the driver or take avoiding action earlier, thereby enhancing road safety. The advanced fault detection features and embedded data on the AR0820AT are key to enabling ASIL B compliant cameras that are critical for higher levels of autonomy.
• The AR0820AT delivers high resolution and high dynamic range in a variety of poor lighting conditions, enabling detection and identification of smaller objects and other hazards at longer distances (up to 185 meters). This strengthens ADAS and more advanced autonomous driving systems, thereby improving road safety.
• The sensor includes sophisticated onboard fault monitoring and detection to ensure that the image can be relied upon. By moving the responsibility for fault detection from the main processor to the sensor itself, significant processing power is saved and any faults or issues are able to be detected far more quickly, allowing the vision system to return to a safe state before a potentially hazardous event occurs. The inbuilt cybersecurity ensures reliable and secure operation.
• With a scalable family of image sensors from onsemi (of which the AR0820AT is a key part), automotive OEMs are able to deploy the optimum sensor for each application within each vehicle they produce while leveraging software and algorithm development across all devices. This both reduces development time and diminishes design risk.
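As a generic illustration of the in-pixel binning and ROI windowing functions mentioned above (a conceptual model, not the AR0820AT's actual readout pipeline):

```python
# Generic illustration of 2x2 in-pixel binning and ROI windowing; this is a
# conceptual model, not the AR0820AT's actual readout pipeline.
import numpy as np

def bin2x2(frame):
    """Average each 2x2 block: quarter the resolution, improve low-light SNR."""
    h, w = frame.shape
    f = frame[:h - h % 2, :w - w % 2]          # trim odd edges if any
    return f.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def window(frame, top, left, height, width):
    """Read out only a region of interest instead of the full frame."""
    return frame[top:top + height, left:left + width]

full = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(full))              # averages of each 2x2 block
print(window(full, 1, 1, 2, 2))  # 2x2 ROI starting at row 1, col 1
```

Binning trades resolution for sensitivity in low light, while windowing reduces readout bandwidth by transmitting only the region the downstream algorithm needs.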
Product: High-precision Positioning Unit for Autonomous Driving
The Bynav high-precision positioning unit is an automotive-grade ASIL-B GNSS/INS receiver based on a Bynav-developed GNSS ASIC and IMU. It supports dual-antenna RTK positioning and heading as well as a deeply-coupled GNSS/INS algorithm, and can effectively cope with harsh environments such as satellite signal interference or loss, providing stable, continuous, and reliable high-precision position and attitude.
The Bynav-developed high-precision GNSS ASIC is a typical application of BeiDou high-precision positioning in intelligent vehicles.
With tens of millions of kilometers of automotive mileage and years of dedicated research and development, Bynav has completed the design and mass production of a baseband ASIC, a broadband radio-frequency ASIC, and a GNSS SoC. In many tests held by clients and third parties, Bynav's high-precision positioning unit for autonomous driving has shown world-class performance based on these self-developed ASICs.
Deeply-coupled GNSS/INS algorithm to effectively improve positioning accuracy and reliability.
The deeply-coupled GNSS/INS algorithm uses IMU data together with the integrated navigation system to facilitate RTK ambiguity resolution and signal tracking, which effectively improves positioning accuracy and reliability. In harsh scenarios such as urban canyons and overpasses, the deeply-coupled algorithm has demonstrated 4–5 times the accuracy and stability of a loosely-coupled algorithm. In addition, the deeply-coupled algorithm can effectively detect spoofed signals and protect the unit from such interference.
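For intuition, here is a toy one-dimensional sketch of GNSS/INS blending in its simpler loosely-coupled form; all numbers are invented, and a deeply-coupled algorithm like Bynav's goes further by feeding INS predictions back into RTK ambiguity resolution and signal tracking:

```python
# Toy 1D sketch of GNSS/INS fusion (loosely-coupled form, for intuition only).
# All numbers are invented; deep coupling additionally feeds INS predictions
# back into the GNSS receiver's tracking loops and ambiguity resolution.
def fuse(ins_pos, ins_var, gnss_pos, gnss_var):
    """One Kalman-style update: blend the INS prediction with a GNSS fix."""
    k = ins_var / (ins_var + gnss_var)   # gain: trust GNSS more when INS drifts
    pos = ins_pos + k * (gnss_pos - ins_pos)
    var = (1 - k) * ins_var
    return pos, var

pos, var = 0.0, 1.0
imu_steps = [0.95, 1.02, 0.98]   # IMU-integrated displacements per epoch (m)
gnss_fixes = [1.1, 2.0, 2.9]     # noisy GNSS position fixes (m)
for d, z in zip(imu_steps, gnss_fixes):
    pos, var = pos + d, var + 0.05               # propagate with IMU, add noise
    pos, var = fuse(pos, var, z, gnss_var=0.25)  # correct with GNSS
print(f"fused position: {pos:.2f} m, variance: {var:.3f}")
```

The IMU bridges gaps between fixes while GNSS bounds the IMU's drift; when satellite signals degrade, the filter automatically leans on the inertial prediction.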
Meeting the requirement of functional safety.
Product: Data Solution of Intelligent Driving System
Accurately collecting data sources in line with customer requirements; efficiently cleansing missing, duplicated or erroneous data; and systematically helping machines recognize data features: the cleansed data is processed using BasicFinder's self-developed data annotation tools. OCR, B-Box, semantic segmentation, audio transcription and 3D point cloud tools are used to classify and label data and to generate machine-readable codes that follow machine learning and training requirements. Rigorous QC standards ensure the objectivity and accuracy of the data, resulting in unmatched AI algorithm performance.
BasicFinder is the first AI data company in China with a closed-loop industrial chain. It has "labeling & modeling" dual engines, as well as AI full-life-cycle services and software ecosystems spanning data labeling to model training.
Managing international businesses and partnering with global vendors enable BasicFinder to achieve global business expansion.
Product: Autonomous Driving Domain Controller
iECU 3.1 is the flagship of Technomous' high-end autonomous driving domain controllers. It was developed by a joint Chinese and foreign team in accordance with the ASPICE Level 2 process, using Nvidia's latest Orin-X chip as the core SoC and adopting a heterogeneous multi-chip hardware architecture that has been verified in mass production at premium European OEMs. This architecture, built on a TSN backbone network and supplemented by the ASIL-D certified functional safety software platform MotionWise, empowers customers to realize Level 3 and beyond autonomous driving applications. iECU 3.1 will first be integrated into the IM L7 as its autonomous driving ECU module, entering mass production and launch in mid-2022.
iECU 3.1 adopts a mature heterogeneous multi-chip hardware architecture that was verified in the Audi A8L zFAS mass production project. It supports up to 14 cameras at up to 15 megapixels each. Rich interfaces and a Chinese national security chip provide high compatibility and support capabilities, giving it clear advantages.
MotionWise is a mass-production-proven safety software platform designed for autonomous driving. Its deterministic-technology-based solution achieves real-time guarantees, providing load-independent, delay-guaranteed, globally available services. It is compatible with AUTOSAR, supporting application modularity, portability and standardized interfaces, and it ensures continuous availability at the highest level for mission-critical functions. Its performance supports the hybrid integration of applications with different functional safety levels.
Product: combined high-precision positioning system
Asensing Technology's high-precision positioning system deeply integrates GNSS, RTK, computer vision, vehicle dynamics, and other cutting-edge technologies, providing high-precision, stable and reliable positioning information for intelligent driving to support L2-and-beyond intelligent driving capabilities.
The first high-precision combined positioning system in the Chinese local market to obtain ISO 26262 functional safety certification;
The first to provide a high-precision navigation and positioning SOP solution for OEMs;
Self-developed, fully controllable software and hardware, automotive-grade production lines, and rich SOP experience;
Based on hardware and software that meet automotive functional safety requirements, and after years of analysis, testing, and validation of autonomous driving scenarios, Asensing Technology's high-precision positioning system delivers precise, low-latency, and robust navigation and positioning information in all scenarios;
As of Q1 2022, the system had been incorporated into over 200,000 mass-produced L2-and-beyond vehicles, with accumulated safe driving mileage exceeding 20 million kilometers.
Freetech Intelligent Systems
Facing the technology upgrade of the ADAS industry and the coming demand for large-scale mass production of high-level autonomous driving, Freetech's new generation of intelligent navigation-assisted driving system is built around new L2++ functions. Through flexible adaptation across a variety of chip platform solutions, a more optimized sensor configuration, smarter AI fusion algorithms, and a diversified software ecosystem, it achieves end-to-end autonomous driving from parking through highway to urban scenarios in different conditions. The system will be available for mass-production delivery to passenger vehicles in 2022 and can be continuously upgraded through OTA.
Built on the self-developed ADC domain controller as the software platform, Freetech's cost-effective integrated driving-and-parking solution uses pre-built hardware, continuous data optimization, and OTA to deliver continuous growth in functions and performance and more human-like system control. The system achieves L2.9-level highway navigation-assisted driving (including lane cruise, automatic on- and off-ramps, automatic lane change, automatic highway switching, and traffic-congestion driving), reducing driving fatigue and making the driving experience more comfortable and safer.
Freetech NOA upgrades the front-view camera to 8 megapixels, improving recognition accuracy for targets and lane lines ahead of the vehicle and improving longitudinal control performance. The surround-view and rear-view cameras are also upgraded to support target detection, bridging the blind spots for nearby targets and lane lines around the body. This extends the follow-stop auto-start time window and improves the success rate and safety of lane changes.
Freetech adopts self-developed visual perception and fusion algorithms to realize 360° environment perception. The system provides real-time lane positioning, fuses the TSR function to reduce speed-limit false alarms, and offers lane-level rendering display to realize point-to-point navigation-assisted driving on highways and urban elevated roads.
Product: 5G+C-V2X automotive grade module
The GM860A-C1AX module supports 5G NR, 4G LTE (Cat. 20), and 3G networks. It delivers 900 Mbps upload and 2.12 Gbps download data rates on 5G networks.
The GM860A supports a rich set of internet protocols (PAP, CHAP, PPP) and abundant functions (FOTA, TTS, remote wakeup, etc.). It also provides multiple hardware interfaces, including RGMII, USB, PCIe, 2x USIM, and I2S.
The module supports C-V2X PC5, which provides low-latency, high-reliability information transmission for V2V, V2P, and V2I, ensuring effective information exchange and helping avoid collisions. The production and design of the module meet the requirements of IATF 16949 and follow the automotive quality control processes APQP and PPAP. The GM860A-C1AX has a wide temperature range, good ESD protection and EMC characteristics, and good mechanical performance and reliability, making it very suitable for automotive products.
The GM860A-C1AX module is developed on the Qualcomm SA515M platform. At present, it is the smallest 5G automotive-grade module in the industry, measuring 46 × 49 × 3 mm, which gives it better flatness and saves space in the customer's PCB layout. The module supports a more complete set of 5G bands, and bands n41/n78/n79 support dual Tx, providing a better user experience.
Goyu Intelligent Technology
Product: Autonomous Driving-Electronic Control Unit
With the help of high-definition maps, positioning systems, and intelligent sensors such as cameras and radars, combined with multi-sensor fusion technologies for vehicle positioning and environmental perception, the product adds redundancy and enhancement to the perception and decision systems to implement real-time lateral and longitudinal control of the vehicle. This realizes intelligent speed limiting, cruise, car following, emergency braking, lane keeping, automatic lane change, overtaking, highway switching, and other functions for intercity highway trunk-line logistics.
• Full-vehicle mass-production platform with whole-unit power consumption below 15 W
• The platform is stable and reliable and has been mass-produced on many models
• In-vehicle real-time OS and middleware with the highest safety level and reliability, meeting ASIL-D functional safety requirements
• Mature and safe supporting toolchain, flexible in development and adaptation
• Lots of product interfaces, easy for system expansion
Product: Self-driving Intelligent Vehicle Operating System and Computing Base Platform
By combining the experience of ICT and automobile development, AICC has dedicated itself to building a self-controlled intelligent vehicle operating system and computing base platform based on future E/E structure.
iVBB realizes the "double decoupling" of hardware from software and software from applications. As a cross-platform, cross-car-model, full-ecology product, it empowers OEMs to build their own specific OS and to implement highly efficient, low-cost customized development of autonomous driving applications.
Product: Aqila 1.5µm fiber laser
Hitronics Technologies is the key light source supplier for 1.5 µm LiDAR companies worldwide. The Aqila 1.5 µm fiber laser developed by Hitronics is the first mass-produced auto-grade LiDAR laser, combining compact size, cost-effectiveness, and ideal power consumption. With Hitronics' patented technologies, the Aqila achieves the auto-grade reliability required by vehicle regulations along with the small size and low power consumption required of an automotive LiDAR light source. Its unique design makes full use of the advantages of China's supply chain and addresses the problem that the light source accounts for a relatively high share of a LiDAR's overall cost. Its balance of performance with auto-grade reliability, compact size, low power consumption, and cost-effectiveness, combined with eye safety, strong atmospheric penetration, and good beam quality, enables diverse applications across ADAS, vehicle-road coordination, robotics, AGVs, surveying and mapping, security, intelligent transportation, and more.
Automotive grade: ISO 16750, IEC 60068, ISTA 3A
Operating temperature: −40 to 105 °C
Compact size: 55 × 50 × 19 mm
Low power consumption: <10 W
High cost-effectiveness, suitable for mass production of ADAS vehicles
Detection range: 200–1000 m
Eye-safe, strong atmospheric penetration, and good beam quality
Hitronics' market share proves its quality: it ranks first worldwide in 1.5 µm fiber lasers for ADAS vehicles, and the laser has been delivered in large quantities in 2022.
Product: training data solution for Autopilot
Multi-channel voice and video capture and annotation in the cabin, covering driver and passenger behavior detection and multi-modal in-car interaction; design, collection, and annotation of ADAS data covering roads, driving environments, pedestrians, traffic signs, and vehicle characteristics.
Driver and passenger behavior detection: supports the development of DMS and OMS systems used to detect and warn about driver and passenger behaviors, reducing the rate of driving accidents and creating a good travel experience. Supports acquisition in different lighting environments and vehicle spaces, with equipment support for visible-light cameras, infrared binocular cameras, 3D cameras, etc.
Multi-modal in-car interaction: supports recognition, control, and interaction design for multiple data modalities in the car, with data processing covering dialogue situations across different seat positions, languages, and subjects. Passengers in different positions can control the vehicle's intelligent devices through gestures, facial expressions, and lip interaction, helping to quickly build a powerful, interactive smart cockpit. Supports voice, image, and video collection and annotation across different vehicle speeds, noise environments, lighting conditions, and channels.
External perception data collection. In addition to conventional road data such as pedestrian, lane line, obstacle, traffic sign and dynamic object tracking, it also covers various scene data such as parking data and high-speed scene data.
Annotation of external perception data. 2D image and video labeling, including semantic segmentation, panoramic segmentation, lane separation labeling, target tracking and other labeling;
Radar point cloud data annotation. 3D point cloud data annotation, including point cloud semantic segmentation, point cloud tracking annotation, point cloud continuous frame annotation, 2D&3D joint annotation, etc.
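A minimal sketch of what a 3D cuboid annotation and a point-in-box test might look like; the schema and numbers are hypothetical, not any vendor's actual tool format:

```python
# Minimal sketch of a 3D point cloud annotation record and a point-in-box
# check, the basic primitive behind cuboid labeling. Schema is hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Box3D:
    cx: float; cy: float; cz: float   # box center (meters)
    l: float; w: float; h: float      # length, width, height
    yaw: float                        # rotation about the vertical axis
    label: str

def contains(box, x, y, z):
    # Rotate the point into the box frame, then compare to half-extents.
    dx, dy, dz = x - box.cx, y - box.cy, z - box.cz
    c, s = math.cos(-box.yaw), math.sin(-box.yaw)
    bx, by = c * dx - s * dy, s * dx + c * dy
    return abs(bx) <= box.l / 2 and abs(by) <= box.w / 2 and abs(dz) <= box.h / 2

car = Box3D(cx=10.0, cy=2.0, cz=0.8, l=4.5, w=1.8, h=1.6, yaw=0.0, label="car")
points = [(10.5, 2.1, 1.0), (30.0, 5.0, 0.5)]   # two LiDAR returns
inside = [p for p in points if contains(car, *p)]
print(len(inside))   # 1
```

Point-in-box membership like this is what annotation tools use to assign point cloud returns to a labeled cuboid, and what QC scripts use to verify label tightness.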
Product: SmartCam ADAS Solutions
MCU + AI SoC architecture supporting L2 functions, in a 1R1V or 1V configuration, with 3x CAN + 1x Ethernet access capability
It is cost-effective and has been applied by many OEMs. The main functions of the product solution include, but are not limited to, ICA, TJA, LKA, ACC Stop&Go, LCC, AEB, IHC, TSR/SAS, FCTA, FCW and LDW.
Product: JIMU S3 (1V)
The first Chinese 1V L2 ADAS: based on the JMNet® deep learning algorithm self-developed by JIMU Intelligent, it realizes Level 2 intelligent driving with a single camera, delivering features such as FCW (forward collision warning), PCW (pedestrian collision warning), AEB (automatic emergency braking), ACC (adaptive cruise control), LDW (lane departure warning), LKA (lane keep assist), TJA (traffic jam assistance), SLI (speed limit indication), and TSR (traffic sign recognition).
Equipped with a high-pixel, wide-FOV lens (100° FOV) and a leading self-developed algorithm, the high-performance JIMU S3 (1V) realizes L2 intelligent driving with a single camera, enabling lower cost and better customization and localization compared with mainstream 1R1V (one radar, one camera) solutions or 1V solutions from foreign suppliers such as ZF or Valeo.
The system further estimates ego-vehicle motion and dense depth information of the road environment, performing background modeling and road-user motion estimation at the same time. This enables general target detection, static scene map reconstruction, and more accurate prediction of dynamic targets' speed and acceleration, supporting positioning, tracking, and re-identification of targets within the field of view in a global coordinate system. It thereby addresses occlusion and missed detection, greatly improves positioning accuracy, reduces the false detection rate, broadens the coverage of perceived scenes, precisely defines perception boundaries, and ensures reliable extraction of perception information.
Product: Radar & Visual pixel-level and fusion-based perception system
The G-PAL Radar & Visual pixel-level and fusion-based perception system is developed by Shanghai Geometrical Perception and Learning Co., Ltd (G-PAL). It is an all-weather, high reliability, low-cost, mass-producible, high-performance product designed to meet vehicle specification standards, with the ability to detect targets in a variety of scenarios.
The product uses the self-developed 4D MMW Imaging Radar and camera as the sensing source. It performs pixel-level deep fusion with the camera’s video stream by releasing the radar’s original massive signal-level information and 30,000 points per second point cloud information. In terms of deep fusion technology, the innovative heterogeneous information folding and projection technique is used to combine the signal-level radar information with the image pixel information after spatio-temporal synchronization. The pixel-level fusion detection is achieved with the point cloud information through an optimized deep convolutional neural network to obtain information-rich and highly robust structured target data.
Relying on high-performance perception and fusion capability, the G-PAL Radar & Visual pixel-level fusion-based perception system can accurately detect and track complex scenes with moving and static targets. It can also improve the information richness and robustness of structured data at the perception level of the smart driving system. Moreover, it lays the foundation of perception for the smart driving technology above L2+ towards mass production.
Through pixel-level deep fusion of heterogeneous information, this product enhances the ability to accurately sense moving and static targets in complex scenes, with 24/7 all-weather capability beyond that of any single sensor. In addition, the output of massive signal-level information from the radar reduces the information loss inherent in conventional processing rules. Detection rates are greatly improved and missed detections greatly reduced compared with target-level fusion systems, solving the confidence problem of the traditional framework.
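The geometric core of radar-camera association can be sketched as a pinhole projection of radar points into the image plane. The intrinsics below are made up, and the actual pipeline adds spatio-temporal synchronization and a deep convolutional network on top of this step:

```python
# Sketch of the geometric core of radar-camera fusion: projecting radar
# detections into the image with a pinhole camera model. The intrinsic
# matrix K below is hypothetical, not calibration data from any product.
import numpy as np

K = np.array([[1000.0,    0.0, 960.0],   # fx,  0, cx
              [   0.0, 1000.0, 540.0],   #  0, fy, cy
              [   0.0,    0.0,   1.0]])

def project(points_cam):
    """Project Nx3 camera-frame points (z forward, meters) to Nx2 pixels."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide by depth

# Two radar detections already transformed into the camera frame
radar_pts = np.array([[0.0,  0.0, 20.0],   # straight ahead, 20 m
                      [2.0, -0.5, 10.0]])  # to the right, slightly up
pixels = project(radar_pts)
print(pixels)   # pixel coords: (960, 540) and (1160, 490)
```

Once each radar return has a pixel coordinate, its range and velocity can be attached to the image features at that location, which is the starting point for pixel-level fusion.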
Product: Full Stack Fusion Computing Solution
2022 is the window period for intelligent driving to move from L2 to L3/L4. More and more automakers are beginning to plan mass production of higher-level intelligent driving, and the era of automotive intelligence has quietly arrived. With improvements in LiDAR hardware technology and falling automotive-grade mass-production costs, high-level intelligent driving functions are driving the adoption of LiDAR in passenger cars. For passenger cars, using LiDAR as a sensor and fusing it with other sensing data is a new topic, and the whole industry still has a large technological gap in integrating sensing and computing technologies. To solve the problem of mass-producing high-level intelligent driving systems (NOA) for OEMs' passenger cars, and to achieve safe and reliable high-level intelligent driving as LiDAR-equipped vehicles become the trend, JueFX Technology has launched its "Autopilot Full Stack Fusion Computing Solution".
JueFX Technology's Full Stack Fusion Computing Solution provides customized factory-fitted and aftermarket positioning algorithms, multi-sensor fusion perception algorithms, and dynamic traffic information services for different scenarios and customers, meeting the needs of every stage of automated driving and upgrading vehicles from low-level assisted driving to NOA.
The scheme can provide pure vision-based fusion perception and fusion positioning services for L2 + automatic driving system, which can adapt to mainstream vision SOC and pose sensors. For high-level automatic driving, it can also integrate Huawei MDC, Horizon J5, Nvidia Orin and other high-performance platforms, and deploy products and solutions such as fusion perception and fusion positioning according to different computing needs.
The core competence of this scheme is the fusion perception technology, which is also the core technology line that JueFX has adhered to since its establishment. The fusion perception algorithm fuses the original data of vision, point cloud and millimeter wave, and then carries out 3D tracking to output more accurate and complete perception results. At the same time, with the help of high-precision maps and road-end global samples, the behavior trajectory of traffic participants is accurately predicted.
Product: VCAR AD Data Station
In the field of AD development and testing for intelligent vehicles, the AD Data Station Pro demonstrates multi-channel, multi-sensor capability in data acquisition, computation, and processing, efficiently assisting users' algorithm development and verification.
High synchronization accuracy: 500 µs to 1 ms synchronization accuracy, supporting PTP and GPS (PPS) synchronization.
High transmission bandwidth: Meet the requirements of raw video data such as raw/yuv.
High processing performance: it can serve as a rapid prototype of a domain controller, opens data interfaces such as V4L2, SocketCAN, and ROS, gives direct access to raw sensor data, and enables quick iteration and validation of algorithm models during testing.
High adaptability: Suitable for mainstream sensors in the market.
Efficient technical support services: Domestic self-developed brands, fast response, support customized development.
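Once streams share a common clock, recorded frames can be associated by nearest timestamp. A minimal sketch follows, with made-up timestamps and a tolerance chosen to mirror the quoted 500 µs–1 ms synchronization accuracy:

```python
# Sketch of nearest-timestamp matching across two sensor streams, the kind
# of association a synchronized logger enables. Timestamps are made up; the
# tolerance mirrors the quoted 500 us - 1 ms synchronization accuracy.
def match(cam_ts, lidar_ts, tol_s=0.001):
    """Pair each camera stamp with the nearest LiDAR stamp within tol_s."""
    pairs = []
    for t in cam_ts:
        nearest = min(lidar_ts, key=lambda u: abs(u - t))
        if abs(nearest - t) <= tol_s:
            pairs.append((t, nearest))
    return pairs

cam = [0.0000, 0.0333, 0.0667]      # ~30 fps camera stamps (seconds)
lidar = [0.0002, 0.0340, 0.0999]    # ~10 Hz LiDAR sweep stamps
print(match(cam, lidar))   # [(0.0, 0.0002), (0.0333, 0.034)]
```

The third camera frame finds no LiDAR sweep within tolerance and is left unpaired, which is exactly why sub-millisecond synchronization matters for multi-sensor ground truth.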
Product: Image-level 1550nm Hybrid Solid-state LiDAR
Using its cutting-edge in-house, self-produced 1550 nm fiber lasers, optical-fiber devices, and device packaging processes, LSLiDAR provides autonomous driving with LiDARs that offer longer detection range, denser point clouds, higher accuracy, and better automotive-grade compliance.
In-house, self-produced high-performance 1550 nm fiber laser: in-house capability gives tight control over component stability and reliability and enables software-hardware integration.
Image-level point cloud: 128/256/512-line configurations, a point rate of up to 6.4 million points per second, and ±2 cm accuracy enable ultra-fine 3D mapping.
Long detection range: up to 500 m, giving autonomous driving a sufficient safety-redundancy distance (flexible customization up to 2,000 m is supported).
Full field-of-view ROI: a 120°×25° wide field of view with angular resolution up to 0.09°×0.05° responds flexibly to emergencies to ensure autonomous driving safety.
Small and lightweight design: a compact interior design gives LS-series LiDARs an ultra-thin 225 × 106 × 45 mm form factor.
Auto-grade reliability: meets automotive-grade and functional-safety requirements for vibration resistance, impact resistance, temperature, anti-interference, and IP protection.
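The quoted figures can be cross-checked with back-of-envelope arithmetic; this is a rough consistency check, not vendor-specified behavior:

```python
# Back-of-envelope check of the quoted LiDAR figures: a 120 x 25 degree field
# of view at 0.09 x 0.05 degree resolution implies a per-frame point budget,
# and dividing the 6.4M pts/s rate by it gives an implied frame rate.
h_points = 120 / 0.09          # ~1333 points per scan line
v_lines = 25 / 0.05            # 500 scan lines
points_per_frame = h_points * v_lines
frame_rate = 6.4e6 / points_per_frame

print(f"points per frame:   {points_per_frame:,.0f}")  # ~666,667
print(f"implied frame rate: {frame_rate:.1f} Hz")      # ~9.6 Hz
```

An implied rate near 10 Hz is consistent with typical automotive LiDAR frame rates, suggesting the published point rate and resolution figures are mutually coherent.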
1) Product: ICM1.10
• Mobileye EyeQ4 sensing chip
• 100° horizontal viewing angle
• 1.7 million pixels
• Leading algorithm performance: the AEB function scored 30.55 points (a 95.5% score rate) in C-NCAP 2021, above the domestic average
• Supports 1V/1V1R/1V3R/1V5R and other compatible schemes, flexibly adapting to automakers' vehicle architectures
2) Product: Intelligent Central Gateway Module
The 5G Intelligent Central Gateway Module (ICGM) is the third-generation 5G intelligent vehicle telematics product developed by DIAS. With versatile functions, it can meet the requirements of telematics (5G), V2X, ADAS, gateway, high-precision positioning, CMS (camera monitor system), ETC, screen-free voice recognition, and many other functions.
Its 8 TOPS of computing power enables automated driving functions at L2+ and above. Meanwhile, the embedded automotive gateway and telematics control are integrated, realizing a more complete automotive gateway.
Multiple communication modes: 5G/4G/3G/2G, C-V2X, BLE;
High integration: adds gateway functions and intelligent-driving support on top of the I-Box's 5G + V2X + high-precision positioning;
High-precision positioning: RTK centimeter-level positioning with DR support;
Convenient maintenance: supports local and remote maintenance, modification, and remote firmware upgrades;
Security critical: hardware encryption scheme plus C-V2X message signing and encryption.
Product: LAPA memory parking
Maps of home and public parking lots are memorized through on-vehicle SLAM mapping; the vehicle then localizes itself in real time on the built map, realizing memory parking and vehicle summoning within the lot.
Supports memory parking and vehicle summoning over distances of up to 1 kilometer;
Supports memorizing indoor and outdoor perpendicular, parallel, and angled parking scenarios;
Supports wind, sunshine, light rain, light snow, and other weather conditions;
Supports mobile-phone remote control: the vehicle parks itself from the drop-off point to the parking space and drives from the parking space back to the pick-up point;
Public parking-lot paths are stored in the cloud and shared between vehicles, building a rich path library and ending the trouble of unfamiliar parking lots.
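The map-then-replay flow described above can be sketched in a few lines. This is a minimal illustrative Python sketch under stated assumptions, not LAPA's actual implementation; all names and data structures are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ParkingMap:
    """Landmarks and path memorized during a one-time teaching drive (SLAM mapping phase)."""
    landmarks: list = field(default_factory=list)   # e.g. (x, y) feature positions
    path: list = field(default_factory=list)        # poses from drop-off point to space

def teach(drive_observations):
    """Mapping phase: record landmarks and the driven path while the user parks once."""
    m = ParkingMap()
    for pose, features in drive_observations:
        m.path.append(pose)
        m.landmarks.extend(features)
    return m

def replay(parking_map, localize):
    """Replay phase: localize against the stored map, then track the memorized path."""
    waypoints = []
    for target in parking_map.path:
        current = localize(parking_map.landmarks)  # real-time localization on the built map
        waypoints.append((current, target))        # a controller would steer current -> target
    return waypoints
```

In a real system `localize` would be a full SLAM localization step and the controller would close the loop on each waypoint; the sketch only shows the data flow.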
Product: intelligent driving high-precision positioning solution
With high-precision GNSS correction data and high-precision positioning technology as its core, Sixents provides high-precision positioning solutions oriented to user needs. Its intelligent driving solution is built on the LG69T fusion algorithm together with independently developed RTK, DR, and embedded algorithms, combined with real-time centimeter-level correction services, to give intelligent-driving customers an overall cloud-to-end solution. In addition to the correction data service based on N-RTK technology, Sixents is also continuously strengthening its advantages in PPP-RTK technology to meet intelligent driving's requirements for integrity, functional safety, and data safety.
Sixents has also independently developed SDK products and a DR high-precision positioning engine compatible with many popular high-precision positioning terminals on the market, solving the problems of poor adaptation compatibility, slow integration, and high cost of high-precision location services, thereby improving efficiency and reducing costs for customers.
Centimeter-level positioning accuracy: 2-5cm
Lane-level positioning service: positioning accuracy in typical scenarios is under 10cm, meeting the high-precision positioning requirements of L2.5 and higher automated driving
Round-the-clock: provides customers with 7×24-hour service
Ultra-high fix rate: in typical use scenarios, the positioning fix rate exceeds 95%
Multi-scene performance optimization: the performance is optimized for occlusion and multi-path environment
Integrity guarantee: ensure service integrity through monitoring algorithm, protection level calculation and other methods
Safety guarantee: based on PPP-RTK technology, the product design meets ASIL-B functional safety requirements
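The integrity guarantee above follows the standard GNSS pattern: a protection level (a statistical bound on position error) is compared against an alert limit, and the fix is only used when the bound holds. A minimal sketch, assuming an illustrative lane-level alert limit rather than Sixents' actual parameters:

```python
def integrity_check(protection_level_m: float, alert_limit_m: float) -> str:
    """Classic integrity decision: the position is usable only while the
    protection level stays below the alert limit for the intended function."""
    if protection_level_m <= alert_limit_m:
        return "available"       # error bound holds: safe for lane-level tasks
    return "unavailable"         # raise an integrity alert, fall back (e.g. to DR)

# Hypothetical lane-level alert limit of 0.5 m:
LANE_ALERT_LIMIT_M = 0.5
```

The same comparison is what makes services like this certifiable: the monitor must flag "unavailable" before the true error can exceed the limit.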
Product: Smart Corner
To support future ADAS and AD, Marelli AL has developed the Smart Corner™ intelligent lighting technology, which integrates sensors into the lights so that they get the best field of view and greater performance. Sensors that can be integrated include radar, LiDAR, camera, FIR camera, and LiFi. An electronic control unit integrating the perception algorithms controls the new intelligent lights.
It solves styling pain points and preserves stylistic integrity, simplifies the vehicle architecture through pre-calibrated sensors, lowers installation costs thanks to an easier manufacturing process, and avoids adverse environmental effects on the sensors via an efficient lens cleaning and heating system.
MIND Electronic Appliances
Product: Ring Topology Intelligent Power Distribution & Data Pre-processing Unit
To support self-driving systems at SAE Level 4 and above, the ring-topology intelligent power distribution & data pre-processing unit has been researched and developed. The unit serves as the center for reliable power distribution and mass raw-data pre-processing, effectively undertakes the physical-layer functions of the zonal architecture, and supports zonal integration solutions for the self-driving system.
• Reliable power distribution: redundant power distribution for the main loop via MOSFET power switches, plus smart power management and fault diagnosis for branch loads through an exclusively designed smart e-fuse module.
• Mass data pre-processing: to collect and pre-process raw data from sensors including lidar, ultrasonic sensors, and cameras, an SBC with an HSD drive module integrates detection, A/D conversion, and data loading, reducing the zonal CPU's computing duty cycles.
• Network load stabilization: a uniquely designed built-in Ethernet TCP/IP offload engine (TOE) module moves the Ethernet protocol stack off the CPU and into the TOE module, greatly improving the stability of high-speed data transmission for the autonomous driving system.
Ring topology smart power supply & data pre-processing unit
• Safety solution by annular redundant power supply
Power supply for main loops: uniquely designed MOSFET modules detect over-voltage, over-current, under-voltage, and over-temperature conditions on the main loops, providing failure protection and fault diagnosis. MOSFET modules placed on both the power input and output sides of each unit achieve two-way fault isolation for redundant supply: when a failure occurs on a loop, the faulty section is detected and cut off from the main loop while the module on the other side takes over, preventing a system power-down and ensuring reliable power for the self-driving system.
Power supply for branches: a series-parallel integrated solution built on the newly designed smart e-fuse module covers vehicle electrical loads from 2A to 100A, providing downstream power distribution, power management, and fault diagnosis. Replacing traditional fuses and relays, the smart e-fuse modules achieve a cut-off response time under 10μs, resettable-fuse behavior, noiseless operation, a service life of 1 million operations, and a roughly 20% reduction in wire gauge.
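The branch-load diagnosis and cut-off logic described above can be sketched as a simple decision function. This is an illustrative sketch only; the thresholds are hypothetical, not the vendor's calibration, and a real e-fuse implements this in hardware within microseconds.

```python
def efuse_decision(voltage_v: float, current_a: float, temp_c: float,
                   v_min: float = 9.0, v_max: float = 16.0,
                   i_max: float = 100.0, t_max: float = 125.0) -> dict:
    """Smart e-fuse sketch: diagnose a branch load and decide whether to open
    the MOSFET switch. All thresholds here are illustrative defaults."""
    faults = []
    if voltage_v > v_max:
        faults.append("over-voltage")
    if voltage_v < v_min:
        faults.append("under-voltage")
    if current_a > i_max:
        faults.append("over-current")
    if temp_c > t_max:
        faults.append("over-temperature")
    # Unlike a melting fuse, the switch can be re-closed once the fault clears,
    # which is what "resettable" and "1 million operations" refer to.
    return {"cut_off": bool(faults), "faults": faults}
```

The two-way isolation on the main loop is the same idea applied twice, once on the input side and once on the output side of each unit.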
• High-speed Ethernet network stabilization
The built-in Ethernet TCP/IP offload engine module is designed to resolve the technical bottleneck of overloaded zonal CPUs losing quick response to the mass raw data packets generated by high-speed sensor transmission, greatly improving the stability of the high-speed Ethernet system.
Product: Autonomous Driving Data Solutions
MindFlow Technology builds around its data intelligence platform, integrating a rich set of efficient annotation tools for one-stop processing of image, voice, text, video, and 3D point-cloud data. Managing the whole AI-data life cycle, it handles all kinds of massive labeling scenarios and realizes full-link aggregation of data from "raw materials" to "finished products."
MindFlow Technology's self-developed SEED platform combines multi-dimensional annotation, project management, data life-cycle management, supply-chain management, project collaboration, AI human-machine collaboration, custom permissions, and full-scene annotation into a multi-dimensional data processing capability.
Supported by these functional modules, the platform's data labeling efficiency improves more than 10-fold on average, and with AI-assisted screening, data accuracy can reach 99.99%, meeting the massive demand for multi-source heterogeneous data.
Product: L4 autonomous driving and intelligent logistics overall solutions & services in closed scenarios
Focusing on short-distance, high-frequency application scenarios such as mining areas, ports, and yards, the company provides autonomous driving and intelligent logistics solutions and services. It has independently developed key technologies including a modular environment-sensing system, a high-precision all-condition positioning system, a high-performance decision controller, a vehicle domain controller, a vehicle drive-by-wire platform, an intelligent vehicle networking platform, a cloud service platform, and a production scheduling platform, giving it the ability to deliver a complete set of unmanned intelligent logistics software and hardware for mining areas and similar scenarios. According to customer needs, MaxSense also provides operation services in addition to product solutions and technical support.
• The team comes from Shanghai Jiao Tong University, Fudan, Tongji, Braunschweig, and other well-known universities, and brings rich industry and product experience from BMW, GE, ZPMC, Volkswagen, SAIC, China Electronics Group, CHN ENERGY, and others. It is a practical, pragmatic team in the autonomous driving industry that pays great attention to engineering;
• Profound accumulation of robot and autonomous driving related theories, technologies, engineering capabilities and experiences;
• The product fully considers the needs of customers and commercial implementation, with significant cost and product price advantages;
• Independent development of software and hardware, productization of core components, and full stack technical reserves;
• Mature, complete products across various types of commercial vehicles and construction machinery, such as highway dump trucks, wide-body dump trucks, AGVs, minibuses, mining trucks, agricultural machinery, and ships, covering diesel, pure electric, hybrid, and other power sources;
• Selected as a strategic investment target by Shandong Heavy Industry Group and Weichai Group as the only L4 closed-scenario autonomous driving technology company, cooperating with Weichai across mass-production vehicles, typical scenario building, strategic agreements, and more;
• At present, several large state-owned enterprise groups, including China National Building Materials, China Ordnance Group, Weichai Group, Shandong Port Group, Weicheng Wanxin, and Huayue Holdings, are building model projects for autonomous driving in closed scenarios.
Product: 4D Imaging Radar
Accurately identifies static targets, supports calculating an object's height relative to the road, and captures the spatial coordinates of targets around the vehicle, providing more realistic path planning and drivable-space detection to determine whether the vehicle can safely pass a detected object;
Distinguishes in real time whether a target is a pedestrian or a motor vehicle;
High-precision range resolution that can separate two people standing close together or two parallel cars in the distance;
Supports accurate, synchronized measurement of different targets' speeds at detection distances up to 150m. This information is critical for safe driving decisions tied to route planning, such as automatic emergency braking and adaptive cruise control, minimizing both false and missed braking;
Delivers high-resolution, high-density point-cloud information similar to lidar.
The I79 4D imaging radar has a strong hardware configuration and computing power. It adopts an imaging method that combines virtual and real apertures, and comprehensively optimizes the aperture transformation, the number of transceiver channels, the algorithm architecture, and operating efficiency. It thus achieves higher point-cloud density and stronger target resolution and recognition while maintaining the cost-effectiveness and reliability of millimeter-wave radar, delivering lidar-like high-resolution, high-density point clouds with robustness beyond lidar.
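The link between measured range and relative speed and decisions like AEB is usually expressed through time-to-collision (TTC). A minimal generic sketch of that relationship follows; the braking threshold is illustrative and not taken from the I79's specification.

```python
def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """TTC = range / closing speed; infinite when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def aeb_request(range_m: float, closing_speed_mps: float,
                ttc_brake_s: float = 1.5) -> bool:
    """Request emergency braking only below an illustrative TTC threshold.
    Accurate per-target speed measurement keeps TTC estimates tight, which is
    how a radar helps avoid both false and missed braking."""
    return time_to_collision(range_m, closing_speed_mps) < ttc_brake_s
```

A target 150 m away closing at highway speed still has a large TTC, so the radar's long detection range buys decision time without forcing early braking.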
Product: IDC3.0 autonomous driving domain controller
Based on Qualcomm 8540+9000 platform with 5nm technology;
Supports L3/L4 autonomous driving functions (driving mode: end-to-end autonomous driving, etc.; parking mode: valet parking);
Supports display output and video recording;
Supports OTA updates across driving-mode and parking-mode scenarios;
Functional safety (FuSa) and cybersecurity;
Powerful computing capability: 360 TOPS per controller, expandable up to 1,440 TOPS;
Centimeter-level positioning precision;
Strong data-return capability.
Product: Intelligent Tire Solution (ITS)
Sensata’s new generation Intelligent Tire Solution (ITS) is based on pressure and acceleration sensing technology, integrated with the latest Bluetooth communication and advanced software modeling technology, providing tire maintenance, remote monitoring, and vehicle condition monitoring.
• Provide more convenient communication for OEMs
• Simplify the operation process for tire manufacturers and after-sales maintenance
• Provide accurate input for automotive energy management system
• Reduce energy loss in the process of driving
• Improve vehicle safety.
Sunny Smartlead Technologies
Product: 8MP High-Resolution Automotive Front Sensing Camera Module
Serving the advanced driver-assistance field, this module addresses the problems of low-pixel perception: limited recognition, low accuracy, and narrow angles at close range. The company has developed the world's first mass-produced 8MP environment-perception module, solving image blurring under automotive-grade ambient temperatures and complex conditions, as well as recognition errors caused by changes in intrinsic and extrinsic parameters and lens shedding. Solving these industry pain points, ensuring quality, and creating value for customers through deep cultivation of product fundamentals is of great significance for improving the technical capability and overall competitiveness of China's independently developed ADAS camera modules in the autonomous driving field.
This product applies anti-lens-shedding technology to ensure robust design and eliminate the risk of lens shedding. An innovative temperature-drift solution solves high-pixel image blur across the full temperature range; professional stray-light technology, combined with the windshield hood, addresses glare, ghosting, and related problems. Through self-developed calibration technology, the company has solved issues such as intrinsic/extrinsic calibration consistency and improved algorithm stability, alongside innovative heat-dissipation design, EMC solutions, and mass-production capacity above the 100K level, providing stable, reliable support for the development and mass production of many high-quality customers at home and abroad.
Product: Port Unmanned and Cloud Control system
By building a fully connected wireless network, SNIC realizes comprehensive perception and intelligent scheduling of port transportation elements such as containers, transportation equipment, instruments, and personnel. Based on 5G mobile edge computing, it carries out on-site video security monitoring, real-time data acquisition, remote handling, and scheduling. Meanwhile, applying autonomous driving in the port area changes how goods are loaded, unloaded, and transported, promoting the replacement of labor with intelligent machinery.
After extensive experiments and trial operations, the driverless solution has been deployed at two terminals and tested at seven, with commercial operation orders formally signed, accumulating a large volume of real operation scenarios and data.
At present, autonomous driving has six mainstream application scenarios in the commercial vehicle field: ports, logistics parks, mining areas, airports, trunk logistics, and terminal logistics. Because it is semi-closed and highly standardized, the port is regarded as one of the scenarios with the most potential for rapid digital transformation.
Different from other driverless application scenarios, port driverless has its own application environment and technical requirements.
First, in terms of intelligence, to handle the port's complex mix of people, vehicles, and goods, the unmanned truck is equipped with advanced technologies such as lidar, millimeter-wave radar, ultrasonic radar, high-definition cameras, and a satellite positioning module. Each sensor has its own strengths; ultrasonic radar in particular can detect all kinds of nearby objects, making it more practical for maneuvers such as reversing and well suited to transportation around the port;
Second, in terms of networking, domestic port driverless trucks basically adopt 5G networks and V2X technology, continuously strengthening cooperative perception between the vehicle side and the roadside and realizing interconnection between driverless trucks and the port's automated production equipment and systems. Cloud computing and remote monitoring services can also be added for real-time system optimization, intelligent dispatching control, and remote driving, providing a multi-dimensional guarantee for the safe operation of driverless trucks;
Third, technically, it differs greatly from AGVs (automated guided vehicles) and IGVs (intelligent guided vehicles): AGVs require magnetic nails laid along the driving route, and IGVs have relatively poor positioning accuracy. The driverless truck integrates their technical advantages while avoiding their shortcomings, at lower cost.
Unmanned container trucks are therefore very important for container terminals, whose horizontal transportation directly determines a port's overall operating efficiency. The unmanned container truck has innovated the terminal's horizontal transportation scheme, greatly improving the working environment, reducing labor intensity, and resolving the problems of driver fatigue and major safety hazards. The technical capability of the company's Si Nian intelligent driving team ranks first in the domestic port sector, with an estimated 500 million per year within three years. The business model covers two directions, operation and sales; for operation, it is the only team adopting this model in the domestic port driverless sector.
Product: NavInfo Autonomous Driving Solution
NavInfo's autonomous driving solution covers L0-L4 autonomous driving systems, including integrated software-and-hardware mass-production solutions for the parking and driving domains. Built on platformized, standardized technical solutions and the sustainable-development idea of expandable hardware and upgradeable software, it achieves a win-win in cost and market competitiveness. NavInfo began its autonomous driving layout in 2015, integrating its industry-leading mapping and location-service experience, its accumulated knowledge of real-world scenarios, and its practiced application of AI technology into the research and development of autonomous driving algorithms and solutions. With the goal of creating a truly safe and reliable autonomous driving system, and making full use of product advantages such as HD maps and HD GNSS, NavInfo is gradually forming its own path for autonomous driving development.
The ADS solution includes an autonomous driving engine (Auto On Map) built on NavInfo's rich map data capability and experience. Fully considering safety and comfort, it provides static route planning for autonomous driving and deeply combines it with the dynamic sensing results of vehicle-side sensors, greatly improving the efficiency of the autonomous driving system and realizing comprehensive support for it.
Vehicle-grade functional safety deployment, with several of the industry's highest-level certifications, including IATF 16949, ISO 9001, ASPICE CL3, ASPICE ML3, and ISO 26262.
All algorithms are self-researched, which can better match custom development needs, optimize hardware and software selection to control costs, and provide software and hardware solutions with strong engineering capabilities.
The ADS 2.0 platform integrates the ADS and IVI domains; its distributed, software-defined architecture makes algorithm porting and resource expansion more convenient and upgrades the high-computing-power platform to support L2-L4 autonomous driving functions.
Product: L4 autonomous driving universal software and hardware platform
The platform is made up of four parts: the LCS lidar-camera sensor system, the WeRide ONE solution, a full safety-redundancy design, and an automatic big data platform.
1） LCS Sensor Integration Solution: modular, configurable, adaptive.
2） WeRide ONE: a universal self-driving solution for comprehensive open urban roads.
3） Fully Safety Redundancy Design: the comprehensive AD safety solution that covers sensors, compute units, DBW (drive-by-wire) and network connection.
4） Automatic Big Data Platform: automatic data collection, processing and deployment
The platform has been adapted to 4 innovative autonomous driving products: an all-round mix of Robotaxi, Mini Robobus, Robovan, and Robo Street Sweeper, providing multiple services including online ride-hailing, on-demand transport, urban logistics, and smart environmental services.
Adapts Mobileye EyeQ4 + MCU, supporting static target recognition and multi-sensor fusion;
Functional safety level ASIL B;
Support HD Map, positioning (REM: Road Experience Management);
Available functions: AEB, iACC, FCW, LDW, LKA, TSR, HLB, TJA/ICA, ELK, EDR, etc.
Global platform development; the product roadmap covers 1V, 2R/4R1V, and 4R1V+ up to 4R9V1D, meeting L2+ demand;
Reliable visual solution with a low probability of accidental AEB triggering and full-time camera features;
Based on REM technology: high-precision positioning with real-time updating and self-calibration of road characteristics to handle hard-to-identify situations;
Projects adapted for markets including China, NAFTA, the EU, and Japan;
Flexible support for customer development roles and development models;
Full AUTOSAR support for collaborative development;
WBTL cooperates with competent Tier 2 suppliers to ensure the solution is industry-leading in competitiveness and functional coverage.
Product: High-level Autonomous Driving SoC AD1000
AD1000 is a high-performance heterogeneous computing SoC designed for autonomous driving, and its scalability design makes it suitable for L2+ to L5 autonomous driving systems.
• Industry-leading 7nm process: higher computing power with lower power consumption
• Dedicated, unified neural-network accelerator optimized for autonomous driving scenarios, delivering higher performance and efficiency through a carefully designed memory hierarchy, with floating-point support
• More powerful sensor input capability: a single SoC covers all camera, radar, and lidar sensors
• Powerful data-recording capability that, matched with the perception system, forms a device-cloud closed loop
• Designed in accordance with ISO 26262 functional safety requirements, with an independent functional-safety island
• Rich communication and interconnection interfaces
• Supports an EVITA Full hardware security module (HSM), including OSCCA SM2/SM3/SM4 algorithms
Product: MEMS LiDAR
Zvision's LiDAR products adopt a solid-state technology route. Using MEMS technology to scan the beam overcomes the reliability and lifetime problems that make traditional mechanical motors hard to qualify for automotive use. Through optical design combined with mature, small-size MEMS devices available on the market, the products achieve both high resolution and high frame rate at long range while meeting automotive-grade reliability.
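The motorless scanning idea can be illustrated with a toy pattern generator: a resonant MEMS mirror oscillates the beam rapidly on the fast axis while the slow axis sweeps linearly. This is a generic sketch with illustrative parameters, not Zvision's actual scan pattern or field of view.

```python
import math

def mems_scan_pattern(n_points: int, fov_h_deg: float = 120.0,
                      fov_v_deg: float = 25.0, fast_ratio: int = 40) -> list:
    """Sketch of a MEMS-mirror scan: the fast (horizontal) axis oscillates
    sinusoidally, the slow (vertical) axis sweeps linearly, steering the beam
    without any mechanical motor. All parameters are illustrative."""
    pts = []
    for i in range(n_points):
        t = i / n_points
        h = (fov_h_deg / 2) * math.sin(2 * math.pi * fast_ratio * t)  # fast axis
        v = fov_v_deg * (t - 0.5)                                     # slow axis
        pts.append((h, v))
    return pts
```

Real designs shape the pattern (and dwell time per angle) to trade resolution against frame rate, which is the balance the paragraph above refers to.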
Strong R&D force
R&D staff account for more than 60% of the company, including more than 15% with doctoral degrees or above and more than 50% with master's degrees or above. The core R&D team comes from internationally renowned universities such as Tsinghua, the Polytechnic University of Madrid, the Technical University of Munich, and Beijing Polytechnic University, as well as famous Tier 1 companies such as Osram, Bosch, and Autoliv.
Industry-leading technology strength
Automotive-grade MEMS LiDAR has passed automotive-grade vibration, shock and temperature cycling tests.
The industry's first whole-vehicle MEMS solution, offering the longest-range LiDAR products and the largest-field-of-view blind-zone LiDAR.
Core chip self-research capability
With 13 years of world-leading chip R&D and mass production experience, we have established the self-research capability of integrated optoelectronic chip and analog-digital hybrid ASIC chip to build the core competitiveness from the bottom to the system.
Vehicle specification mass production manufacturing system and quality management system
The first 5,000-square-meter MEMS LiDAR production line in China is located in Changshu, Jiangsu Province, where the auto-parts industry is well developed.
It is the first LiDAR production line to hold IATF 16949 and ISO 14001 certifications.
With the core goal of “domestic high quality LiDAR”, we have established a strict quality management system to ensure traceability and control of data in all aspects of the production line.
Product: VT-Pilot Auto Pilot Assistance System
The VT-Pilot Auto Pilot Assistance System encompasses forward camera systems, APA/RPA/HPP/AVP, 360°/540°/720° around-view parking, in-car monitoring, a camera monitor system, ACC/TSR, and other functions. Combining multi-sensor data fusion, a high-computing-power platform with self-developed high-performance neural-network algorithms, and a self-developed planning algorithm with dynamic path-planning adjustment, it supports L2-L4 automated driving scenarios.
It covers more than 15,000 real parking scenarios. At present, many parking functions are exclusive highlights of Voyager Technology: for example, APA supports parking into and out of mechanical parking spaces, and AVP can handle a new parking lot without prior learning, completing self-patrol, obstacle avoidance, and remote summoning within the lot.
High-quality, full-stack solutions covering sensor fusion, software algorithms, and system integration, developed in-house by an R&D and production team experienced in IoV, chips, computer vision, and supply-chain management.
Building its technical solutions from the bottom up, from hardware to algorithm software and system integration, and from basic to advanced assistance, Voyager Technology can meet OEMs' product needs across different dimensions;
Massive real driving database lays a solid foundation for product research and development, update and performance optimization.
Product: FV6 Automotive Dual-spectrum Fusion Camera
The Asens FV6 is a new automotive dual-spectrum fusion camera: 1080p visible light plus high-resolution infrared, a high frame rate, and various display modes, delivering clear presentation in any weather and scenario to ensure a safe journey.
Relying on the group's self-developed chips, it holds completely independent intellectual property rights;
Comply with the national standard of passive infrared detection device for automobiles;
The infrared channel adopts a scene-based non-blocking algorithm;
Small size for easy integration;
Low power consumption;
640*512 high-resolution infrared, 1080P visible light, clear image;
Visible light offers low-illumination performance, HDR, and LED flicker suppression
Product: Parking and Driving Integrated Solution
The parking and driving integrated solution presented by iMotion uses the 4R5V hardware configuration. It realizes many functions such as NOA (navigation assisted driving), L2 driving, and HPA by using 4 radars and 5 cameras. Equipped with an integrated domain controller for high and low-speed driving and parking, it provides customers with an excellent L2+ intelligent driving experience.
• Flexible system architecture design supports compatibility with domestic chips, functional safety is developed according to ISO 26262 ASIL B, supports multi-sensor fusion, and supports cloud and big data closed-loop development.
• In terms of hardware design, it supports ISO 26262 ASIL-B (D) functional safety level, supports the continuous iterative upgrade of the expansion of the large computing power platform, and rich interface protocols can flexibly match customer requirements.
• In terms of software architecture, it provides SOA-oriented software architecture and pre-installed basic software, which significantly reduces development costs and shortens development cycles.
Product: Long-short term Decoupled Decision-Making and Planning System
Instead of designing an integrated decision-making and trajectory-planning component as in conventional approaches, we decouple them into independent modules. The reinforcement learning agent in the decision-making component approximates inference over the future decision space, allowing long-term, complex, uncertain, and interactive reasoning about ego behavior. The decision module outputs an expected semantic action, such as follow, lane change, or overtake, to the trajectory-planning component. The trajectory-planning module uses an accurate kinematic model and object prediction, focusing on planning short-term, time-critical, collision-free, and comfortable trajectories that achieve the received semantic action.
Decision-making and trajectory planning each have advantages and drawbacks, and their pros and cons are complementary. Inference over an environment model lets decision-making apply flexible methods to address complicated problems efficiently, but the resulting decisions can be coarse and error-prone. Conversely, accurate continuous modeling guarantees the accuracy, safety, and comfort of trajectory planning, but limits both the pool of applicable methods and the scale of the problem. One way to use the pros and avoid the cons is to decouple decision and planning: let decision-making choose a semantic action in a large-scale complex environment, and let trajectory planning focus on achieving that specific action while avoiding collisions. This intuitively resembles human driving, where we partition the driving scenario into time-spatial spaces, estimate the sequence of spaces to pass through, then plan throttle and steering within those spaces.
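The decoupled interface described above boils down to one module emitting a semantic action and another consuming it. A minimal Python sketch, with a rule-based stand-in for the RL agent and a trivial stand-in for the planner; all scene fields and names are hypothetical.

```python
from enum import Enum

class SemanticAction(Enum):
    FOLLOW = "follow"
    OVERTAKE = "overtake"

def decide(scene: dict) -> SemanticAction:
    """Decision module (stand-in for the RL agent): reason over the long-term
    decision space and emit a coarse semantic action."""
    if scene["lead_speed"] < scene["ego_target_speed"] and scene["left_lane_free"]:
        return SemanticAction.OVERTAKE
    return SemanticAction.FOLLOW

def plan(action: SemanticAction, scene: dict) -> dict:
    """Trajectory planner: turn the semantic action into a short-horizon,
    kinematically concrete target (here reduced to a target lane and speed)."""
    if action is SemanticAction.OVERTAKE:
        return {"target_lane": scene["ego_lane"] - 1,
                "target_speed": scene["ego_target_speed"]}
    return {"target_lane": scene["ego_lane"],
            "target_speed": min(scene["ego_target_speed"], scene["lead_speed"])}
```

The key design point is the narrow interface: the decision side never emits trajectories, and the planning side never reasons about long-horizon interaction, which is exactly the complementarity the paragraph argues for.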
Product: MAXIPILOT®2.0
MAXIPILOT®2.0 is an intelligent navigated driving assistance system developed by MAXIEYE that covers L0-L2++ functions, supporting NOM and an integrated driving-parking solution.
Based on self-developed full-stack technology chain of deep learning perception algorithm, data fusion and planning control, the system can support point-to-point mobility.
First, the system connects to high-precision maps and uses SLAM technology. With FPP integrated path-planning technology, it makes full use of the surrounding environment, forming a comprehensive judgement that prioritizes navigation, lane lines, traffic flow, railings, guardrails, and roadside information. This effectively reduces the takeover rate and improves system fluency.
Second, to improve perception accuracy and reduce the impact of accidental failures, the system adopts a double-redundancy design for target information.
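A double-redundancy design of this kind can be illustrated with a simple cross-validation sketch (the distance gate and data layout are illustrative assumptions, not the product's actual pipeline):

```python
import math

def cross_validate(primary, secondary, max_gap=1.0):
    """Hypothetical double-redundancy check: a target is reported
    only when both independent channels confirm it.

    `primary` / `secondary` are lists of (x, y) target positions
    from the two redundant channels; a target is accepted when the
    other channel has a detection within `max_gap` metres."""
    confirmed = []
    for px, py in primary:
        if any(math.hypot(px - sx, py - sy) <= max_gap
               for sx, sy in secondary):
            confirmed.append((px, py))
    return confirmed

# A spurious detection in the primary channel is filtered out
# because the secondary channel never saw it.
targets = cross_validate([(10.0, 0.0), (55.0, 3.0)],
                         [(10.3, 0.2)])
```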
Third, the system's long-distance forward vision helps the parking system achieve positioning and improves agility, allowing parking at a faster speed.
Fourth, the system supports OTA online upgrades and a shadow mode, providing users with a data-driven closed loop of technology, product, and service.
Superior product experience. Backed by accumulated engineering mass production and large-scale data verification, the system delivers a smooth driving experience across a variety of complex scenes (e.g., tunnels, curves, complex lane lines, extreme cut-ins). In parking scenarios, its long-distance forward vision helps the parking system achieve positioning and park at a faster speed.
Based on FPP technology, the system effectively reduces the takeover rate and improves system fluency. It makes full use of surrounding-environment information, making a comprehensive judgement that prioritizes navigation, lane lines, traffic flow, railings, guardrails, and roadside information, and keeps the vehicle driving smoothly even when the high-precision map or navigation is missing.
Flexible and customized solutions. Based on its full-stack capability, MAXIPILOT®2.0 offers users driving-style options that match different driving strategies, from an efficiency-first mode to a comfort mode.
Product: ThunderSoft AVM
ThunderSoft AVM products integrate industry-leading 2D/3D environmental image-stitching technologies and visual perception algorithms. Equipped with advanced graphics and image-rendering engines, they enable functions such as self-adaptive stitching, HD transparent chassis, adaptive color correction, dynamic stitching seams, dynamic blind spots, and dynamic trajectory, actively safeguarding intelligent driving.
• Industry-leading domain control-level AVM products and solutions
• Stronger computing power platform for higher performance and better user experience
• Architecture design with separation of software and hardware
• Easy OTA
• Production-level system architecture with Hypervisor based QNX or Android
The products hold a rich set of high-quality algorithms, including 360° seamless stitching with adaptive stitching, distortion correction and adaptive color correction, high-definition/high-precision transparent chassis, dynamic blind spots, dynamic stitching seams, dynamic trajectory-line scales, viewpoint follow-up, and rich viewpoint customization.
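To illustrate one of the listed algorithms, adaptive color correction for surround-view stitching can be sketched as matching per-camera brightness gains in overlapping regions; this is a minimal sketch under our own assumptions, not ThunderSoft's implementation:

```python
def adaptive_gains(overlap_means):
    """Chain cameras pairwise so neighbours agree on brightness.

    `overlap_means[i]` = (mean brightness camera i observes in its
    overlap with camera i+1, mean brightness camera i+1 observes in
    the same region). Returns one multiplicative gain per camera,
    normalised so overall exposure is unchanged."""
    n = len(overlap_means) + 1
    gains = [1.0]
    for a, b in overlap_means:           # walk around the vehicle
        gains.append(gains[-1] * a / b)  # make camera i+1 match camera i
    mean_gain = sum(gains) / n           # preserve average exposure
    return [g / mean_gain for g in gains]

# Example: the second camera is under-exposed relative to the first,
# so its gain is raised to hide the seam between them.
gains = adaptive_gains([(120.0, 100.0), (118.0, 119.0)])
```

Real AVM pipelines typically do this per colour channel and blend along the stitching seam as well; the gain chain above only conveys the core idea.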
The product features stunning, game-level graphics and animations and is the industry's first Kanzi-based AVM, achieving high-quality 2D/3D rendering and animation effects. These include exquisite 3D car models, ambient light/colors and materials, dynamic 2D envelope lines and 3D radar walls, rich camera-movement effects, and other information such as open-door warnings, blind spots, and license plates.
Its camera image-quality tuning delivers a realistic and clear presentation of the surrounding environment, with excellent image quality in complex conditions such as bright daylight, night, overexposure, and backlighting.
1) Product: Drop’nGo
Drop’nGo helps users realize autonomous valet parking, connectivity, and remote monitoring in offices, homes, large supermarkets, airports, and other parking scenarios. When users get out of the car at the parking lot, Drop’nGo performs perception, real-time positioning, and scanning of environmental information, and autonomously cruises to a parking space or pick-up point to meet users’ last-kilometer parking needs. The Drop’nGo project is built on a unified software architecture and map architecture, creating a unified parking experience and reverse car-search service covering L0-L4 parking functions, and providing users with the simple, intelligent experience of “drop off, and go away”.
The Drop’nGo software platform is based on Zongmu’s new generation of software architecture and technical solutions, which creates a cross-model, L0-L4 parking platform (AVM, APA, HPP, AVP), a cross-hardware platform (from small to large computing power), and a plug-and-play software middle platform for sensors, serving multi-model projects in the form of software productization. Through the data engine, crowdsourced maps, and other technologies, scenarios can be continuously optimized and iterated to create a complete closed loop of user experience.
2) Product: 4D millimeter-wave radar
Zongmu Technology’s first-generation short-range 4D millimeter-wave radar (hereinafter referred to as ZM-SDR1) is the world’s first dual-mode 4D millimeter-wave radar compatible with both high-speed ADAS applications and low-speed parking applications. It provides users with ADAS functions such as BSD, LCA, FCTA, RCTA, RCW, and DOW, and supports automatic driving systems at all levels (Level 1-5). After two years of careful refinement, the ZM-SDR1 radar has been fully verified in terms of market scarcity, technological leadership, engineering applicability, and manufacturability, and has now entered mass production. Designated customers include Jinkang, Meituan, and JAC, and total shipments are expected to exceed 400,000 units this year.
At present, the automotive millimeter-wave radar market is still monopolized by international component suppliers. However, these suppliers focus mainly on high-speed ADAS and cannot meet the needs of low-speed application scenarios. With the increasing maturity of autonomous driving technology, and especially the rapid popularization of intelligent parking products, the market urgently needs a millimeter-wave radar compatible with both high-speed ADAS and low-speed parking scenarios. The ZM-SDR1 has been specially optimized for high-speed ADAS scenarios (urban/town structured roads) and low-speed parking scenarios (underground/surface parking lots, parks, etc.), meeting the application requirements of autonomous driving while providing a low-cost, all-weather, high-reliability environment-perception solution. In summary, the ZM-SDR1 is the only radar product that can serve both high-speed ADAS and low-speed parking applications.
From the perspective of product requirements, the ZM-SDR1 adopts a dual-mode design and is further tailored to both high-speed and low-speed scenarios:
• Denser point clouds (more than 10,000 points per vehicle from 4 radars)
• Better detection capability (≥100m @ -45° to 45°)
• Smaller detection blind spot (≤15cm, better than ultrasonic sensors)
• Larger field of view (FOV ≥150°)
• Higher resolution (angular resolution ≤5°)
• Altitude measurement capability (4D point cloud; traditional corner radars cannot measure altitude)
The ZM-SDR1 radar was benchmarked against the fifth-generation products of international giants from the beginning of its design.
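As a back-of-envelope check of what an angular resolution of ≤5° means at the quoted ≥100m range (our own arithmetic, not vendor data):

```python
import math

def lateral_separation(range_m: float, theta_deg: float) -> float:
    """Lateral distance two targets at range `range_m` must be apart
    to subtend the radar's angular resolution `theta_deg` and thus
    appear as separate detections (chord-length geometry)."""
    return 2 * range_m * math.sin(math.radians(theta_deg) / 2)

# At 100 m with 5 deg resolution, targets need to be roughly
# 8-9 m apart laterally to be resolved as two objects.
sep = lateral_separation(100.0, 5.0)
```

This is why angular resolution matters more than raw range for separating adjacent vehicles, and why the added elevation (4D) channel helps distinguish overhead structures from obstacles.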
In addition, the high-density 4D radar point cloud provided by the ZM-SDR1 can accurately reconstruct the 3D contour of a scene. The point cloud rendering of the ZM-SDR1 is shown in Figure 2. At present, its point-cloud performance is comparable to that of an 8-line lidar at only 1/20 of the cost.