https://www.engadget.com/2019/10/09/toyota-gm-nvidia-and-others-team-up-on-self-driving-car-chips/
Toyota, GM, NVIDIA and others team up on self-driving car chips
Autonomous vehicles are a big job for just one company.
Autonomous vehicles pose a whole bunch of R&D challenges. With so many aspects to consider -- power consumption, safety, user interface and data management, to name just a few -- creating a common computing platform for their use is a big ask of just one company. That's why a group of automotive and tech businesses have joined forces to create the Autonomous Vehicle Computing Consortium (AVCC), in a bid to create a platform that will promote the scalable deployment of automated and autonomous vehicles.
The consortium includes ARM, Bosch, Continental, DENSO, General Motors, NVIDIA, NXP Semiconductors and Toyota (whose P4 Automated Driving Test Vehicle is pictured above), who will collaborate on overcoming some of the most significant challenges posed by autonomous vehicles -- the group's first step will be developing a set of recommendations for a system architecture for the computing platform. According to Alex Harrod of Arm, "The group brings together a unique combination of expertise and a shared goal," and will be welcoming input from other interested parties and members of the automotive ecosystem.
-
Note this article uses the 'scalable deployment' of 'automated vehicles'... that's automated (with a driver), NOT autonomous vehicles (assumed without a driver)
and 'will be welcoming input from other interested parties and members of the automotive ecosystem.'
'the automotive ecosystem'....well, I'd say that includes Driver Monitoring Systems
so which DMS will become the industry standard for the major Automakers?
If there is collaborative work going on between companies in the AVCC, then it's not difficult to imagine how the processes described below could complement or work with the GM patent. The close timing of these patents, and their possible alignment, could also give the impression to those looking for patterns of some joint or parallel working between GM, Toyota and SEE.
VEHICLE SYSTEMS AND METHODS FOR DETERMINING A TARGET BASED ON A VIRTUAL EYE POSITION
https://worldwide.espacenet.com/publicationDetails/biblio?CC=US&NR=2019228242A1&KC=A1&FT=D&ND=3&date=20190725&DB=&locale=en_EP#
[0028] The user detection system 130 is communicatively coupled to the electronic control unit 102 over the communication path 104. The user detection system 130 may include any device configured to detect the presence of an object within the surrounding environment of the vehicle 100. More specifically, the user detection system 130 is configured to detect the presence of an object within the vicinity of the vehicle 100. The user detection system 130 may include a user detection sensor 132 configured to output an object signal indicative of the presence of one or more objects within the vicinity of the vehicle 100. Based on the object signal of the user detection sensor 132, the electronic control unit 102 may execute object recognition logic to detect an object and classify the detected object into a classification. The user detection sensor 132 may include, but is not limited to, a camera, a LiDAR sensor, a RADAR sensor, a sonar sensor, a proximity sensor, and the like. In some embodiments, the user detection system 130 includes more than one user detection sensor 132.
[0029] As explained below, the electronic control unit 102 calculates the target position 112. The electronic control unit 102 also classifies one or more objects located at the target position 112 based on the information provided by the object signal. More specifically, the electronic control unit 102 executes the object recognition logic to classify the type of object, where the object may be a person, another vehicle, a building, and the like.
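The detect-then-classify step in [0028]/[0029] can be sketched in a few lines. To be clear, the Detection structure, the vicinity threshold and the per-class score dictionary are my own illustrative assumptions, not anything taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by the user detection sensor (camera/LiDAR/etc.)."""
    distance_m: float   # distance from the vehicle
    scores: dict        # hypothetical class label -> confidence

VICINITY_M = 10.0  # assumed 'vicinity of the vehicle' threshold

def classify_nearby(detections):
    """Keep detections within the vehicle's vicinity and label each with
    its highest-confidence class, mimicking the object recognition logic."""
    results = []
    for d in detections:
        if d.distance_m <= VICINITY_M:
            label = max(d.scores, key=d.scores.get)
            results.append((label, d.distance_m))
    return results
```

A real ECU would run trained recognition models over the raw object signal; this only shows the shape of the presence-check plus classification flow the paragraph describes.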
ONE.
Possibly Related
SYSTEMS AND METHODS OF INCENTIVIZED DATA SHARING
Toyota TRI 2019-10-03
[0067] In arrangements in which the sensor system 120 includes a plurality of sensors, the sensors can function independently from each other. Alternatively, two or more of the sensors can work in combination with each other. In such a case, the two or more sensors can form a sensor network. The sensor system 120 and/or the one or more sensors can be operably connected to the processor(s) 110, the data store(s) 115, and/or another element of the vehicle 100 (including any of the elements shown in FIG. 1). The sensor system 120 can acquire data of at least a portion of the external environment of the vehicle 100 (e.g., nearby vehicles).
[0068] The sensor system 120 can include any suitable type of sensor. Various examples of different types of sensors will be described herein. However, it will be understood that the embodiments are not limited to the particular sensors described. The sensor system 120 can include one or more vehicle sensors 121. The vehicle sensor(s) 121 can detect, determine, and/or sense information about the vehicle 100 itself. In one or more arrangements, the vehicle sensor(s) 121 can be configured to detect, and/or sense position and orientation changes of the vehicle 100, such as, for example, based on inertial acceleration. In one or more arrangements, the vehicle sensor(s) 121 can include one or more accelerometers, one or more gyroscopes, an inertial measurement unit (IMU), a dead-reckoning system, a global navigation satellite system (GNSS), a global positioning system (GPS), a navigation system 147, and/or other suitable sensors. The vehicle sensor(s) 121 can be configured to detect, and/or sense one or more characteristics of the vehicle 100. In one or more arrangements, the vehicle sensor(s) 121 can include a speedometer to determine a current speed of the vehicle 100.
[0070] Alternatively, or in addition, the sensor system 120 can include one or more environment sensors 122 configured to acquire, and/or sense driving environment data. “Driving environment data” includes any data or information about the external environment in which an autonomous vehicle is located or one or more portions thereof. For example, the one or more environment sensors 122 can be configured to detect, quantify and/or sense obstacles in at least a portion of the external environment of the vehicle 100 and/or information/data about such obstacles. Such obstacles may be stationary objects and/or dynamic objects. The one or more environment sensors 122 can be configured to detect, measure, quantify and/or sense other things in the external environment of the vehicle 100, such as, for example, lane markers, signs, traffic lights, traffic signs, lane lines, crosswalks, curbs proximate the vehicle 100, off-road objects, etc.
TWO.
[0070] Various examples of sensors of the sensor system 120 will be described herein. The example sensors may be part of the one or more environment sensors 122 and/or the one or more vehicle sensors 121. Moreover, the sensor system 120 can include operator sensors that function to track or otherwise monitor aspects related to the driver/operator of the vehicle 100. However, it will be understood that the embodiments are not limited to the particular sensors described.
[0071] As an example, in one or more arrangements, the sensor system 120 can include one or more radar sensors 123, one or more LIDAR sensors 124, one or more sonar sensors 125, and/or one or more cameras 126. In one or more arrangements, the one or more cameras 126 can be high dynamic range (HDR) cameras, infrared (IR) cameras and so on. In one embodiment, the cameras 126 include one or more cameras disposed within a passenger compartment of the vehicle for performing eye-tracking on the operator/driver in order to determine a gaze of the operator/driver, an eye track of the operator/driver, and so on.
[0072] The vehicle 100 can include an input system 130. An “input system” includes any device, component, system, element or arrangement or groups thereof that enable information/data to be entered into a machine. The input system 130 can receive an input from a vehicle passenger (e.g. a driver or a passenger). The vehicle 100 can include an output system 135. An “output system” includes any device, component, or arrangement or groups thereof that enable information/data to be presented to a vehicle passenger (e.g., a person, a vehicle passenger, etc.).
https://worldwide.espacenet.com/publicationDetails/description?CC=US&NR=2019306241A1&KC=A1&FT=D&ND=3&date=20191003&DB=&locale=en_EP#
soulboy
that's a very good point. I wonder what other tech developments SEE is working on for OEMs, rather than simply distraction, fatigue etc., and whether NDA restrictions are indeed the main reason SEE cannot reveal the nuts and bolts of the OEM contracts....
One.
This is the most recent SEE patent * and looks to me to be closely related to the GM patent of 2019-10-17. The patent talks of driver gaze camera and forward facing camera alignment. Scrolling down to 81/82/83, the commentary talks of networks of processors, which may be related to the GM statement 'In such embodiments, the data from the vehicles is generally indicates the gaze of other drivers when at the same location as the vehicle'.
*SYSTEM AND METHOD FOR IDENTIFYING A CAMERA POSE OF A FORWARD FACING CAMERA IN A VEHICLE
A method includes capturing images of a vehicle driver's face from a driver facing camera and images of a forward road scene from a forward facing camera. The images are analyzed to derive gaze direction data in a vehicle frame of reference. The gaze direction data are statistically collated to determine a frequency distribution of gaze angles. One or more peaks in the frequency distribution are identified and associated with corresponding reference points in the images to determine one or more reference gaze positions in the vehicle frame of reference. The one or more reference gaze positions are correlated with a position of the reference points in a forward facing camera frame of reference to determine a camera pose of the forward facing camera in the vehicle frame of reference.
https://worldwide.espacenet.com/publicationDetails/description?CC=US&NR=2019266751A1&KC=A1&FT=D&ND=3&date=20190829&DB=&locale=en_EP#
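The core calibration step in that abstract — collate gaze angles into a frequency distribution and pick out the peaks (e.g. the straight-ahead road position) — can be sketched roughly like this. The bin width, peak threshold and NumPy-based implementation are my own assumptions, not from the patent:

```python
import numpy as np

def reference_gaze_peaks(gaze_angles_deg, bin_width=2.0, min_count=50):
    """Build a frequency distribution of gaze angles and return the angles
    at its peaks -- candidate reference gaze positions in the vehicle frame."""
    bins = np.arange(-90, 90 + bin_width, bin_width)
    counts, edges = np.histogram(gaze_angles_deg, bins=bins)
    peaks = []
    for i in range(1, len(counts) - 1):
        # a local maximum with enough supporting samples counts as a peak
        if counts[i] >= min_count and counts[i] > counts[i - 1] and counts[i] >= counts[i + 1]:
            peaks.append((edges[i] + edges[i + 1]) / 2.0)
    return peaks
```

The patent then correlates each peak with a known reference point seen by the forward facing camera to solve for the camera pose; that geometric step is omitted here.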
GM Patent; https://patents.justia.com/patent/20190318180
SYSTEM AND METHOD FOR IDENTIFYING A CAMERA POSE OF A FORWARD FACING CAMERA IN A VEHICLE
[0081] In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked to other processor(s). In a networked deployment, the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
[0082] Note that while diagrams only show a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described for clarity. For example, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
[0083] Thus, one embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of web server arrangement. Thus, as will be appreciated by those skilled in the art, embodiments of the present disclosure may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product. The computer-readable carrier medium carries computer readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method. Accordingly, embodiments of the present disclosure may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware embodiments. Furthermore, the present disclosure may take the form of carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.
https://worldwide.espacenet.com/publicationDetails/description?CC=US&NR=2019266751A1&KC=A1&FT=D&ND=3&date=20190829&DB=&locale=en_EP#
I get the impression that these OEMs ARE actually taking the Fovio chip but then adding additional tech which as a package then becomes ‘proprietary’
It’s the way they are each trying to introduce something new and ‘unique’ to stay ahead of the competition and why they insist on building strict NDA terms. It’s the reason we can’t announce these deals but it doesn’t take a genius to work out we are at the core. It’s a nonsense really and frustrates me because this was at the core of why we had to have a fundraise at 3p. Had the OEMs allowed us to announce, our price would have reflected the good news and probably been north of 6p. Restrictive in every sense and cost US the shareholders a lot of money. Rant over - onwards and upwards!
Why wouldn't they just use the Fovio chip? It's automotive grade, top spec, plug and play.
Could a licence be granted by SEE to Toyota, involving a licence fee and royalty?
We have found references to Toyota's Guardian 'Envelope Protection System', GM's Global B and a possible link to Global Bus 100, along with a patented Toyota/Continental 'VEHICLE PERIPHERY MONITORING DEVICE'. We also know that SEE has become a centre for Xilinx and Continental collaboration. Gill Pratt also speaks of 'Guardian for all'. The AVCC press release below speaks of the 'benefits to all' and shows growing industry collaboration in safety architecture, noting the collection of 'top automakers along with some of the leading chipmakers and tier 1 suppliers in automotive today'. We also had SmartEye clearly moving their focus to China, perhaps as a result of not being a club member. All circumstantial, but I'm thinking they could be related.
'Through the AVCC, working groups will share ideas and study common technological challenges. The companies will “help the automotive industry work together by defining, educating and publishing for the benefit of all,” according to the release'.
https://insideunmannedsystems.com/industry-leaders-form-autonomous-vehicle-computing-consortium/
https://www.autoevolution.com/news/what-is-the-toyota-guardian-envelope-protection-system-131900.html
https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=20190404&CC=JP&NR=2019053633A&KC=A#
JC, as ever great stuff, cenkos mentioned full occupant monitoring in their CES note earlier in the year.
That's also an impressive consortium and I would expect SM to be their only choice for DMS
#Team300
That's how I'm reading it. I'm assuming the safety architecture will be aligning data points rather than identity data points, and a firewall will exist between the two.
that's good stuff, but the idea that some form of central processing system monitors the gaze direction of ALL DRIVERS at the same time, in all cars in a location, is pretty disturbing... assuming, of course, I am reading this right...
thanks for your research
I'm thinking this;
'In various embodiments, the attention determination system 22 further receives data from driver monitoring systems 14 of other vehicles (not shown) or a central processing system (not shown), for example, via a communication system. In such embodiments, the data from the vehicles is generally indicates the gaze of other drivers when at the same location as the vehicle. In such embodiments, the attention determination system 22 further determines the attention of the driver based on the general gaze direction of other drivers determined from the received data'.
https://patents.justia.com/patent/20190318180
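To make that quoted GM claim concrete — a central system aggregating 'the gaze of other drivers when at the same location as the vehicle' — here is a rough sketch of per-location gaze aggregation. The grid cell size, the report format and the circular-mean averaging are all my own assumptions about how such a system might work, not details from the patent:

```python
import math
from collections import defaultdict

def expected_gaze_by_location(reports, cell_size=25.0):
    """Bucket gaze headings reported by other vehicles into grid cells and
    return the average heading per cell. `reports` are (x, y, heading_deg)
    tuples in a local metric frame."""
    cells = defaultdict(list)
    for x, y, heading_deg in reports:
        key = (int(x // cell_size), int(y // cell_size))
        cells[key].append(math.radians(heading_deg))
    # circular mean avoids the 359-degree / 1-degree wrap-around problem
    return {
        key: math.degrees(math.atan2(
            sum(math.sin(a) for a in angles) / len(angles),
            sum(math.cos(a) for a in angles) / len(angles)))
        for key, angles in cells.items()
    }
```

A vehicle's own attention determination could then compare the driver's gaze against the cell average for its current location, which is roughly what the quoted paragraph describes.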
and this below maybe related;
ARM, Toyota, GM set up new autonomous vehicle technology consortium
https://www.just-auto.com/news/arm-toyota-gm-set-up-new-autonomous-vehicle-technology-consortium_id191346.aspx
“Autonomous Vehicle Computing Consortium” (AVCC) announced today. This industry group includes Arm, Bosch, Continental, GM, Toyota, Nvidia, NXP and Denso, collecting top automakers along with some of the leading chipmakers and tier 1 suppliers in automotive today.
https://tinyurl.com/yydzja9v
and they suggest to me that the DMS would need to be related also.
https://worldwide.espacenet.com/publicationDetails/biblio?II=0&ND=3&adjacent=true&locale=en_EP&FT=D&date=20191017&CC=US&NR=2019318180A1&KC=A1#
METHODS AND SYSTEMS FOR PROCESSING DRIVER ATTENTION DATA
Methods and systems are provided for processing attention data. In one embodiment, a method includes: receiving, by a processor, object data associated with at least one object of an exterior environment of the vehicle; receiving upcoming behavior data determined from a planned route of the vehicle; receiving gaze data sensed from an occupant of the vehicle; processing, by the processor, the object data, the upcoming behavior data, and the gaze data to determine an attention score associated with an attention of the occupant of the vehicle; and selectively generating, by the processor, signals to at least one of notify the occupant and control the vehicle based on the attention score.
The technical field generally relates to methods and systems for processing attention data associated with a driver of a vehicle, and more particularly to methods and systems for processing attention data using vehicle perception data and behavior plan data.
Gaze detection systems generally include one or more cameras that are pointed at the eyes of an individual and that track the eye position and gaze direction of the individual. Vehicle systems use gaze detection systems to detect the gaze direction of a driver. The gaze direction of the driver is then used to detect the driver's attentiveness to the road ahead of them, or the driver's general attention to a feature inside the vehicle.
For example, some vehicle systems use the gaze direction of a driver to determine if the driver is inattentive to the road and to generate warning signals to the driver. In another example, some vehicle systems determine that the driver is looking in the direction of a particular control knob or switch of the vehicle and can control that particular element (e.g., turn it on, etc.) based on the determination. In each of the examples, the vehicle systems make a general determination of where the driver is looking and do not make a determination of what the driver is looking at (i.e. what is grasping the attention of the driver). In certain driving conditions, such as urban driving conditions, the driver attention will be based on the current driving conditions. For example, if the vehicle is stopped at a stop sign, the driver may look left and then right. In another example, if the vehicle is about to make a turn, the driver attention will be in the direction of the upcoming turn.
Accordingly, it is desirable to provide improved methods and systems for detecting the attention of a driver based on the driving conditions. In addition, it is desirable to provide methods and system for making use of the information determined from the detected attention of the driver to the particular point or object. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
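The claimed method — combine object data, upcoming behaviour data and gaze data into an attention score, then selectively warn or intervene — could look something like the toy sketch below. The scoring rule, the angle tolerance and the warning threshold are purely my own illustrative assumptions; the patent does not disclose these values:

```python
def attention_score(object_dirs_deg, planned_turn_deg, gaze_deg, tol=15.0):
    """Score the occupant's attention as the fraction of attention targets
    (detected objects plus the upcoming manoeuvre direction) that the gaze
    falls within `tol` degrees of. All angles are in the vehicle frame."""
    targets = list(object_dirs_deg) + [planned_turn_deg]
    hits = sum(1 for t in targets if abs(t - gaze_deg) <= tol)
    return hits / len(targets)

def act_on_score(score, warn_below=0.5):
    """Selectively generate a signal: warn the occupant when the score is low."""
    return "warn" if score < warn_below else "ok"
```

For example, a driver gazing straight ahead while an object sits at 90 degrees and a left turn is planned would score low and trigger a warning; a real system would of course track gaze over time rather than use a single sample.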