Chequebook,
I have studied Seeing Machines, I know Seeing Machines. Seeing Machines is a friend of mine. Checkered Flag, you're no Warren Buffett either.
Invo Tech doesn't sound like a company that we would be partnering with. Separate cameras for DMS, OMS in the front row and another OMS behind sounds like 3 systems running separately, each not doing a great job.
I have discussed the various issues with ToF cameras in the past, but here is a simple one from the EE Times article:
"it features a high system resolution of 640 × 480 pixels (VGA)."
I rest my case your Honour.
Seize, that is just the coffee cup based on steering behaviour. Too early for us anyway.
How does the system work?
It monitors driver behaviour closely – noting any erratic steering wheel movements and lane deviations, for example – so it can judge the moment that you are starting to feel sleepy and need to stop. It also continually evaluates traffic signals on the road when you are driving at speeds of more than 40mph, and works out when it's time to take a break.
How does it warn the driver?
If the system detects that you're starting to lose concentration it will alert you with a visual display on the dashboard and a warning sound. If you haven't taken a break within 15 minutes, the system will repeat the warning.
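The alert flow described above can be sketched as a tiny state machine. Only the 15-minute repeat interval comes from the text; the class name, the detection trigger and the return strings are hypothetical:

```python
class DrowsinessAlert:
    """Sketch of the described warning flow: a first visual/audible
    warning, then a repeat if no break is taken within 15 minutes.
    The actual detection (steering/lane analysis) is stubbed out."""

    REPEAT_AFTER_S = 15 * 60  # the system repeats after 15 minutes

    def __init__(self):
        self.last_alert_at = None  # time of the most recent warning

    def on_drowsiness_detected(self, now_s):
        if self.last_alert_at is None:
            self.last_alert_at = now_s
            return "visual + audible warning"
        if now_s - self.last_alert_at >= self.REPEAT_AFTER_S:
            self.last_alert_at = now_s
            return "repeat warning"
        return None  # already warned recently; stay quiet

    def on_break_taken(self):
        self.last_alert_at = None  # driver rested; reset the cycle
```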
https://media.ford.com/content/fordmedia/fna/us/en/news/2022/11/01/doug-field-to-discuss-fords-focus-on-software-defined-vehicles-a.html
DOUG FIELD TO DISCUSS FORD’S FOCUS ON SOFTWARE-DEFINED VEHICLES AND SERVICES AT BERNSTEIN CONFERENCE ON NOV. 7
Nov 1, 2022 | Dearborn
DEARBORN, Mich., Nov. 1, 2022 – Ford Model e Chief Advanced Product Development and Technology Officer Doug Field will participate in a virtual fireside chat with technology analyst Toni Sacconaghi at Bernstein’s 6th Electric Revolution Conference in London on Monday, November 7, 2022, at 10:30 a.m. ET (3:30 p.m. GMT).
Field will discuss how -- under the Ford+ plan for transformation -- technology innovation and software are helping Ford develop breakthrough electric vehicles at scale and strengthen its internal combustion product line up.
He will describe the current customer enthusiasm for Ford’s first EV products and BlueCruise L2 hands-free system, and how the company’s vision for its next generation EVs and L2+/L3 systems is taking shape.
Ford’s talented team of new and existing experts are developing software and digital systems that Ford believes will truly differentiate the brand through incredible customer experiences – and unlock new growth and highly accretive recurring revenue opportunities for the company.
Beyond EVs and BlueCruise, the Model e team is developing an array of software for all parts of the Ford enterprise, including services for Ford Blue’s popular gas-powered vehicles and hybrids, and productivity tools for Ford Pro’s commercial products.
A webcast of the conversation can be viewed online. Additional information is available at shareholder.ford.com.
###
As well as the 3 separate videos, there is also a longer version
https://youtu.be/KiQO3lzcTrI
You can see the camera and LEDs just by Patrick's shoulder, on the cowling around the instruments. You can just see the nearest LED flashing. There is a separate set on the other side. So now we know where to look in other cockpit shots.
I just can't think why the VP of Global Program delivery could possibly need another Project Manager. What is she doing with them all?
As for the comment from the VP of Global Quality, "Smashing it in the world of Driver Monitoring": is that good or bad? And if it is good, did the Nomad approve the statement?
Asking for a pescatarian friend who lives under a bridge.
https://www.linkedin.com/posts/dhayalan-asoka-mieaust-215b691_project-manager-aftermarket-job-in-fyshwick-activity-6992602889416294400-4jTt
It is a patent for a particular method of processing; it doesn't need to describe effectiveness. Although the more of the face that is obscured, the harder it would be to match a pose to the library. But the details and parameters aren't shared here.
One final point on the Kalman filter for eye gaze: it will always smooth the eye movement and reduce the perceived angularity of the eye movements. Eyes rarely stay still (you would literally go blind to what you are looking at as the chemicals get used up).
So if you try to determine higher-order functions based on patterns of eye glissades, you are working with a far lower level of detail. It's like signing your name while wearing boxing gloves: you won't get the level of detail and confidence that a company like BMW would demand.
Well we don't get many Smart Eye patents, so let's see what we have. As Esco points out, it does seem strange to mention UV light but that was only an example and not listed in the claim.
So it doesn't mention AI or machine learning, but a library of faces and the direction of gaze for each face; it can even use the face it sees if it is in the library of "lookups" (puntastic). That is the first-order guess for gaze. The second comes from a library of eye features with glints, which gives a better estimate of gaze until the eye is looking too far off the camera axis, at which point it favours the face estimate.
The next part is key: the claims always use a Kalman filter to correct the estimate based on the previous estimate and new data. This is like using a hosepipe to spray water at a target. You guess where to point and pull the trigger, then adjust to move the water towards the target. It is always lagging, but as you get closer you make smaller adjustments.
Did you spot the problem? A fix takes multiple frames to become accurate, but by then the eye has moved on.
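To make the lag concrete, here is a toy 1-D constant-position Kalman filter tracking a gaze angle through a saccade. The noise parameters are illustrative, nothing here is Smart Eye's actual filter:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter (random-walk motion model).
    q: process noise, r: measurement noise (illustrative values)."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Gaze angle jumps from 0 to 10 degrees at frame 5 (a saccade).
z = [0.0] * 5 + [10.0] * 10
est = kalman_1d(z)
# The estimate creeps toward 10 over many frames: the smoothed
# gaze always lags the real eye movement after a fast jump.
```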
Another issue is the library: it is specific to the geometry of the vehicle and the position of the lights and camera relative to the person, so it needs retuning for each vehicle. That means they need virtual faces rather than real faces to do the training.
Another issue is that they could use the real driver's gaze characteristics in the library. Where would that be stored? The patent said "on-line"; good luck with getting that biometric data stored off the vehicle.
Thanks as always to JC for trawling through to find the patents.
Mobileye was everywhere once: a black box with a chip from Intel. But it didn't do DMS, and it gave little for OEMs or Tier 1s to customise.
That left an opening for Nvidia, Qualcomm, Veoneer, etc. to get into, and now that market is fragmented.
Seeing Machines *did learn* from those mistakes: multiple possible platforms, scope for customisation, and different features for different Tier 1s and OEMs as long as they pay for it. We work with many, and there are so many routes to install us; we compete at many levels, so there is always a Seeing Machines solution to fit. And we do still have some competition, just enough to keep everyone happy!
There are a variety of reasons why the Magna mirror may not be appropriate.
A 3-row SUV: it won't see the back row.
Trucks, lorries, panel vans etc., where there is no reason for a mirror if there is no rear window.
BMW i7 with the TV deployed for the passengers.
BMW, where they have just installed it in the instrument panel across the range.
Trucks where the drivers regularly wear caps or Stetsons (F-150 in the Midwest).
The next reason is trickier to visualise: a lack of fixed points in the camera field of view (across the model range). That probably needs explaining. The mirror moves, so in order to know where the driver is looking, the system first needs to identify where the mirror is pointing. Hopefully the height of the camera is fixed, but there is horizontal and vertical rotation as well as twist in the mirror. Drivers come in all shapes and sizes. The seat moves and its back rotates. Drivers may not even sit in the centre of the seat and may lean sideways. Then we have the car doors and B-pillars, unless it is a convertible! Even if the pillars are there, their position can differ depending on the number of doors.
So for the convertible scenario, you need to move the camera, but that would require a whole separate system.
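The mirror-pose problem can be illustrated with simple 2-D rotations. This is not from any patent, just a sketch of why a camera-frame gaze reading means nothing in vehicle coordinates until the mirror's own (adjustable) orientation is known:

```python
import math

def rotate(vec, angle_deg):
    """Rotate a 2-D vector counter-clockwise by angle_deg."""
    a = math.radians(angle_deg)
    x, y = vec
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Gaze direction as measured in the mirror-mounted camera's frame.
gaze_in_camera = (1.0, 0.0)

# The same camera reading maps to different vehicle-frame directions
# depending on how the driver has twisted the mirror. Without
# estimating mirror_yaw first, the two cases are indistinguishable.
for mirror_yaw in (0.0, 10.0):
    gaze_in_vehicle = rotate(gaze_in_camera, mirror_yaw)
```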
So for BMW I expect we will remain with the camera in the instrument cluster, so it will be the same for the 3-door, 5-door hatchback, coupé and convertible of a given model.
VW don't have the same variety of shapes within a model, but perhaps they will load the model detail in firmware for lower-volume models like the Eos?
Of course, if the OEM wants Qualcomm and you are not SEE, you need to build everything and don't benefit from hardware optimisation. So your NRE costs are higher and your performance will be lower, so less chance of winning.
Really good meet-up yesterday.
I won't cover everything as there are some good summaries already.
Qualcomm: we don't win via Qualcomm, but we may win on Qualcomm. So Cristiano sells Q direct to the OEM with a list of features and a promise of scalability.
The OEMs then issue RFQs to Tier 1s for what they want, perhaps hinting that Q is required.
When the T1 asks the T2 about DMS/OMS, we give them a solution that is already running on Q, then develop their "custom features" if we don't already have them in our list of 50+ features.
The cost to SEE is the NRE time so pre-optimised saves money, performance is already assured.
So we win on Qualcomm, and it may have cost us less, but we only get paid by the T1.
Are you starting rumours about 3m now UD?
Nice to see that motor1.com are keen Supercruise spotters too!
You might almost say that the detailed example of how they ensure the driver was engaged was rather simplified in order to avoid patent infringement
Apologies, I added in the word VEHICLE into my paraphrase of the claim. Add a few more noughts on the end.
This could be used in your phone or TV: realising that you aren't watching or looking, it turns down the detail (see, I can give back a form of foveated rendering) to save power.
It could be air traffic controllers who have zoned out or may fail to spot a potential collision, being guided if they need help but left alone if they are actively watching and tracking.
It could be pilots in the air: did the pilot look and check the airspeed and engine power, and then visually check that the co-pilot did engage the flaps?
And of course in cars. When indicating to overtake, did the driver glance at the mirror at all, or did they watch the vehicle approaching from behind in detail and time their move carefully so that they could slot into the gap as another vehicle passes? If so, don't use a strong warning unless the wheel moves the car too early. Alternatively, if the glance was too short and they had not previously seen the vehicle in the rear-view mirror, then the car should provide more support.
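The overtaking example boils down to a small decision rule. The thresholds and labels below are purely illustrative, not the patented method:

```python
def overtake_support(glance_duration_s, saw_vehicle_earlier):
    """Illustrative decision: how much help the car should give when
    the driver indicates to overtake. Thresholds are made up."""
    if glance_duration_s >= 1.0 and saw_vehicle_earlier:
        return "minimal support"  # driver tracked the gap carefully
    if glance_duration_s > 0.3:
        return "gentle warning"   # looked, but perhaps not enough
    return "strong support"       # no meaningful mirror check
```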
Jump forward a few years, "robots" however you picture them - lets say even a stationary Alexa, your Roomba cleaning the floor or the Delivery robot cart in your street, could all employ this patent. Did the human who is near me, see me coming, are they busy or available to talk to me or to just navigate around me or should I move out of their path. Is BILLIONS big enough?
Seeing Machines have been "Issued" US patent No. 11455810.
So what is it for? The final claim describes eye gaze as "attention rays" which are mapped onto a digital representation of the view outside the vehicle, along with the spread or distribution of attention.
So if you are watching and tracking the objects that the vehicle's computer model of the surroundings thinks are most relevant, then you are paying attention.
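The "attention rays" idea can be caricatured as tallying which modelled scene objects the gaze rays land on over time. A sketch of the concept only, not the patented implementation (object labels are made up):

```python
from collections import Counter

def attention_distribution(gaze_hits):
    """gaze_hits: per-frame label of the scene object each attention
    ray lands on ('car_ahead', 'mirror', ...) or None for a miss.
    Returns the fraction of hit frames spent on each object."""
    hits = [h for h in gaze_hits if h is not None]
    counts = Counter(hits)
    total = sum(counts.values())
    return {obj: n / total for obj, n in counts.items()}

frames = ["car_ahead", "car_ahead", "mirror", None, "car_ahead"]
dist = attention_distribution(frames)
# dist -> {'car_ahead': 0.75, 'mirror': 0.25}
```

A downstream check could then compare this distribution against the objects the vehicle's world model ranks as most relevant.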
Thanks to Johnchukka for finding and tracking these patents.
And well done to Tim Edwards and John Noble for landing another great patent.
Day to day, these may not get valued in the share price, but a lot of lawyers will be attributing value to each and every granted and in-progress patent application. These are the concrete that builds the strong, impenetrable moat around our business. That adds value based not on the 470k cars that are already built today, or a nominal 50% of the vehicles built a few years from now, but on the BILLIONS of cars that will follow.