S2020, regarding Semicast's Qualcomm smartphones speculation: with my very limited knowledge, the more I look at Microsoft VR-related patents, the more I would hazard a guess that a licensing deal in this area may be under consideration also.
Not the same Colin. Missing an 'n'.
Let's not forget the guy who called ARM right
Semicast Research (@semicast_res), Jan 10:
"THIS IS SPECULATION: Qualcomm takes exclusive license for eye gaze tracking in smartphones. Gives competitive advantage over Apple and Samsung. @seeingmachines is funded to profit with a steady royalty stream."
Thanks TLS,
With P Mc… imminently talking about licensing the IP, would a licensing deal with Qualcomm, with all its possible applications involving SEE tech, possibly be what he has in mind?
Yes TLS and JC, appreciate the time effort and explanation.
TLS and JC, that's seriously interesting stuff; reading that last post, I'm almost starting to understand it.
#Team300
I have been reviewing SEE’s HIGH FRAME RATE IMAGE PRE-PROCESSING SYSTEM AND METHOD patent.
Frame rate is the number of pictures taken over a period of time. Cinema traditionally ran at 24 frames per second; UK TV refreshes at 50 Hz (US TV at 60 Hz). Ask gamers and they will complain if the refresh rate is lower than 60 Hz; they are happiest at 100 or 144 frames per second, which feels more natural and lets them react faster to on-screen events.
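To make those rates concrete, here is a trivial sketch (my own illustration, not from the patent) converting frames per second into the time between frames:

```python
# Frame interval (ms) for the refresh rates mentioned above.
def frame_interval_ms(fps: float) -> float:
    """Time between successive frames, in milliseconds."""
    return 1000.0 / fps

for fps in (24, 50, 60, 100, 144):
    print(f"{fps:>4} fps -> {frame_interval_ms(fps):.2f} ms per frame")
```

At 144 fps a frame arrives roughly every 7 ms, which is why high-rate displays feel so much more responsive than 24 fps cinema.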
So why is this patent important?
Computation is expensive: it takes time and it takes power. The more computation you add, the longer processing takes. The more shortcuts you can take, the more effective performance you get in a given period of time, and the more power you save where that matters.
I just did a test: while driving in traffic through town, I glanced at the side mirror and back to the road ahead. I got partway through the "W" of "One Mississippi", say 1/6 of a second. At 30 Hz that is 5 pictures.
An aside: when the eye moves suddenly, we often blink. If not, the brain "blinks", as it can't process the blurred image. So in that dart to the side, the brain already knew the forward scene, shut down until the eye stabilised on the mirror, then shut down again while it processed the mirror view as the eye returned to the front. The snapshot of the mirror was a tiny fraction of that 1/6 s. This means that at 30 Hz the camera may not have seen my eye reach the mirror, just a blurred eye or a blink as I scanned past the bottom right of the windshield. That is why 60 Hz or more is required for safety.
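The arithmetic behind that mirror-check example can be sketched quickly (my own illustration; the 1/6 s figure is the rough estimate from above):

```python
# How many frames a camera captures during a brief glance,
# using the rough 1/6-second mirror check described above.
def frames_in_glance(glance_s: float, camera_hz: float) -> int:
    return round(glance_s * camera_hz)

glance = 1 / 6  # roughly the "W" of "One Mississippi"
for hz in (30, 60, 120):
    print(f"{hz} Hz camera sees {frames_in_glance(glance, hz)} frames")
```

Five frames at 30 Hz leaves very little margin once blur and blinks eat into them; doubling the rate doubles the chances of catching a clean fixation on the mirror.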
If we want to analyse our images at, say, 60 Hz, we need every image to be near perfect. We can do this by taking images at, say, 300 Hz and selecting only the best images to pass on to the next stage. We could vary the brightness of the LED, or the sensitivity of the camera, and pick the images with the best glints from the LED. That allows us to compensate better for bright sunlight or dark glasses. Only relatively simple computation is required, and it can be done at the higher frequency. The resulting "lower rate" images are then processed with more complicated algorithms to extract gaze angles and detect drowsiness.
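As a rough sketch of that 300 Hz to 60 Hz selection (not the patent's actual algorithm; the "quality" field stands in for whatever cheap per-frame score, such as glint contrast, the real system computes):

```python
# Sketch: oversample at 300 Hz, keep the best of every 5 frames
# to feed a 60 Hz pipeline. "quality" is a stand-in for a cheap
# per-frame score (e.g. glint contrast); real scoring is richer.
def select_best(frames, batch=5, score=lambda f: f["quality"]):
    """Yield the highest-scoring frame from each batch."""
    for i in range(0, len(frames) - batch + 1, batch):
        yield max(frames[i:i + batch], key=score)

# ten 300 Hz frames -> two 60 Hz frames
frames = [{"id": i, "quality": q}
          for i, q in enumerate([3, 9, 1, 4, 2, 5, 5, 8, 2, 7])]
best = list(select_best(frames))
print([f["id"] for f in best])
```

The point is that the per-frame scoring stays cheap enough to run at 300 Hz, while the expensive gaze and drowsiness maths only ever sees the winners.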
Depending on the requirements, the analysis done at the high frame rate could look for different things: good-quality glints; a specific number of glints in set locations (based on the number and locations of the currently illuminated LEDs, where the glints were last seen, and the extra reflections from glasses expected from the previous scan); a good image of the iris in some frames for identification; or maximum contrast between images with the IR on and off, so the glare from the sun can be subtracted.
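That last trick, subtracting an IR-off frame from an IR-on frame to cancel sunlight, can be sketched with plain lists (pixel values here are made-up 8-bit intensities, purely for illustration):

```python
# Sketch: cancel ambient light (e.g. sun glare) by differencing
# a frame captured with the IR LED on against one with it off.
def subtract_ambient(ir_on, ir_off):
    """Per-pixel difference, clamped at zero."""
    return [max(a - b, 0) for a, b in zip(ir_on, ir_off)]

ir_on  = [200, 250, 90, 30]   # IR illumination + ambient
ir_off = [180, 40, 85, 30]    # ambient only
print(subtract_ambient(ir_on, ir_off))
```

Whatever the sun contributes appears in both frames and cancels out, leaving mostly the scene as lit by the IR LED.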
Without this patent's approach, we don't get to select the best images: everything has to be done at the output frequency, which then simply drops the frames that don't work.
Qualcomm patent: LOW-DELAY BUFFERING MODEL IN VIDEO CODING (2020-01-20)
[0050] Video applications that may make use of video encoder 20 and video decoder 30 may include local playback, streaming, broadcast/multicast and conversational applications. Conversational applications include video telephony and video conferencing. Conversational applications are also referred to as low-delay applications, in that such real-time applications are not tolerant to significant delay. For a good user experience, conversational applications require a relatively low end-to-end delay of the entire systems, i.e., the delay between the time when a video frame is captured at a source device and the time when the video frame is displayed at a destination device. Typically, an acceptable end-to-end delay for conversational applications should be less than 400 ms. An end-to-end delay of around 150 ms is considered very good.
[0051] Each processing step of a conversational application may contribute to the overall end-to-end delay. Example delays from processing steps includes capturing delay, pre-processing delay, encoding delay, transmission delay, reception buffering delay (for de-jittering), decoding delay, decoded picture output delay, post-processing delay, and display delay. Typically, the codec delay (encoding delay, decoding delay and decoded picture output delay) is targeted to be minimized in conversational applications. In particular, the coding structure should ensure that the pictures' decoding order and output order are identical such that the decoded picture output delay is equal to or close to zero.
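A rough budget built from the stages listed in [0051], checked against the 400 ms / 150 ms targets in [0050] (all per-stage numbers below are invented for illustration, not from the patent):

```python
# Sketch: sum per-stage delays from [0051] and compare against the
# conversational targets in [0050]. Stage values are made-up examples.
STAGES_MS = {
    "capture": 16, "pre_process": 5, "encode": 20, "transmit": 60,
    "de_jitter_buffer": 30, "decode": 10, "output": 2,
    "post_process": 5, "display": 16,
}

total = sum(STAGES_MS.values())
print(f"end-to-end: {total} ms")
print("acceptable (<400 ms):", total < 400)
print("very good (~150 ms):", total <= 150)
```

Even with generous transmission and de-jitter allowances, the codec stages are the ones the spec pushes to minimise, which is why the decoding order is arranged to match the output order.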
[0056] In the AVC and HEVC HRD models, decoding or CPB removal is access unit (AU) based, and it is assumed that picture decoding is instantaneous (e.g., decoding process 104 in FIG. 2 is assumed to be instantaneous). An access unit is a set of network abstract layer (NAL) units and contains one coded picture. In practical applications, if a conforming decoder strictly follows the decoding times signaled, e.g., in picture timing supplemental enhancement information (SEI) messages generated by video encoder 20, to start decoding of AUs, then the earliest possible time to output a particular decoded picture is equal to the decoding time of that particular picture (i.e., the time when a picture starts to be decoded) plus the time needed for decoding that particular picture. The time needed for decoding a picture in the real-world cannot be equal to zero.
https://worldwide.espacenet.com/publicationDetails/description?CC=DK&NR=2936818T3&KC=T3&FT=D&ND=3&date=20200120&DB=&locale=en_EP#
TLS et al.,
A question about SEE Patent HIGH FRAME RATE IMAGE PRE-PROCESSING SYSTEM AND METHOD.
[0070] talks about near real time / real time and milliseconds/nanoseconds.
Is the processing speed described an industry-wide advance in the technology, potentially applicable to video gaming and many consumer applications, or is this advance specifically about lower power use, lower heat and optimising particular situations?
[0070] As used herein, the terms ‘real-time’ refer to the ability of the system to process information within a timeframe such that the next step in the process can be timely performed. By way of example, the above described image pre-processing method is able to be performed iteratively on each batch of images such that the images can be fed to the video processing pipeline in an ongoing basis sufficient to produce continuous output. Applicable response periods for the purpose of defining the constraints of ‘real-time’ and ‘near real-time’ are in the range from nanoseconds to several milliseconds.
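The "iteratively on each batch ... sufficient to produce continuous output" idea in [0070] can be sketched as a simple batch pipeline (my own illustration; frames are just integers and pre_process is a stand-in for the cheap selection step):

```python
# Sketch of [0070]: pre-process each batch of images and hand the
# result to the next pipeline stage before the following batch
# arrives, so the downstream output stays continuous.
def batches(stream, size):
    """Group an iterable of frames into fixed-size batches."""
    batch = []
    for frame in stream:
        batch.append(frame)
        if len(batch) == size:
            yield batch
            batch = []

def pre_process(batch):
    # stand-in for the cheap high-rate step: keep the "best" frame
    return max(batch)

out = [pre_process(b) for b in batches(range(15), size=5)]
print(out)  # one output frame per batch
```

"Real-time" here simply means pre_process finishes each batch before the downstream stage needs its next frame, which is the constraint [0070] bounds at nanoseconds to several milliseconds.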
https://worldwide.espacenet.com/publicationDetails/description?CC=WO&NR=2019241834A1&KC=A1&FT=D&ND=3&date=20191226&DB=&locale=en_EP#