Sunday, January 21, 2018

Samsung Applies for Under-Display Fingerprint Image Sensor Patent

Samsung patent application US20180012069 "Fingerprint sensor, fingerprint sensor package, and fingerprint sensing system using light sources of display panel" by Dae-young Chung, Hee-chang Hwang, Kun-yong Yoon, Woon-bae Kim, Bum-suk Kim, Min Jang, Min-chul Lee, and Jung-woo Kim proposes an optical fingerprint image sensor under an OLED display panel:

Saturday, January 20, 2018

ST 115dB Linear HDR Pixel

MDPI Special Issue on IISW 2017 publishes ST paper "A 750 K Photocharge Linear Full Well in a 3.2 μm HDR Pixel with Complementary Carrier Collection" by Frédéric Lalanne, Pierre Malinge, Didier Hérault, Clémence Jamin-Mornet, and Nicolas Virollet.

"The native HDR pixel concept based on a parallel electron and hole collection for, respectively, a low signal level and a high signal level is particularly well-suited for this performance challenge. The theoretical performance of this pixel is modeled and compared to alternative HDR pixel architectures. This concept is proven with the fabrication of a 3.2 μm pixel in a back-side illuminated (BSI) process including capacitive deep trench isolation (CDTI). The electron-based image uses a standard 4T architecture with a pinned diode and provides state-of-the-art low-light performance, which is not altered by the pixel modifications introduced for the hole collection. The hole-based image reaches 750 kh+ linear storage capability thanks to a 73 fF CDTI capacitor. Both images are taken from the same integration window, so the HDR reconstruction is not only immune to the flicker issue but also to motion artifacts."

Friday, January 19, 2018

Espros LiDAR Sensor Presentation at AutoSens 2017

AutoSens publishes a video of Espros CEO Beat De Coi's presentation of a pulsed ToF sensor, given in October 2017:

Intel Starts Shipments of D400 RealSense Cameras

Intel begins shipping two RealSense D400 Depth Cameras from its next-generation D400 product family: the D415 and D435, based on previously announced D400 3D modules.

RealSense D415

Intel is also offering its D4 and D4M (mobile version) depth processor chips for stereo cameras:
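A stereo depth processor of this kind triangulates depth from the disparity between the two imagers. A minimal sketch of the relationship (the focal length, baseline, and disparity below are illustrative values, not D400 specifications):

    # Depth from disparity for a rectified stereo pair: Z = f * B / d
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Return depth in meters given focal length [px], baseline [m], disparity [px]."""
        return focal_px * baseline_m / disparity_px

    # Illustrative numbers only: 640 px focal length, 55 mm baseline, 20 px disparity -> ~1.76 m
    print(depth_from_disparity(640.0, 0.055, 20.0))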


Thursday, January 18, 2018

ams Bets on 3D Sensing

SeekingAlpha publishes an analysis of ams' recent business moves:

"ams has assembled strong capabilities in 3D sensing - one of the strongest emerging new opportunities in semiconductors. 3D sensing can detect image patterns, distance, and shape, allowing for a wide range of uses, including facial recognition, augmented reality, machine vision, robotics, and LIDAR.

Although ams is not currently present in the software side, the company has recently begun investing in software development as a way to spur future adoption. Ams has also recently begun a collaboration with Sunny Optical, a leading Asian sensor manufacturer, to take advantage of Sunny's capabilities in module manufacturing.

At this point it remains to be seen how widely adopted 3D sensing will be; 3D sensing could become commonplace on all non-entry level iPhones in a short time and likewise could gain broader adoption in Android devices. What's more, there is the possibility of adding 3D sensing to other consumer devices like tablets, not to mention adding 3D sensing to the back of phones in future models.
"

Wednesday, January 17, 2018

RGB to Hyperspectral Image Conversion

Ben Gurion University, Israel, researchers achieve a seemingly impossible thing - converting regular RGB consumer camera images into hyperspectral ones, purely in software. Their paper "Sparse Recovery of Hyperspectral Signal from Natural RGB Images" by Boaz Arad and Ohad Ben-Shahar, presented at the European Conference on Computer Vision (ECCV) in Amsterdam, The Netherlands, in October 2016, says:

"We present a low cost and fast method to recover high quality hyperspectral images directly from RGB. Our approach first leverages hyperspectral prior in order to create a sparse dictionary of hyperspectral signatures and their corresponding RGB projections. Describing novel RGB images via the latter then facilitates reconstruction of the hyperspectral image via the former. A novel, larger-than-ever database of hyperspectral images serves as a hyperspectral prior. This database further allows for evaluation of our methodology at an unprecedented scale, and is provided for the benefit of the research community. Our approach is fast, accurate, and provides high resolution hyperspectral cubes despite using RGB-only input."


"The goal of our research is the reconstruction of the hyperspectral data from natural images from their (single) RGB image. Prima facie, this appears a futile task. Spectral signatures, even in compact subsets of the spectrum, are very high (and in the theoretical continuum, infinite) dimensional objects while RGB signals are three dimensional. The back-projection from RGB to hyperspectral is thus severely underconstrained and reversal of the many-to-one mapping performed by the eye or the RGB camera is rather unlikely. This problem is perhaps expressed best by what is known as metamerism – the phenomenon of lights that elicit the same response from the sensory system but having different power distributions over the sensed spectral segment.

Given this, can one hope to obtain good approximations of hyperspectral signals from RGB data only? We argue that under certain conditions this otherwise ill-posed transformation is indeed possible; First, it is needed that the set of hyperspectral signals that the sensory system can ever encounter is confined to a relatively low dimensional manifold within the high or even infinite-dimensional space of all hyperspectral signals. Second, it is required that the frequency of metamers within this low dimensional manifold is relatively low. If both conditions hold, the response of the RGB sensor may in fact reveal much more on the spectral signature than first appears and the mapping from the latter to the former may be achievable.

Interestingly enough, the relative frequency of metameric pairs in natural scenes has been found to be as low as 10^−6 to 10^−4. This very low rate suggests that at least in this domain spectra that are different enough produce distinct sensor responses with high probability.

The eventual goal of our research is the ability to turn consumer grade RGB cameras into hyperspectral acquisition devices, thus permitting truly low cost and fast HISs.
"

X-Ray Imaging at 30fps

Teledyne Dalsa publishes a nice demo of its 1MP 30fps X-Ray sensor:

Tuesday, January 16, 2018

SD Optics Depth Sensing Camera

SD Optics publishes two videos of depth sensing by means of fast focus variations of its MEMS lens:
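Depth from a fast focus sweep can be approximated by finding, per pixel, the lens position at which the image is sharpest. A generic sketch of this idea (the focus metric and variable names are assumptions, not SD Optics' actual processing):

    import numpy as np

    def focus_measure(img):
        """Laplacian-energy focus metric: large where the image is locally sharp."""
        return np.abs(4.0 * img
                      - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
                      - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))

    def depth_from_focus(stack, focus_positions):
        """stack: (n_frames, H, W) frames captured while sweeping the MEMS lens focus.
        Returns, per pixel, the focus position at which the scene appears sharpest."""
        scores = np.stack([focus_measure(f) for f in stack])  # (n_frames, H, W)
        best = np.argmax(scores, axis=0)                      # index of sharpest frame per pixel
        return np.asarray(focus_positions)[best]              # map frame index -> focus distance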




Monday, January 15, 2018

Imec 3D Stacking Aims at 100nm Contact Pitch

An Imec article on 3D bonding technology by Eric Beyne, imec fellow and program director of 3D system integration, presents solutions that are expected to reach a 100nm contact pitch:

Gate/Body-tied MOSFET Image Sensor Proposed

Sensors and Materials publishes a paper "Complementary Metal Oxide Semiconductor Image Sensor Using Gate/Body-tied P-channel Metal Oxide Semiconductor Field Effect Transistor-type Photodetector for High-speed Binary Operation" by Byoung-Soo Choi, Sang-Hwan Kim, Jimin Lee, Chang-Woo Oh, Sang-Ho Seo, and Jang-Kyoo Shin from Kyungpook National University, Korea.

"In this paper, we propose a CMOS image sensor that uses a gate/body-tied p-chnnel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector for highspeed binary operation. The sensitivity of the gate/body-tied PMOSFET-type photodetector is approximately six times that of the p–n junction photodetector for the same area. Thus, an active pixel sensor with a highly sensitive gate/body-tied PMOSFET-type photodetector is more appropriate for high-speed binary operation."

The 3T-style pixel uses the gate/body-tied PMOSFET in place of a photodiode and has a non-linear response. Probably, this inherent non-linearity is the main reason that the binary operation mode is proposed:
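The binary operation the paper refers to amounts to a 1-bit readout: each pixel's output is compared against a threshold. A generic sketch (the threshold and test frame are illustrative, not from the paper):

    import numpy as np

    # Generic 1-bit (binary) readout sketch; the threshold and test frame are illustrative.
    def binary_readout(pixel_outputs, threshold):
        """Map analog pixel outputs to a 1-bit image: 1 where the output exceeds the threshold."""
        return (pixel_outputs > threshold).astype(np.uint8)

    frame = np.random.default_rng(1).random((480, 640))   # stand-in for raw sensor outputs
    bits = binary_readout(frame, threshold=0.5)            # high-speed binary frame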