{"id":139,"date":"2019-06-12T18:12:09","date_gmt":"2019-06-12T18:12:09","guid":{"rendered":"http:\/\/smartsystems.ece.ufl.edu\/?page_id=139"},"modified":"2019-06-12T18:12:09","modified_gmt":"2019-06-12T18:12:09","slug":"smart-image-sensor","status":"publish","type":"page","link":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/research\/projects\/smart-image-sensor\/","title":{"rendered":"Smart Image Sensor"},"content":{"rendered":"<blockquote><p>Bio-Inspired Reconfigurable Neuromorphic Image Sensor<\/p><\/blockquote>\n<p>Cameras are pervasively used for applications like surveillance, traffic monitoring, and precision agriculture. Most camera systems, however, serve only as data collection and relaying units, while the processing happens at backend servers. This project focuses on bringing the processing units close to the image sensor to introduce parallelism into the design with the help of three types of processors: pixel processors, region processors, and sequential processors. The architecture has three logical layers across which these processors are distributed. Pixel processors populate the first logical layer of the digital pixel sensor (DPS), with one processor dedicated to each pixel. They work in parallel, handle low-level image processing, remove temporal redundancy in an image, and provide pixel-level parallelism. The second logical layer comprises a number of region processors: a group of pixel processors forms a region, and the design has one region processor per region. Like the pixel processors, they work in parallel, take input from their corresponding region to perform mid- to high-level image processing, remove spatial redundancy, and ensure region-level parallelism. Together, these two layers provide massive parallelism in the design. 
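<\/p>
<p>As a minimal illustration of this three-layer flow, the hierarchy can be sketched in Python (a hypothetical model with made-up threshold and region-size parameters, not the actual RTL):<\/p>

```python
import numpy as np

THRESHOLD = 10   # hypothetical per-pixel event threshold
REGION = 4       # hypothetical region size (4x4 pixels per region processor)

def pixel_layer(prev_frame, frame, threshold=THRESHOLD):
    """Pixel processors: one unit per pixel, all firing in parallel.
    Emit an event only where the temporal change is significant,
    removing temporal redundancy."""
    return np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold

def region_layer(events, region=REGION):
    """Region processors: each aggregates one REGION x REGION block of
    pixel events and flags the block active only if it contains any
    event, removing spatial redundancy."""
    h, w = events.shape
    blocks = events.reshape(h // region, region, w // region, region)
    return blocks.any(axis=(1, 3))

def sequential_layer(active_regions):
    """Sequential processor: receives only the active-region list over
    the bus and performs the remaining high-level reasoning (here it
    simply lists which regions still need work)."""
    return np.argwhere(active_regions)

# Two 8x8 frames: only the top-right 4x4 region changes between them.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[0:4, 4:8] = 50   # a moving object appears in the top-right region

events = pixel_layer(prev, curr)
active = region_layer(events)
work = sequential_layer(active)
print(work)   # only region (0, 1) reaches the sequential processor
```

<p>Here the pixel and region layers would be concurrent hardware units; the array operations above only model their aggregate effect, showing how the data volume shrinks at each layer before reaching the sequential processor.<\/p>
<p>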
Finally, a sequential processor residing in the last layer receives the extracted information from the region processors through a bus and performs the remaining high-level reasoning operations to complete the machine vision application. These processors maintain hierarchical connections among the computational layers and reduce data volume through hierarchical processing. Moreover, they are reconfigurable in the ASIC paradigm to handle different machine vision applications. This flexible design emulates some of the concepts of the biological vision system. Simulation results show that the processing achieves high acceleration in vision applications and saves a significant amount of power through hierarchical processing.<\/p>\n<figure id=\"attachment_81\" aria-describedby=\"caption-attachment-81\" style=\"width: 499px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/hierarchy.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-81 size-full\" src=\"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/hierarchy.jpg\" alt=\"\" width=\"499\" height=\"373\" srcset=\"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/hierarchy.jpg 499w, https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/hierarchy-300x224.jpg 300w\" sizes=\"auto, (max-width: 499px) 100vw, 499px\" \/><\/a><figcaption id=\"caption-attachment-81\" class=\"wp-caption-text\">Block Diagram of the Computational Units in the Image Sensor<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<h3>Keywords<\/h3>\n<p>Inter-pixel processing, image sensor, image processing, VLSI, FPGA, ASIC, bio-inspired processing, neuromorphic computing.<\/p>\n<h3>Evaluation Platform<\/h3>\n<p>We implement the full RTL-to-GDSII Application-Specific Integrated Circuit (ASIC) flow for the image sensor at the block 
level using a 1.1 V supply voltage and an 800 MHz clock frequency in 45 nm technology. We used Synopsys VCS and Design Compiler to convert the RTL to a gate-level netlist, Cadence Innovus to place and route the synthesized netlist, Calibre to check for DRC violations, and finally Synopsys PrimeTime for static timing analysis, with the Nangate library as the process design kit (PDK). In addition, to evaluate performance, we also implement the design on a Xilinx FPGA board (Kintex UltraScale+ evaluation board) using the Vivado Design Suite 2018.2. On the FPGA, we concentrate on an RTL design that is also implementable on the ASIC platform.<\/p>\n<h3>Goal of the Hierarchical Processing<\/h3>\n<p>The ultimate goal of the project is to discard redundant information to accelerate machine vision applications. The figure below illustrates how relevant information is extracted from an image, enabling event-driven processing.<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/MDPIJournal-Page-1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-93\" src=\"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/MDPIJournal-Page-1-1024x261.jpg\" alt=\"\" width=\"1024\" height=\"261\" srcset=\"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/MDPIJournal-Page-1-1024x261.jpg 1024w, https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/MDPIJournal-Page-1-300x76.jpg 300w, https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/MDPIJournal-Page-1-768x196.jpg 768w, https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-content\/uploads\/sites\/685\/2019\/06\/MDPIJournal-Page-1.jpg 1323w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a>A comparison of frame-based and event-based processing for 
a scene with three objects is presented in the figure above. Object 1 is a bird sitting on a branch in the top-right corner of the image. Object 2 consists of insignificant, scattered moving objects. Lastly, object 3 is a bird flying along a serpentine path. In frame-based processing, on the left, each frame is produced by reading out every pixel, and frames are generated at fixed time intervals. Conversely, our event-driven processing system, on the right, responds only when a significant event occurs. Processing for object 1 is skipped since it exhibits no temporal change. Object 2 changes over time but does not carry relevant information. Only object 3 carries non-redundant information, so our target is to perform processing on the serpentine path of object 3, as shown in the figure.<\/p>\n<h3>Overall Benefit of the Project<\/h3>\n<ul>\n<li>The integration of several computational layers in the sensor provides in-sensor processing and brings the computational units close to the image sensor.<\/li>\n<li>The pixel-parallel design provides the benefit of parallel processing and highly accelerates low- and mid-level tasks in machine vision applications.<\/li>\n<li>Bio-inspired computing removes temporal and spatial redundancy and saves significant power and energy across the hierarchical layers. In parallel, the computing system reduces the data volume at each layer, which lightens the burden on the external sequential processor and accelerates the sequential operations in the vision application.<\/li>\n<li>The processors in each layer are reconfigurable for different applications in the ASIC. 
This allows flexibility in the design and enables the sensor to be applied to different applications.<\/li>\n<\/ul>\n<h3>Simulator<\/h3>\n<p>The source code for the Python simulator of our region-based event camera can be found at this <a href=\"https:\/\/github.com\/jubaer-pantho\/event-camera-simulator\">link<\/a>.<\/p>\n<h3>Publications<\/h3>\n<ul>\n<li>Pankaj Bhowmik, Md Jubaer Hossain Pantho, Marjan Asadinia, and Christophe Bobda. \u201cDesign of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor.\u201d In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 673-681, 2018.<\/li>\n<li>Md Jubaer Hossain Pantho, Pankaj Bhowmik, and Christophe Bobda. \u201cPixel-Parallel Architecture for Neuromorphic Smart Image Sensor with Visual Attention.\u201d In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 245-250. IEEE, 2018.<\/li>\n<li>Pankaj Bhowmik, Md Jubaer Hossain Pantho, and Christophe Bobda. \u201cVisual Cortex Inspired Pixel-Level Re-configurable Processors for Smart Image Sensors.\u201d In Proceedings of the Design Automation Conference (DAC), 2019.<\/li>\n<li>Pankaj Bhowmik, Md Jubaer Hossain Pantho, Sujan Saha, and Christophe Bobda. \u201cA Reconfigurable Layered-Based Bio-Inspired Smart Image Sensor.\u201d In 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI) [accepted at DAC 2019 as work in progress].<\/li>\n<li>Md Jubaer Hossain Pantho, Pankaj Bhowmik, and Christophe Bobda. \u201cNeuromorphic Image Sensor Design with Region-Aware Processing.\u201d In 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI).<\/li>\n<li>\u201cEvent-Based Re-configurable Hierarchical Processors for Smart Image Sensors.\u201d In Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures and Processors (ASAP), 2019.<\/li>\n<\/ul>\n<h3>Patent<\/h3>\n<ul>\n<li>Pankaj Bhowmik, Md Jubaer Hossain Pantho, Marjan Asadinia, and Christophe Bobda. 
US Patent, \u201cReconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor.\u201d<\/li>\n<\/ul>\n<h3>Awards<\/h3>\n<ul>\n<li>Best Poster Presentation Award for presenting the paper titled \u201cDesign of a Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor\u201d at the Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, Utah, USA.<\/li>\n<\/ul>\n<h3>Acknowledgment<\/h3>\n<p>The National Science Foundation (NSF) supports the smart image sensor project under Grant 1618606.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This project focuses on bringing the processing units close to the image sensor to introduce parallelism into the design with the help of three types of processors, namely pixel processors, region processors, and sequential processors.<\/p>\n","protected":false},"author":1329,"featured_media":923,"parent":17,"menu_order":8,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"inline_featured_image":false,"featured_post":"","footnotes":"","_links_to":"","_links_to_target":""},"class_list":["post-139","page","type-page","status-publish","has-post-thumbnail","hentry"],"acf":[],"_links":{"self":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages\/139","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/users\/1329"}],"replies":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/comments?post=139"}],"version-history":[{"count":0,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/pages\/139\/revisions"}],"up":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v
2\/pages\/17"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/media\/923"}],"wp:attachment":[{"href":"https:\/\/faculty.eng.ufl.edu\/smartsystems\/wp-json\/wp\/v2\/media?parent=139"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}