In-sensor and Near-sensor Computing


Figure: Schematic of in-sensor and near-sensor computing

Computing at Sensor Edge


Artificial Intelligence (AI) has been widely adopted across applications. Today, generative AI and large language models have entered everyday life, enhancing productivity, supporting education, and helping us access and curate massive amounts of information; they are becoming more capable by processing diverse inputs such as text, images, video, and audio. Autonomous driving is another prominent AI application, built on image processing and AI algorithms.

Given these advances in AI, while software pushes toward an unprecedented future, the burden on hardware to process massive data flows keeps growing. AI hardware comprises many components: sensors, memories, computing units, parallel computation systems, and more. Yet improving each component in isolation is insufficient because of data-bandwidth limits: even if a computing unit can process data fast enough for advanced AI algorithms, huge volumes of data must still travel between components. This centralized architecture creates the bottleneck in advanced computing. Decentralizing computation, distributing it across the other components, is one solution, and heterogeneous integration is the key to reducing data-transfer requirements.
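To give a sense of the scale of the bandwidth problem, here is a back-of-the-envelope estimate of the raw data rate of a single uncompressed image sensor stream. The resolution, bit depth, and frame rate below are illustrative assumptions, not figures from this article.

```python
# Illustrative estimate: raw data rate of one uncompressed camera stream
# (assumed parameters, chosen only for the sake of the example).
width, height = 1920, 1080   # pixels (assumed 1080p sensor)
bytes_per_pixel = 3          # 8-bit RGB
fps = 30                     # frames per second

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e9:.2f} GB/s per camera")  # ~0.19 GB/s
```

Even this modest single stream approaches 0.2 GB/s before any processing; multiply by several sensors, as in an autonomous vehicle, and moving all raw data to a central computing unit quickly becomes the limiting factor.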

What if we could integrate sensors and computing units, enabling preprocessing for AI at the point of sensing? The data volume leaving the sensors would shrink, and memory systems would no longer face an unmanageable flow of data. This would greatly relieve the data-transport bottleneck between hardware components.
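The data reduction described above can be sketched in a few lines. The example below is a hypothetical illustration (not a method from this article): an event-style preprocessing step that transmits only the pixels whose intensity changed beyond a threshold, rather than the full frame, mimicking what an in-sensor computing layer might do.

```python
import numpy as np

# Hypothetical sketch: in-sensor preprocessing that sends only changed pixels.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)  # previous frame
curr = prev.copy()
curr[100:110, 200:210] += 50  # a small moving object changes a 10x10 patch

threshold = 20
changed = np.abs(curr.astype(int) - prev.astype(int)) > threshold
events = np.argwhere(changed)  # (row, col) coordinates of changed pixels

full_frame_bytes = curr.nbytes                       # transmit everything
event_bytes = events.size * 2 + int(changed.sum())   # uint16 coords + values
print(full_frame_bytes, event_bytes)  # 307200 vs 500 bytes
```

Under these toy assumptions, the sensor emits a few hundred bytes instead of an entire 307 kB frame, which is the kind of reduction that would keep downstream memory and interconnect traffic manageable.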

In-sensor and near-sensor computing is a new architectural approach that achieves this preprocessing and enhances computation speed. The concept rests on heterogeneous integration of dissimilar materials, devices, and systems, and it has enabled sensors combined with computing, memory combined with computing, and algorithms that split computational loads. My research interests span in-sensor and near-sensor computing through sensor and computing-unit fabrication, device-level integration, and algorithm development. This perspective article details this emerging architecture for the artificial intelligence of things and shares my insights on the field.