The in‑sensor computing architecture reduces wiring, data bandwidth, and power consumption, enabling thinner, more responsive electronic skins for next‑generation human‑machine interaction.
Artificial skin has long been limited by pixel‑based sensor arrays that require individual wiring and external processors to reconstruct tactile information. This von Neumann separation inflates power consumption, adds latency, and leaves blind spots between discrete pressure points, especially for continuous gestures like sliding. As wearables, prosthetics, and haptic robots demand thinner, faster interfaces, engineers are exploring material‑level computation that merges sensing and processing within a single stack, promising leaner architectures, real‑time responsiveness, and a path toward truly biomimetic tactile perception.
The Xiamen University team built a multilayer sensor using multi‑walled carbon nanotube (CNT) films on flexible PET. In the resting state a spacer keeps the conductive layers apart; pressure forces contact, creating a resistance path that varies with distance from the electrode and delivering sub‑500 µm positional resolution in a single analog signal. Stacking three layers with different spacer thicknesses yields activation thresholds of 9.6 kPa, 200.9 kPa, and 385.8 kPa, so pressure classification emerges directly from the number of active layers: true in‑sensor computing without software. The signal also remains stable across temperature and humidity variations.
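The layer-count readout described above can be sketched in a few lines. The three thresholds come from the article; the class labels (`"light"`, `"medium"`, `"firm"`) are illustrative names, not the authors' terminology:

```python
# Sketch of the in-sensor pressure classifier: each stacked layer closes
# once its spacer is compressed past its activation threshold, so the
# pressure class is simply the count of active layers.
THRESHOLDS_KPA = (9.6, 200.9, 385.8)  # activation pressure of each layer

def active_layers(pressure_kpa: float) -> int:
    """Number of layers driven into contact at a given applied pressure."""
    return sum(pressure_kpa >= t for t in THRESHOLDS_KPA)

def pressure_class(pressure_kpa: float) -> str:
    """Map the active-layer count to an illustrative intensity label."""
    return ("none", "light", "medium", "firm")[active_layers(pressure_kpa)]

print(pressure_class(5.0))    # below all thresholds -> "none"
print(pressure_class(250.0))  # crosses two thresholds -> "medium"
```

In the physical device this classification happens in the material itself; the code only mirrors the logic to make the thresholding behavior explicit.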
The reduced channel count cuts wiring, power, and data bandwidth, enabling ultra‑thin wearables and robotic skins where every millimeter counts. Prototypes have driven a four‑axis robotic arm, acted as a pressure‑sensitive keyboard, and formed a two‑factor tactile lock, while a 1‑D CNN achieved over 96% user‑identification accuracy from touch signatures. Challenges remain: multi‑point contacts can create signal ambiguity, and health‑state predictions need larger clinical studies. Nonetheless, embedding computation in the material itself accelerates the convergence of sensing, actuation, and intelligence for next‑generation haptic devices. Future work will explore multi‑touch decoding and integration with AI edge processors.
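To make the user-identification step concrete, here is a minimal NumPy forward pass for the kind of 1‑D CNN the article mentions. The architecture, filter sizes, number of enrolled users, and random weights are all illustrative assumptions; the authors' actual model is not described here:

```python
import numpy as np

# Illustrative 1-D CNN forward pass for classifying a touch waveform:
# conv -> ReLU -> global average pooling -> linear head. Weights are
# random placeholders standing in for a trained model.
rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution: (length,) x (n_filters, k)
    -> (n_filters, length - k + 1)."""
    n_filters, k = kernels.shape
    out = np.empty((n_filters, x.size - k + 1))
    for f in range(n_filters):
        for i in range(out.shape[1]):
            out[f, i] = x[i:i + k] @ kernels[f]
    return out

def predict_user(signal, kernels, weights):
    """Return the index of the highest-scoring enrolled user."""
    feat = np.maximum(conv1d(signal, kernels), 0.0)  # conv + ReLU
    pooled = feat.mean(axis=1)                       # global average pooling
    logits = weights @ pooled                        # linear classifier head
    return int(logits.argmax())

signal = rng.standard_normal(256)       # one analog touch waveform
kernels = rng.standard_normal((8, 5))   # 8 filters of width 5 (assumed)
weights = rng.standard_normal((4, 8))   # 4 enrolled users (assumed)
user = predict_user(signal, kernels, weights)
```

Because the sensor delivers position and pressure in a single analog channel, the network sees one time series per touch rather than a pixel grid, which is what makes a lightweight 1‑D architecture sufficient.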