
China Makes Breakthrough in Next-generation Carbon Nanotube Chips



Chinese researchers have unveiled the world's first carbon-nanotube-based tensor processor chip, marking a significant step forward in artificial intelligence (AI) processing technology.

September 10, 2024


This new development addresses the growing limitations of traditional silicon-based semiconductors, which are increasingly inadequate for the data processing demands of modern AI.

The research team from Peking University published their findings on Monday in the journal Nature Electronics, titled "A carbon-nanotube-based tensor processing unit." 

The study presents an innovative systolic array architecture that pairs carbon nanotube transistors with tensor operations, offering a potential path to extend Moore's Law, which predicts that the number of transistors on a chip doubles roughly every two years.

Current silicon-based computing chips face challenges related to size reduction and increasing power consumption. There is an urgent need for new materials that can provide better performance and efficiency. Carbon nanotubes, known for their excellent electrical properties and ultra-thin structures, are emerging as a promising alternative.

Professor Zhang Zhiyong from the research team noted that carbon nanotube transistors outperform commercial silicon-based transistors in both speed and power consumption, offering roughly a tenfold advantage that enables more energy-efficient integrated circuits and systems, a crucial capability in the AI era.

While various international research groups have demonstrated carbon-nanotube-based integrated circuits, including logic gates and simple CPUs, this research is the first to apply carbon nanotube transistor technology to high-performance AI computing chips.

"We report a tensor processing unit (TPU) that is based on 3,000 carbon nanotube field-effect transistors and can perform energy-efficient convolution operations and matrix multiplication," said Si Jia, an assistant research professor at Peking University.

The TPU uses a systolic array architecture and supports parallel 2-bit integer multiply-accumulate operations, Si said. A five-layer convolutional neural network built on the TPU achieved 88 percent accuracy on the Modified National Institute of Standards and Technology (MNIST) image recognition benchmark while consuming just 295 microwatts, he added.
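To make the idea concrete, the following is a minimal Python sketch, not the authors' actual design, of how a systolic-array-style matrix multiply works: every output cell accumulates one result, and at each wavefront step all cells perform one multiply-accumulate in parallel on low-precision (2-bit) operands. The function name and the simplified timing model are illustrative assumptions.

```python
import numpy as np

def systolic_matmul_2bit(A, B):
    """Simulate an output-stationary systolic array multiplying two
    matrices of 2-bit unsigned integers (values 0..3).

    This is an illustrative software model, not the Peking University
    hardware: it captures the key property that each array cell owns
    one output element and performs exactly one multiply-accumulate
    per wavefront step.
    """
    A = np.asarray(A, dtype=np.int64)
    B = np.asarray(B, dtype=np.int64)
    assert A.min() >= 0 and A.max() <= 3, "A operands must fit in 2 bits"
    assert B.min() >= 0 and B.max() <= 3, "B operands must fit in 2 bits"
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must agree"

    # One accumulator per array cell, i.e. per output element.
    acc = np.zeros((n, m), dtype=np.int64)

    # One wavefront step per index of the shared dimension: in hardware
    # every cell would do its MAC for this step simultaneously.
    for t in range(k):
        for i in range(n):
            for j in range(m):
                acc[i, j] += A[i, t] * B[t, j]
    return acc


A = np.array([[1, 2], [3, 0]])
B = np.array([[2, 1], [0, 3]])
print(systolic_matmul_2bit(A, B))  # matches A @ B
```

Because the accumulators stay fixed while operands stream past, a systolic design avoids repeatedly moving partial sums to and from memory, which is one reason such arrays are attractive for energy-constrained AI inference.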

This recent development shows the potential of carbon nanotube technology in high-performance computing and opens up new possibilities for advancements in AI processing, aiming for more efficient and powerful AI systems.

 
