GSTDTAP

Browse/search results: 2 records, showing items 1–2

Classification with a disordered dopant-atom network in silicon  [Journal article]
NATURE, 2020, 577 (7790) : 341-+
Authors:  Chen, Tao;  van Gelder, Jeroen;  van de Ven, Bram;  Amitonov, Sergey V.;  de Wilde, Bram;  Ruiz Euler, Hans-Christian;  Broersma, Hajo;  Bobbert, Peter A.;  Zwanenburg, Floris A.;  van der Wiel, Wilfred G.
Views/Downloads: 24/0  |  Submitted: 2020/07/03

Classification is an important task at which both biological and artificial neural networks excel(1,2). In machine learning, nonlinear projection into a high-dimensional feature space can make data linearly separable(3,4), simplifying the classification of complex features. Such nonlinear projections are computationally expensive in conventional computers. A promising approach is to exploit physical materials systems that perform this nonlinear projection intrinsically, because of their high computational density(5), inherent parallelism and energy efficiency(6,7). However, existing approaches either rely on the systems' time dynamics, which requires sequential data processing and therefore hinders parallel computation(5,6,8), or employ large materials systems that are difficult to scale up(7). Here we use a parallel, nanoscale approach inspired by filters in the brain(1) and artificial neural networks(2) to perform nonlinear classification and feature extraction. We exploit the nonlinearity of hopping conduction(9-11) through an electrically tunable network of boron dopant atoms in silicon, reconfiguring the network through artificial evolution to realize different computational functions. We first solve the canonical two-input binary classification problem, realizing all Boolean logic gates(12) up to room temperature, demonstrating nonlinear classification with the nanomaterial system. We then evolve our dopant network to realize feature filters(2) that can perform four-input binary classification on the Modified National Institute of Standards and Technology (MNIST) handwritten digit database. Implementation of our material-based filters substantially improves the classification accuracy over that of a linear classifier directly applied to the original data(13). Our results establish a paradigm of silicon-based electronics for small-footprint and energy-efficient computation(14).
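The key idea in the abstract — that a nonlinear projection into a higher-dimensional feature space makes otherwise inseparable data linearly separable — can be illustrated with the canonical two-input problem the paper mentions. The sketch below is not the paper's dopant-network method; it is a minimal software analogue in which a hypothetical feature map adds the product term x1*x2, after which XOR (a gate no linear classifier can realize in the raw inputs) is solved by a single plane. The feature map and the hand-picked weights are illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): a nonlinear feature map
# makes XOR linearly separable, mirroring how the dopant network's
# nonlinear hopping conduction enables Boolean classification.

def project(x1, x2):
    """Hypothetical nonlinear feature map: (x1, x2) -> (x1, x2, x1*x2)."""
    return (x1, x2, x1 * x2)

def linear_classifier(features, weights, bias):
    """Linear decision in feature space: 1 if w.f + b > 0, else 0."""
    s = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if s > 0 else 0

# Hand-chosen weights: x1 + x2 - 2*x1*x2 reproduces XOR on {0, 1} inputs.
WEIGHTS, BIAS = (1.0, 1.0, -2.0), -0.5

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", linear_classifier(project(x1, x2), WEIGHTS, BIAS))
```

In the paper, the analogous nonlinearity is physical (hopping conduction through the boron dopant network) and the weights are found by artificial evolution of control voltages rather than by hand.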


  
Fully hardware-implemented memristor convolutional neural network  [Journal article]
NATURE, 2020, 577 (7792) : 641-+
Authors:  Yao, Peng;  Wu, Huaqiang;  Gao, Bin;  Tang, Jianshi;  Zhang, Qingtian;  Zhang, Wenqiang;  Yang, J. Joshua;  Qian, He
Views/Downloads: 39/0  |  Submitted: 2020/07/03

Memristor-enabled neuromorphic computing systems provide a fast and energy-efficient approach to training neural networks(1-4). However, convolutional neural networks (CNNs), one of the most important models for image recognition(5), have not yet been fully hardware-implemented using memristor crossbars, which are cross-point arrays with a memristor device at each intersection. Moreover, achieving software-comparable results is highly challenging owing to the poor yield, large variation and other non-ideal characteristics of devices(6-9). Here we report the fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs, which integrate eight 2,048-cell memristor arrays to improve parallel-computing efficiency. In addition, we propose an effective hybrid-training method to adapt to device imperfections and improve the overall system performance. We built a five-layer memristor-based CNN to perform MNIST(10) image recognition, and achieved a high accuracy of more than 96 per cent. In addition to parallel convolutions using different kernels with shared inputs, replication of multiple identical kernels in memristor arrays was demonstrated for processing different inputs in parallel. The memristor-based CNN neuromorphic system has an energy efficiency more than two orders of magnitude greater than that of state-of-the-art graphics-processing units, and is shown to be scalable to larger networks, such as residual neural networks. Our results are expected to enable a viable memristor-based non-von Neumann hardware solution for deep neural networks and edge computing.
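The crossbar operation the abstract relies on can be sketched numerically. A memristor crossbar computes a matrix-vector multiply in a single analog step: input voltages V drive the rows, each cross-point conductance G[i, j] contributes a current G[i, j] * V[i] to column j (Ohm's law), and the columns sum those currents (Kirchhoff's current law), giving I = Gᵀ·V. Unrolling a convolution kernel into one conductance column then turns each sliding-window patch into one multiply. This is a minimal illustrative model, not the paper's circuit: the kernel values, array shape, and the use of signed conductances are assumptions (real devices have non-negative conductance, so signed weights need a differential pair of columns, a detail omitted here).

```python
import numpy as np

def crossbar_mvm(G, V):
    """Column output currents of a crossbar with conductances G
    (rows x columns) driven by row voltages V: I = G^T @ V."""
    return G.T @ V

# Hypothetical 3x3 edge-detection kernel, unrolled into one 9-row
# conductance column of the crossbar.
kernel = np.array([[1.0, 0.0, -1.0],
                   [2.0, 0.0, -2.0],
                   [1.0, 0.0, -1.0]])
G = kernel.reshape(9, 1)

# One 3x3 image patch, flattened onto the 9 row voltages.
patch = np.arange(9, dtype=float).reshape(3, 3)
V = patch.reshape(9)

current = crossbar_mvm(G, V)        # one analog step in hardware
reference = np.sum(kernel * patch)  # digital convolution for comparison
print(float(current[0]), float(reference))
```

Adding more columns to G corresponds to the paper's parallel convolutions with different kernels over shared inputs, while replicating the same column across separate arrays corresponds to its duplication of identical kernels for different inputs.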