Article

Hardware Implementation of a Softmax-Like Function for Deep Learning †

by
Ioannis Kouretas
* and
Vassilis Paliouras
Electrical and Computer Engineering Department, University of Patras, 26 504 Patras, Greece
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Proceedings of the 8th International Conference on Modern Circuits and Systems Technologies (MOCAST), Thessaloniki, Greece, 13–15 May 2019.
Submission received: 28 April 2020 / Revised: 14 August 2020 / Accepted: 25 August 2020 / Published: 28 August 2020
(This article belongs to the Special Issue MOCAST 2019: Modern Circuits and Systems Technologies on Electronics)

Abstract

In this paper, a simplified hardware implementation of a CNN softmax-like layer is proposed. Initially, the softmax activation function is analyzed in terms of the required numerical accuracy, and certain optimizations are proposed. The proposed adaptable hardware architecture is evaluated in terms of the error introduced by the proposed softmax-like function. The architecture can be adapted to the accuracy required by the application by retaining or eliminating certain terms of the approximation, thus allowing accuracy-complexity trade-offs to be explored. Furthermore, the proposed circuits are synthesized in a 90 nm 1.0 V CMOS standard-cell library using Synopsys Design Compiler. Comparisons reveal that, for certain cases, significant reductions in the area × delay and power × delay products are achieved over prior art. Area and power savings are thus traded against performance and accuracy.

1. Introduction

Deep neural networks (DNNs) have emerged as a means to tackle complex problems such as image classification and speech recognition. The success of DNNs is attributed to the availability of big data, easy access to enormous computational power, and the introduction of novel algorithms that have substantially improved the effectiveness of training and inference [1]. A DNN is defined as a neural network (NN) which contains more than one hidden layer. In the literature, a graph is used to represent a DNN, with a set of nodes in each layer, as shown in Figure 1. The nodes at each layer are connected to the nodes of the subsequent layer. Each node performs processing that includes the computation of an activation function [2]. The extremely large number of nodes at each layer causes the training procedure to require extensive computational resources.
A class of DNNs is the convolutional neural networks (CNNs) [2]. CNNs offer high accuracy in computer-vision problems such as face recognition and video processing [3] and have been adopted in many modern applications. A typical CNN consists of several layers, each of which can be a convolutional, pooling, or normalization layer, with the last one being a non-linear activation function. A common choice for the normalization layer is the softmax function, as shown in Figure 1. To cope with the increased computational load, several FPGA accelerators have been proposed, and it has been demonstrated that the convolutional layers exhibit the largest hardware complexity in a CNN [4,5,6,7,8,9,10,11,12,13,14,15]. In addition to CNNs, hardware accelerators for RNNs and LSTMs have also been investigated [16,17,18]. In order to implement a CNN in hardware, the softmax layer should also be implemented with low complexity. Furthermore, the hidden layers of a DNN can use the softmax function when the model is designed to choose one among several different options for some internal variable [2]. In particular, neural Turing machines (NTMs) [19] and the differentiable neural computer (DNC) [20] use softmax layers within the neural network. Moreover, softmax is incorporated in attention mechanisms, an application of which is machine translation [21]. Furthermore, both hardware [22,23,24,25,26,27,28] and memory-optimized software [29,30] implementations of the softmax function have been proposed. This paper, extending previous work published in MOCAST 2019 [31], proposes a simplified architecture for a softmax-like function, the hardware implementation of which is based on a proposed approximation that exploits the statistical structure of the vectors processed by the softmax layers in various CNNs. Compared to the previous work [31], this paper uses a large set of known CNNs and performs extensive and fair experiments to study the impact of the applied optimizations on the achieved accuracy. Moreover, the architecture of [31] is further elaborated and generalized by taking into account the requirements of the targeted application. Finally, the proposed architecture is compared with various softmax hardware implementations. In order for the softmax-like function to be implemented efficiently in hardware, the approximation requirements are relaxed.
The remainder of the paper is organized as follows. Section 2 revisits the softmax activation function. Section 3 describes the proposed algorithm and Section 4 offers a quantitative analysis of the proposed architecture. Section 5 discusses the hardware complexity of the proposed scheme based on synthesis results. Finally, conclusions are summarized in Section 6.

2. Softmax Layer Review

CNNs consist of a number of stages, each of which contains several layers. The final layer is usually a fully-connected layer using ReLU as an activation function, and it drives a softmax layer before the final output of the CNN. The classification performed by a CNN is accomplished at the final layer of the network. In particular, for a CNN which consists of i + 1 layers, the softmax function is used to transform the real values generated by the i-th CNN layer into probabilities, according to
$$f_j(\mathbf{z}) = \frac{e^{z_j}}{\sum_{k=1}^{n} e^{z_k}}, \qquad (1)$$
where $\mathbf{z}$ is an arbitrary vector with real-valued components $z_j$, $j = 1, \ldots, n$, generated at the i-th layer of the CNN, and n is the size of the vector. The (i + 1)-st layer is called the softmax layer. By applying the logarithm to both sides of (1), it follows that
$$\begin{aligned}
\log\big(f_j(\mathbf{z})\big) &= \log\frac{e^{z_j}}{\sum_{k=1}^{n} e^{z_k}} && (2)\\
&= \log\big(e^{z_j}\big) - \log\sum_{k=1}^{n} e^{z_k} && (3)\\
&= z_j - \log\sum_{k=1}^{n} e^{z_k}. && (4)
\end{aligned}$$
In (4), the term $\log\sum_{k=1}^{n} e^{z_k}$ is computed as
$$\begin{aligned}
\log\sum_{k=1}^{n} e^{z_k} &= \log\Big(\sum_{k=1}^{n} e^{m}\, e^{-m}\, e^{z_k}\Big) && (5)\\
&= \log\Big(e^{m}\sum_{k=1}^{n} \frac{1}{e^{m}}\, e^{z_k}\Big) && (6)\\
&= \log e^{m} + \log\Big(\sum_{k=1}^{n} e^{-m}\, e^{z_k}\Big) && (7)\\
&= m + \log\Big(\sum_{k=1}^{n} e^{z_k - m}\Big), && (8)
\end{aligned}$$
where $m = \max_k(z_k)$. From (4) and (8) it follows that
$$\begin{aligned}
\log\big(f_j(\mathbf{z})\big) &= z_j - m - \log\Big(\sum_{k=1}^{n} e^{z_k - m}\Big) && (9)\\
&= z_j - m - \log\Big(\sum_{k=1}^{n} e^{z_k - m} - 1 + 1\Big) && (10)\\
&= z_j - m - \log(Q + 1), && (11)
\end{aligned}$$
where
$$Q = \sum_{k=1}^{n} e^{z_k - m} - 1 = \sum_{\substack{k=1\\ z_k \neq m}}^{n} e^{z_k - m} + 1 - 1 = \sum_{\substack{k=1\\ z_k \neq m}}^{n} e^{z_k - m}. \qquad (12)$$
Due to the definition of m, it holds that
$$\begin{aligned}
z_j \le m \;&\Rightarrow\; z_j - m \le 0 \;\Rightarrow\; e^{z_j - m} \le 1 && (13)\\
&\Rightarrow\; \sum_{k=1}^{n} e^{z_k - m} \le n \;\Rightarrow\; \sum_{\substack{k=1\\ z_k \neq m}}^{n} e^{z_k - m} + 1 \le n && (14)\\
&\Rightarrow\; Q + 1 \le n \;\Rightarrow\; \frac{Q}{n-1} \le 1 \;\Rightarrow\; Q' \le 1, && (15)
\end{aligned}$$
where $Q' = \frac{Q}{n-1}$. Expressing Q in terms of Q′, (11) becomes
$$\log\big(f_j(\mathbf{z})\big) = z_j - m - \log\big((n-1)\,Q' + 1\big). \qquad (16)$$
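The decomposition in (9)–(16) is exact; approximations are only introduced in the next section. The following short NumPy sketch (ours, not part of the original work) confirms the identity numerically for an arbitrary score vector:

```python
# Numerical check (our sketch) of the exact log-softmax decomposition (16):
# log f_j(z) = z_j - m - log((n - 1) Q' + 1), with m, Q and Q' as defined above.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=10)                    # arbitrary real-valued scores
n, m = z.size, z.max()                     # m = max_k(z_k)

log_f_direct = z - np.log(np.exp(z).sum())                   # Eq. (4)
Q = np.exp(z - m).sum() - 1.0                                # Eq. (12)
Q_prime = Q / (n - 1)                                        # Eq. (15)
log_f_decomposed = z - m - np.log((n - 1) * Q_prime + 1.0)   # Eq. (16)

assert np.allclose(log_f_direct, log_f_decomposed)
print("max |difference| =", np.abs(log_f_direct - log_f_decomposed).max())
```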
The next section presents the proposed simplifications of (16) and the architecture derived for the softmax-like hardware implementation.

3. Proposed Softmax Architecture

Equation (15) involves the distance of the maximum component from the remainder of the components of a vector. As Q′ → 0, the differences among the $z_i$'s increase and $z_j \ll m$. On the contrary, as Q′ → 1, the differences among the $z_i$'s vanish. Based on this observation, a simplifying approximation can be obtained as follows. The third term on the right-hand side of (16), $\log\big((n-1)\,Q' + 1\big)$, can be roughly approximated by 0. Hence, (16) is approximated by
$$\log\big(\hat{f}_j(\mathbf{z})\big) \approx z_j - m. \qquad (17)$$
Furthermore, this simplification substantially reduces hardware complexity as described below. From (17), it follows that
$$\hat{f}_j(\mathbf{z}) = e^{\,z_j - \max_k(z_k)}. \qquad (18)$$
Theorem 1.
Let $q = \arg\max_j f_j(\mathbf{z})$ and $r = \arg\max_j \hat{f}_j(\mathbf{z})$ be the decisions obtained by (1) and (18), respectively. Then $q = r$.
Proof. 
Due to the softmax definition, it holds that
$$\max_j f_j(\mathbf{z}) = \frac{e^{z_q}}{\sum_{k=1}^{n} e^{z_k}} \qquad (19)$$
$$\Rightarrow\; z_q = \max_k(z_k). \qquad (20)$$
For the case of the proposed function (18), it holds that
$$\max_j \hat{f}_j(\mathbf{z}) = e^{\,z_r - \max_k(z_k)} \qquad (21)$$
$$\Rightarrow\; z_r = \max_k(z_k). \qquad (22)$$
From (20) and (22), it is derived that $z_q = z_r$. Hence, $\arg\max_j f_j(\mathbf{z}) = \arg\max_j \hat{f}_j(\mathbf{z})$, i.e., $q = r$.  □
Corollary 1.
It holds that $\dfrac{\hat{f}_j(\mathbf{z})}{f_j(\mathbf{z})} = \sum_{k=1}^{n} e^{z_k - m}$.
Proof. 
Proof is trivial and is omitted. □
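A brief numerical experiment (our own NumPy sketch, not taken from the paper) illustrates both results: the argmax decision of (18) always matches that of (1), and the ratio of the two outputs equals $\sum_k e^{z_k - m}$ for every component.

```python
# Sanity check (our sketch) of Theorem 1 and Corollary 1 on random score vectors.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    z = rng.normal(scale=3.0, size=20)
    m = z.max()
    f = np.exp(z) / np.exp(z).sum()     # exact softmax, Eq. (1)
    f_hat = np.exp(z - m)               # proposed softmax-like function, Eq. (18)

    assert np.argmax(f) == np.argmax(f_hat)              # Theorem 1: same decision
    assert np.allclose(f_hat / f, np.exp(z - m).sum())   # Corollary 1: constant ratio
print("Theorem 1 and Corollary 1 verified on 1000 random vectors")
```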
Theorem 1 states that the proposed softmax-like function and the actual softmax function always produce the same decisions. The proposed softmax-like approximation is based on the idea that the softmax function is used during training to target an output y by means of maximum log-likelihood [2]. Hence, if the correct answer already has the maximum input value to the softmax function, approximating the term $\log\big((n-1)\,Q'+1\big)$ by 0 hardly alters the output decision, due to the exponential function used in the term Q′. In general, $\sum_j \hat{f}_j(\mathbf{z}) > 1$, and hence the values $\hat{f}_j(\mathbf{z})$ do not form a probability density function (pdf). For models where the normalization function is required to be a pdf, a modified approach can be followed, as detailed below.
According to the second approach, from (11) and (12) it follows:
$$\begin{aligned}
\log\big(\hat{f}_j(\mathbf{z})\big) &= z_j - m - \log\Big(\sum_{\substack{k=1\\ z_k \neq m}}^{n} e^{z_k - m} + 1\Big) && (23)\\
&= z_j - m - \log\Big(\sum_{\substack{k=1\\ m_k \neq m}}^{p} e^{m_k - m} + \sum_{k=p+1}^{n} e^{m_k - m} + 1\Big) && (24)\\
&= z_j - m - \log\Big(\sum_{k=1}^{p} e^{m_k - m} + \sum_{k=p+1}^{n} e^{m_k - m}\Big) && (25)\\
&= z_j - m - \log(Q_1 + Q_2), && (26)
\end{aligned}$$
with $Q_1 = \sum_{k=1}^{p} e^{m_k - m}$, where $M_1 = \{m_1, \ldots, m_p\} = \{m_k : k = 1, \ldots, p\}$ are chosen to be the top-p maximum values of $\mathbf{z}$. For the quantity $Q_2$, it holds that $Q_2 = \sum_{k=p+1}^{n} e^{m_k - m}$, with $M_2 = \{m_{p+1}, \ldots, m_n\} = \{m_k : k = p+1, \ldots, n\}$ being the remaining values of the vector $\mathbf{z}$, i.e., $M_1 \cup M_2 = \{z_1, \ldots, z_n\}$.
A second approximation is performed as
$$\log(Q_1) = \log\Big(\sum_{k=1}^{p} e^{m_k - m} - 1 + 1\Big) \approx \sum_{k=1}^{p} e^{m_k - m} - 1, \qquad (27)$$
$$Q_2 \approx 0. \qquad (28)$$
From (26)–(28), it follows that
$$\log\big(\hat{f}_j(\mathbf{z}, p)\big) \approx z_j - m - \sum_{k=1}^{p} e^{m_k - m} + 1 \qquad (29)$$
$$\hat{f}_j(\mathbf{z}, p) = e^{\,z_j - m - \sum_{k=1}^{p} e^{m_k - m} + 1}. \qquad (30)$$
Equation (30) uses the parameter p, which defines the number of additional terms used. By properly selecting p, it holds that $\sum_j \hat{f}_j(\mathbf{z}, p) \approx 1$, and (30) approximates a pdf better than (18). This follows from the fact that, in a real-life CNN, the p maximum values are those that contribute to the computation of the softmax, since all the remaining values are close to zero.
Lemma 1.
It holds that $\hat{f}_j(\mathbf{z}, p) = \hat{f}_j(\mathbf{z})$ when $p = 1$.
Proof. 
By definition, when $p = 1$ it holds that $m_1 = m$, since the single retained maximum value $m_1$ is the maximum m itself. Hence, by substituting $p = 1$ in (30), it follows that
$$\hat{f}_j(\mathbf{z}, 1) = e^{\,z_j - m - \sum_{k=1}^{1} e^{m_k - m} + 1} = e^{\,z_j - m - e^{m - m} + 1} = e^{\,z_j - m} \overset{(18)}{=} \hat{f}_j(\mathbf{z}). \qquad \square$$
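A small sketch (ours; the helper name softmax_like is not from the paper) shows how (30) behaves as p grows for a peaked score vector of the kind produced by a trained CNN classifier: the outputs sum closer to one, while p = 1 reduces to (18), as stated by Lemma 1.

```python
# Sketch (ours) of the refined approximation of Eq. (30) for different values of p.
import numpy as np

def softmax_like(z, p):
    """Evaluate Eq. (30): exp(z_j - m - sum_{k=1..p} e^{m_k - m} + 1)."""
    z = np.asarray(z, dtype=float)
    m = z.max()
    top_p = np.sort(z)[-p:]                      # the p largest components m_1, ..., m_p
    correction = np.exp(top_p - m).sum() - 1.0   # equals 0 when p = 1 (Lemma 1)
    return np.exp(z - m - correction)

rng = np.random.default_rng(2)
z = rng.normal(size=10)
z[3] += 6.0                                      # one dominant class, as in a trained CNN
exact = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
for p in (1, 2, 5, 10):
    approx = softmax_like(z, p)
    print(f"p = {p:2d}   sum = {approx.sum():.4f}   max |error| = {np.abs(approx - exact).max():.4f}")
```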
From a hardware perspective, (18) and (30) can be evaluated by the same circuit which implements the exponential function. The contributions of the paper are as follows. Firstly, the quantity $\log\big((n-1)\,Q'+1\big)$ is eliminated from (16), under the assumption that the target application requires decision making. Secondly, further mathematical manipulations are applied to (30) in order to approximate the outputs as a pdf, i.e., probabilities that sum to one. Thirdly, the circuit for the evaluation of $e^x$ is simplified, since
$$\begin{aligned}
& z_j \le m && (31)\\
\Rightarrow\;& z_j - m \le 0 && (32)\\
\Rightarrow\;& e^{\,z_j - m} \le 1 && (33)
\end{aligned}$$
and
$$\begin{aligned}
& z_j \le m && (34)\\
\Rightarrow\;& z_j - m - \sum_{k=1}^{p} e^{m_k - m} + 1 < 0 && (35)\\
\Rightarrow\;& e^{\,z_j - m - \sum_{k=1}^{p} e^{m_k - m} + 1} < 1. && (36)
\end{aligned}$$
Figure 2 depicts the various building blocks of the proposed architecture. More specifically, the proposed architecture comprises the block which computes the maximum m, i.e., $m = \max_k(z_k)$. The particular computation is performed by a tree which generates the maximum by comparing elements in pairs, as shown in Figure 3. The depicted tree structure generates $m = \max_k(z_k)$, $k = 0, 1, \ldots, 7$. The notation $z_{ij}$ denotes the maximum of $z_i$ and $z_j$, while $z_{ijkl}$ denotes the maximum of $z_{ij}$ and $z_{kl}$. The same architecture is used to compute the top-p maximum values of the $z_i$'s. For example, $z_{01}$, $z_{23}$, $z_{45}$ and $z_{67}$ are the top four maximum values and $m = \max_k(z_k)$ is the maximum.
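The pairwise-maximum tree of Figure 3 can be modelled in software as follows (a behavioural sketch of ours, not the authors' RTL); the same structure generalizes to any input count and can be instrumented to retain the top-p values.

```python
# Behavioural model (our sketch) of the comparison tree of Figure 3.
def tree_max(values):
    level = list(values)
    while len(level) > 1:
        # Compare elements two by two; an unpaired last element is forwarded unchanged.
        nxt = [max(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

z = [3, 6, 4, 2, 1, 5, 0, 7]
assert tree_max(z) == max(z)
print("m =", tree_max(z))     # the maximum emerges after log2(8) = 3 comparison levels
```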
Subsequently, m is subtracted from all the component values $z_k$, as dictated by (17). The subtraction is performed through the adders shown in Figure 2, using two's complement representation for the negated input −m. The obtained differences, also represented in two's complement, are used as inputs to a LUT, which performs the proposed simplified $e^x$ operation of (18), to compute the final vector $\hat{f}_j(\mathbf{z})$, as shown in Figure 2a. Additional p terms are added, and subsequently each output $\hat{f}_j(\mathbf{z})$ is generated through (30) as the final value of the softmax-like layer output, as shown in Figure 2b. For the hardware implementation of the $e^x$ function, a LUT is adopted, the input of which is $x = z_j - m$. The LUT size increases with the range of $e^x$. The proposed hardware implementation is simpler than other exponential implementations which rely on CORDIC transformations [32], use floating-point representation [33], or use LUTs [34]. Due to (33), the $e^x$ values are restricted to the range (0, 1]; the required LUT size therefore diminishes significantly, leading to a simplified hardware implementation. Furthermore, no conversion from the logarithmic to the linear domain is required, since $\hat{f}_j(\mathbf{z})$ represents the final classification layer.
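The LUT-based exponential can be modelled as below (our sketch; the (5, 5) fixed-point split is borrowed from the illustrative example of Section 4, and output quantization of the table entries is omitted). Because the input $x = z_j - m$ is never positive, a single table over the non-positive codes covers the whole range (0, 1].

```python
# Software model (our sketch) of the simplified LUT-based e^x block.
import numpy as np

INT_BITS, FRAC_BITS = 5, 5                 # assumed (l, k) = (5, 5) fixed-point format
SCALE = 1 << FRAC_BITS

# One table entry per representable non-positive input code x = -code / SCALE.
lut = [np.exp(-code / SCALE) for code in range(2 ** (INT_BITS + FRAC_BITS))]

def exp_lut(x):
    """Approximate e^x for x <= 0 by indexing the table with the quantized input."""
    code = max(0, min(int(round(-x * SCALE)), len(lut) - 1))   # saturate very negative x
    return lut[code]

for x in (0.0, -0.5, -3.2, -20.0):
    print(f"x = {x:6.2f}   LUT = {exp_lut(x):.6f}   exact = {np.exp(x):.6f}")
```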
The next section quantitatively investigates the validity and usefulness of employing $\hat{f}_j(\mathbf{z})$, in terms of the approximation error.

4. Quantitative Analysis of Introduced Error

This section quantitatively verifies the applicability of the approximation introduced in Section 2, for certain applications, by means of a series of Examples.
In order to quantify the error introduced by the proposed architecture, the mean square error (MSE) is evaluated as
$$\mathrm{MSE} = \frac{1}{n}\sum_{j=1}^{n} \big(f_j(\mathbf{z}) - \hat{f}_j(\mathbf{z})\big)^2, \qquad (37)$$
where $f_j(\mathbf{z})$ and $\hat{f}_j(\mathbf{z})$ are the expected and the actually evaluated softmax outputs, respectively.
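As a minimal sketch (ours), the metric of (37) can be evaluated directly from a score vector; the tail value 0.5 below merely stands in for the unspecified components of Example 2 that are smaller than 1, so the printed figure only approximately reproduces the reported MSE = 0.0018.

```python
# Sketch (ours) of the MSE of Eq. (37) between the exact softmax and the
# proposed softmax-like output of Eq. (18).
import numpy as np

def mse_softmax_like(z):
    z = np.asarray(z, dtype=float)
    m = z.max()
    f = np.exp(z - m) / np.exp(z - m).sum()   # exact softmax, Eq. (1)
    f_hat = np.exp(z - m)                     # proposed approximation, Eq. (18)
    return np.mean((f - f_hat) ** 2)

print(mse_softmax_like([3, 6, 4, 2] + [0.5] * 26))   # a vector shaped like Example 2
```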
As an illustrative example, denoted as Example 1, Figure 4 depicts the histograms of the component values in test vectors used as inputs to the proposed architecture, selected to have specific properties detailed below. The corresponding parameters are evaluated by using the proposed architecture for the case of a 10-bit fixed-point representation, where 5 bits are used for the integer part and 5 bits are allocated to the fractional part. More specifically, the vector in Figure 4a contains n = 30 values, for which it holds that $z_j = 5$ for $j = 1, \ldots, 11$, $z_j = 4.99$ for $j = 12, \ldots, 22$ and $z_j = 5$ for $j = 23, \ldots, 30$. For this case, the softmax values obtained by (1) are
$$f_j(\mathbf{z}) = \begin{cases} 0.0343, & j = 1, \ldots, 11\\ 0.0314, & j = 12, \ldots, 22\\ 0.0347, & j = 23, \ldots, 30. \end{cases} \qquad (38)$$
From a CNN perspective, the softmax layer generates similar output values, where the probabilities are all around 3%, and hence a classification decision cannot be made with high confidence. By using (18), the modified softmax values are
$$\hat{f}_j(\mathbf{z}) = \begin{cases} 0.9844, & j = 1, \ldots, 11\\ 0.8906, & j = 12, \ldots, 22\\ 1, & j = 23, \ldots, 30. \end{cases} \qquad (39)$$
The statistical structure of the vector is characterized by the quantity $Q' = 0.9946$ of (15). The estimated MSE = 0.8502 indicates that the particular vector is not a suitable softmax input in terms of CNN performance, i.e., the obtained classification is performed with low confidence. Hence, although the proposed approximation (18) demonstrates large differences when compared to (1), neither is applicable in CNN terms.
Consider the following Example 2. The component values for vector z in Example 2 are
$$z_j = \begin{cases} 3, & j = 1\\ 6, & j = 2\\ 4, & j = 3\\ 2, & j = 4\\ < 1, & j = 5, \ldots, 30, \end{cases} \qquad (40)$$
the histogram of which is shown in Figure 4b. In this case, the statistical structure of the vector gives $Q' = 0.0449$ and MSE = 0.0018. The feature of vector $\mathbf{z}$ in Example 2 is that it contains four large component values close to each other, namely $z_1 = 3$, $z_2 = 6$, $z_3 = 4$, $z_4 = 2$, while all other components are smaller than 1. The softmax outputs of (1) for the particular $\mathbf{z}$ are
$$f_j(\mathbf{z}) = \begin{cases} 0.0382, & j = 1\\ 0.7675, & j = 2\\ 0.1039, & j = 3\\ 0.0141, & j = 4\\ < 0.005, & j = 5, \ldots, 30. \end{cases} \qquad (41)$$
By using the proposed approximation (18), the obtained modified softmax values are
$$\hat{f}_j(\mathbf{z}) = \begin{cases} 0.0469, & j = 1\\ 1, & j = 2\\ 0.1250, & j = 3\\ 0.0156, & j = 4\\ 0, & j = 5, \ldots, 30. \end{cases} \qquad (42)$$
Equations (41) and (42) show that the proposed architecture selects component $z_2$ with value 1, while the actual probability is 0.7675. This means that the introduced error of MSE = 0.0018 can be negligible, depending on the application, as indicated by $Q' = 0.0449 \ll 1$.
In the following, tests using vectors obtained from real CNN applications are considered. More specifically, in the example shown in Figure 5, the vectors are obtained from image and digit classification applications. In particular, Figure 5a,b depict the values used as input to the final softmax layer, generated during a single inference of a VGG-16 ImageNet image classification network with 1000 classes and of a custom net for MNIST digit classification with 10 classes, respectively. The quantity Q′ can be used to determine whether the proposed architecture is appropriate for a vector $\mathbf{z}$ before evaluating the MSE. It is noted that the MSE for the example of Figure 5a and the MSE for the example of Figure 5b are of the orders of $10^{-13}$ and $10^{-5}$, respectively, which renders them negligible.
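Since Q′ of (15) is computed directly from the softmax-layer inputs, it can serve as a quick suitability indicator before the MSE is measured; a small sketch of ours:

```python
# Sketch (ours): Q' as a quick indicator of whether the approximation is suitable.
import numpy as np

def q_prime(z):
    z = np.asarray(z, dtype=float)
    Q = np.exp(z - z.max()).sum() - 1.0   # Eq. (12)
    return Q / (z.size - 1)               # Eq. (15)

peaked = np.array([9.0, 2.0, 1.0, 0.5, 0.1])      # one dominant class: Q' close to 0
flat = np.array([1.00, 0.99, 1.01, 1.00, 0.98])   # near-uniform scores: Q' close to 1
print(q_prime(peaked), q_prime(flat))
```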
Subsequently, the proposed method is applied to the ResNet-50 [35], VGG-16, VGG-19 [36], InceptionV3 [37] and MobileNetV2 [38] CNNs, for 1000 classes with 10,000 inferences of a custom image data set. In particular, for the case of ResNet-50, Figure 6a,b depict the histograms of the MSE and Q′ values, respectively. More specifically, Figure 6a demonstrates that the MSE values are of magnitude $10^{-3}$, with 8828 of them lying in the interval $[2.18 \times 10^{-28}, 8.16 \times 10^{-4}]$. Furthermore, Figure 6b shows that the Q′ values are of magnitude $10^{-2}$, with 9096 of them lying in the interval $[0, 0.00270]$. Furthermore, Table 1a–e list the actual softmax values and the proposed softmax-like values obtained by executing inference on the CNN models for six custom images. The values are sorted from left to right, with the maximum value on the left side. The inferences a–f and a′–f′ correspond to the same custom images used as inputs to the model. More specifically, Table 1a presents values from six inferences for the ResNet-50 model. It is shown that, for the case of inference a, the maximum obtained values are $f_1(\mathbf{z}) = 0.75371140$ and $\hat{f}_1(\mathbf{z}) = 1$ for $f_j(\mathbf{z})$ and $\hat{f}_j(\mathbf{z})$, respectively. The other values are $f_2(\mathbf{z}) = 0.027212996$, $f_3(\mathbf{z}) = 0.018001331$, $f_4(\mathbf{z}) = 0.014599146$, $f_5(\mathbf{z}) = 0.014546203$ and $\hat{f}_2(\mathbf{z}) = 0.03515625$, $\hat{f}_3(\mathbf{z}) = 0.0234375$, $\hat{f}_4(\mathbf{z}) = 0.0185546875$, $\hat{f}_5(\mathbf{z}) = 0.0185546875$, respectively. Hence, in agreement with Corollary 1, the maximum takes the value 1 and the remaining values follow the values obtained by the actual softmax function. A similar analysis can be carried out for all the inferences a–f and a′–f′ for each one of the CNNs. Furthermore, the same class is output by the CNN in both cases for each inference.
For the case of VGG-16, Figure 7a depicts the histogram of the MSE values. The values are of magnitude $10^{-3}$, and 8616 of them lie in the interval $[1.12 \times 10^{-34}, 7.26 \times 10^{-4}]$. Figure 7b shows the histogram of the Q′ values, which are of magnitude $10^{-2}$; more than 8978 values lie in the interval $[0, 0.00247]$. Table 1b shows that, in the case of the e and e′ inferences, the top values are 0.51182884 and 1, respectively. The second largest values are 0.18920843 and 0.369140625, respectively. In this case, the decision is made with a confidence of 0.51182884 for the actual softmax value and 1 for the softmax-like value. Furthermore, the second largest value, 0.369140625, is not negligible when compared to 1 and hence indicates that the selected class is of low confidence. The same conclusion is drawn for the actual softmax values. Furthermore, for the case of the a–f and a*–f* inferences, the values obtained by the architecture of Figure 2b are close to the actual pdf softmax outputs. In particular, for the d and d* cases, the top-5 values are 0.73901427, 0.10941570, 0.065981410, 0.018329350, 0.012467328 and 0.747070313, 0.110351563, 0.06640625, 0.017578125, 0.01171875, respectively. It is shown that the values are similar. Hence, depending on the application, the alternative architecture shown in Figure 2b can be used to generate pdf-like softmax values as outputs.
Moreover, Figure 8a,b graphically depict the actual softmax output values for inferences A and B, respectively, for the case of the VGG-16 CNN output classification. Furthermore, Figure 8c–f depict the corresponding values for the architectures of Figure 2a,b, respectively. It is shown that, in the case of Figure 2a, the values exhibit a structure similar to that of the actual softmax outputs, while in the case of Figure 2b, the values are close to the actual softmax outputs.
A similar analysis can be performed for the case of VGG-19. In particular, Figure 9a demonstrates that the MSE values are of magnitude $10^{-3}$, 8351 of which lie in the interval $[2.49 \times 10^{-34}, 6.91 \times 10^{-4}]$. In Figure 9b, 8380 values lie in the interval $[0, 0.00192]$. For the case of InceptionV3, the histograms in Figure 10a,b show the MSE and Q′ values, 9194 and 9463 of which lie in the intervals $[2.62 \times 10^{-25}, 7.08 \times 10^{-4}]$ and $[0, 0.003]$, respectively. For the MobileNetV2 network, Figure 11a,b show the MSE and Q′ values, 8990 and 9103 of which lie in the intervals $[2.48 \times 10^{-25}, 1.05 \times 10^{-3}]$ and $[1.55 \times 10^{-7}, 0.004]$, respectively. Furthermore, Table 1c–e lead to conclusions similar to those for VGG-16.
In general, in all cases identical output decisions are obtained for the actual softmax and the softmax-like output layer for each one of the CNNs.
Considering the impact of the data wordlength, let (l, k) denote the fixed-point representation of a number with l integer and k fractional bits. Figure 12a,b depict histograms of the MSE values obtained for the case of 1000 inferences of the VGG-16 CNN. It is shown that the case w = (6, 2) exhibits the smallest MSE values. The reason is that the maximum value of the inputs to the softmax layer is 56 for all 1000 inferences, and hence 6 bits for the integer part are sufficient.
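The wordlength experiment can be mimicked with a few lines (a sketch of ours using synthetic scores in place of the VGG-16 logits), quantizing the inputs to an (l, k) format before applying (18):

```python
# Sketch (ours) of the (l, k) fixed-point wordlength study of Figure 12.
import numpy as np

def quantize(z, l, k):
    """Round to k fractional bits and saturate to the range of l integer bits."""
    step = 2.0 ** (-k)
    zq = np.round(z / step) * step
    return np.clip(zq, 0.0, 2.0 ** l - step)

rng = np.random.default_rng(3)
z = np.abs(rng.normal(scale=10.0, size=1000))      # stand-in scores; the paper uses CNN logits
f_exact = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
for fmt in [(3, 2), (4, 2), (5, 2), (6, 2)]:
    zq = quantize(z, *fmt)
    f_hat = np.exp(zq - zq.max())                  # approximation (18) on quantized inputs
    print(fmt, np.mean((f_exact - f_hat) ** 2))    # MSE combines quantization and approximation error
```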
Summarizing, it is shown that the proposed architecture is well suited for the final stage of a CNN as an alternative to the softmax layer, since the MSE is negligible. Next, the proposed architecture is implemented in hardware and compared with published counterparts.

5. Hardware Implementation Results

This section describes implementation results obtained by synthesizing the proposed architecture outlined in Figure 2. Among several authors reporting results on CNN accelerators, the authors of [22,23,24] have recently published works focusing on hardware implementation of the softmax function. In particular, in [23], a study based on stochastic computation is presented. Geng et al. provide a framework for the design and optimization of softmax implementations in hardware [26]. They also discuss operand bit-width minimization, taking into account application accuracy constraints. Du et al. propose a hardware architecture that derives the softmax function without a divider [25]. The approach relies on an equivalent softmax expression which requires natural logarithms and exponentials. They provide a detailed evaluation of the impact of the particular implementation on several benchmarks. Li et al. describe a 16-bit fixed-point hardware implementation of the softmax function [27]. They use a combination of look-up tables and multi-segment linear approximations for the approximation of the exponentials, a radix-4 Booth–Wallace-based 6-stage pipelined multiplier, and a modified shift-compare divider.
In [24], the architecture relies on LUT-based computations that add complexity and exhibits an area complexity of 444,858 μm² in a 65 nm standard-cell library. For the same library, the architecture in [25] reports an area complexity of 640,000 μm² with 0.8 μW power consumption at a 500 MHz clock frequency. The architecture in [28] reports an area complexity of 104,526 μm² with 4.24 μW power consumption at a 1 GHz clock frequency. The architecture proposed in [26] demonstrates a power consumption of 1.8 μW and an area complexity of 3000 μm² at a 500 MHz clock frequency with a UMC 65 nm standard-cell library. In [27], a 3.3 GHz clock frequency and an area complexity of 34,348 μm² are reported at the 45 nm technology node. Yuan [22] presented an architecture for implementing the softmax layer; nevertheless, there is no discussion of the implementation of the LUTs and there are no synthesis results. The proposed softmax-like function differs from the actual softmax function due to the approximation of the quantity $\log\big((n-1)\,Q'+1\big)$, as discussed in Section 3. In particular, (18) approximates the softmax output for decision-making applications and not as a pdf. The proposed softmax-like function in (30) approximates the outputs as a pdf, depending on the number p of terms used. As p → n, (30) approaches the actual softmax function. The hardware complexity reduction derives from the fact that a limited number, p, of the $z_i$'s contribute to the computation of the softmax function. Summarizing, we compare both architectures depicted in Figure 2a,b with [22] to quantify the impact of p on the hardware complexity. Section 4 shows that the softmax-like function is well suited for use in a CNN. For a fair comparison, we have implemented and synthesized both architectures, the proposed one and that of [22], using a 90 nm 1.0 V CMOS standard-cell library with Synopsys Design Compiler [39].
Figure 13 depicts the architecture obtained from synthesis, where the various building blocks, namely the maximum evaluation, the subtractors and the simplified exponential LUTs that operate in parallel, are shown. Furthermore, registers have been added at the circuit inputs and outputs for applying the delay constraints. Detailed results are given in Table 2a–c for the proposed softmax-like architecture of Figure 2a, the architecture of [22], and the proposed softmax-like architecture of Figure 2b, respectively, for a layer of size 10. Furthermore, the results are plotted in Figure 14a,b, where area vs. delay and power vs. delay are depicted, respectively. The results demonstrate that substantial area savings are achieved with no delay penalty. More specifically, for a 4 ns delay constraint, the area complexity is 25,597 μm² and 43,576 μm² for the architectures of Figure 2b and [22], respectively. For the case where a pdf output is not required, the area complexity is further reduced to 17,293 μm² for the architecture of Figure 2a. Summarizing, depending on the application and the design constraints, there is a trade-off involving the number p of additional terms used for the evaluation of the softmax output. As the value of the parameter p increases, the actual softmax value is better approximated, while the hardware complexity increases. When p = 1, the hardware complexity is minimized, while the softmax output approximation diverges from a pdf.
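For reference, the figures of merit quoted above can be recomputed from the synthesis points of Table 2 that are closest to the 4 ns constraint (a small helper of ours; the labels are ours as well):

```python
# Sketch (ours): area x delay and power x delay products at the ~4 ns points of Table 2.
points = {
    "Fig. 2a (proposed, p = 1)": {"delay_ns": 3.43, "area_um2": 17293, "power_uw": 1423.7},
    "Fig. 2b (proposed, p = 5)": {"delay_ns": 3.92, "area_um2": 25597, "power_uw": 1576.3},
    "[22]":                      {"delay_ns": 3.99, "area_um2": 43576, "power_uw": 3228.2},
}
for name, p in points.items():
    area_delay = p["area_um2"] * p["delay_ns"]
    power_delay = p["power_uw"] * p["delay_ns"]
    print(f"{name:28s} area x delay = {area_delay:9.0f}   power x delay = {power_delay:8.1f}")
```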

6. Conclusions

This paper proposes hardware architectures for implementing the softmax layer in a CNN with, for certain cases, substantially reduced area × delay and power × delay products. A family of architectures that approximate the softmax function has been introduced and evaluated, each member of which is obtained through a design parameter p, which controls the number of terms employed in the approximation. It is found that a very simple approximation using p = 1 suffices to deliver accurate results in certain cases, even though the derived approximation is not a pdf. Furthermore, it has been demonstrated that the proposed architecture is well suited for image and digit classification applications, as it achieves MSEs of the order of $10^{-13}$ and $10^{-5}$, respectively, which are considered low.

Author Contributions

All authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  2. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 27 August 2020).
  3. Girshick, R.B.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  4. Carreras, M.; Deriu, G.; Meloni, P. Flexible Acceleration of Convolutions on FPGAs: NEURAghe 2.0; Ph.D. Workshop; CPS Summer School: Alghero, Italy, 23 September 2019. [Google Scholar]
  5. Zainab, M.; Usmani, A.R.; Mehrban, S.; Hussain, M. FPGA Based Implementations of RNN and CNN: A Brief Analysis. In Proceedings of the 2019 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 1–2 November 2019; pp. 1–8. [Google Scholar] [CrossRef]
  6. Sim, J.; Lee, S.; Kim, L. An Energy-Efficient Deep Convolutional Neural Network Inference Processor With Enhanced Output Stationary Dataflow in 65-nm CMOS. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2020, 28, 87–100. [Google Scholar] [CrossRef]
  7. Hareth, S.; Mostafa, H.; Shehata, K.A. Low power CNN hardware FPGA implementation. In Proceedings of the 2019 31st International Conference on Microelectronics (ICM), Cairo, Egypt, 15–18 December 2019; pp. 162–165. [Google Scholar] [CrossRef]
  8. Zhang, S.; Cao, J.; Zhang, Q.; Zhang, Q.; Zhang, Y.; Wang, Y. An FPGA-Based Reconfigurable CNN Accelerator for YOLO. In Proceedings of the 2020 IEEE 3rd International Conference on Electronics Technology (ICET), Chengdu, China, 8–12 May 2020; pp. 74–78. [Google Scholar]
  9. Tian, T.; Jin, X.; Zhao, L.; Wang, X.; Wang, J.; Wu, W. Exploration of Memory Access Optimization for FPGA-based 3D CNN Accelerator. In Proceedings of the 2020 Design, Automation Test in Europe Conference Exhibition (DATE), Grenoble, France, 9–13 March 2020; pp. 1650–1655. [Google Scholar]
  10. Nakahara, H.; Que, Z.; Luk, W. High-Throughput Convolutional Neural Network on an FPGA by Customized JPEG Compression. In Proceedings of the 2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), Fayetteville, AR, USA, 3–6 May 2020; pp. 1–9. [Google Scholar]
  11. Shahan, K.A.; Sheeba Rani, J. FPGA based convolution and memory architecture for Convolutional Neural Network. In Proceedings of the 2020 33rd International Conference on VLSI Design and 2020 19th International Conference on Embedded Systems (VLSID), Bangalore, India, 4–8 January 2020; pp. 183–188. [Google Scholar]
  12. Shan, J.; Lazarescu, M.T.; Cortadella, J.; Lavagno, L.; Casu, M.R. Power-Optimal Mapping of CNN Applications to Cloud-Based Multi-FPGA Platforms. IEEE Trans. Circuits Syst. II Express Briefs 2020, 1. [Google Scholar] [CrossRef]
  13. Zhang, W.; Liao, X.; Jin, H. Fine-grained Scheduling in FPGA-Based Convolutional Neural Networks. In Proceedings of the 2020 IEEE 5th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), Chengdu, China, 10–13 April 2020; pp. 120–128. [Google Scholar]
  14. Zhang, M.; Li, L.; Wang, H.; Liu, Y.; Qin, H.; Zhao, W. Optimized Compression for Implementing Convolutional Neural Networks on FPGA. Electronics 2019, 8, 295. [Google Scholar] [CrossRef] [Green Version]
  15. Wang, D.; Shen, J.; Wen, M.; Zhang, C. Efficient Implementation of 2D and 3D Sparse Deconvolutional Neural Networks with a Uniform Architecture on FPGAs. Electronics 2019, 8, 803. [Google Scholar] [CrossRef] [Green Version]
  16. Bank-Tavakoli, E.; Ghasemzadeh, S.A.; Kamal, M.; Afzali-Kusha, A.; Pedram, M. POLAR: A Pipelined/ Overlapped FPGA-Based LSTM Accelerator. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2020, 28, 838–842. [Google Scholar] [CrossRef]
  17. Xiang, L.; Lu, S.; Wang, X.; Liu, H.; Pang, W.; Yu, H. Implementation of LSTM Accelerator for Speech Keywords Recognition. In Proceedings of the 2019 IEEE 4th International Conference on Integrated Circuits and Microsystems (ICICM), Beijing, China, 25–27 October 2019; pp. 195–198. [Google Scholar] [CrossRef]
  18. Azari, E.; Vrudhula, S. An Energy-Efficient Reconfigurable LSTM Accelerator for Natural Language Processing. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 4450–4459. [Google Scholar] [CrossRef]
  19. Graves, A.; Wayne, G.; Reynolds, M.; Harley, T.; Danihelka, I.; Grabska-Barwinska, A.; Colmenarejo, S.G.; Grefenstette, E.; Ramalho, T.; Agapiou, J.; et al. Hybrid computing using a neural network with dynamic external memory. Nature 2016, 538, 471–476. [Google Scholar] [CrossRef] [PubMed]
  20. Graves, A.; Wayne, G.; Danihelka, I. Neural Turing Machines. arXiv 2014, arXiv:1410.5401. [Google Scholar]
  21. Olah, C.; Carter, S. Attention and Augmented Recurrent Neural Networks. Distill 2016. [Google Scholar] [CrossRef]
  22. Yuan, B. Efficient hardware architecture of softmax layer in deep neural network. In Proceedings of the 2016 29th IEEE International System-on-Chip Conference (SOCC), Seattle, WA, USA, 6–9 September 2016; pp. 323–326. [Google Scholar] [CrossRef]
  23. Hu, R.; Tian, B.; Yin, S.; Wei, S. Efficient Hardware Architecture of Softmax Layer in Deep Neural Network. In Proceedings of the 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP), Shanghai, China, 19–21 November 2018; pp. 1–5. [Google Scholar] [CrossRef]
  24. Sun, Q.; Di, Z.; Lv, Z.; Song, F.; Xiang, Q.; Feng, Q.; Fan, Y.; Yu, X.; Wang, W. A High Speed SoftMax VLSI Architecture Based on Basic-Split. In Proceedings of the 2018 14th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT), Qingdao, China, 31 October–3 November 2018; pp. 1–3. [Google Scholar] [CrossRef]
  25. Du, G.; Tian, C.; Li, Z.; Zhang, D.; Yin, Y.; Ouyang, Y. Efficient Softmax Hardware Architecture for Deep Neural Networks. In Proceedings of the 2019 on Great Lakes Symposium on VLSI, Tysons Corner, VA, USA, 9–11 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 75–80. [Google Scholar] [CrossRef]
  26. Geng, X.; Lin, J.; Zhao, B.; Kong, A.; Aly, M.M.S.; Chandrasekhar, V. Hardware-Aware Softmax Approximation for Deep Neural Networks. In Computer Vision—ACCV 2018, Proceedings of the 14th Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018, Revised Selected Papers, Part IV; Jawahar, C.V., Li, H., Mori, G., Schindler, K., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11364, pp. 107–122. [Google Scholar] [CrossRef]
  27. Li, Z.; Li, H.; Jiang, X.; Chen, B.; Zhang, Y.; Du, G. Efficient FPGA Implementation of Softmax Function for DNN Applications. In Proceedings of the 2018 12th IEEE International Conference on Anti-counterfeiting, Security, and Identification (ASID), Xiamen, China, 9–11 November 2018; pp. 212–216. [Google Scholar]
  28. Alabassy, B.; Safar, M.; El-Kharashi, M.W. A High-Accuracy Implementation for Softmax Layer in Deep Neural Networks. In Proceedings of the 2020 15th Design Technology of Integrated Systems in Nanoscale Era (DTIS), Marrakech, Morocco, 1–3 April 2020; pp. 1–6. [Google Scholar]
  29. Dukhan, M.; Ablavatski, A. The Two-Pass Softmax Algorithm. arXiv 2020, arXiv:2001.04438. [Google Scholar]
  30. Wei, Z.; Arora, A.; Patel, P.; John, L.K. Design Space Exploration for Softmax Implementations. In Proceedings of the 31st IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), Manchester, UK, 6–8 July 2020. [Google Scholar]
  31. Kouretas, I.; Paliouras, V. Simplified Hardware Implementation of the Softmax Activation Function. In Proceedings of the 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST), Thessaloniki, Greece, 13–15 May 2019; pp. 1–4. [Google Scholar] [CrossRef]
  32. Hertz, E.; Nilsson, P. Parabolic synthesis methodology implemented on the sine function. In Proceedings of the 2009 IEEE International Symposium on Circuits and Systems, Taipei, Taiwan, 24–27 May 2009; pp. 253–256. [Google Scholar] [CrossRef] [Green Version]
  33. Yuan, W.; Xu, Z. FPGA based implementation of low-latency floating-point exponential function. In Proceedings of the IET International Conference on Smart and Sustainable City 2013 (ICSSC 2013), Shanghai, China, 19–20 August 2013; pp. 226–229. [Google Scholar] [CrossRef]
  34. Tang, P.T.P. Table-lookup algorithms for elementary functions and their error analysis. In Proceedings of the 10th IEEE Symposium on Computer Arithmetic, Grenoble, France, 26–28 June 1991; pp. 232–236. [Google Scholar] [CrossRef] [Green Version]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  36. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  37. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  38. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef] [Green Version]
  39. Synopsys. Available online: https://www.synopsys.com (accessed on 27 August 2020).
Figure 1. A typical deep learning network.
Figure 2. Proposed softmax-like layer architecture. The circuit $\max_k(z_k)$, $k = 0, 1, \ldots, n$, computes the maximum value m of the input vector $\mathbf{z} = [z_1 \cdots z_n]^T$. Next, m is subtracted from each $z_k$, as described in (17). (a) Proposed softmax-like architecture with p = 1. Each output $\hat{f}_j(\mathbf{z})$ is generated through (18) as the final value of the softmax-like layer output. (b) Proposed softmax-like architecture. The overline notation denotes negation. Additional p terms are added, and subsequently each output $\hat{f}_j(\mathbf{z})$ is generated through (30) as the final value of the softmax-like layer output.
Figure 3. Tree structure for the computation of $\max_k(z_k)$, $k = 0, 1, \ldots, 7$. The notation $z_{ij}$ denotes the maximum of $z_i$ and $z_j$, while $z_{ijkl}$ denotes the maximum of $z_{ij}$ and $z_{kl}$.
Figure 4. Values obtained from Examples 1 and 2. (a) Q′ = 0.9946, MSE = 0.8502. (b) Q′ = 0.0449, MSE = 0.0018.
Figure 5. Values obtained from ImageNet image classification with 1000 classes and from a custom net for MNIST digit classification with 10 classes. (a) Q′ = 0.001, MSE = $1.5205 \times 10^{-5}$. (b) Q′ = 0.1111, MSE = $6.0841 \times 10^{-13}$.
Figure 6. Values obtained from ImageNet image classification with the ResNet-50 network for 1000 classes. (a) Histogram of the MSE values. (b) Histogram of the Q′ values.
Figure 7. Values obtained from ImageNet image classification with the VGG-16 network for 1000 classes. (a) Histogram of the MSE values. (b) Histogram of the Q′ values.
Figure 8. Actual and proposed approximation softmax values for two inferences, namely A and B, for the VGG-16 CNN. (a) Actual softmax values for inference A. (b) Actual softmax values for inference B. (c) Proposed approximation softmax values for inference A based on architecture of Figure 2a and (18). (d) Proposed approximation softmax values for inference B based on architecture of Figure 2a and (18). (e) Proposed approximation softmax values for inference A based on architecture of Figure 2b and (30) with p = 10 . (f) Proposed approximation softmax values for inference B based on architecture of Figure 2b and (30) with p = 10 .
Figure 9. Values obtained from ImageNet image classification with the VGG-19 network for 1000 classes. (a) Histogram of the MSE values. (b) Histogram of the Q′ values.
Figure 10. Values obtained from ImageNet image classification with the InceptionV3 network for 1000 classes. (a) Histogram of the MSE values. (b) Histogram of the Q′ values.
Figure 11. Values obtained from ImageNet image classification with the MobileNetV2 network for 1000 classes. (a) Histogram of the MSE values. (b) Histogram of the Q′ values.
Figure 12. Histograms for the MSE values for various data wordlengths for the case of 1000 inferences in the VGG-16 CNN. (a) Histograms for the MSE values for (3, 2) and (4, 2) data wordlength. (b) Histograms for the MSE values for (5, 2) and (6, 2) data wordlength.
Figure 13. Proposed architecture obtained by synthesis.
Figure 14. Area, delay and power complexity plots for a softmax layer of size 10, for the proposed circuits and the circuit of [22], in the case of a 10-bit wordlength implemented in a standard-cell library. A, B and C in the legends denote the architectures of Figure 2a, Figure 2b and [22], respectively. (a) Area vs. delay plot. (b) Power vs. delay plot.
Table 1. Top-5 softmax values for six indicative inferences for each model. The actual softmax values and the proposed method values are obtained by (1) and (18), respectively. For the VGG-16 CNN, values obtained by (30) are also presented.
(a) ResNet-50.
Inference | Top-5 Softmax Values
(1)
a | 0.75371140 | 0.027212996 | 0.018001331 | 0.014599146 | 0.014546203
b | 0.99992204 | 4.0420113 × 10^−5 | 8.1600538 × 10^−6 | 5.8431901 × 10^−6 | 2.5279753 × 10^−6
c | 0.36263093 | 0.13568024 | 0.090758167 | 0.063106202 | 0.061747193
d | 0.93937486 | 0.011548955 | 0.010892190 | 0.0068663955 | 0.0043140189
e | 0.98696542 | 0.0090351542 | 0.0010830049 | 0.00068612932 | 0.00025251327
f | 0.99833566 | 0.0015795525 | 4.6098357 × 10^−5 | 2.2146454 × 10^−5 | 1.0724646 × 10^−5
(18)
a′ | 1 | 0.03515625 | 0.0234375 | 0.0185546875 | 0.0185546875
b′ | 1 | 0 | 0 | 0 | 0
c′ | 1 | 0.3740234375 | 0.25 | 0.173828125 | 0.169921875
d′ | 1 | 0.01171875 | 0.0107421875 | 0.0068359375 | 0.00390625
e′ | 1 | 0.0087890625 | 0.0009765625 | 0 | 0
f′ | 1 | 0.0009765625 | 0 | 0 | 0
(b) VGG-16.
Inference | Top-5 Softmax Values
(1)
a | 0.99999309 | 2.3484120 × 10^−6 | 9.8677640 × 10^−7 | 4.2830493 × 10^−7 | 3.1285174 × 10^−7
b | 0.99389344 | 0.0014656042 | 0.00072381413 | 0.00062438881 | 0.00028156079
c | 0.94363135 | 0.022138771 | 0.0087750498 | 0.0048798379 | 0.0047590565
d | 0.73901427 | 0.10941570 | 0.065981410 | 0.018329350 | 0.012467328
e | 0.51182884 | 0.18920843 | 0.10042682 | 0.055410255 | 0.030226296
f | 0.59114474 | 0.40821150 | 0.00043615574 | 0.00017272410 | 1.3683101 × 10^−5
(18)
a′ | 1 | 0 | 0 | 0 | 0
b′ | 1 | 0.0009765625 | 0 | 0 | 0
c′ | 1 | 0.0234375 | 0.0087890625 | 0.0048828125 | 0.0048828125
d′ | 1 | 0.1474609375 | 0.0888671875 | 0.0244140625 | 0.0166015625
e′ | 1 | 0.369140625 | 0.1962890625 | 0.107421875 | 0.05859375
f′ | 1 | 0.6904296875 | 0 | 0 | 0
(30)
a* | 0.999023438 | 0 | 0 | 0 | 0
b* | 0.99609375 | 0.000976563 | 0 | 0 | 0
c* | 0.954101563 | 0.021484375 | 0.008789063 | 0.004882813 | 0.00390625
d* | 0.747070313 | 0.110351563 | 0.06640625 | 0.017578125 | 0.01171875
e* | 0.456054688 | 0.16796875 | 0.088867188 | 0.048828125 | 0.026367188
f* | 0.5 | 0.345703125 | 0 | 0 | 0
(c) VGG-19.
Inference | Top-5 Softmax Values
(1)
a | 0.99204987 | 0.0035423702 | 0.0018605839 | 0.00044701522 | 0.00035935538
b | 0.99999964 | 2.9884677 × 10^−7 | 1.5874324 × 10^−11 | 1.5047866 × 10^−11 | 9.9192204 × 10^−13
c | 0.99405837 | 0.0013178071 | 0.00071631809 | 0.00040839455 | 0.00027133274
d | 0.19257018 | 0.12952097 | 0.12107860 | 0.10589719 | 0.074582554
e | 0.99603385 | 0.0014963613 | 0.0010812994 | 0.00024322474 | 0.00015848021
f | 0.74559504 | 0.15503055 | 0.010651816 | 0.0081892628 | 0.0075844983
(18)
a′ | 1 | 0.0029296875 | 0.0009765625 | 0 | 0
b′ | 1 | 0 | 0 | 0 | 0
c′ | 1 | 0.0009765625 | 0 | 0 | 0
d′ | 1 | 0.671875 | 0.6279296875 | 0.548828125 | 0.38671875
e′ | 1 | 0.0009765625 | 0.0009765625 | 0 | 0
f′ | 1 | 0.20703125 | 0.013671875 | 0.0107421875 | 0.009765625
(d) InceptionV3.
Inference | Top-5 Softmax Values
(1)
a | 0.98136926 | 0.00067191740 | 0.00022632803 | 0.00020886297 | 0.00018680355
b | 0.52392030 | 0.17270486 | 0.12838276 | 0.0024479097 | 0.0017230138
c | 0.61721277 | 0.042022489 | 0.038270507 | 0.011870607 | 0.0036431390
d | 0.96187764 | 0.0011140818 | 0.00084153039 | 0.00069097377 | 0.00045776321
e | 0.99643219 | 0.00058087677 | 0.00015713122 | 5.3965716 × 10^−5 | 4.0285959 × 10^−5
f | 0.45723280 | 0.41415739 | 0.00078048115 | 0.00071852183 | 0.00068869896
(18)
a′ | 1 | 0 | 0 | 0 | 0
b′ | 1 | 0.3291015625 | 0.244140625 | 0.00390625 | 0.0029296875
c′ | 1 | 0.0673828125 | 0.0615234375 | 0.0185546875 | 0.005859375
d′ | 1 | 0.0009765625 | 0 | 0 | 0
e′ | 1 | 0 | 0 | 0 | 0
f′ | 1 | 0.9052734375 | 0.0009765625 | 0.0009765625 | 0.0009765625
(e) MobileNetV2.
Inference | Top-5 Softmax Values
(1)
a | 0.81305408 | 0.014405688 | 0.012406061 | 0.0091119893 | 0.0077789603
b | 0.95702046 | 0.0042284634 | 0.0040278519 | 0.0020813416 | 0.00098748843
c | 0.49231452 | 0.022776684 | 0.020905942 | 0.018753875 | 0.018386556
d | 0.60401917 | 0.29827181 | 0.015593613 | 0.010511264 | 0.0038427035
e | 0.97501647 | 0.0032496843 | 0.0014790110 | 0.0008857667 | 0.00076536590
f | 0.87092900 | 0.022609057 | 0.0044059716 | 0.0023696721 | 0.0014177967
(18)
a′ | 1 | 0.017578125 | 0.0146484375 | 0.0107421875 | 0.0087890625
b′ | 1 | 0.00390625 | 0.00390625 | 0.001953125 | 0.0009765625
c′ | 1 | 0.0458984375 | 0.0419921875 | 0.0380859375 | 0.037109375
d′ | 1 | 0.4931640625 | 0.025390625 | 0.0166015625 | 0.005859375
e′ | 1 | 0.0029296875 | 0.0009765625 | 0 | 0
f′ | 1 | 0.025390625 | 0.0048828125 | 0.001953125 | 0.0009765625
Table 2. Area, delay and power consumption for the 10-class softmax layer output of a convolutional neural network (CNN).
(a) Architecture of Figure 2a.
Delay (ns) | Area (μm²) | Power (μW)
2.93 | 16,891 | 1611.6
3.43 | 17,293 | 1423.7
3.95 | 15,550 | 1070.5
4.42 | 15,788 | 936.8
4.94 | 15,084 | 812.4
5.47 | 15,349 | 503.5
(b) Architecture in [22].
Delay (ns) | Area (μm²) | Power (μW)
3.99 | 43,576 | 3228.2
5.19 | 32,968 | 1695.1
6.45 | 25,445 | 871.8
7.95 | 26,358 | 714.4
9.44 | 25,846 | 624.9
10.41 | 26,154 | 570.2
(c) Architecture of Figure 2b with p = 5.
Delay (ns) | Area (μm²) | Power (μW)
3.42 | 27,615 | 1933.2
3.92 | 25,597 | 1576.3
4.91 | 23,654 | 1216.4
5.42 | 21,458 | 1050.6
6.45 | 20,251 | 838.6
7.94 | 20,147 | 636.1
