NMSIS-NN Version 1.4.1
NMSIS NN Software Library
Functions

| Return type | Function | Description |
| --- | --- | --- |
| riscv_nmsis_nn_status | riscv_nn_activation_s16 (const int16_t *input, int16_t *output, const int32_t size, const int32_t left_shift, const riscv_nn_activation_type type) | s16 neural network activation function using direct table look-up |
| void | riscv_nn_activations_direct_q15 (q15_t *data, uint16_t size, uint16_t int_width, riscv_nn_activation_type type) | Q15 neural network activation function using direct table look-up |
| void | riscv_nn_activations_direct_q7 (q7_t *data, uint16_t size, uint16_t int_width, riscv_nn_activation_type type) | Q7 neural network activation function using direct table look-up |
| void | riscv_relu6_s8 (int8_t *data, uint16_t size) | s8 ReLU6 function |
| void | riscv_relu_q15 (int16_t *data, uint16_t size) | Q15 ReLU function |
| void | riscv_relu_q7 (int8_t *data, uint16_t size) | Q7 ReLU function |
These functions perform activation layers, including ReLU (Rectified Linear Unit), sigmoid, and tanh.
riscv_nmsis_nn_status riscv_nn_activation_s16 (const int16_t *input, int16_t *output, const int32_t size, const int32_t left_shift, const riscv_nn_activation_type type)
s16 neural network activation function using direct table look-up
Parameters

| [in] | input | pointer to input data |
| [out] | output | pointer to output |
| [in] | size | number of elements |
| [in] | left_shift | bit-width of the integer part, assumed to be smaller than 3 |
| [in] | type | type of activation functions |
Returns
RISCV_NMSIS_NN_SUCCESS

Supported framework: TensorFlow Lite for Microcontrollers. This activation function must be bit-exact with the corresponding TFLM tanh and sigmoid activation functions.
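A minimal usage sketch. The wrapper name, element count, and the NN_TANH enumerator of riscv_nn_activation_type are assumptions for illustration; verify the exact enumerator names and the riscv_nnfunctions.h header in your NMSIS-NN release.

```c
#include "riscv_nnfunctions.h" /* NMSIS-NN public API header */

#define NUM_ELEMENTS 32 /* illustrative buffer length */

/* Apply a table-based tanh to a block of s16 samples.
 * NN_TANH is assumed here as the riscv_nn_activation_type enumerator. */
riscv_nmsis_nn_status tanh_block_s16(const int16_t *in, int16_t *out)
{
    /* left_shift gives the bit-width of the integer part of the
     * fixed-point input; the docs state it should be smaller than 3. */
    const int32_t left_shift = 2;

    return riscv_nn_activation_s16(in, out, NUM_ELEMENTS, left_shift, NN_TANH);
}
```

The returned status should be checked against RISCV_NMSIS_NN_SUCCESS before the output buffer is consumed.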
void riscv_nn_activations_direct_q15 (q15_t *data, uint16_t size, uint16_t int_width, riscv_nn_activation_type type)
Q15 neural network activation function using direct table look-up.

Parameters

| [in,out] | data | pointer to input |
| [in] | size | number of elements |
| [in] | int_width | bit-width of the integer part, assumed to be smaller than 3 |
| [in] | type | type of activation functions |
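A sketch of the in-place Q15 variant. NN_SIGMOID is an assumed enumerator name, and the int_width interpretation (n integer bits leave 15 - n fractional bits) follows the fixed-point convention described above.

```c
#include "riscv_nnfunctions.h"

#define NUM_ELEMENTS 16 /* illustrative buffer length */

/* In-place sigmoid over a Q15 buffer. With int_width = 2 the inputs are
 * read as Q2.13 fixed-point, i.e. values in roughly [-4, 4). The buffer
 * is overwritten with the activation result. */
void sigmoid_block_q15(q15_t *buf)
{
    riscv_nn_activations_direct_q15(buf, NUM_ELEMENTS, 2, NN_SIGMOID);
}
```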
void riscv_nn_activations_direct_q7 (q7_t *data, uint16_t size, uint16_t int_width, riscv_nn_activation_type type)
Q7 neural network activation function using direct table look-up.
Parameters

| [in,out] | data | pointer to input |
| [in] | size | number of elements |
| [in] | int_width | bit-width of the integer part, assumed to be smaller than 3 |
| [in] | type | type of activation functions |
This is the direct table look-up approach. The integer part of the fixed-point input is assumed to be <= 3; a wider integer part adds nothing in practice, because saturation followed by any of these activation functions produces the same result.
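A corresponding sketch for the Q7 variant; as above, NN_TANH is an assumed enumerator name.

```c
#include "riscv_nnfunctions.h"

#define NUM_ELEMENTS 64 /* illustrative buffer length */

/* In-place tanh over a Q7 buffer. With int_width = 2 the inputs are read
 * as Q2.5 fixed-point, covering roughly [-4, 4), which matches the
 * "integer part <= 3" assumption stated above. */
void tanh_block_q7(q7_t *buf)
{
    riscv_nn_activations_direct_q7(buf, NUM_ELEMENTS, 2, NN_TANH);
}
```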
void riscv_relu6_s8 (int8_t *data, uint16_t size)
s8 ReLU6 function
Parameters

| [in,out] | data | pointer to input |
| [in] | size | number of elements |
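A usage sketch, assuming the function clamps each element into [0, 6] in the tensor's quantized representation (the conventional ReLU6 behaviour; the source here does not spell it out).

```c
#include "riscv_nnfunctions.h"

#define NUM_ELEMENTS 128 /* illustrative buffer length */

/* In-place ReLU6 on s8 activations: negative values become 0 and values
 * above 6 are clamped to 6, per the assumption stated above. */
void relu6_block_s8(int8_t *act)
{
    riscv_relu6_s8(act, NUM_ELEMENTS);
}
```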
void riscv_relu_q15 (int16_t *data, uint16_t size)
Q15 ReLU function.
Parameters

| [in,out] | data | pointer to input |
| [in] | size | number of elements |
void riscv_relu_q7 (int8_t *data, uint16_t size)
Q7 ReLU function.
Parameters

| [in,out] | data | pointer to input |
| [in] | size | number of elements |
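A sketch covering both ReLU variants; the wrapper name and element count are illustrative.

```c
#include "riscv_nnfunctions.h"

#define NUM_ELEMENTS 128 /* illustrative buffer length */

/* In-place ReLU on a Q7 buffer: negative elements become zero, the rest
 * pass through unchanged. riscv_relu_q15 applies the same operation to
 * int16_t (Q15) buffers. */
void relu_block_q7(int8_t *act)
{
    riscv_relu_q7(act, NUM_ELEMENTS);
}
```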