NMSIS-NN  Version 1.3.1
NMSIS NN Software Library
Activation Functions

Perform activation layers, including ReLU (Rectified Linear Unit), sigmoid and tanh.

Functions

riscv_nmsis_nn_status riscv_nn_activation_s16 (const int16_t *input, int16_t *output, const int32_t size, const int32_t left_shift, const riscv_nn_activation_type type)
 s16 neural network activation function using direct table look-up.
 
void riscv_nn_activations_direct_q15 (q15_t *data, uint16_t size, uint16_t int_width, riscv_nn_activation_type type)
 Q15 neural network activation function using direct table look-up.
 
void riscv_nn_activations_direct_q7 (q7_t *data, uint16_t size, uint16_t int_width, riscv_nn_activation_type type)
 Q7 neural network activation function using direct table look-up.
 
void riscv_relu6_s8 (int8_t *data, uint16_t size)
 s8 ReLU6 function.
 
void riscv_relu_q15 (int16_t *data, uint16_t size)
 Q15 ReLU function.
 
void riscv_relu_q7 (int8_t *data, uint16_t size)
 Q7 ReLU function.
 

Detailed Description

Perform activation layers, including ReLU (Rectified Linear Unit), sigmoid and tanh.

Function Documentation

◆ riscv_nn_activation_s16()

riscv_nmsis_nn_status riscv_nn_activation_s16(const int16_t *input,
                                              int16_t *output,
                                              const int32_t size,
                                              const int32_t left_shift,
                                              const riscv_nn_activation_type type)

s16 neural network activation function using direct table look-up.

Parameters
    [in]   input       pointer to input data
    [out]  output      pointer to output
    [in]   size        number of elements
    [in]   left_shift  bit-width of the integer part, assumed to be smaller than 3
    [in]   type        type of activation functions
Returns
The function returns RISCV_NMSIS_NN_SUCCESS.

Supported framework: TensorFlow Lite for Microcontrollers (TFLM). This activation function must be bit-exact with the corresponding TFLM tanh and sigmoid activation functions.
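
A minimal usage sketch, not taken from the library's own documentation: it runs the table-based sigmoid over a four-element s16 buffer. The riscv_nnfunctions.h include and the NN_SIGMOID enumerator are assumptions based on the library's naming conventions; verify both against your NMSIS-NN headers.

    #include "riscv_nnfunctions.h"
    #include <stdio.h>

    int main(void)
    {
        int16_t input[4]  = {-16384, -4096, 0, 16384};
        int16_t output[4] = {0};

        /* left_shift must stay below 3 per the parameter description;
         * 2 integer bits are used here purely for illustration. */
        riscv_nmsis_nn_status status =
            riscv_nn_activation_s16(input, output, 4, 2, NN_SIGMOID);

        if (status == RISCV_NMSIS_NN_SUCCESS) {
            for (int i = 0; i < 4; i++) {
                printf("sigmoid(%6d) -> %6d\n", input[i], output[i]);
            }
        }
        return 0;
    }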

◆ riscv_nn_activations_direct_q15()

void riscv_nn_activations_direct_q15(q15_t *data,
                                     uint16_t size,
                                     uint16_t int_width,
                                     riscv_nn_activation_type type)

Q15 neural network activation function using direct table look-up.

Note
Refer to the header file for details.
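
For illustration, a call sketch under the same assumptions as above (header name and NN_SIGMOID enumerator unverified); the parameters mirror the Q7 variant documented below:

    #include "riscv_nnfunctions.h"

    void apply_sigmoid_q15(q15_t *buf, uint16_t len)
    {
        /* int_width = 0: the buffer is treated as pure fractional
         * Q0.15 data, with no integer bits. */
        riscv_nn_activations_direct_q15(buf, len, 0, NN_SIGMOID);
    }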

◆ riscv_nn_activations_direct_q7()

void riscv_nn_activations_direct_q7(q7_t *data,
                                    uint16_t size,
                                    uint16_t int_width,
                                    riscv_nn_activation_type type)

Q7 neural network activation function using direct table look-up.

Parameters
    [in,out]  data       pointer to input
    [in]      size       number of elements
    [in]      int_width  bit-width of the integer part, assumed to be smaller than 3
    [in]      type       type of activation functions

This is the direct table look-up approach.

The integer part of the fixed-point input is assumed to be <= 3 bits. A wider integer part makes little sense here: after saturation, any of these activation functions produce the same result.
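
A hedged in-place usage sketch; NN_TANH and the header name follow the same assumed naming scheme as above:

    #include "riscv_nnfunctions.h"

    void apply_tanh_q7(q7_t *buf, uint16_t len)
    {
        /* int_width = 0: the buffer is treated as pure fractional Q0.7
         * data, well inside the documented limit on the integer part. */
        riscv_nn_activations_direct_q7(buf, len, 0, NN_TANH);
    }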

◆ riscv_relu6_s8()

void riscv_relu6_s8(int8_t *data, uint16_t size)

s8 ReLU6 function.

Parameters
    [in,out]  data  pointer to input
    [in]      size  number of elements
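
NMSIS-NN is derived from CMSIS-NN, whose s8 ReLU6 clamps each element to [0, 6] in place; assuming this port matches that behaviour, a scalar reference sketch is:

    #include <stdint.h>

    /* Scalar reference for s8 ReLU6, assuming the library clamps each
     * element to [0, 6] in place as the CMSIS-NN original does. */
    static void relu6_s8_ref(int8_t *data, uint16_t size)
    {
        for (uint16_t i = 0; i < size; i++) {
            int8_t v = data[i];
            if (v < 0) {
                v = 0;
            } else if (v > 6) {
                v = 6;
            }
            data[i] = v;
        }
    }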

◆ riscv_relu_q15()

void riscv_relu_q15(int16_t *data, uint16_t size)

Q15 ReLU function.

Parameters
    [in,out]  data  pointer to input
    [in]      size  number of elements

◆ riscv_relu_q7()

void riscv_relu_q7(int8_t *data, uint16_t size)

Q7 ReLU function.

Parameters
    [in,out]  data  pointer to input
    [in]      size  number of elements
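
As a usage sketch, the in-place ReLU is typically applied directly to a layer's output buffer; the buffer name and size below are illustrative:

    #include "riscv_nnfunctions.h"

    #define OUT_SIZE 64U /* illustrative layer output size */

    void activate_layer(q7_t out_buf[OUT_SIZE])
    {
        /* Zeroes negative elements in place; non-negative values are untouched. */
        riscv_relu_q7(out_buf, OUT_SIZE);
    }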