SMAUG
Simulating Machine Learning Applications on gem5-Aladdin
Utilities for writing and invoking Aladdin kernels from Operators.

The example below is the SMV elementwise addition kernel. It loads the two FP16 operands from host memory into local FP32 arrays, adds them VECTOR_SIZE elements at a time using the v8fp_t vector type, and stores the FP16 results back to host memory.

    #include "smaug/operators/smv/kernels/params.h"

    void smv_eltwise_add_nc_vec_fxp(float16* host_inputs0,
                                    float16* host_inputs1,
                                    float16* host_results,
                                    float* inputs0,
                                    float* inputs1,
                                    float* results,
                                    int inputs_size) {
        // Copy the FP16 operands from host memory into the local FP32 arrays.
        host_load_fp16(inputs0, host_inputs0, inputs_size, 0, 0);
        host_load_fp16(inputs1, host_inputs1, inputs_size, 0, 0);
        // View the local arrays as arrays of 8-wide vectors.
        VEC_ARRAY_1D(v8fp_t, _inputs0, inputs0);
        VEC_ARRAY_1D(v8fp_t, _inputs1, inputs1);
        VEC_ARRAY_1D(v8fp_t, _results, results);
        // Add VECTOR_SIZE elements per loop iteration.
        for (int i = 0; i < inputs_size / VECTOR_SIZE; i++) {
            _results[i] = _inputs0[i] + _inputs1[i];
        }
        // Convert the results back to FP16 and write them to host memory.
        host_store_fp16(results, host_results, inputs_size, 0, 0);
    }
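A new elementwise kernel can reuse the same load / vectorized-compute / store structure. The sketch below is illustrative rather than part of SMAUG (the kernel name smv_eltwise_mul_nc_vec_fxp is hypothetical); it keeps the signature and data movement of the addition kernel and only changes the arithmetic in the loop.

    // Hypothetical kernel, shown only to illustrate the pattern: it follows
    // the same signature and FP16 load/store conventions as the addition
    // kernel above, with the vector add replaced by a vector multiply.
    void smv_eltwise_mul_nc_vec_fxp(float16* host_inputs0,
                                    float16* host_inputs1,
                                    float16* host_results,
                                    float* inputs0,
                                    float* inputs1,
                                    float* results,
                                    int inputs_size) {
        host_load_fp16(inputs0, host_inputs0, inputs_size, 0, 0);
        host_load_fp16(inputs1, host_inputs1, inputs_size, 0, 0);
        VEC_ARRAY_1D(v8fp_t, _inputs0, inputs0);
        VEC_ARRAY_1D(v8fp_t, _inputs1, inputs1);
        VEC_ARRAY_1D(v8fp_t, _results, results);
        for (int i = 0; i < inputs_size / VECTOR_SIZE; i++) {
            // Vector types support elementwise arithmetic, so one statement
            // multiplies VECTOR_SIZE pairs of values.
            _results[i] = _inputs0[i] * _inputs1[i];
        }
        host_store_fp16(results, host_results, inputs_size, 0, 0);
    }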
The functions, types, and macros used above are documented below.

typedef fp_t v8fp_t
    8 packed 32-bit floating point values.

void host_load_fp16(float *local_data, float16 *remote_data, int num_elems, int local_offset, int remote_offset)
void host_store_fp16(float *local_data, float16 *remote_data, int num_elems, int local_offset, int remote_offset)
    Aladdin kernels to load/store FP16 data to/from host memory.
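The local_offset and remote_offset arguments let a kernel move one tile of a larger host array at a time while reusing a single local buffer. The sketch below is illustrative (the helper name, tile_len, and num_tiles are hypothetical) and assumes both offsets are element indices into local_data and remote_data, respectively.

    // Hypothetical tiled processing loop: stream a large FP16 host array
    // through a fixed-size local FP32 scratchpad, one tile per iteration.
    void process_in_tiles(float16* host_inputs0, float16* host_results,
                          float* inputs0, float* results,
                          int num_tiles, int tile_len) {
        for (int t = 0; t < num_tiles; t++) {
            // Load tile t from host memory into the local buffer
            // (element offsets are assumed).
            host_load_fp16(inputs0, host_inputs0, tile_len, 0, t * tile_len);
            // ... compute on inputs0 and write results ...
            // Store this tile's FP16 results back to host memory.
            host_store_fp16(results, host_results, tile_len, 0, t * tile_len);
        }
    }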
void smv_eltwise_add_nc_vec_fxp(float16 *host_inputs0, float16 *host_inputs1, float16 *host_results, float *inputs0, float *inputs1, float *results, int inputs_size)
    SMV elementwise addition kernel, shown in full in the example above.
#define VECTOR_SIZE
    Vector size used in SMV backends.
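For context, vector types like v8fp_t are commonly built from the scalar fp_t type with the GCC/Clang vector_size extension. The definitions below are a sketch for illustration only and may not match SMAUG's exact headers.

    // Illustrative definitions only; SMAUG's headers provide the real ones.
    // VECTOR_SIZE scalar floats are packed into a single vector value.
    #define VECTOR_SIZE 8
    typedef float fp_t;
    typedef fp_t v8fp_t __attribute__((__vector_size__(VECTOR_SIZE * sizeof(fp_t))));

    // Arithmetic on vector types is elementwise, which is why one statement
    // in the kernel loop above adds eight values at once.
    v8fp_t double_all(v8fp_t x) { return x + x; }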