SMAUG
Simulating Machine Learning Applications on gem5-Aladdin
_activation_param_t | Parameters to the activation function hardware
_SamplingInfo | Simulation sampling information maintained by the Operator and passed to the accelerated kernel
ActivationInfo | Specifies an activation function and relevant parameters
smaug::AvgPoolingOp< Backend > | Implements the arithmetic-average-pooling operator
smaug::BatchNormOp< Backend > | Implements the batch normalization layer
smaug::BatchNormOp< SmvBackend >
smaug::ConcatOp< Backend > | Concatenates N Tensors along a specified axis
smaug::ConvolutionOp< Backend > | The base class for all 4D spatial convolution operators
smaug::ConvolutionOp< SmvBackend >
smaug::TiledTensor::CopyTilesArgs
smaug::DataflowGraphWriter | DataflowGraphWriter writes the current network as a dot-graph file to the given ostream
smaug::DataOp< Backend > | Exposes a Tensor as its only output
smaug::DebugStream | A stream class for consuming debug logs
smaug::DepthwiseConvolutionOp< Backend > | Implements the depthwise convolution operator
smaug::EltwiseAddOp< Backend > | Adds two Tensors elementwise
smaug::EltwiseAddOp< SmvBackend >
smaug::EltwiseMulOp< Backend > | Multiplies two Tensors elementwise
smaug::EltwiseMulOp< SmvBackend >
smaug::EluOp< Backend > | Implements the exponential linear unit function
smaug::EluOp< SmvBackend >
smaug::FlattenOp< Backend > | Flattens each batch of a Tensor
smaug::FromDataType< DataType > | Provides compile-time conversion from a SMAUG DataType to the corresponding C type (see the trait sketch after this list)
smaug::FromDataType< Bool >
smaug::FromDataType< Float16 >
smaug::FromDataType< Float32 >
smaug::FromDataType< Float64 >
smaug::FromDataType< Int32 >
smaug::FromDataType< Int64 >
smaug::GreaterEqualOp< Backend > | Implements an elementwise greater-than-or-equal-to operator
smaug::GreaterEqualOp< SmvBackend >
smaug::GreaterOp< Backend > | Implements an elementwise greater-than operator
smaug::GreaterOp< SmvBackend >
smaug::HardTanhOp< Backend > | Implements the hard tanh operator, which bounds its output between configurable min and max values
smaug::HardTanhOp< SmvBackend >
smaug::InnerProductOp< Backend > | Implements the inner product operator
smaug::InnerProductOp< SmvBackend >
smaug::LessEqualOp< Backend > | Implements an elementwise less-than-or-equal-to operator
smaug::LessEqualOp< SmvBackend >
smaug::LessOp< Backend > | Implements an elementwise less-than operator
smaug::LessOp< SmvBackend >
smaug::MaxPoolingOp< Backend > | Implements the max-pooling operator
smaug::MergeOp< Backend > | Forwards the first live input to its output
smaug::Network | Network encapsulates all of the information SMAUG will use during execution: the overall computation graph of the model, all the operators and tensors, various housekeeping structures, and simulation information
smaug::Operator | Operator is the base class for all graph operators supported by SMAUG
smaug::Network::OperatorInsertion
smaug::PaddingOp< Backend > | Pads a given Tensor along any number of dimensions with arbitrary padding sizes
smaug::ReferenceBackend | ReferenceBackend provides reference implementations of all operators supported by SMAUG
smaug::ReluOp< Backend > | Implements the rectified linear unit operator: max(slope * x, 0)
smaug::ReluOp< SmvBackend >
smaug::ReorderOp< Backend > | Implements a Tensor reordering operation to convert between different DataLayouts
smaug::RepeatOp< Backend > | Replicates a Tensor's data among all dimensions
smaug::ReshapeOp< Backend > | Changes the Tensor's shape while retaining the number of elements
smaug::Scheduler | Scheduler is responsible for running the Network
smaug::gem5::ScopedStats | A RAII helper class which dumps and/or resets gem5 stats at construction and destruction
smaug::SeluOp< Backend > | Implements the scaled exponential linear unit function
smaug::SeluOp< SmvBackend >
smaug::SigmoidOp< Backend > | Implements the sigmoid operator, defined as 1/(1 + exp(-input))
smaug::SigmoidOp< SmvBackend >
smaug::SmaugTest | The Catch2 test fixture used by all C++ unit tests
smaug::SmvAcceleratorPool | Implements a pool of worker accelerators
smaug::SmvBackend | SmvBackend implements a set of models of optimized DL kernels that were taped out on a machine learning SoC by the Harvard Architecture, Circuits, and Compilers group
smaug::SoftmaxOp< Backend > | Implements the softmax operator
smaug::SoftmaxOp< SmvBackend >
smaug::SplitOp< Backend > | Implements the split operator, which divides a Tensor into N output Tensors along a specified dimension
smaug::SwitchOp< Backend > | Conditionally forwards an input to one of two outputs
smaug::TanhOp< Backend > | Implements the tanh operator
smaug::TanhOp< SmvBackend >
smaug::TensorBase | The base class of all Tensor objects
smaug::TensorIndexIterator | An iterator over a multidimensional tensor's indices, accounting for data alignment padding (see the iterator sketch after this list)
smaug::TensorIndices | Additional metadata for edges in the graph
smaug::TensorShape | TensorShape describes the shape of a Tensor
smaug::ThreadPool::ThreadInitArgs
smaug::ThreadPool | A user-space cooperative thread pool implementation designed for gem5 in SE mode
smaug::TiledTensor::Tile | A tile is a rectangular portion of a larger Tensor
smaug::smv::TilingConfig | A TilingConfig describes tiling strategies and optimal tile sizes for input, weight, and output Tensors
smaug::smv::TilingOptimizerBase
smaug::ToDataType< T > | Provides compile-time conversion from C types to SMAUG DataTypes (see the trait sketch after this list)
smaug::ToDataType< bool >
smaug::ToDataType< double >
smaug::ToDataType< float >
smaug::ToDataType< float16 >
smaug::ToDataType< int32_t >
smaug::ToDataType< int64_t >
smaug::ToDataType< uint32_t >
smaug::ToDataType< uint64_t >
smaug::ThreadPool::WorkerThread | All state and metadata for a worker thread
smaug::Workspace | Workspace is the container and owner of all Tensors and Operators in the Network
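
The FromDataType and ToDataType entries above form a pair of compile-time type maps between SMAUG's DataType enum and the corresponding C types. Below is a minimal sketch of how such a trait pair can be written; the enum definition, the float16 typedef, and the member names (type, dtype) are illustrative assumptions, not SMAUG's actual declarations.

    #include <cstdint>

    // Hypothetical stand-ins for SMAUG's DataType enum and half-precision
    // storage type; the real definitions live in the SMAUG source tree.
    enum DataType { Bool, Float16, Float32, Float64, Int32, Int64 };
    using float16 = uint16_t;  // assumption: fp16 stored as raw 16 bits

    // FromDataType: compile-time DataType -> C type mapping, one
    // specialization per DataType, matching the index entries above.
    template <DataType DT> struct FromDataType;
    template <> struct FromDataType<Bool>    { using type = bool; };
    template <> struct FromDataType<Float16> { using type = float16; };
    template <> struct FromDataType<Float32> { using type = float; };
    template <> struct FromDataType<Float64> { using type = double; };
    template <> struct FromDataType<Int32>   { using type = int32_t; };
    template <> struct FromDataType<Int64>   { using type = int64_t; };

    // ToDataType: the inverse map. The index also lists uint32_t and
    // uint64_t specializations, so unsigned types fold into the signed
    // DataTypes here.
    template <typename T> struct ToDataType;
    template <> struct ToDataType<bool>     { static constexpr DataType dtype = Bool; };
    template <> struct ToDataType<float16>  { static constexpr DataType dtype = Float16; };
    template <> struct ToDataType<float>    { static constexpr DataType dtype = Float32; };
    template <> struct ToDataType<double>   { static constexpr DataType dtype = Float64; };
    template <> struct ToDataType<int32_t>  { static constexpr DataType dtype = Int32; };
    template <> struct ToDataType<int64_t>  { static constexpr DataType dtype = Int64; };
    template <> struct ToDataType<uint32_t> { static constexpr DataType dtype = Int32; };
    template <> struct ToDataType<uint64_t> { static constexpr DataType dtype = Int64; };

    // Example: choose a buffer element type from a compile-time DataType.
    FromDataType<Float32>::type buffer[64];  // i.e., float buffer[64]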
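
smaug::TensorIndexIterator walks the logical N-D indices of a tensor while mapping each one to an offset in a backing buffer whose innermost dimension carries alignment padding. The standalone sketch below illustrates that idea under assumed semantics; the class name, constructor, and methods here are hypothetical and do not reproduce smaug::TensorIndexIterator's actual interface.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Iterates all logical indices of an N-D tensor whose innermost
    // dimension is padded to a multiple of `alignment` elements in memory.
    class PaddedIndexIterator {
      public:
        // `dims` must be non-empty; the last dimension is the padded one.
        PaddedIndexIterator(std::vector<int> dims, int alignment)
                : dims_(std::move(dims)), idx_(dims_.size(), 0) {
            int inner = dims_.back();
            paddedInner_ = (inner + alignment - 1) / alignment * alignment;
        }
        bool done() const { return done_; }
        // Flat offset into the padded backing buffer for the current index.
        size_t offset() const {
            size_t off = idx_.back();
            size_t stride = paddedInner_;
            for (int d = static_cast<int>(dims_.size()) - 2; d >= 0; --d) {
                off += idx_[d] * stride;
                stride *= dims_[d];
            }
            return off;
        }
        // Odometer-style increment over the logical (unpadded) dims.
        void operator++() {
            for (int d = static_cast<int>(dims_.size()) - 1; d >= 0; --d) {
                if (++idx_[d] < dims_[d]) return;
                idx_[d] = 0;
            }
            done_ = true;  // wrapped past the outermost dimension
        }
      private:
        std::vector<int> dims_;
        std::vector<int> idx_;
        size_t paddedInner_ = 0;
        bool done_ = false;
    };

    int main() {
        // A 2x3 tensor whose rows are padded to 8 elements each:
        // the logical indices visit offsets 0 1 2 8 9 10, skipping
        // the 5 padding elements at the end of each row.
        for (PaddedIndexIterator it({ 2, 3 }, 8); !it.done(); ++it)
            std::cout << it.offset() << " ";
        std::cout << "\n";
        return 0;
    }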