What Is Half Precision?
This video introduces the concept of half precision, or float16, a relatively new floating-point data type. Because it uses half the memory of single precision, it has become popular for accelerating deep learning training and inference. The video also examines its benefits and tradeoffs relative to the traditional 32-bit single-precision and 64-bit double-precision data types in control applications.
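As a quick illustration, here is a minimal MATLAB sketch of the precision and range tradeoffs involved; it assumes Fixed-Point Designer is installed, since that toolbox provides the half data type:

```matlab
% Half precision (float16) uses 1 sign bit, 5 exponent bits, and
% 10 fraction bits, so it carries roughly 3 decimal digits of precision
% in half the storage of single precision.
x = single(pi);   % 32-bit single precision: 3.1415927
h = half(x);      % 16-bit half precision:  ~3.1406
disp(h)

% The range is limited as well: the largest finite half value is 65504,
% so larger magnitudes overflow to Inf.
disp(half(70000)) % displays Inf
```

This halved storage is what drives both the memory savings and the accuracy tradeoffs discussed in the video.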
- Half-Precision Data Type in MATLAB: https://bit.ly/36vcvul
- Floating Point Numbers: https://bit.ly/2Fqa803
- Fixed-Point Arithmetic: https://bit.ly/2QUeH8e
- Construct Fixed-Point Numeric Object: https://bit.ly/2MZniWg
- Optimizing Lookup Tables: https://bit.ly/2s29m6z
- Lookup Table Optimization (2:21): https://bit.ly/2Qu5eFF