Source: IEEE Transactions on Computers
Volume: 73
Issue: 3
Pages: 735-746
DOI: 10.1109/TC.2023.3343093
Published: MAR 2024
Indexed: 2024-03-24
Document Type: Article

Abstract:
Existing approximate Booth multipliers fail to keep up with modern approximate multipliers such as truncation-based approximate logarithmic multipliers. This paper introduces a new approximation scheme for Booth multipliers that operates with negligible error rates using only $N/4$ Booth decoders instead of the traditional $N/2$. The proposed 16-bit BD16.4 approximate Booth multiplier reduces the Normalized Mean Error Deviation (NMED) by 96.5% and the Power-Area-Product (PAP) by 69.6% compared to a state-of-the-art approximate logarithmic multiplier. Additionally, the proposed BD16.4 approximate multiplier reduces the NMED by 94.4% and the PAP by 74.8% compared to a state-of-the-art higher-radix approximate Booth multiplier. The proposed 8-bit approximate Booth multipliers reduce the NMED by up to 74% and the PAP by up to 5% compared to existing state-of-the-art approximate logarithmic multipliers. We validated these results through a neural network inference experiment, in which the proposed approximate multipliers showed a negligible drop in inference accuracy compared to both exact Booth multipliers and state-of-the-art approximate logarithmic multipliers (ALM). The proposed approximate multipliers achieved a Power-Delay-Product reduction of 63% (vs. exact) and 21.22% (vs. ALM) in 16-bit experiments, and of 67% (vs. exact) and 8.75% (vs. ALM) in 8-bit experiments.
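For context on the $N/2$-decoder baseline the abstract refers to: a conventional radix-4 Booth multiplier recodes an $N$-bit multiplier into $N/2$ signed digits in $\{-2,-1,0,1,2\}$, one per Booth decoder; the paper's scheme approximates this with only $N/4$ decoders. The sketch below shows the standard (exact) radix-4 recoding and multiplication only, not the paper's approximation; the function names are illustrative, not from the paper.

```python
def booth_radix4_digits(b, n):
    """Recode an n-bit two's-complement multiplier b into n//2 radix-4
    Booth digits, each in {-2, -1, 0, 1, 2}."""
    bits = [(b >> i) & 1 for i in range(n)]
    bits = [0] + bits  # implicit 0 appended below the LSB
    digits = []
    for i in range(n // 2):
        # Overlapping triplet (b_{2i+1}, b_{2i}, b_{2i-1})
        idx = 4 * bits[2 * i + 2] + 2 * bits[2 * i + 1] + bits[2 * i]
        # Standard radix-4 Booth recoding table, indexed by the triplet
        digits.append([0, 1, 1, 2, -2, -1, -1, 0][idx])
    return digits

def booth_multiply(a, b, n=16):
    """Exact radix-4 Booth multiplication of two n-bit signed integers:
    n//2 partial products, i.e. one per Booth decoder."""
    result = 0
    for i, d in enumerate(booth_radix4_digits(b % (1 << n), n)):
        result += (d * a) << (2 * i)
    return result
```

Each digit drives one decoder that selects 0, ±A, or ±2A as a partial product, so an exact $N$-bit design needs $N/2$ decoder rows; halving that count to $N/4$ is where the proposed scheme's area and power savings come from.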