https://www.mdu.se/

mdu.se Publications
AdAM: Adaptive Approximate Multiplier for Fault Tolerance in DNN Accelerators
Tallinn Univ Technol, Comp Syst Dept, EE-19086 Tallinn, Estonia.
Tallinn Univ Technol, Comp Syst Dept, EE-19086 Tallinn, Estonia.
Univ Zanjan, Zanjan 4516346119, Iran.
Univ Zanjan, Zanjan 4516346119, Iran; Tallinn Univ Technol, EE-19086 Tallinn, Estonia.
2025 (English). In: IEEE Transactions on Device and Materials Reliability, ISSN 1530-4388, E-ISSN 1558-2574, Vol. 25, no. 1, p. 66-75. Article in journal (Refereed). Published.
Abstract [en]

Deep Neural Network (DNN) hardware accelerators are essential in a spectrum of safety-critical edge-AI applications with stringent reliability, energy-efficiency, and latency requirements. Multiplication is the most resource-hungry operation in a neural network's processing elements. This paper proposes AdAM, a scalable adaptive fault-tolerant approximate multiplier tailored for ASIC-based DNN accelerators at the algorithm and circuit levels. AdAM employs an adaptive adder that makes unconventional use of the input Leading One Detector (LOD) values for fault detection, repurposing otherwise unutilized adder resources. A gate-level optimized LOD design and a hybrid adder design are also proposed as part of the adaptive multiplier to improve hardware performance. The proposed architecture uses a lightweight fault-mitigation technique that sets detected faulty bits to zero. Hardware resource utilization and the DNN accelerator's reliability metrics are used to compare the proposed solution against Triple Modular Redundancy (TMR) in multiplication, unprotected exact multiplication, and unprotected approximate multiplication. It is demonstrated that the proposed architecture enables multiplication with a reliability level close to that of TMR-protected multipliers while utilizing 2.74× less area and achieving a 39.06% lower power-delay product than the exact multiplier. Moreover, it matches the area, delay, and power consumption of state-of-the-art approximate multipliers of similar accuracy while additionally providing fault detection and mitigation capability.
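The LOD-based approximation the abstract refers to belongs to the same family as truncated logarithmic-segment multipliers (e.g. DRUM): a leading-one detector locates each operand's most significant set bit, only a short segment starting at that bit is multiplied, and the product is shifted back. The sketch below is a minimal software model of that general idea, not AdAM itself; the paper's adaptive adder, fault-detection path, and bit-zeroing mitigation are not reproduced here, and all names (`lod`, `approx_mult`, `k`) are illustrative.

```python
def lod(x: int) -> int:
    """Leading-one detector: bit position of the most significant set bit."""
    return x.bit_length() - 1

def approx_mult(a: int, b: int, k: int = 4) -> int:
    """DRUM-style approximate unsigned multiply (illustrative, not AdAM).

    Keeps only the k most significant bits of each operand, starting at
    its leading one, multiplies the short segments, and shifts the
    product back. Small operands (<= k bits) are multiplied exactly.
    """
    if a == 0 or b == 0:
        return 0
    # Shift amount needed to truncate each operand to a k-bit segment
    # anchored at its leading one (0 if it already fits in k bits).
    sa = max(lod(a) - (k - 1), 0)
    sb = max(lod(b) - (k - 1), 0)
    ta = a >> sa          # k-bit segment of a
    tb = b >> sb          # k-bit segment of b
    # Short multiply, then restore magnitude with a shift.
    return (ta * tb) << (sa + sb)
```

For example, `approx_mult(200, 100, k=4)` multiplies the 4-bit segments 12 and 12 and shifts by 7, giving 18432 against an exact product of 20000 (about 7.8% error) while needing only a k×k multiplier in hardware. The discarded low-order bits are the source of both the area savings and the approximation error.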

Place, publisher, year, edition, pages
IEEE (Institute of Electrical and Electronics Engineers), 2025. Vol. 25, no. 1, p. 66-75
Keywords [en]
Accuracy, Fault tolerant systems, Adders, Hardware, Artificial neural networks, Resource management, Reliability engineering, Integrated circuit reliability, Fault detection, Prevention and mitigation, Deep neural networks, approximate computing, circuit design, reliability, DNN accelerator
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-70554
DOI: 10.1109/TDMR.2024.3523386
ISI: 001449689000004
Scopus ID: 2-s2.0-105001086760
OAI: oai:DiVA.org:mdh-70554
DiVA, id: diva2:1948626
Available from: 2025-03-31. Created: 2025-03-31. Last updated: 2025-12-03. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Daneshtalab, Masoud

