AMfoRS team leader: Paolo MAISTRI

The AMfoRS team addresses the dependability and trust of digital systems at multiple abstraction levels for specific application domains (e.g., automotive, avionics, smartcards, IoT), guaranteeing that digital circuits possess properties such as quality, reliability, safety, security, and availability. The team's work focuses on design and analysis methods, techniques, and tools to assess and improve circuit dependability and trust in the above-mentioned domains.

Many domains include functional safety among their classical design constraints, e.g. the ISO 26262 standard in automotive. Our work aims at improving early evaluations of dependability with respect to errors induced by environmental disturbances. The goal, in order to reduce development and production costs, is to accurately evaluate, at an early stage of the design, the potential functional effects of soft and permanent errors. We have recently proposed a cross-layer fault simulation method to evaluate the robustness of RTL architectures used in critical embedded systems. The method combines fault simulation in RTL and Transaction Level Model (TLM) descriptions to trade off simulation time against the realism of the simulated high-level faulty behaviors, and has been applied to an airborne case study.

In the context of radiation testing, we have evaluated the vulnerability of hardware-implemented machine learning algorithms, which are finding their way into various domains, including safety-critical applications. These algorithms must therefore perform correctly even in harsh environmental conditions, such as at avionics altitudes. The Support Vector Machine (SVM) is an important machine learning algorithm that has been a target of hardware implementation in recent years. We have presented the first evaluation of SVMs under thermal neutron radiation, along with the first assessment of radiation effects on multiclass SVMs, showing that a multiclass SVM presents an overall higher reliability than a binary SVM.
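To give a concrete flavor of such a robustness evaluation, the sketch below injects single bit flips into the integer-coded weights of a linear SVM decision function and measures how often the classification changes. All names and values are illustrative; the actual campaigns were performed on hardware implementations under radiation, not in software, and sign handling of the faulty weights is simplified here.

```python
import random

def svm_decide(weights, bias, x):
    """Linear SVM decision: sign of w.x + b."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else -1

def flip_bit(value, bit):
    """Flip one bit of an integer-coded weight (simplified sign handling)."""
    return value ^ (1 << bit)

def injection_campaign(weights, bias, samples, trials=1000, width=16, seed=0):
    """Estimate the fraction of random single-bit weight flips
    that change the classification of a random input sample."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(trials):
        i = rng.randrange(len(weights))       # which weight to corrupt
        b = rng.randrange(width)              # which bit to flip
        faulty = list(weights)
        faulty[i] = flip_bit(faulty[i], b)
        x = rng.choice(samples)
        if svm_decide(faulty, bias, x) != svm_decide(weights, bias, x):
            mismatches += 1
    return mismatches / trials
```

A campaign like this gives a software-level estimate of criticality per weight; the radiation experiments measure the same kind of misclassification rate on the physical implementation.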

We have recently worked on memory system reliability, which is a serious concern today and is becoming more worrisome as technology scales, system size grows, and the demand for aggressive voltage reduction becomes more stringent. In this context, Error Correcting Code (ECC)-based repair techniques have been proposed and offer an aggressive reduction of the repair cost for high defect densities. This approach suffers, however, from the fact that Single-Event Upsets (SEUs) induced by single particles may lead to Multi-Cell Upsets (MCUs) and Multi-Bit Upsets (MBUs) in the same memory word. Standard mitigation approaches based on interleaving exist, but the impact of MBUs on the repair circuitry itself must also be mitigated, either through a repair Content Addressable Memory (CAM) with interleaving at its data words, or through an Offset CAM. We have presented and evaluated a novel repair approach, based on the Offset CAM, for ECC-based memory repair, which permits the mitigation of the MBUs affecting the repair circuitry.
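The interleaving principle can be illustrated in a few lines: if n logical words are bit-interleaved in one physical row, a multi-cell upset hitting n adjacent cells corrupts each logical word in at most one bit position, which a single-error-correcting ECC can then repair. This is a generic sketch of the principle, not the team's repair circuitry.

```python
def interleave(words):
    """Bit-interleave logical data words into one physical memory row:
    physical column (bit * n + k) holds bit `bit` of word k."""
    n, width = len(words), len(words[0])
    row = []
    for bit in range(width):
        for w in words:
            row.append(w[bit])
    return row

def deinterleave(row, n, width):
    """Recover the n logical words from the interleaved physical row."""
    return [[row[bit * n + k] for bit in range(width)] for k in range(n)]
```

With an interleaving degree of 4, an MCU flipping 4 physically adjacent cells lands on 4 different logical words, one bit each, instead of producing an uncorrectable MBU in a single word.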

Due to technology scaling, with transistor sizes getting smaller and closer to atomic dimensions, the latest generations of CMOS technologies present more variability in various physical parameters. Moreover, circuit wear-out degradation leads to additional temporal variations, potentially resulting in timing and functional failures. To handle such problems, one conventional method consists in providing larger safety margins (also called guard bands) at design time, but such margins are pessimistic and costly. The use of delay violation monitors is therefore becoming a must. Placing the monitors is a critical task, as the designer has to carefully select the locations that will age the most and may become potential points of failure in a given design.

We have explored the use of machine learning techniques to drive the automated selection of potential insertion points for such monitors. The digital delay of basic gates has been modeled with multiple linear regression, and the predictions have been validated against original data from SPICE simulations. We have compared multiple linear regression algorithms and used them to study the aging mechanisms of basic CMOS gates with supervised learning. We have shown how to reliably estimate activity-related path aging and tune the prediction framework by extracting activity profiles from simulations on a synthesized design, which allows a finer-grained estimation by obtaining activity profiles at both the path and gate level.
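As an illustration of the modeling step, the sketch below fits a multiple linear regression by ordinary least squares; in a real flow the features (e.g. stress time, switching activity, temperature) and the delay targets would come from SPICE simulations. The feature names and values here are hypothetical.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) beta = X^T y.
    Each row of X is [1, feature1, feature2, ...] (leading 1 = intercept)."""
    n = len(X[0])
    # Build the normal equations A beta = b, with A = X^T X and b = X^T y.
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

def predict(beta, features):
    """Predicted gate delay for a new feature vector."""
    return beta[0] + sum(c * f for c, f in zip(beta[1:], features))
```

Once trained per gate type, such a model lets the flow rank candidate paths by predicted degradation and place monitors on the fastest-aging ones.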

The IEEE 1687-2014 standard proposes solutions for the access and usage of Embedded Instruments, but Electronic Design Automation (EDA) is still limited to only a small subset of the new features. In this context, in the frame of the Eureka European project HADES, we improved our innovative Test Flow and Environment called “Manager for SoC Test” (MAST), a software backend able to provide features and performance superior to the industrial legacy solutions. We have proposed an innovative solution that exploits the dynamic nature of the standard to obtain an Authentication-based Secure Access framework able to provide a trusted, configurable, efficient, and transparent interface to the test infrastructure depending on user-defined security levels. Our framework extends MAST with a novel solution developed within the team, the “Segment Set Authorization Keys” (SSAK) protocol. Security-wise, secret sharing is limited to a minimum; from a performance point of view, the tool fully leverages its strength in terms of topology resolution and concurrent execution. Last but not least, user experience is also optimal, as security is handled automatically and transparently.
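Since the SSAK protocol itself is not detailed here, the following is only a hypothetical illustration of authentication-based segment access, not the actual protocol: each segment of the instrument network carries a security level, and access is granted only to a user who both has sufficient clearance and holds a valid per-segment authorization tag. All segment names and the tag scheme are invented for the example.

```python
import hmac
import hashlib

# Hypothetical security levels for three instrument segments.
SEGMENT_LEVELS = {"bist_ctrl": 1, "debug_port": 2, "key_store": 3}

def derive_segment_tag(master_key: bytes, segment: str) -> bytes:
    """Per-segment authorization tag, derived by the key authority."""
    return hmac.new(master_key, segment.encode(), hashlib.sha256).digest()

def grant_access(user_tags: dict, user_level: int, segment: str,
                 master_key: bytes) -> bool:
    """Grant access only if the user's clearance covers the segment's
    security level AND the user holds the correct per-segment tag."""
    if user_level < SEGMENT_LEVELS.get(segment, 99):
        return False
    tag = user_tags.get(segment)
    expected = derive_segment_tag(master_key, segment)
    return tag is not None and hmac.compare_digest(tag, expected)
```

In this toy scheme, secret sharing stays minimal in the spirit described above: a user only ever receives the tags for the segments their security level entitles them to.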

The team works on the design of cryptographic and secure primitives, and on the analysis of security and trust threats, proposing effective countermeasures. We work on algorithms, schemes, and protocols, such as the SSAK protocol for secure test access (see System-level Test), post-quantum cryptography, homomorphic encryption, and non-linear codes for protecting circuits against fault attacks.

Concerning security threats, we work regularly on implementation attacks. In 2020, we set up a platform for side-channel attacks on embedded systems. The environment allows the designer to perform power and EM analysis, as well as clock and voltage glitch attacks, on ARM microcontrollers and FPGA boards. Near-field EM probes, in the LF and RF bandwidths, complete the setup. The platform will be extended next year with EM Fault Injection (EMFI) equipment. This platform has been partially supported by IRT Nanoelec and by the CNRS (INS2I).

In the context of control flow hardening, we have proposed the use of non-linear codes. Hardware-based control flow monitoring techniques enable the detection of errors both in the control flow and in the instruction stream being executed on a processor. However, these techniques may fail to detect malicious, carefully tuned manipulations of the instruction stream within a basic block. We have shown how a non-linear encoder and checker can cope with this weakness.

Concerning secure primitives, we are currently working on secure elements such as Physically Unclonable Functions (PUFs), to cope with reliability issues, and True Random Number Generators based on memristive and spintronic devices. In this context, we have set up an experimental platform, financed by the CNRS/INS2I, for the evaluation of SRAM-based PUFs. Concerning trust issues, we are working on methods to detect and possibly avoid the presence of hardware Trojan horses.
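The weakness of linear checks can be demonstrated in a few lines: an attacker who knows a linear signature (an XOR, or even an arithmetic sum) can craft compensating modifications inside a basic block that leave the signature unchanged, while a nonlinear signature still changes. The sum-of-squares check below is a simple stand-in chosen for illustration, not the team's actual non-linear code.

```python
def linear_sig(block):
    """Linear signature: XOR of the instruction words in a basic block."""
    s = 0
    for insn in block:
        s ^= insn
    return s

def nonlinear_sig(block):
    """Nonlinear signature: sum of squared instruction words.
    Squaring makes the check a nonlinear function of the words,
    so compensating linear edits no longer cancel out."""
    return sum(insn * insn for insn in block)

# Two compensating edits inside one basic block: the first word gains 2
# and the last loses 2 (the same bit position flips in both), so both the
# XOR and the arithmetic sum of the block are unchanged.
block    = [0x12340001, 0x56780002, 0x9ABC0003]
tampered = [0x12340003, 0x56780002, 0x9ABC0001]
```

Both linear checks accept the tampered block, while the nonlinear one rejects it; this is the essential property that a non-linear encoder and checker bring to control flow monitoring.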

Today’s computing systems face several issues related to architectural and technological limitations. To mitigate these issues, novel computing paradigms, such as Computing-in-Memory, Neuromorphic Computing, and Approximate Computing, are being researched, in conjunction with novel emerging technologies such as memristive and spintronic devices. Concerning these devices, our research aims at using enhanced compact models to perform failure analysis and at defining pertinent fault models to establish design-for-test and design-for-reliability methodologies. Concerning the Computing-in-Memory paradigm, we are investigating feasible design solutions, with a special focus on security applications. Concerning Neuromorphic Computing, we are focusing on the reliability analysis and test of spiking neural networks. Concerning the Approximate Computing paradigm, which has been gaining momentum both in industry and in academia, we are studying the trade-offs between selective approximation (or occasional violation of specifications) and power consumption. We are also working on an extension of a tool initially developed for dependability evaluation (EARS), in order to identify, from RTL descriptions and a given application, the operators that are the least sensitive to approximations.
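As a small illustration of the accuracy/cost trade-off studied in Approximate Computing, the sketch below models a lower-part-OR adder, a well-known approximate adder in which the k least-significant bits are computed by a cheap, carry-free OR while the upper part is added exactly. This is a generic textbook example, not a technique specific to the team's work.

```python
def exact_add(a, b):
    """Reference exact adder."""
    return a + b

def approx_add(a, b, k):
    """Lower-part-OR approximate adder: the k least-significant bits are
    computed by bitwise OR (no carry chain), trading accuracy in the low
    bits for reduced area and power in a hardware implementation."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)        # cheap, carry-free lower part
    high = ((a >> k) + (b >> k)) << k    # exact upper part
    return high | low
```

The maximum error is bounded by the truncated width k, which is exactly the kind of knob a tool such as EARS could tune per operator: operators whose low-order bits barely affect application quality can tolerate a larger k.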