

  • Neuromorphic computing platforms

Biological brains are increasingly taken as a guide toward more efficient forms of computing. The latest frontier considers the use of spiking-neural-network-based neuromorphic processors for near-sensor data processing, in order to fit the tight power and resource budgets of edge computing devices. However, the prevailing focus on brain-inspired computing and storage primitives in the design of neuromorphic systems is bringing a fundamental bottleneck to the forefront: chip-scale communications. While communication architectures (typically, a network-on-chip) are generally inspired by, or even borrowed from, general-purpose computing, neuromorphic communications exhibit unique characteristics: they consist of the event-driven routing of small amounts of information to a large number of destinations within tight area and power budgets. This research aims to drive an inflection point in network-on-chip design for brain-inspired communications.
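The event-driven, multicast character of these communications can be sketched in a few lines. The `Core` class, the routing table, and the address-event-style spike representation below are illustrative assumptions for the sketch, not the architecture under study:

```python
from collections import deque

# Minimal sketch of event-driven multicast routing in a neuromorphic NoC.
# A spike is a tiny packet carrying little more than its source neuron
# address; the large fan-out is resolved by a routing table per source.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.received = []            # spikes delivered to this core

    def deliver(self, src_neuron):
        self.received.append(src_neuron)

def route_spikes(events, routing_table, cores):
    """Deliver each spike event to every destination core of its source."""
    queue = deque(events)             # event-driven: spikes processed as they arrive
    while queue:
        src = queue.popleft()
        for dst in routing_table.get(src, ()):   # multicast fan-out
            cores[dst].deliver(src)

cores = {i: Core(i) for i in range(4)}
table = {7: [0, 2, 3], 9: [1]}        # neuron 7 fans out to three cores
route_spikes([7, 9, 7], table, cores)
print(cores[2].received)              # neuron 7 spiked twice -> [7, 7]
```

Note how a single small event triggers several deliveries: it is this one-to-many, payload-light traffic pattern that conventional networks-on-chip are not optimized for.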

  • Multi-tenant edge computing platforms

The Public Fog represents the most challenging and forward-looking form of tenancy, easily resulting in conflicting resource allocation requirements among competing Internet-of-Things (IoT) services. A promising approach consists of exploiting the fluctuating workload of emerging Fog applications to enable elastic resource allocation. With hardware platforms fast evolving toward enhanced parallelism and heterogeneity, the dynamic resource management problem becomes hierarchical: after virtual resources are assigned to services, they have to be mapped to actual spatial partitions of adjacent processing tiles. The latter problem has received little attention so far, due to the required awareness of the underlying hardware platform and to the potential combinatorial explosion of the number of mapping solutions. To bridge this gap in the management stack of multi-tenant Fog computing nodes, this research proposes a proof-of-concept HW/SW architecture optimized for elastic resource allocation, together with the detailed implementation of a Partition Manager for it.
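The tile-mapping stage can be illustrated with a hedged sketch. The mesh size, the rectangular-partition constraint, and the first-fit policy below are illustrative assumptions, not the policy of the proposed Partition Manager:

```python
# Sketch of the second ("mapping") stage: placing a service onto a
# contiguous rectangle of free processing tiles on a 2D mesh.

def first_fit(free, rows, cols, h, w):
    """Return top-left (r, c) of a free h x w tile rectangle, or None."""
    for r in range(rows - h + 1):
        for c in range(cols - w + 1):
            if all(free[r + i][c + j] for i in range(h) for j in range(w)):
                return (r, c)
    return None

def allocate(free, rows, cols, h, w):
    """Reserve the first free h x w rectangle; None if no placement exists."""
    pos = first_fit(free, rows, cols, h, w)
    if pos is None:
        return None
    r0, c0 = pos
    for i in range(h):
        for j in range(w):
            free[r0 + i][c0 + j] = False   # mark tiles as occupied
    return pos

free = [[True] * 4 for _ in range(4)]      # 4x4 mesh, all tiles free
print(allocate(free, 4, 4, 2, 3))          # -> (0, 0)
print(allocate(free, 4, 4, 2, 3))          # -> (2, 0)
print(allocate(free, 4, 4, 2, 3))          # -> None (no contiguous room left)
```

Even in this toy setting the third request fails although eight tiles remain free, hinting at the fragmentation and combinatorial issues that make the mapping problem hard in practice.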

  • Design methods for analog deep learning accelerators

Computation-in-memory (CIM) is one of the most appealing computing paradigms, especially for implementing artificial neural networks. Non-volatile memories such as ReRAMs and PCMs have proven to be promising candidates for the realization of CIM processors. However, these devices and their driving circuits are subject to non-idealities. This research targets design technology for simulating memristor-based CIM systems. The target framework considers the impact of the non-idealities of the CIM components (the memristor devices, the memristor crossbar interconnects, the analog-to-digital converter, and the transimpedance amplifier) on the vector-matrix multiplication performed by the CIM unit. The CIM modules are described in SystemC and SystemC-AMS to reach high simulation speed while maintaining high simulation accuracy.
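The effect being modeled can be sketched as follows: a memristor crossbar computes a vector-matrix product as Ohm's-law current summation, and device non-idealities perturb the result. The Gaussian conductance-variation model and its 5% sigma are illustrative assumptions, not the framework's actual device models (which are described in SystemC/SystemC-AMS):

```python
import random

# Sketch of a crossbar vector-matrix multiplication: inputs applied as
# read voltages, weights stored as conductances, outputs read as currents.

def crossbar_vmm(voltages, conductances, sigma=0.0, rng=None):
    """I_j = sum_i V_i * G_ij, with optional per-device variation."""
    rng = rng or random.Random(0)
    currents = []
    for j in range(len(conductances[0])):
        i_j = 0.0
        for i, v in enumerate(voltages):
            g = conductances[i][j]
            if sigma:
                g *= 1.0 + rng.gauss(0.0, sigma)   # device-to-device variation
            i_j += v * g
        currents.append(i_j)
    return currents

V = [0.2, 0.5, 0.1]                        # input vector as read voltages
G = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.5]]   # weight matrix as conductances
ideal = crossbar_vmm(V, G)                 # -> [0.65, 1.2]
noisy = crossbar_vmm(V, G, sigma=0.05)     # perturbed by ~5% variation
print(ideal, noisy)
```

The gap between `ideal` and `noisy` is exactly the kind of accuracy degradation the simulation framework quantifies, here for a single non-ideality rather than the full set (crossbar parasitics, ADC, transimpedance amplifier).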

  • Reliability analysis and fault-tolerance of deep learning hardware

Investigating the effects of Single Event Upsets (SEUs) in domain-specific accelerators is one of the key enablers for deploying Deep Neural Networks (DNNs) in mission-critical edge applications. Current reliability analyses of DNNs mainly focus either on the DNN model, at the application level, or on the hardware accelerator, at the architecture level. This research targets a systematic cross-layer reliability analysis of deep-learning accelerators. The goals are i) to analyze the propagation of faults from the hardware to the application level, and ii) to compare different architectural configurations. This research aims at new insights into the performance-accuracy-reliability trade-off spanned by the configuration space of deep learning accelerators.
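The cross-layer propagation idea can be sketched as a single bit flip in a stored weight that surfaces as an error at the application level. The 8-bit fixed-point encoding and the toy dot-product layer below are illustrative assumptions, not the accelerator under study:

```python
# Sketch of an SEU fault model: one bit of a weight stored in an 8-bit
# fixed-point register flips, and the error propagates to the layer output.

def flip_bit(value_8bit, bit):
    """Flip one bit of an 8-bit stored word (two's complement)."""
    return (value_8bit ^ (1 << bit)) & 0xFF

def to_fixed(w, frac_bits=6):
    return int(round(w * (1 << frac_bits))) & 0xFF

def from_fixed(v, frac_bits=6):
    if v >= 128:                    # two's-complement sign
        v -= 256
    return v / (1 << frac_bits)

def dot(xs, ws):                    # toy "layer": a single dot product
    return sum(x * w for x, w in zip(xs, ws))

x = [1.0, 0.5]
w = [0.25, -0.5]
golden = dot(x, w)                              # fault-free output: 0.0
stored = [to_fixed(v) for v in w]
stored[0] = flip_bit(stored[0], 7)              # SEU in the MSB/sign bit
faulty = dot(x, [from_fixed(v) for v in stored])
print(golden, faulty)                           # -> 0.0 -2.0
```

An MSB flip corrupts the output far more than a flip in a low-order bit would, which is why a cross-layer analysis must track where in the architecture each bit lives, not just how many bits flip.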

  • Silicon nanophotonic networks

Silicon nanophotonics, with its high-speed, low-loss optical interconnects and high computation capabilities, is seen as one of the most promising technologies for enabling the transition from low-throughput to high-throughput computing systems. By providing faster and more energy-efficient communication, silicon nanophotonics is helping to drive the development of more powerful and efficient computing systems that can handle larger amounts of data. Academia and industry have leveraged these advantages to design an alternative to electrical interconnects: the Optical Network-on-Chip (ONoC). ONoCs offer a higher-bandwidth, lower-power communication framework than their electrical counterparts, and optical interconnects are expected to keep replacing electrical ones as the demand for bandwidth and communication speed continues to grow. However, the design of optical interconnects faces several challenges: some stem from the intrinsic nature of silicon nanophotonic devices, such as fabrication challenges, while others are specific to ONoCs, such as high static power consumption. This research aims at coping with these challenges in order to fully realize the benefits of silicon nanophotonics for on-chip and cross-chip communications.