Tuesday, November 13, 2018
Session

The Future of Smart Computing

Chair Joachim Pelka, Deputy Director, Fraunhofer-Gesellschaft

Biography
Dr. Joachim Pelka is the Deputy Director and Strategy Advisor of the business office of the Fraunhofer Group for Microelectronics. He studied electrical engineering, with an emphasis on semiconductor technology, at the Technical University of Berlin and was awarded a doctorate there for his work on semiconductor components. He has been with the Fraunhofer-Gesellschaft since 1983. After several years of research in process simulation, he headed the Business Office of the Fraunhofer Group for Microelectronics from 1996 to 2018. As managing director he was responsible for strategic planning and for coordinating the work of the microelectronics institutes of the Fraunhofer-Gesellschaft. In keeping with deepening European integration, Dr. Pelka today acts as the main contact person for other European research facilities such as CEA-Leti, CSEM, IMEC and VTT. He represents the Group, complementing its Chairman, in the Heterogeneous Technology Alliance (HTA) and in the Electronics Leaders Group of the European Commission. Dr. Pelka is a member of the AENEAS Scientific Council.

12:45 Introduction
12:50 Towards Next Generation Architectures
  Eric Monchalin, Head of Machine Intelligence, Atos

Abstract
Difficult though it is to believe, the small, micro, even nano computers we encounter every day are direct descendants of the von Neumann architecture described in 1945, which gave birth to the general-purpose computing model and created the software revolution. Future architectures will disaggregate monolithic components such as general-purpose CPUs, operating systems and middleware, providing the required insertion points, adding flexibility and improving security. Quantum computing will inevitably be delivered as a mixture of general-purpose CPUs, quantum hardware and other accelerators, enabling application developers to restructure their applications to benefit from the distinct capabilities of each specialized element. But computing will no longer be restricted to traditional platforms; it will be embedded in almost everything and interconnected in a way that establishes an unprecedented Computing Continuum. A plethora of sensors, actuators, robots, drones, cars and the like will no longer act standalone. They will be part of an all-pervading, interconnected network that exhibits enormous complexity, particularly at the ‘edge’ of the ecosystem, creating a cognitive loop. This is driving the emergence of Swarm Intelligence, in which the individual computing capacity of devices at the edge is complemented and supplemented by their dynamic cooperation with other objects in swarm communities acting as an intelligent whole. It fits perfectly with the exploding networked connection of people, processes, data and things, pushed by business models for which current architectures will no longer work.

Biography
Eric is Vice-President, Head of Machine Intelligence at Atos. He is responsible for solidifying new technological and business directions for the Big Data Global Business Line. Eric's career has been built mainly on numerous R&D positions in several companies, with experience in leading organizations of more than 100 people and managing large projects in international environments. Eric's best memory is the first supercomputer that Bull delivered to CEA at the end of 2005, which was ranked number 5 worldwide. He was in charge of this multi-ten-million-euro project on behalf of Bull R&D: technical presales, design, development and on-site bring-up. Eric is a technology-minded person who values a wide range of skills and technological knowledge, focused on customer wishes and on turning them into reality.

13:15 Foundry eNVM Need as a new Driver for Emerging Memory Development
  Johannes Müller, GLOBALFOUNDRIES
13:40 Future Computing: The Neuromorphic Approach
  Carlo Reita, Director, Nanoelectronics Technical Marketing and Strategy, CEA

Abstract
Research activities in the field of brain-inspired computing have gained large momentum in recent years. The main reason is the attempt to go beyond the limitations of the conventional von Neumann architecture, which is increasingly affected by the bandwidth and latency constraints of memory-logic communication. In neuromorphic architectures the memory is distributed and can be co-located with the logic, which is precisely what the new resistive memory technologies could provide. While most of the attention is being directed to the implementation of Deep Learning algorithms in large computing systems, the impact on device and circuit technology has been mixed. On one hand, advanced standard CMOS technology has been used to develop GPUs and specific circuit accelerators without making use of any “bio-inspired” hardware. On the other hand, emerging resistive memory devices (RRAMs) are considered good candidates to emulate biologically plausible synaptic behavior at nanometer scale, because their conductance can be modulated by applying low biases and they can be easily integrated with CMOS-based neuron circuits in a back-end process during chip fabrication. This has opened the way for compact and energy-efficient computing architectures based on artificial neural networks (ANNs), mainly using unsupervised learning rules such as Spike Timing Dependent Plasticity (STDP), but these have so far been restricted mostly to the research community due to the insufficient maturity of the technology. An intermediate, and probably faster-to-market, application of these new memory technologies will be as a slow non-volatile cache / fast mass storage, i.e. an intermediate memory level in conventional accelerators. This will allow a reduction of the fast DRAM and SRAM cache areas while still reducing the latency of access to mass storage.
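
For readers unfamiliar with STDP, the following is a minimal illustrative sketch (not taken from the talk) of a pair-based STDP weight update in Python; the parameter names and values are assumptions chosen purely for illustration.

    import math

    # Pair-based STDP: the synaptic weight is strengthened when the pre-synaptic
    # spike precedes the post-synaptic spike, and weakened otherwise.
    # a_plus, a_minus and tau_ms are illustrative values, not ones from the talk.
    def stdp_delta_w(t_pre_ms, t_post_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        dt = t_post_ms - t_pre_ms
        if dt > 0:   # pre before post -> potentiation
            return a_plus * math.exp(-dt / tau_ms)
        else:        # post before (or with) pre -> depression
            return -a_minus * math.exp(dt / tau_ms)

    # Example: a pre-spike at 10 ms followed by a post-spike at 15 ms gives a
    # small positive conductance (weight) change.
    print(stdp_delta_w(t_pre_ms=10.0, t_post_ms=15.0))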

Biography
Dr. Reita is currently Director, Nanoelectronics Technical Marketing and Strategy at CEA-LETI. He obtained his Laurea di Dottore in Fisica from Rome University “La Sapienza”. After a two-year post-doc at the Istituto di Elettronica dello Stato Solido of the CNR (Italy), working on a-Si thin-film transistors for sensors and displays, he joined the GEC-Marconi Hirst Research Centre in Wembley (UK) as Principal Research Scientist, working on poly-Si TFTs for displays and drivers. After a two-year assignment as Royal Society Industrial Fellow at the Cambridge University Engineering Department, he joined the Laboratoire Central de Recherches of Thomson-CSF in Orsay (France) as Senior Research Scientist on poly-Si device physics and circuit design. In 1999 he joined the mask maker Align-Rite, which following a merger in 2000 became Photronics. After a period as Sales Manager, he became Technical Marketing Manager in charge of the Joint Development Programs with the major customers, and then European R&D Director. In 2005 he joined CEA-LETI, where he is currently in charge of strategy definition and of the technical and scientific interface with industrial as well as institutional partners in the nanoelectronics domain. He is author or co-author of over 80 refereed papers, several invited and review papers, and two book chapters in the fields of electronic devices and lithography, and has served as a member of national and international review and advisory committees.

14:00 Photonics for Next Generation Computing
  Tolga Tekin, Group Manager, Fraunhofer IZM

Abstract
The main bottleneck to the realization of next generation computing systems for all big- and secure-data applications and related industries, including System-in-Package and System-on-Chip based solutions, is the lack of off-chip (off-core) interconnects with low latency, low power, high bandwidth and high density. The solution to overcome these challenges is the use of photonics. Photonics as an underlying technology addresses the main technological challenges of next generation computing systems: i) off-chip interconnects, ii) massive switching matrices, iii) disruptive system architectures, iv) cooling concepts, v) new peripheral component interconnect express (PCIe), vi) memory fabric, and vii) novel computing functions to enable quantum and neuromorphic computing and AI. A next generation photonics platform will enable disruptive computing technology and photonics-enabled architectures, leading to faster, cheaper, more power-efficient, more secure and denser solutions for these applications and industries. Furthermore, generic co-integration with all building blocks of computing technology will become possible, since photonics-based standard interfaces between the building blocks are introduced and implemented.

Biography
Tolga Tekin received the Ph.D. degree in electrical engineering and computer science from the Technical University of Berlin, Germany. He was a Research Scientist with the Optical Signal Processing Department at Fraunhofer HHI, where he was engaged in advanced research on optical signal processing, 3R regeneration, all-optical switching, clock recovery and integrated optics. He was a Postdoctoral Researcher on components for O-CDMA and terabit routers with the University of California. He worked at Teles AG on phased-array antennas and their components for skyDSL. At the Fraunhofer Institute for Reliability and Microintegration (IZM) and at the Technical University of Berlin, he then led projects on optical interconnects and silicon photonics packaging. He is engaged in research on photonic integrated system-in-package, photonic interconnects and 3-D heterogeneous integration. He is group manager of ‘Photonics and Plasmonics Systems’ and coordinator of ‘PhoxLab - Independent Platform for Photonics in Data Centers (PIH)’ at Fraunhofer IZM. He is coordinator of the European flagship project ‘PhoxTroT’ and the European H2020 project ‘L3MATRIX’ on optical interconnects for data centers.

14:20 Towards wafer-scale Qubits
  Massimo Mongillo, Device Engineer, IMEC

Abstract
In this talk I will present the recent progress made by IMEC in the fabrication and integration, in a 300 mm fab, of basic quantum circuits targeting qubits. Quantum computing holds promise for solving complex computational problems that are intractable for classical computers. The basic ingredient for the implementation of a quantum computer is the availability of a two-level system mimicking the classical bit states “0” and “1” (the qubit), on which the basic information can be encoded. The exceptional computational power of a quantum computer originates from the quantum-mechanical property of superposition, according to which the qubit state is defined as an arbitrary linear combination of the two constituent bit states. At the qubit level, two main solid-state implementations are currently explored at IMEC, based on individual spins in silicon and on superconducting circuits. Spins in silicon have demonstrated the longest coherence time of any solid-state device, as a result of the near absence of the hyperfine interaction coupling nuclear spins to electron spins. The other approach makes use of superconducting devices, which to date represent the most advanced solid-state implementation of a qubit. Although this technology has proven mature enough for the implementation of basic quantum algorithms, it presents unique challenges in terms of integrating the large arrays of qubits (millions) necessary for error correction. Given the rather large footprint of an elementary superconducting qubit, this platform needs to demonstrate its viability in terms of scalability. In the long term, both qubit platforms need to be integrated into a larger system comprising the control electronics that route the necessary signals to the physical qubit layer. At IMEC we are pursuing these research lines, leveraging our extensive know-how in large-scale integration and system architecture.
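
As a purely illustrative aside (not taken from the talk), the superposition property mentioned above can be sketched numerically: a single qubit state is a normalized complex 2-vector, an arbitrary linear combination of the basis states |0> and |1>.

    import numpy as np

    # Basis states |0> and |1> of a single qubit.
    ket0 = np.array([1.0, 0.0], dtype=complex)
    ket1 = np.array([0.0, 1.0], dtype=complex)

    # An arbitrary superposition |psi> = alpha|0> + beta|1>,
    # with |alpha|^2 + |beta|^2 = 1.
    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
    psi = alpha * ket0 + beta * ket1

    # Measurement probabilities for the outcomes "0" and "1" sum to one.
    p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
    print(p0, p1, np.isclose(p0 + p1, 1.0))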

Biography
Massimo Mongillo holds a Master's degree in Physics (2005) from the University of Naples and a PhD in Nanophysics (2010) from Université Joseph Fourier in Grenoble. His research has focused on the physics of nanoscale silicon devices, quantum transport and superconductivity. In 2015 he joined IMEC to develop devices based on two-dimensional materials. Since 2017 he has been in the Quantum Computing group, working on the integration of superconducting and spin qubits.

14:45 End