• Quantum Computing
    by Joanna Ptasinski
    Space and Naval Warfare Systems Center Pacific (SSC PAC), USA

    Quantum computation is an interdisciplinary scientific field devoted to building quantum computers and quantum information processing systems. Research on quantum computation focuses not only on the physical systems themselves, but also on building and running algorithms that exploit the physical properties of quantum computers.  Several promising developments have positioned quantum computation as a key element of modern science, mainly:  (a) the development of novel and powerful methods of computation that may significantly increase our processing power for certain problems, (b) the growing number of quantum computing applications across branches of science and technology (e.g. image processing, computational geometry, pattern recognition, and warfare), and (c) the simulation of complex physical systems and mathematical problems for which no known classical digital computer algorithm is efficient.

    As the computing revolution has produced faster, smaller, and more powerful computers, it has become increasingly difficult to keep up with Moore's law, which states that the number of transistors on a microprocessor chip, and with it the chip's performance, doubles every two years or so.  This doubling has already begun to falter, due to the heat that is unavoidably generated when more and more silicon circuitry is packed into the same small area. Additionally, microprocessors currently have circuit features that are around 14 nanometers across, smaller than most viruses. By the mid-2020s, we are projected to reach the 2–3-nanometer limit, where features are just about 10 atoms across. At this scale, electron behavior is governed by quantum uncertainties that will make transistors hopelessly unreliable.  This changing landscape opens the possibility of an entirely new type of information processing, one that operates according to the laws of quantum physics, and for certain information processing tasks does so in a radically more efficient manner.

    In this tutorial we will present an overview of quantum computing, including quantum annealing methods.  We’ll then discuss the physical implementation of an optical quantum walk at the chip scale using an array of silicon nitride beam splitters.  A quantum walk is the quantum analogue of a classical random walk on a graph, and it serves as a tool for building quantum algorithms.  Unlike a classical random walk, where the process advances randomly, a quantum walk evolves in a superposition across different vertices.  For example, a quantum walk on a line is characterized by a faster spread and a non-Gaussian distribution of the walker’s final position.  The tutorial will provide simple examples to show the drastically different behavior of traditional random walks versus quantum walks.
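    The faster spread mentioned above can be reproduced numerically. The sketch below (a minimal numpy model, not the silicon-nitride chip implementation discussed in the tutorial) simulates the standard Hadamard-coined quantum walk on a line and compares its position spread with that of a classical random walk: after n steps the classical walker's standard deviation grows like sqrt(n), while the quantum walker's grows linearly in n.

```python
import numpy as np

def quantum_walk(steps):
    """Hadamard (coined) quantum walk on a line.

    amp[i, c] is the amplitude for position i - steps with coin state c.
    """
    n = 2 * steps + 1
    amp = np.zeros((n, 2), dtype=complex)
    amp[steps, :] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard coin operator
    for _ in range(steps):
        amp = amp @ H.T                                # toss the quantum coin
        new = np.zeros_like(amp)
        new[1:, 0] = amp[:-1, 0]                       # coin state 0: step right
        new[:-1, 1] = amp[1:, 1]                       # coin state 1: step left
        amp = new
    return (np.abs(amp) ** 2).sum(axis=1)              # position distribution

steps = 50
positions = np.arange(-steps, steps + 1)
qprob = quantum_walk(steps)
q_std = np.sqrt((qprob * positions ** 2).sum())        # quantum walk spread
c_std = np.sqrt(steps)                                 # classical walk spread
print(q_std, c_std)  # ballistic (~steps) vs diffusive (~sqrt(steps)) spreading
```

Plotting `qprob` against `positions` also exhibits the non-Gaussian, double-peaked final distribution referred to in the text, in contrast to the classical binomial bell curve.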


    Joanna Ptasinski

    Joanna Ptasinski earned her PhD degree in Electrical Engineering with a concentration in Photonics from the University of California San Diego in 2014.  She is currently at the Space and Naval Warfare Systems Center Pacific (SSC PAC), where she serves as branch head of the Cryogenic Electronics & Quantum Research Branch.  Her career at SSC PAC spans 15 years and includes a diverse set of efforts, both fundamental and applied, with applications in communications and sensing.  Dr. Ptasinski has led projects in photonic integrated circuits, where she explored thermal stabilization of devices using liquid crystals and characterized the thermo-optic coefficients of various liquid crystal mixtures; in plasmonics for bio-chemical sensing; and in networks, with an emphasis on anomaly detection and system interoperability.  Her current research interests include quantum computing and quantum information processing for the modeling of complex interactions and forecasting.

    More information can be found at http://www.public.navy.mil/spawar/Pacific/Pages/default.aspx

    Closed-form Design of Optimal FIR Filters
    by Pavel Zahradnik
    Czech Technical University in Prague, Czech Republic
    Tutorial slides can be downloaded HERE!

    Digital filters are almost omnipresent in contemporary technology. Although filter design may seem to be a closed chapter after many decades of research, the contrary is true. Among digital filters, finite impulse response (FIR) filters are frequently appreciated in numerous applications because of their inherently linear phase frequency response and stability. The holy grail among FIR filters is the filter that is optimal in terms of its length for a specified selectivity. These are filters with an equiripple magnitude frequency response. The origin of the polynomial equiripple approximation can be attributed to P. L. Chebyshev, who introduced the first equiripple approximation of a constant value in the form of his famous polynomial.
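    The equiripple property Chebyshev introduced is easy to verify numerically. The short check below (an illustration of the classical polynomial only, not of the Zolotarev-type filter designs the tutorial derives) confirms that on [-1, 1] the Chebyshev polynomial T_n stays within ±1 and touches that bound n + 1 times with alternating sign, at x = cos(kπ/n).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
x = np.linspace(-1.0, 1.0, 20001)
Tn = C.chebval(x, [0] * n + [1])        # coefficient vector selecting T_8

# Equiripple inside [-1, 1]: |T_n| never exceeds 1 on the interval ...
print(np.max(np.abs(Tn)))               # 1.0 (attained at the extrema)

# ... and the bound is reached n + 1 times, at x_k = cos(k*pi/n),
# with alternating sign T_n(x_k) = (-1)^k:
extrema = np.cos(np.pi * np.arange(n + 1) / n)
print(C.chebval(extrema, [0] * n + [1]))  # alternates between +1 and -1
```

This alternation of n + 1 extrema is exactly the Chebyshev alternation condition that characterizes optimal (minimax) polynomial approximations, which is why equiripple behavior and optimal filter length go hand in hand.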

    In the introductory part of this tutorial, we will trace the history of the polynomial equiripple approximation, from its roots and the first selective equiripple polynomials introduced by E. I. Zolotarev up to the latest results. We will also outline why progress in equiripple polynomial approximation has been slow despite long-term efforts.

    In the initial technical part, we will provide the terminology and the underlying mathematical background in terms of elliptic functions.

    The core parts of this tutorial will introduce polynomial equiripple approximations of particular types of FIR filters, namely narrow band-pass filters, notch filters, DC-notch filters, comb filters, half-band filters, and low-pass filters, including design examples. Each polynomial equiripple approximation will comprise an approximating polynomial, the differential equation of the approximating polynomial, a degree equation, and a simple procedure for the robust evaluation of the filter's impulse response. Typical applications of these filters will be mentioned as well. Further, equiripple filter banks, the cascade form of equiripple FIR filters, and the precise tuning of equiripple FIR filters will be presented.
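    To fix the notation for the filter types above, the sketch below evaluates the shortest possible closed-form FIR notch filter. This is not one of the Zolotarev equiripple designs the tutorial derives, just the elementary length-3 case: placing a conjugate zero pair on the unit circle at the notch frequency w0 gives the symmetric (hence linear-phase) impulse response h = [1, -2 cos(w0), 1], up to a gain normalization.

```python
import numpy as np

def fir_freq_response(h, w):
    """Evaluate H(e^{jw}) = sum_k h[k] e^{-jwk} for FIR coefficients h."""
    k = np.arange(len(h))
    return np.array([(h * np.exp(-1j * wi * k)).sum() for wi in np.atleast_1d(w)])

# Zeros at e^{+/- j w0} on the unit circle produce the notch.
w0 = np.pi / 2                                # notch at a quarter of fs
h = np.array([1.0, -2.0 * np.cos(w0), 1.0])   # symmetric -> linear phase
h /= abs(fir_freq_response(h, 0.0)[0])        # normalize to unit gain at DC

print(abs(fir_freq_response(h, w0)[0]))       # ~0: the notch
print(abs(fir_freq_response(h, 0.0)[0]))      # 1.0: passband gain
```

The point of the tutorial's closed-form designs is that far more selective notch, comb, and band-pass responses can likewise be written down directly, with the degree equation predicting the required filter length, instead of being found iteratively.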

    A major emphasis will be placed on the robustness of the closed-form approach, which by far outperforms any numerical design such as the Parks-McClellan (Remez) iterative approach. Examples will be included.

    The final part of this tutorial will present open problems and challenges ahead.

    A Q&A part will conclude this half day (3 hours) tutorial.

    Contact: zahradni@fel.cvut.cz

    Pavel Zahradnik (PhD) is a full professor at the Department of Telecommunication Engineering of the Faculty of Electrical Engineering at the Czech Technical University in Prague, Czech Republic. He received the M.Sc. and Ph.D. degrees in telecommunication engineering from the Czech Technical University in Prague in 1986 and 1991, respectively. He is an honorary professor at Amity University, Noida. In 1993-1994 he was a fellow of the Swiss government at the Paul Scherrer Institut, Villigen, Switzerland, doing research in microwave tomography for the real-time localization of tumors. In 1996-1997 he began his pioneering work on the analytical design of FIR filters during his Alexander von Humboldt fellowship at the Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. For 30 years, Prof. Zahradnik has been working on digital signal processing, its algorithms, and their implementation. Besides digital signal processing and filter design, his interests include microprocessor and FPGA technology. Prof. Zahradnik is a leading authority on the closed-form design of equiripple FIR filters.

    Software Defined Networking, Network Function Virtualization and the future Internet
    by Doan B. Hoang
    University of Technology Sydney (UTS), Australia
    Tutorial Part I slides can be downloaded HERE!

    Tutorial Part II slides can be downloaded HERE!

    In the world of the Internet of Everything (IoE), where billions of smart devices are connected to the Internet and an enormous number of services are generated and consumed by users, how do we manage the complexity of this future Internet? Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies are forming the foundation of the future Internet by providing services on demand through the efficient virtualization of resources, the dynamic provisioning of those resources for services, and the automated management of networks and network elements. This tutorial will explore SDN, NFV, and their roles in the future Internet.

    The first part of the tutorial will cover the fundamentals of SDN, from the reasons for its introduction through the SDN architecture, functionality, protocols, and components, to its latest developments. The focus will be on SDN controllers, the OpenFlow protocol, and SDN switches.
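    The central abstraction behind SDN switches can be previewed with a toy model. The sketch below (conceptual only; real OpenFlow adds rule priorities, counters, multiple tables, and a wire protocol between controller and switch, and all names here are illustrative) shows a switch as a match-action flow table that a controller populates: each rule matches header fields and yields an action, and a table miss is sent back to the controller.

```python
# Toy model of an SDN switch's match-action flow table.

def make_switch(flow_table):
    """flow_table: list of (match_fields, action) pairs, tried in order."""
    def handle(packet):
        for match, action in flow_table:
            # A rule matches when every field it names agrees with the packet.
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"   # table miss: punt to the controller
    return handle

# Rules "pushed" by a controller: match on header fields, return an action.
table = [
    ({"dst_ip": "10.0.0.2"}, "forward:port2"),
    ({"dst_ip": "10.0.0.3", "tcp_dst": 22}, "drop"),
    ({"dst_ip": "10.0.0.3"}, "forward:port3"),
]
switch = make_switch(table)

print(switch({"dst_ip": "10.0.0.2", "tcp_dst": 80}))  # forward:port2
print(switch({"dst_ip": "10.0.0.3", "tcp_dst": 22}))  # drop
print(switch({"dst_ip": "10.0.0.9"}))                 # send-to-controller
```

The separation visible even in this toy, a dumb forwarding table versus the logic that fills it, is precisely the control-plane/data-plane split that the tutorial's discussion of controllers, OpenFlow, and switches elaborates.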

    The second part of the tutorial will present the development of NFV, its architecture, and its mechanisms for provisioning and orchestrating virtual network functions. The focus will be on virtualization; NFV Infrastructure (NFVI) management, with the OpenStack platform as an exemplar of the applicable functionality; and MANO, the virtual network function management and orchestration component of NFV.
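    The orchestration idea at the heart of NFV can also be previewed in miniature. The sketch below (a conceptual toy, not the ETSI MANO interfaces or any OpenStack API; the VNF names and addresses are invented for illustration) models virtual network functions as software stages that an orchestrator composes into a service chain applied to each packet in order.

```python
# Toy model of NFV service function chaining.

def firewall(packet):
    """VNF 1: drop traffic aimed at a blocked port (telnet here)."""
    return None if packet.get("dst_port") == 23 else packet

def nat(packet):
    """VNF 2: rewrite private source addresses to a public one."""
    if packet is not None and packet["src_ip"].startswith("10."):
        packet = {**packet, "src_ip": "203.0.113.1"}
    return packet

def chain(*vnfs):
    """Orchestrate VNFs into a service chain applied in order."""
    def run(packet):
        for vnf in vnfs:
            packet = vnf(packet)
            if packet is None:        # a VNF dropped the packet
                return None
        return packet
    return run

service = chain(firewall, nat)
print(service({"src_ip": "10.0.0.5", "dst_port": 80}))  # source rewritten
print(service({"src_ip": "10.0.0.5", "dst_port": 23}))  # None: dropped
```

Because each stage is just software, the orchestrator can instantiate, reorder, or scale these functions on demand, which is the property NFV exploits when MANO places VNFs onto the virtualized infrastructure.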

    The third part of the tutorial will discuss how these complementary technologies can be integrated into the future Internet where Cloud computing has already established itself as an alternative IT infrastructure for services and where billions of IoT devices and services are coming into play along with their connectivity complexity and security concerns.

    Doan B. Hoang is a Professor in the School of Electrical and Data Engineering, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS). Currently, he leads research in Virtualized Infrastructures and Cyber Security (VICS). His current research interests include optimization and machine learning for software-defined infrastructures (Cloud, SDN, NFV, and IoT) and services; security capability maturity models and quantitative security metrics for cyber security; IoT security architecture and trust assessment; and models for assistive healthcare. Professor Hoang has published over 200 research papers and graduated 16 PhD and 8 Master's students under his supervision. Before UTS, he was with the Basser Department of Computer Science, University of Sydney. He has held various visiting positions, including visiting professorships at the University of California, Berkeley; the Nortel Networks Technology Centre in Santa Clara; the University of Waterloo; Carlos III University of Madrid; Nanyang Technological University; Lund University; and POSTECH University. While on sabbatical at UC Berkeley and Nortel Networks, he participated in and led several DARPA-sponsored projects, including Openet, Active/Programmable Networks, and Data-Intensive Service-on-Demand Enabled by Next Generation Dynamic Optical Networks. He has delivered keynotes in recent years, including at the 2017 NAFOSTED Conference on Information and Computer Science (NICS) on software-defined infrastructures and software-defined security, the 2016 IEEE NetSoft workshop on security in virtualised networks, the 2014 Asia-Pacific Conference on Computer Aided System Engineering (APCASE) on research challenges in Cloud computing, the Internet of Things, and Big Data, and the 2013 IET/IEEE Second International Conference on Smart and Sustainable City (ICSSC) on Cloud computing and wireless sensor/actor networks for smart cities.

