Google Tech Talks, December 6, 2007

ABSTRACT This tech talk series explores the enormous opportunities afforded by the emerging field of quantum computing. The exploitation of quantum phenomena not only offers tremendous speed-ups for important algorithms but may also prove key to achieving genuine synthetic intelligence. We argue that understanding higher brain function requires reference to quantum mechanics as well. These talks look at the topic of quantum computing from mathematical, engineering and neurobiological perspectives, and we attempt to present the material so that the basic concepts can be understood by listeners with no background in quantum physics.

This first talk of the series introduces the basic concepts of quantum computing. We start by looking at the difference between describing a classical system and a quantum mechanical one. The talk discusses the Turing machine in quantum mechanical terms and introduces the notion of a qubit. We study the gate model of quantum computing and look at the famous quantum algorithms of Deutsch, Grover and Shor. Finally, we talk about decoherence and how it destroys superposition states, which is the main obstacle to building large-scale quantum computers. We clarify widely held misconceptions about decoherence and explain that environmental interaction tends to choose a basis in state space in which the system decoheres, while leaving coherences in other coordinate systems intact.

Speaker: Hartmut Neven
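The notion of a qubit and superposition that the abstract mentions can be sketched in a few lines of plain Python. This is a toy amplitude simulation written for illustration, not material from the talk: `apply_gate` and `measure_probs` are hypothetical helper names, and the Hadamard gate is the standard textbook matrix.

```python
# Toy single-qubit simulation: a qubit is a 2-component complex vector of
# amplitudes, gates are 2x2 unitary matrices, and measurement probabilities
# follow the Born rule (|amplitude|^2).
import math

def apply_gate(gate, state):
    """Apply a 2x2 gate (list of rows) to a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def measure_probs(state):
    """Born rule: probability of each basis outcome is |amplitude|^2."""
    return [abs(a) ** 2 for a in state]

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2).
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

ket0 = [1.0, 0.0]           # the classical bit 0, written as a qubit
plus = apply_gate(H, ket0)  # an equal superposition of |0> and |1>
print(measure_probs(plus))  # both measurement outcomes equally likely
```

Applying H a second time returns the state to |0>, a small example of the interference that the gate-model algorithms of Deutsch, Grover and Shor exploit at scale.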
An Overview of High Performance Computing and Challenges for the Future
Google Tech Talks, January 25, 2008

ABSTRACT In this talk we examine how high performance computing has changed over the last decade and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures.

Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at …