Joseph Traub (Columbia University (on sabbatical at Harvard))
We introduce the notion of strong quantum speedup. To compute this speedup one must know the classical computational complexity. What is it about the problems of quantum physics and quantum chemistry that enables us to get lower bounds on the classical complexity?
Juerg Froehlich (ETH Zurich)
After a brief general introduction to the subject of quantum probability theory, quantum dynamical systems are introduced and some of their probabilistic features are described. On the basis of a few general principles - "duality between observables and indeterminates", "loss of information" and "entanglement generation" - a quantum theory of experiments and measurements is developed, and the "theory of von Neumann measurements" is outlined. Finally, a theory of non-demolition measurements is sketched, and, as an application of the Martingale Convergence Theorem, it is shown how facts emerge in non-demolition measurements.
Matthias Troyer (ETH Zurich)
About a century after the development of quantum mechanics we have now reached an exciting time where non-trivial devices that make use of quantum effects can be built. While a universal quantum computer of non-trivial size is still out of reach, there are a number of commercial and experimental devices: quantum random number generators, quantum encryption systems, and analog quantum simulators. In this colloquium I will present some of these devices and the validation tests we performed on them. Quantum random number generators use the inherent randomness in quantum measurements to produce true random numbers, unlike classical pseudorandom number generators, which are inherently deterministic. Optical lattice emulators use ultracold atomic gases in optical lattices to mimic typical models of condensed matter physics. Finally, I will discuss the devices built by the Canadian company D-Wave Systems, which are special-purpose quantum simulators for solving hard classical optimization problems.
Sean Hallgren (Penn State (visiting MIT))
Computing the unit group of a number field is one of the main problems in computational algebraic number theory. For number fields of arbitrary degree, a polynomial-time algorithm should run in time polynomial in both the degree and the logarithm of the discriminant. The best classical algorithms for computing the unit group take time exponential in both parameters. There is a quantum algorithm that runs in time polynomial in the logarithm of the discriminant but exponential in the degree. We give a quantum algorithm that is polynomial in both parameters of the number field. The proof works via a reduction to a continuous Hidden Subgroup Problem.
Alexander Belov (MIT)
In this talk, I describe some recent quantum algorithms for the problem of learning and testing juntas. For the main part of the talk, I study the following variant of the junta learning problem. We are given oracle access to a Boolean function f on n variables that only depends on k variables and, when restricted to them, equals some predefined symmetric function h. The task is to identify the variables the function depends on. This is a generalization of the Bernstein-Vazirani problem (when h is the XOR function) and the (combinatorial) group testing problem (when h is the OR function). I describe an optimal quantum algorithm for the case when h is the OR or the EXACT-HALF function. For the case of the MAJORITY function, I obtain an upper bound of O(k^{1/4}). Additionally, I describe an application of these techniques to the problem of testing juntas, which is joint work with Andris Ambainis, Oded Regev, and Ronald de Wolf.
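As a point of reference for the XOR case, the Bernstein-Vazirani algorithm recovers the hidden mask with a single quantum query. The following is a minimal numpy simulation of that special case; the bit-width n and mask s are illustrative choices, not values from the talk:

```python
import numpy as np

n = 4        # number of input bits (illustrative)
s = 0b1010   # hidden XOR mask: f(x) = s.x mod 2 (illustrative)

# State after one phase-oracle query on the uniform superposition:
# amplitudes proportional to (-1)^{f(x)}.
xs = np.arange(2**n)
phase = (-1.0) ** np.array([bin(x & s).count("1") % 2 for x in xs])

# Final Hadamard layer: the Walsh-Hadamard transform concentrates
# all amplitude on the basis state |s>.
H = np.array([[1.0, 1.0], [1.0, -1.0]])
Hn = np.eye(1)
for _ in range(n):
    Hn = np.kron(Hn, H)
out = Hn @ phase

recovered = int(np.argmax(np.abs(out)))
print(f"recovered mask: {recovered:0{n}b}")
```

Up to normalization, the output vector has all its weight on index s, so a single measurement identifies the hidden mask.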
Renato Renner (ETH)
Quantum state tomography is the task of estimating the state of a quantum system using measurements. Typically, one is interested in the (unknown) state generated during an experiment which can be repeated arbitrarily often in principle. However, the number of actual runs of the experiment, from which data is collected, is always finite (and often small). As pointed out recently, this may lead to unjustified (or even wrong) claims when employing standard statistical tools without care. In this talk, I will present a method for obtaining reliable estimates from finite tomographic data. Specifically, the method allows the derivation of confidence regions, i.e., subsets of the state space in which the unknown state is contained with probability almost one.
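The gap between finite data and asymptotic claims is visible already for a single qubit: estimating the expectation of Z from N projective measurements is a binomial estimation problem, and an honest claim reports an interval rather than a point estimate. Here is a toy sketch using a standard Wilson score interval (this is not the method of the talk; the true state and N are made-up values):

```python
import math
import numpy as np

rng = np.random.default_rng(seed=1)
p_true = 0.9  # true Prob(outcome = +1) for the measured qubit (made up)
N = 50        # number of experimental runs (made up)
k = int(rng.binomial(N, p_true))  # observed +1 counts

# 95% Wilson score interval for p, then mapped to <Z> = 2p - 1.
z = 1.96
center = (k + z**2 / 2) / (N + z**2)
half = z * math.sqrt(k * (N - k) / N + z**2 / 4) / (N + z**2)
p_lo, p_hi = center - half, center + half
z_lo, z_hi = 2 * p_lo - 1, 2 * p_hi - 1
print(f"<Z> in [{z_lo:.3f}, {z_hi:.3f}] with ~95% confidence")
```

Even this classical example shows the point: with N = 50 runs the interval is wide, and reporting only the point estimate k/N overstates what the data supports.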
Keith Lee (Perimeter Institute)
Quantum field theory provides the framework for the Standard Model of particle physics and plays a key role in physics. However, calculations are generally computationally complex and limited to weak interaction strengths. After an introduction to quantum field theory, I'll describe polynomial-time quantum algorithms for computing relativistic scattering amplitudes in both scalar and fermionic quantum field theories. The algorithms achieve exponential speedup over known classical methods. One of the motivations for this work comes from computational complexity theory. Ultimately, we wish to know what is the computational power of our universe. Studying such quantum algorithms probes whether a universal quantum computer is powerful enough to represent quantum field theory; in other words, is quantum field theory in BQP? Conversely, one can ask whether quantum field theory can represent a universal quantum computer; is quantum field theory BQP-hard? I'll describe our approach to addressing the question of BQP-hardness.
Stephanie Wehner (National University of Singapore)
The second law of thermodynamics tells us which state transformations are so statistically unlikely that they are effectively forbidden. Its original formulation, due to Clausius, states that "Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time." The second law applies to systems composed of many particles; however, we are seeing that one can make sense of thermodynamics in the regime where we have only a small number of particles interacting with a heat bath, or where we have highly correlated systems and wish to make non-statistical statements about them. Is there a second law of thermodynamics in this regime? Here, we find that for processes which are cyclic or very close to cyclic, the second law for microscopic or highly correlated systems takes on a very different form than it does at the macroscopic scale, imposing not just one constraint on what state transformations are possible, but an entire family of constraints. In particular, we find that the quantum Rényi relative entropy distances to the equilibrium state can never increase. We further find that there are three regimes which determine which family of second laws governs state transitions, depending on how cyclic the process is. In one regime one can cause an apparent violation of the usual second law, through a process of embezzling work from a large system which remains arbitrarily close to its original state.
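The monotonicity statement can be checked numerically in the simplest classical (commuting) case: under a stochastic map that partially resets a distribution toward the thermal one, data processing forces every classical Rényi relative entropy to equilibrium to be non-increasing. A toy check (the distributions and the values of alpha are arbitrary illustrative choices):

```python
import numpy as np

def renyi_div(p, q, alpha):
    """Classical Renyi relative entropy D_alpha(p || q)."""
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

p = np.array([0.7, 0.2, 0.1])  # initial state (arbitrary)
g = np.array([0.5, 0.3, 0.2])  # "thermal" equilibrium state (arbitrary)

# Partial thermalization: with probability lam, reset to g. This is a
# stochastic map with fixed point g, so the divergence to equilibrium
# cannot increase along the evolution.
for alpha in (0.5, 2.0, 3.0):
    divs = [renyi_div((1 - lam) * p + lam * g, g, alpha)
            for lam in np.linspace(0.0, 1.0, 6)]
    assert all(a >= b - 1e-12 for a, b in zip(divs, divs[1:]))
    print(f"alpha={alpha}: D falls from {divs[0]:.4f} to {divs[-1]:.4f}")
```

The quantum statement in the abstract concerns non-commuting states and quantum Rényi divergences, but the commuting case above already exhibits the "entire family of constraints": each alpha gives a separate non-increasing quantity.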
Isaac Kim (Perimeter Institute)
For a general multipartite quantum state, we formulate a locally checkable condition under which the expectation values of certain nonlocal observables are completely determined by the expectation values of some local observables. The condition is satisfied by ground states of gapped quantum many-body systems in one and two spatial dimensions, assuming a widely conjectured form of area law is correct. Its implications for quantum state tomography, quantum state verification, and quantum error-correcting codes are discussed.
Sergei Bravyi (IBM)
Andris Ambainis (University of Latvia)
Robin Kothari (University of Waterloo)
We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous- and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also significantly simplify the analysis of this conversion, avoiding the need for a complex fault correction procedure. Our simplification relies on a new form of "oblivious amplitude amplification" that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error. This is joint work with Dominic W. Berry, Andrew M. Childs, Richard Cleve, and Rolando D. Somma. Available at http://arxiv.org/abs/1312.1414
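For contrast with the product-formula approach the abstract improves on, a first-order Trotter decomposition of a toy two-qubit Hamiltonian shows the error shrinking only polynomially (roughly as 1/r) in the number of steps r; the Hamiltonian and evolution time are arbitrary illustrative choices:

```python
import numpy as np

def expmi(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy sparse Hamiltonian H = A + B with non-commuting parts.
A = np.kron(Z, Z)
B = np.kron(X, I2) + np.kron(I2, X)
t = 1.0

U_exact = expmi(A + B, t)
errs = []
for r in (4, 16, 64):
    step = expmi(A, t / r) @ expmi(B, t / r)
    U_trotter = np.linalg.matrix_power(step, r)
    errs.append(np.linalg.norm(U_exact - U_trotter, ord=2))
    print(f"r={r:3d}: spectral-norm error {errs[-1]:.2e}")
```

Reaching error epsilon with this first-order formula needs r = O(1/epsilon) steps; the algorithm in the abstract instead achieves query complexity sublogarithmic in 1/epsilon.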
Robin Blume-Kohout (tentative) (Sandia National Labs)