
Thursday, October 20, 2022

Discovering the Multiverse: Quantum Mechanics and Hugh Everett III, with Peter Byrne — Manifold #22

 

Peter Byrne is an investigative reporter and science writer based in Northern California. His popular biography, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family (Oxford University Press, 2010), was followed by The Everett Interpretation of Quantum Mechanics: Collected Works 1957-1980 (Princeton University Press, 2012), co-edited with philosopher of science Jeffrey A. Barrett of UC Irvine. 

Everett's formulation of quantum mechanics, which implies the existence of a quantum multiverse, is favored by a significant (and growing) fraction of working physicists. 

Steve and Peter discuss: 

0:00 How Peter Byrne came to write a biography of Hugh Everett 
18:09 Everett’s personal life and groundbreaking thesis as a catalyst for the book 
24:00 Everett and Decoherence 
31:25 Reaction of other physicists to Everett’s many worlds theory 
40:46 Steve’s take on Everett’s many worlds theory 
43:41 Peter on the bifurcation of science and philosophy 
49:21 Everett’s post-academic life 
52:58 How Hugh Everett is remembered now 


Thursday, April 07, 2022

Scott Aaronson: Quantum Computing, Unsolvable Problems, & Artificial Intelligence — Manifold podcast #9

 

Scott Aaronson is the David J. Bruton Centennial Professor of Computer Science at The University of Texas at Austin, and director of its Quantum Information Center. Previously, he taught for nine years in Electrical Engineering and Computer Science at MIT. His research interests center around the capabilities and limits of quantum computers, and computational complexity theory more generally. 

Scott also writes the blog Shtetl Optimized: https://scottaaronson.blog/ 

Steve and Scott discuss: 

1. Scott's childhood and education, first exposure to mathematics and computers. 

2. How he became interested in computational complexity, pursuing it rather than AI/ML. 

3. The development of quantum computation and quantum information theory from the 1980s to the present. 

4. Scott's work on quantum supremacy. 

5. AGI and AI safety.


Thursday, March 10, 2022

Vlatko Vedral: Oxford Theoretical Physicist on Quantum Superposition of Living Creatures — Manifold Podcast #7

 

Vlatko Vedral is Professor in the Department of Physics at the University of Oxford and at the Centre for Quantum Technologies (CQT), National University of Singapore. He is known for his research on entanglement and quantum information theory. 

Steve and Vlatko discuss: 

1. History of quantum information theory, entanglement, and quantum computing 

2. Recent lab experiments that create superposition states of macroscopic objects, including a living creature (tardigrade) 

3. Whether quantum mechanics implies the existence of many worlds: are you in a superposition state right now? 

4. Present status and future of quantum computing

Resources 


Entanglement Between Superconducting Qubits and a Tardigrade: https://arxiv.org/pdf/2112.07978.pdf 

Macroscopic Superposition States: entanglement of a macroscopic living organism (tardigrade) with a superconducting qubit (Infoproc blog discussion, including the Sidney Coleman talk Quantum Mechanics in Your Face) 

Tuesday, August 17, 2021

John Preskill interview by Sean Carroll

 

This is a great interview of John Preskill by Sean Carroll. 

Both are many worlders. At about 20 minutes John says:
I'm an Everettian... 
I'm comfortable with nothing happening in the world besides unitary evolution ... 
Measurement isn't something fundamentally different. ... 
It seems minimal: you know there's nothing happening but the Schrodinger equation and things are evolving, and if we can reconcile that with what we observe about physics ...

In Ten Years of Quantum Coherence and Decoherence I listed a number of prominent theorists who have expressed some degree of belief in many worlds.

Q1. (largely mathematical): Does the phenomenology of pure state evolution in a closed system (e.g., the universe) reproduce Copenhagen for observers in the system? 
This is a question about dynamical evolution: of the system as a whole, and of various interacting subsystems. It's not a philosophical question and, in my opinion, it is what theorists should focus on first. Although complicated, it is still reasonably well-posed from a mathematical perspective, at least as far as foundational physics questions go. 
I believe the evidence is strong that the answer to Q1 is Yes, although the issue of the Born rule lingers (too complicated to discuss here, but see various papers I have written on the topic, along with work by others such as Deutsch and Zurek). It is clear from Weinberg's writing that he and I agree that the answer is Yes, modulo the Born rule. 
Define this position to be 
Y* := "Yes, possibly modulo Born" 
There are some theorists who do not agree with Y* (see the survey results above), but they are mostly people who have not thought it through carefully, in my opinion. 
I don't know of any explicit arguments for how Y* fails, and our recent results applying the von Neumann quantum ergodic theorem (vN QET) strengthen my confidence in Y*. 
I believe (based on published remarks or from my personal interactions) that the following theorists have opinions that are Y* or stronger: Schwinger, DeWitt, Wheeler, Deutsch, Hawking, Feynman, Gell-Mann, Zeh, Hartle, Weinberg, Zurek, Guth, Preskill, Page, Cooper (BCS), Coleman, Misner, Arkani-Hamed, etc. 
But there is a generational issue, with many older (some now deceased!) theorists being reticent about expressing Y* even if they believe it. This is shifting over time and, for example, a poll of younger string theorists or quantum cosmologists would likely find a strong majority expressing Y*. 
[ Social conformity and groupthink are among the obstacles preventing broader understanding of Q1. That is, in part, why I have listed specific high profile individuals as having reached the unconventional but correct view! ]
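
As a toy illustration of the dynamical content of Q1 (a minimal numpy sketch of my own, not anything from the post quoted above): purely unitary evolution that lets a qubit interact with even a handful of "environment" qubits erases the off-diagonal terms of the qubit's reduced density matrix, so an observer inside the closed system sees Copenhagen-style definite outcomes with Born-rule weights.

    import numpy as np

    # System qubit in (|0> + |1>)/sqrt(2); n_env environment qubits in |0>.
    n_env = 8
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    for _ in range(n_env):
        psi = np.kron(psi, np.array([1.0, 0.0]))
    psi = psi.reshape([2] * (n_env + 1))

    # Decoherence as unitary dynamics: each environment qubit copies the
    # system's value (a CNOT with the system as control). No collapse anywhere.
    for k in range(1, n_env + 1):
        psi[1] = np.flip(psi[1], axis=k - 1).copy()

    # Partial trace over the environment: the system's reduced density matrix.
    m = psi.reshape(2, -1)
    rho = m @ m.conj().T
    print(np.round(rho, 6))   # -> [[0.5 0.] [0. 0.5]]: coherences are gone

The global state is still pure (a GHZ-like state), but locally the qubit is indistinguishable from a classical 50/50 mixture; weaker couplings produce the same suppression gradually rather than in one step.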

Tuesday, July 13, 2021

Peter Shor on Quantum Factorization and Error Correction

 

This talk by Peter Shor describes the discovery of his quantum algorithm for prime factorization, and the discovery of quantum error correcting codes. The talk commemorates the first conference (Endicott House meeting) on the physics of computation in 1981. See 40 Years of Quantum Computation and Quantum Information.

Shor did not attend the 1981 meeting, where Feynman gave the keynote address Simulating Physics With Computers -- he was in his senior year at Caltech. But he recalls a talk that Feynman gave around the same time, on the possibility that negative probabilities might illuminate the EPR experiment and the Bell inequalities. 

Coincidentally, in my senior year (1986) I got Feynman to give a talk to the Society of Physics Students on this very topic! (I think I was president of SPS at the time.)

Sunday, May 02, 2021

40 Years of Quantum Computation and Quantum Information


This is a great article on the 1981 conference which one could say gave birth to quantum computing / quantum information.
Technology Review: Quantum computing as we know it got its start 40 years ago this spring at the first Physics of Computation Conference, organized at MIT’s Endicott House by MIT and IBM and attended by nearly 50 researchers from computing and physics—two groups that rarely rubbed shoulders. 
Twenty years earlier, in 1961, an IBM researcher named Rolf Landauer had found a fundamental link between the two fields: he proved that every time a computer erases a bit of information, a tiny bit of heat is produced, corresponding to the entropy increase in the system. In 1972 Landauer hired the theoretical computer scientist Charlie Bennett, who showed that the increase in entropy can be avoided by a computer that performs its computations in a reversible manner. Curiously, Ed Fredkin, the MIT professor who cosponsored the Endicott Conference with Landauer, had arrived at this same conclusion independently, despite never having earned even an undergraduate degree. Indeed, most retellings of quantum computing’s origin story overlook Fredkin’s pivotal role. 
Fredkin’s unusual career began when he enrolled at the California Institute of Technology in 1951. Although brilliant on his entrance exams, he wasn’t interested in homework—and had to work two jobs to pay tuition. Doing poorly in school and running out of money, he withdrew in 1952 and enlisted in the Air Force to avoid being drafted for the Korean War. 
A few years later, the Air Force sent Fredkin to MIT Lincoln Laboratory to help test the nascent SAGE air defense system. He learned computer programming and soon became one of the best programmers in the world—a group that probably numbered only around 500 at the time. 
Upon leaving the Air Force in 1958, Fredkin worked at Bolt, Beranek, and Newman (BBN), which he convinced to purchase its first two computers and where he got to know MIT professors Marvin Minsky and John McCarthy, who together had pretty much established the field of artificial intelligence. In 1962 he accompanied them to Caltech, where McCarthy was giving a talk. There Minsky and Fredkin met with Richard Feynman ’39, who would win the 1965 Nobel Prize in physics for his work on quantum electrodynamics. Feynman showed them a handwritten notebook filled with computations and challenged them to develop software that could perform symbolic mathematical computations. ... 
... in 1974 he headed back to Caltech to spend a year with Feynman. The deal was that Fredkin would teach Feynman computing, and Feynman would teach Fredkin quantum physics. Fredkin came to understand quantum physics, but he didn’t believe it. He thought the fabric of reality couldn’t be based on something that could be described by a continuous measurement. Quantum mechanics holds that quantities like charge and mass are quantized—made up of discrete, countable units that cannot be subdivided—but that things like space, time, and wave equations are fundamentally continuous. Fredkin, in contrast, believed (and still believes) with almost religious conviction that space and time must be quantized as well, and that the fundamental building block of reality is thus computation. Reality must be a computer! In 1978 Fredkin taught a graduate course at MIT called Digital Physics, which explored ways of reworking modern physics along such digital principles. 
Feynman, however, remained unconvinced that there were meaningful connections between computing and physics beyond using computers to compute algorithms. So when Fredkin asked his friend to deliver the keynote address at the 1981 conference, he initially refused. When promised that he could speak about whatever he wanted, though, Feynman changed his mind—and laid out his ideas for how to link the two fields in a detailed talk that proposed a way to perform computations using quantum effects themselves. 
Feynman explained that computers are poorly equipped to help simulate, and thereby predict, the outcome of experiments in particle physics—something that’s still true today. Modern computers, after all, are deterministic: give them the same problem, and they come up with the same solution. Physics, on the other hand, is probabilistic. So as the number of particles in a simulation increases, it takes exponentially longer to perform the necessary computations on possible outputs. The way to move forward, Feynman asserted, was to build a computer that performed its probabilistic computations using quantum mechanics. 
[ Note to reader: the discussion in the last sentences above is a bit garbled. The exponential difficulty that classical computers have with quantum calculations has to do with entangled states which live in Hilbert spaces of exponentially large dimension. Probability is not really the issue; the issue is the huge size of the space of possible states. Indeed quantum computations are strictly deterministic unitary operations acting in this Hilbert space. ] 
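
To make the note above concrete (a quick sketch of my own, not from the article): the joint state of n qubits lives in a Hilbert space of dimension 2^n, so merely storing the state vector classically becomes hopeless after a few dozen qubits.

    # Memory required to store the full state vector of n qubits,
    # at 16 bytes per complex amplitude (complex128).
    for n in (10, 20, 30, 40, 50):
        amps = 2 ** n
        print(f"{n:2d} qubits: {amps:.1e} amplitudes, {amps * 16:.1e} bytes")

    # 30 qubits already needs ~16 GiB (a laptop's worth of RAM);
    # 50 qubits needs ~16 PiB. This exponential growth in dimension,
    # not randomness per se, is what defeats classical simulation.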

Feynman hadn’t prepared a formal paper for the conference, but with the help of Norm Margolus, PhD ’87, a graduate student in Fredkin’s group who recorded and transcribed what he said there, his talk was published in the International Journal of Theoretical Physics under the title “Simulating Physics with Computers.” ...

Feynman's 1981 lecture Simulating Physics With Computers.

Fredkin was correct about the (effective) discreteness of spacetime, although he probably did not realize that this is a consequence of gravitational effects: see, e.g., Minimum Length From First Principles. In fact, Hilbert space (the state space of quantum mechanics) itself may be discrete.



Related: 


My paper on the Margolus-Levitin Theorem in light of gravity: 

We derive a fundamental upper bound on the rate at which a device can process information (i.e., the number of logical operations per unit time), arising from quantum mechanics and general relativity. In Planck units a device of volume V can execute no more than the cube root of V operations per unit time. We compare this to the rate of information processing performed by nature in the evolution of physical systems, and find a connection to black hole entropy and the holographic principle. 
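
Stated schematically (my paraphrase of the abstract; the precise constants and derivation are in the paper):

    % In Planck units (hbar = c = G = 1), a device of volume V obeys
    \frac{dN_{\rm ops}}{dt} \;\lesssim\; V^{1/3}
    % i.e., the operation rate scales with the device's linear size rather
    % than its volume -- the same area/length scaling that appears in
    % black hole entropy and the holographic principle.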

Participants in the 1981 meeting:
 

Physics of Computation Conference, Endicott House, MIT, May 6–8, 1981. 1 Freeman Dyson, 2 Gregory Chaitin, 3 James Crutchfield, 4 Norman Packard, 5 Panos Ligomenides, 6 Jerome Rothstein, 7 Carl Hewitt, 8 Norman Hardy, 9 Edward Fredkin, 10 Tom Toffoli, 11 Rolf Landauer, 12 John Wheeler, 13 Frederick Kantor, 14 David Leinweber, 15 Konrad Zuse, 16 Bernard Zeigler, 17 Carl Adam Petri, 18 Anatol Holt, 19 Roland Vollmar, 20 Hans Bremermann, 21 Donald Greenspan, 22 Markus Buettiker, 23 Otto Floberth, 24 Robert Lewis, 25 Robert Suaya, 26 Stan Kugell, 27 Bill Gosper, 28 Lutz Priese, 29 Madhu Gupta, 30 Paul Benioff, 31 Hans Moravec, 32 Ian Richards, 33 Marian Pour-El, 34 Danny Hillis, 35 Arthur Burks, 36 John Cocke, 37 George Michaels, 38 Richard Feynman, 39 Laurie Lingham, 40 P. S. Thiagarajan, 41 Martin Hassner, 42 Gerald Vichniac, 43 Leonid Levin, 44 Lev Levitin, 45 Peter Gacs, 46 Dan Greenberger. (Photo courtesy Charles Bennett)

Friday, October 11, 2019

The Quantum Simulation Hypothesis: Do we live in a quantum multiverse simulation?

The Simulation Hypothesis is the idea that our universe might be part of a simulation: we are not living in base reality. (See, e.g., earlier discussion here.)



There are many versions of the argument supporting this hypothesis, which has become more plausible (or at least more popular) over time as computational power, and our familiarity with computers and virtual worlds within them, has increased.

Modern cosmology suggests that our universe, our galaxy, and our solar system have billions of years ahead of them, during which our civilization (currently only ~10ky old!) and others will continue to evolve. It seems reasonable that technology and science will continue to advance, delivering ever more powerful computational platforms. Within these platforms, quasi-realistic simulations of our world or of imagined worlds (e.g., games) will likely be created, many populated by AI agents or avatars. The number of simulated beings could eventually be much larger than the number of biologically evolved sentient beings. Under these assumptions, it is not implausible that we ourselves are actually simulated beings, and that our world is not base reality.

One could object to using knowledge about our (hypothetically) simulated world to reason about base reality. However, the one universe that we have direct observational contact with seems to permit the construction of virtual worlds with large populations of sentient beings. While our simulation may not be entirely representative of base reality, it nevertheless may offer some clues as to what is going on "outside"!

The simulation idea is very old. It is almost as old as computers themselves. However, general awareness of the argument has increased significantly, particularly in the last decade. It has entered the popular consciousness, transcending its origins in the esoteric musings of a few scientists and science fiction authors.

The concept of a quantum computer is relatively recent -- one can trace the idea back to Richard Feynman's early-1980s Caltech course, Physical Limits to Computation. Although quantum computing has become a buzzy part of the current hype cycle, very few people have any deep understanding of what a quantum computer actually is, and why it differs from a classical computer. A prerequisite for this understanding is a grasp of both the physical and mathematical aspects of quantum mechanics, which few possess. Individuals who really understand quantum computing tend to have backgrounds in theoretical physics, or perhaps in computer science or mathematics.

The possibility of quantum computers requires that we reformulate the Simulation Hypothesis in an important way. If one is willing to posit future computers of gigantic power and complexity, why not quantum computers of arbitrary power? And why not simulations which run on these quantum computers, making use of quantum algorithms? After all, it was Feynman's pioneering observation that certain aspects of the quantum world (our world!) are more efficiently simulated using a quantum computer than a classical (e.g., Turing) machine. (See quantum extension of the Church-Turing thesis.) Hence the original Simulation Hypothesis should be modified to the Quantum Simulation Hypothesis: Do we live in a quantum simulation?

There is an important consequence for those living in a quantum simulation: they exist in a quantum multiverse. That is, in the (simulated) universe, the Many Worlds description of quantum mechanics is realized. (It may also be realized in base reality, but that is another issue...) Within the simulation, macroscopic, semiclassical brains perceive only one branch of the almost infinite number of decoherent branches of the multiverse. But all branches are realized in the execution of the unitary algorithm running on qubits. The power of quantum computing, and the difficulty of its realization, both derive from the requirement that entanglement and superposition be maintained in execution.

Given sufficiently powerful tools, the beings in the simulation could test whether quantum evolution of qubits under their control is unitary, thereby verifying the absence of non-unitary wavefunction collapse, and the existence of other branches (see, e.g., Deutsch 1986).
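
To convey the flavor of such a test (a toy sketch of my own, far simpler than Deutsch's actual proposal): recombine a superposition and look for perfect interference. Unitary evolution returns the initial state with certainty, while a genuine collapse in the middle would leave a 50/50 mixture.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    ket0 = np.array([1.0, 0.0])

    # Unitary story: H then H again -- interference restores |0> exactly.
    psi = H @ (H @ ket0)
    print(np.abs(psi) ** 2)                # -> [1. 0.]

    # Collapse story: wipe the coherences (off-diagonals) mid-circuit.
    rho = np.outer(H @ ket0, H @ ket0)     # pure state after the first H
    rho = H @ np.diag(np.diag(rho)) @ H.T  # collapse, then the second H
    print(np.diag(rho))                    # -> [0.5 0.5]

An experimenter who always observes the first result, on ever larger and more complex superpositions, accumulates evidence that nothing but unitary evolution is taking place.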



We can give an anthropic version of the argument as follows.

1. The physical laws and cosmological conditions of our universe seem to permit the construction of large numbers of virtual worlds containing sentient beings.

2. These simulations could run on quantum computers, and in fact if the universe being simulated obeys the laws of quantum physics, the hardware of choice is a quantum computer. (Perhaps the simulation must be run on a quantum computer!)

If one accepts points 1 and 2 as plausible, then: Conditional on the existence of sentient beings who have discovered quantum physics (i.e., us), the world around them is likely to be a simulation running on a quantum computer. Furthermore, these beings exist on a branch of the quantum multiverse realized in the quantum computer, obeying the rules of Many Worlds quantum mechanics. The other branches must be there, realized in the unitary algorithm running on (e.g., base reality) qubits.

See also

Gork revisited 2018

Are You Gork?

Big Ed

Tuesday, October 08, 2019

AI in the Multiverse: Intellects Vast and Cold



In quantum mechanics the state of the universe evolves deterministically: the state of the entire universe at time zero fully determines its state at any later time. It is difficult to reconcile this observation with our experience as macroscopic, nearly classical beings. To us it seems that there are random outcomes: the state of an electron (spin-up in the z direction) does not in general determine the outcome of a measurement of its spin (a measurement along the x direction yields spin up or down, each with probability 1/2). This is because our brains (information processing devices) are macroscopic: one macroscopic state (memory record) is associated with the spin-up outcome, and it rapidly loses contact with (decoheres from) the other macroscopic state carrying the memory record of the spin-down outcome. Nevertheless, the universe's state, obtained from deterministic Schrodinger evolution of the earlier state, is a superposition:

| brain memory recorded up, spin up >  +  | brain memory recorded down, spin down >
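
A minimal numpy sketch of how this branched state arises (my own illustration, under the toy assumption that the "brain" is a single memory qubit): the measurement is nothing but a unitary interaction that copies the spin's x-basis value into the memory.

    import numpy as np

    spin = np.array([1.0, 1.0]) / np.sqrt(2)  # z-up = equal x-up/x-down superposition
    memory = np.array([1.0, 0.0])             # blank memory record

    # von Neumann measurement model: a CNOT copies the spin (control) into
    # the memory (target). Basis ordering: |spin, memory> = 00, 01, 10, 11.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    joint = CNOT @ np.kron(spin, memory)
    print(np.round(joint, 3))   # -> [0.707 0. 0. 0.707]

    # This is (|up, recorded up> + |down, recorded down>)/sqrt(2):
    # both memory records are present in the superposition; for a
    # macroscopic memory the two branches rapidly decohere.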


We are accustomed to thinking about classical information processing machines: brains and computers. However, with the advent of quantum computers a new possibility arises: a device which (necessarily) resides in a superposition state, and uses superposition as an integral part of its information processing.

What can we say about this kind of (quantum) intelligence? Must it be "artificial"? Could there be a place in the multiverse where evolved biological beings use superposition and entanglement as a resource for information processing?

Any machine of the type described above must be vast and cold. Vast, because many qubits are required for self-awareness and consciousness (just as many bits are required for classical AI). Cold, because decoherence destroys connections across superpositions. Too much noise (heat), and it devolves back to isolated brains, residing on decohered branches of the wavefunction.

One could regard human civilization as a single intelligence or information processing machine. This intelligence is rapidly approaching the point where it will start to use entanglement as a significant resource. It is vast, and (in small regions -- in physics labs) cold enough. We can anticipate more and larger quantum computers distributed throughout our civilization, making greater and greater use of nearby patches of the multiverse that were previously inaccessible.

Perhaps some day a single quantum computer might itself be considered intelligent -- the first of a new kind!

What will it think?

Consciousness in a mini multiverse... Thoughts which span superpositions.


See also Gork revisited 2018 and Are You Gork?

Sunday, September 30, 2018

Quantum Information Science Workshop at MSU


Webpage / Program / Abstracts.

My opening remarks:
On behalf of Michigan State University it is my pleasure to welcome all of you to this workshop on quantum information science.

In the fall of 1983 (my freshman year!) Feynman taught a graduate course at Caltech called Potentialities and Limitations of Computing Machines. Chapter 6 of the book developed from his lecture notes is entitled Quantum Mechanical Computers. In the prior years he had teamed with Professors Carver Mead and John Hopfield to teach a similar course. Carver Mead was the father of VLSI and coined the term "Moore's Law"! John Hopfield, no slouch, was an early pioneer of neural nets, among other things.

It was in 1981, in a paper called Simulating Physics with Computers, that Feynman proposed the idea of a Universal Quantum Simulator. He was the first to discuss the simulation of quantum systems using a quantum computer, and to point out the difficulties of using classical computations to explore what could be exponentially large Hilbert spaces. Feynman analyzed reversible (unitary) computations using quantum elements, and wrote "... the laws of physics present no barrier to reducing the size of computers until bits are the size of atoms, and quantum behavior holds dominant sway."
I recount this little bit of history because we have finally, thanks to the sweat and ingenuity of many physicists, reached the era of noisy, but useful, quantum simulators. Personally I feel that universal quantum computers -- of the type that could, for instance, implement Shor's Algorithm -- might still be far off. Nevertheless, quantum simulators are themselves an important step forward, and will likely become a very useful tool for physicists.

I can't resist making a small prediction of my own here. Some of you might know that the foundations of quantum mechanics are still in disarray. As Steve Weinberg says: "... today there is no interpretation of quantum mechanics that does not have serious flaws." Feynman himself said: "I think I can safely say that nobody understands quantum mechanics."

Most physicists, even theorists, focus their efforts on practical matters and don't worry about foundational questions. I believe that a side effect of work on quantum information and quantum computing will be a demystification of the process of measurement and of decoherence. By demystification I mean that many more physicists will develop a good understanding of something that was swept under the rug in von Neumann's Projection or Collapse postulate, which we now teach in every QM course. Once we truly understand decoherence we realize that Schrodinger evolution of the wavefunction describing both observer and system can reproduce all the usual phenomenology of quantum mechanics -- Collapse is not necessary. This was pointed out long ago by Everett, and well-appreciated by people like Feynman, Schwinger, Gell-Mann, Hawking, David Deutsch, and Steve Weinberg, although not widely understood in the broader physics community.

I apologize if these final comments are mysterious. Perhaps they will someday become clear... In the meantime, please enjoy the workshop :-)

Links:

Weinberg on quantum foundations

Schwinger on quantum foundations

Steven Weinberg: What's the matter with quantum mechanics?


Feynman and Gell-Mann

Friday, January 05, 2018

Gork revisited, 2018

It's been almost 10 years since I made the post Are you Gork?

Over the last decade, both scientists and non-scientists have become more confident that we will someday create:

A. AGI (= sentient AI, named "Gork" :-)  See Rise of the Machines: Survey of AI Researchers.

B. Quantum Computers. See Quantum Computing near a Tipping Point?

This change in zeitgeist makes the thought experiment proposed below much less outlandish. What, exactly, does Gork perceive? Why couldn't you be Gork? (Note that the AGI in Gork can be an entirely classical algorithm even though he exists in a quantum simulation.)




Slide from this [Caltech IQI] talk. See also illustrations in Big Ed.
Survey questions:

1) Could you be Gork the robot? (Do you split into different branches after observing the outcome of, e.g., a Stern-Gerlach measurement?)

2) If not, why? e.g.,

I have a soul and Gork doesn't!  Copenhagen people, please use the exit on your left.

Decoherence solved all that!  Sorry, try again. See previous post.

I don't believe that quantum computers will work as designed, e.g., sufficiently large algorithms or subsystems will lead to real (truly irreversible) collapse. Macroscopic superpositions that are too big (larger than whatever was done in the lab last week!) are impossible.

QM is only an algorithm for computing probabilities -- there is no reality to the quantum state or wavefunction or description of what is happening inside a quantum computer. Tell this to Gork!

Stop bothering me -- I only care about real stuff like the Higgs mass / SUSY-breaking scale / string Landscape / mechanism for high-Tc / LIBOR spread / how to generate alpha. 
[ 2018: Ha ha -- the first three "real stuff" topics turned out to be a pretty boring use of the last decade... ]
Just as A and B above have become less outlandish assumptions, our growing ability to create large and complex superposition states (with technology largely developed for quantum computing; see Schrodinger's Virus) will make the possibility that we ourselves exist in a superposition state less shocking. Future generations of physicists will wonder why it took their predecessors so long to accept Many Worlds.

Bonus! I will be visiting Caltech next week (Tues and Weds 1/8-9). Any blog readers interested in getting a coffee or beer please feel free to contact me :-)

Monday, December 18, 2017

Quantum Computing near a Tipping Point?

I received an email from a physicist colleague suggesting that we might be near a "tipping point" in quantum computation. I've sort of followed quantum computation and quantum information as an outsider for about 20 years now, but haven't been paying close attention recently because it seems that practical general purpose quantum computers are still quite distant. Furthermore, I am turned off by the constant hype in the technology press...

But perhaps my opinion is due for an update? I know some real quantum computing people read this blog, so I welcome comments.

Here's part of what I wrote back:
I'm not sure what is meant by "tipping point" -- I don't think we know yet what qubit technology can be scaled to the point of making Shor's Algorithm feasible. The threat to classical cryptography is still very far off -- you need millions* of qubits and the adversary can always just increase the key length; the tradeoffs are likely to be in favor of the classical method for a long time.

Noisy quantum simulators of the type Preskill talks about might be almost possible (first envisioned by Feynman in the Caltech class he gave in the 1980s: Limits to Computation). These are scientifically very interesting but I am not sure that there will be practical applications for some time.

* This is from distant memory so might not be quite right. The number of ideal qubits needed would be a lot less, but with imperfect qubits/gates and quantum error-correction, etc., I seem to remember a result like this. Perhaps millions is the number of gates, not qubits? (See here.)
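
For a rough sense of the arithmetic (a back-of-envelope sketch with assumed parameters, not figures from any particular paper): the logical-qubit count for Shor is only in the thousands, but the error-correction overhead multiplies it into the millions of physical qubits.

    # Illustrative only -- real estimates vary widely with architecture,
    # physical error rates, and algorithmic optimizations.
    n = 2048                          # RSA modulus size in bits
    logical = 2 * n + 3               # textbook-style logical-qubit count
    d = 27                            # assumed surface-code distance
    physical_per_logical = 2 * d**2   # rough surface-code overhead
    print(logical, logical * physical_per_logical)
    # -> 4099 logical qubits, ~6 million physical qubits
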
These are the Preskill slides I mentioned -- highly recommended. John Preskill is the Feynman Professor of Theoretical Physics at Caltech :-)



Here's a summary of current and near-term hardware capability:

Thursday, July 25, 2013

Caltech Institute for Quantum Information and Matter

IQIM is the home of John Preskill, the Richard P. Feynman Professor of Theoretical Physics at Caltech. John was celebrated on the occasion of his 60th birthday here.




Dinner meeting with the group.



Working in the Pasadena sunshine.



Inside the Annenberg Center.

On the first floor there are some old plaques, including this one honoring Chris Chang :-)



Some random Caltech photos I took. Go Beavers!






Sunday, July 14, 2013

Microsoft Research Faculty Summit

See you in Seattle.

Conference site. Agenda. Speakers.
This July 15 will mark the start of Microsoft Research’s fourteenth annual Faculty Summit at the Microsoft Conference Center in Redmond, Washington. More than 400 academic researchers from 200 institutions and 29 countries will join Microsoft Research to assess and explore today’s computing opportunities. This year, Bill Gates will join us to set the tone of the summit in a conversation on the topic of “Innovation and Opportunity—the Contribution of Computing to Improving Our World.”

Also delivering keynote presentations this year:

Doug Burger will discuss how changes in the hardware ecosystem will disrupt computer science.
Peter Lee and Jeannette Wing will examine how basic research helps everyone.
Clay Shirky will explore user-centric approaches to data.

Sessions covering topics ranging from “Prediction Engines” and “Big Data Platforms” to “Deep Machine Learning” and “Quantum Computing” adorn the summit agenda and will foster rich and engaging discussions.
You can watch it live.

Sunday, April 27, 2008

Are you Gork?


Slide from this talk.

Survey questions:

1) Could you be Gork the robot? (Do you split into different branches after observing the outcome of, e.g., a Stern-Gerlach measurement?)

2) If not, why? e.g.,

I have a soul and Gork doesn't

Decoherence solved all that! See previous post.

I don't believe that quantum computers will work as designed, e.g., sufficiently large algorithms or subsystems will lead to real (truly irreversible) collapse. Macroscopic superpositions larger than whatever was done in the lab last week are impossible.

QM is only an algorithm for computing probabilities -- there is no reality to the quantum state or wavefunction or description of what is happening inside a quantum computer.

Stop bothering me -- I only care about real stuff like the Higgs mass / SUSY-breaking scale / string Landscape / mechanism for high-Tc / LIBOR spread / how to generate alpha.
