1 "Nicolaas Vroom" 
Quantum Computers.  maandag 2 december 2002 23:07 
2 "Charles DH Williams" 
Re: Quantum Computers.  dinsdag 3 december 2002 0:54 
3 "Aaron Bergman" 
Re: Quantum Computers.  dinsdag 3 december 2002 1:59 
4 "Kevin A. Scaldeferri" 
Re: Quantum Computers.  dinsdag 3 december 2002 20:32 
5 "Greg Kuperberg" 
Re: Quantum Computers.  woensdag 4 december 2002 7:10 
6 "Robert Tucci" 
Re: Quantum Computers.  woensdag 4 december 2002 7:11 
7 "Nicolaas Vroom" 
Re: Quantum Computers.  woensdag 4 december 2002 19:11 
8 "Nicolaas Vroom" 
Re: Quantum Computers.  woensdag 4 december 2002 19:12 
9 "Kevin A. Scaldeferri" 
Re: Quantum Computers.  woensdag 4 december 2002 19:49 
10 "Nicolaas Vroom" 
Re: Quantum Computers.  donderdag 5 december 2002 3:45 
11 "Nicolaas Vroom" 
Re: Quantum Computers.  donderdag 5 december 2002 19:46 
12 "Hans Aschauer" 
Re: Quantum Computers.  donderdag 5 december 2002 19:48 
13 "Nicolaas Vroom" 
Re: Quantum Computers.  donderdag 5 december 2002 20:05 
14 "Greg Kuperberg" 
Re: Quantum Computers.  donderdag 5 december 2002 20:27 
15 "Chris Pollett" 
Re: Quantum Computers.  vrijdag 6 december 2002 3:37 
16 "Robert Tucci" 
Re: Quantum Computers.  vrijdag 6 december 2002 20:37 
17 "Peter Shor" 
Re: Quantum Computers.  vrijdag 6 december 2002 22:07 
18 "Greg Kuperberg" 
Re: Quantum Computers.  zaterdag 7 december 2002 3:49 
19 "Suresh __NoJunkMail kumar" 
Re: Quantum Computers.  maandag 9 december 2002 8:02 
20 "Nicolaas Vroom" 
Re: Quantum Computers.  donderdag 12 december 2002 2:38 
21 "Kevin A. Scaldeferri" 
Re: Quantum Computers.  maandag 16 december 2002 5:05 
22 "Nicolaas Vroom" 
Re: Quantum Computers.  maandag 16 december 2002 5:15 
23 "Robert Tucci" 
Re: Quantum Computers.  maandag 16 december 2002 5:23 
24 "Peter Shor" 
Re: Quantum Computers.  maandag 16 december 2002 5:27 
25 "Kevin A. Scaldeferri" 
Re: Quantum Computers.  maandag 16 december 2002 6:25 
26 "Nicolaas Vroom" 
Re: Quantum Computers.  zaterdag 21 december 2002 1:35 
27 "Nicolaas Vroom" 
Re: Quantum Computers.  zaterdag 21 december 2002 1:37 
28 "Nicolaas Vroom" 
Re: Quantum Computers.  zaterdag 21 december 2002 1:39 
29 "Kevin A. Scaldeferri" 
Re: Quantum Computers.  zaterdag 21 december 2002 16:07 
30 "Nicolaas Vroom" 
Re: Quantum Computers.  maandag 23 december 2002 23:59 
31 "Peter Shor" 
Re: Quantum Computers.  zondag 29 december 2002 20:30 
32 "Mike Stay" 
Re: Quantum Computers.  maandag 30 december 2002 8:03 
33 "Nicolaas Vroom" 
Re: Quantum Computers.  vrijdag 3 januari 2003 1:11 
34 "Anatoly Vorobey" 
Re: Quantum Computers.  zaterdag 8 februari 2003 3:33 
35 "Ralph Hartley" 
Re: Quantum Computers.  zaterdag 15 februari 2003 9:52 
36 "michaelprice" 
Re: Quantum Computers.  vrijdag 21 februari 2003 6:25 
37 "Nicolaas Vroom" 
Re: Quantum Computers.  woensdag 26 maart 2003 4:26 
38 "Nicolaas Vroom" 
Re: Quantum Computers.  Datum: woensdag 23 april 2003 8:12 
39 "Nicolaas Vroom" 
Re: Quantum Computers.  Thu, 10 Apr 2003 22:10:19 
40 "Nicolaas Vroom" 
Re: Quantum Computers.  Wed, 16 Apr 2003 22:47:14 
41 "Nicolaas Vroom" 
Re: Quantum Computers.  Fri, 18 Apr 2003 11:08:33 
The last 3 messages were either rejected or are "still in process" in sci.physics.research
On page 51 of the November 2002 Scientific American it is written:
"How many computational steps are needed to find the
prime factors of a 300-digit number?
The best classical algorithm would take about 5*10E24 steps.
By taking advantage of innumerable quantum states
a quantum factoring algorithm would take only 5*10E10 steps."
My question is: why are the two not the same? How do we explain that a quantum computer requires such a relatively small number of steps?
Suppose this is the best classical algorithm:

1 INPUT 300dn (300-digit number)
2 N1 = 1
3 DO
4   N1 = N1 + 1
5   N2 = 300dn/N1
6   Is N1 * Int(N2) = 300dn ?
7   Yes: Is N2 prime ?
8     Yes: Print "Eureka" N1, N2 : Exit
9     No: Exit
10 Loop Until N1 > SQR(300dn)
11 Print 300dn "is a prime number !"

Line 10 can also be written as:
10 Loop Until N1 * N1 > 300dn
This involves a multiplication of two 150-digit numbers.
Line 10 is important because it answers the question: how many times do we have to execute the outer loop? Answer: 10E150 times.
Roughly speaking each loop consists of 10 steps. That means the total number of steps is 10E151.
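For concreteness, the trial-division scheme above can be written as a short runnable sketch (Python here; the function name and the step counter are illustrative, not part of the original post):

```python
def trial_division(n):
    """Return ((p, q), steps) with p * q == n for the smallest factor p,
    or (None, steps) if n is prime. Mirrors the pseudocode's outer loop."""
    steps = 0
    d = 2
    while d * d <= n:        # same stopping rule as "Loop Until N1*N1 > 300dn"
        steps += 1
        if n % d == 0:
            return (d, n // d), steps
        d += 1
    return None, steps

factors, steps = trial_division(91)   # 91 = 7 * 13, found after trying 2..7
```

For a 300-digit number the outer loop runs on the order of 10E150 times, which is exactly why this approach is hopeless at that scale.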
The first improvement is to replace line 4 by:
4 N1 = N1 + 2
i.e. test only odd numbers.
The problem is that both the classical computer and
the quantum computer will benefit from this improvement,
so it does not explain the difference between the two.
One possibility is to perform many loops in parallel, but that means more hardware. On the other hand this is not a true solution, because it can be done for both a classical computer and a quantum computer.
The article in Scientific American gives the answer: "By taking advantage of innumerable quantum states" But how do you do that in practice in order to reduce the number of steps (outer loops ?) of the quantum computer?
Nick
[Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).
 KS]
In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom
> 
Suppose this is the best classical algorithm: 
It isn't.
Anyway, practical quantum computers can now factor numbers up to 15 in, err, milliseconds. I guess the next milestone will be when one runs Microsoft Office.
Charles
In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote:
>  [Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).  KS] 
Willfully disregarding the disclaimer, is there really any sense in which this is true? It's oft-stated, but from what I know of Shor's algorithm (and what I know is fairly minimal), it works because quantum computers can do FFTs very quickly. If one were really able to do as many loops in parallel as one would want, then factoring would only take O(1) time rather than O(log^3(n)) time, right?
Aaron
In article
>  In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote: 
>> 
[Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).  KS] 
> 
Willfully disregarding the disclaimer, is there really any sense in which this is true? It's oft-stated, but from what I know of Shor's algorithm (and what I know is fairly minimal), it works because quantum computers can do FFTs very quickly. If one were really able to do as many loops in parallel as one would want, then factoring would only take O(1) time rather than O(log^3(n)) time, right? 
Well, I'm not an expert in this area by a long shot, but there are at least two issues here.
First, when you rely on a superposition to "parallelize" a computation, you need to somehow amplify the correct answer so that there is high probability that when you collapse the wave function you end up with the right answer. This is different from a classical nondeterministic computation in which you can scan through all the choices that were made to look for a successful execution. (Fortunately for the QC, factorization can be verified in polynomial time, so you can just check the answer and repeat the calculation if you get unlucky.)
Second, I wouldn't expect to be able to get down to constant time, just based on classical computation theory. Parallelization doesn't generically allow you to do this. Also, while nondeterministic machines can be faster than deterministic ones (or at least it seems that way), they can't reduce every problem to a constant-time solution.
I'd have to go brush up on Shor's algorithm myself to say any more about how accurate this sort of statement really is. (Peter Shor does post here periodically, though... maybe he'll see this and provide a more rigorously correct explanation and save us the work :) )

======================================================================
Kevin Scaldeferri Calif. Institute of Technology
In article
>  In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote: 
>>  [Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).  KS] 
>  Willfully disregarding the disclaimer, is there really any sense in which this is true? It's oft-stated, but from what I know of Shor's algorithm (and what I know is fairly minimal), it works because quantum computers can do FFTs very quickly. If one were really able to do as many loops in parallel as one would want, then factoring would only take O(1) time rather than O(log^3(n)) time, right? 
In my opinion "parallel computation" is poor terminology for quantum computation. First off it's not clear what it means to allow arbitrary parallel computation. To use Unix terminology, let's say that it means that you can fork processes for free, and that all of the child processes run in parallel but do not communicate. Then the next question is how to reconcile the exponentially many final states of the child processes. One model is the NP model: Children run for polynomial time. If all children say "no", then the parent program outputs "no". If at least one child process says "yes", the parent outputs "yes". Another model is #P: children run for polynomial time and do not communicate; the parent outputs the number of child processes that report "yes".
In the NP model, you have to convert everything to yesno questions. For instance you can ask whether the number n has a factor in some range 1,...,k. This can be computed in nondeterministic polynomial time, because each child process can simply guess a different factor in 1,...,k and test divisibility. But it is not O(1), because the processes time to fork and more time to test divisibility.
A reasonable conjecture is that the quantum computation class BQP, if restricted to yesno questions, neither contains nor is contained in NP.
My preferred way to describe quantum computation is by analogy with randomized classical computation. If a computer executes a randomized algorithm, then in a manner of speaking its state is a classical superposition. For some purposes this does afford some computational speedup. In quantum computation you replace this classical superposition by a quantum superposition. It should be believable that this yields yet more speedup than classical superposition does.
In the most general formalism of this sort, the state of the computer is a density matrix, capturing both classical and quantum superposition. This is in my opinion the best description of quantum computation and quantum probability theory generally.
 /\   Greg Kuperberg (UC Davis)
/  \
\  /  Visit the Math ArXiv Front at http://front.math.ucdavis.edu/
 \/   * All the math that's fit to e-print *
Nicolaas Vroom
>  My question is why are both not the same. How do we explain that a quantum computer requires such a relative small number of steps. 
Basically, when one counts the number of elementary operations (EOs) performed by a classical computer, one is counting scalar multiplications. With quantum computers, the EOs are *matrix* multiplications. The matrices being multiplied are not arbitrary: they must act on only one or two qubits, but each still represents many classical scalar multiplications.
Sig:
A Petition to the American Physical Society for the Creation of a New
Acronym (R.I.N.N.O.)
http://www.artiste.com/qcomp_onion/rinno.htm
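To make Tucci's point concrete: one quantum EO is a small (2x2 or 4x4) unitary, but simulating it classically means a matrix-vector product on the full 2^n-dimensional state vector. A minimal numpy sketch (the function and variable names are mine, not Tucci's):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # one-qubit Hadamard gate

def apply_single_qubit_gate(gate, qubit, state, n):
    """Apply a 2x2 gate to one of n qubits: a single quantum EO, but
    classically a 2^n x 2^n matrix-vector product built from Kronecker
    products."""
    ops = [gate if k == qubit else np.eye(2) for k in range(n)]
    U = ops[0]
    for op in ops[1:]:
        U = np.kron(U, op)
    return U @ state

n = 10                        # 10 qubits -> a 2^10 = 1024-dimensional vector
state = np.zeros(2 ** n)
state[0] = 1.0                # the state |00...0>
out = apply_single_qubit_gate(H, 0, state, n)
# out is (|00...0> + |10...0>)/sqrt(2): indices 0 and 512 each hold 1/sqrt(2)
```

Here a single Hadamard, one quantum "step", costs on the order of 2^n scalar operations to simulate classically.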
>  My preferred way to describe quantum computation is by analogy with randomized classical computation. 
What do you mean by randomized classical computation?
>  If a computer executes a randomized algorithm, then in a manner of speaking its state is a classical superposition. 
What is a randomized algorithm? Is that an algorithm to calculate random numbers? Or is it a Monte Carlo simulation which uses random numbers? What do you mean by classical superposition?
>  For some purposes this does afford some computational speedup. 
?
>  In quantum computation you replace this classical superposition by a quantum superposition. 
Sorry also not clear.
>  It should be believable that this yields yet more speedup than classical superposition does. 
Hum.
>  In the most general formalism of this sort, the state of the computer is a density matrix, capturing both classical and quantum superposition. This is in my opinion the best description of quantum computation and quantum probability theory generally. 
Is this really the best description possible ?
>  In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote: 
> >  [Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).  KS] 
> 
Willfully disregarding the disclaimer, is there really any sense
in which this is true? It's oft-stated, but from what I know of
Shor's algorithm (and what I know is fairly minimal), it works
because quantum computers can do FFTs very quickly. If one were
really able to do as many loops in parallel as one would want,
then factoring would only take O(1) time rather than
O(log^3(n)), right?
Aaron 
I did a search with: http://www.google.com/search?q=Shor%27s+Algorithm&hl=en&lr=&ie=UTF8&start=0&sa=N
In my first reply to this posting I already discussed some URLs which are mentioned there.
Unfortunately it is still not clear why you need Shor's algorithm to factor a number. For example 15 = 3 * 5. And more specifically: why does this algorithm work faster on a quantum computer than on a classical computer? For example, how do you implement parallelization on a quantum computer?
The following program can be implemented using parallel logic:

For i = 1 to 10000 step 3
  Func1(i)
  Func2(i+1)
  Func3(i+2)
Next i

Func1, Func2 and Func3 are the same function. You can only execute those functions in parallel if they are completely independent of each other. For a classical computer, parallel computation requires more hardware, i.e. more processors. (I expect that for quantum computers this is the same.)
IMO parallelization alone is not the true reason why quantum computers are faster. The reason must be that they use "superposition of states" (or a combination of both). But how that works (and what the limits are) is not clear to me.
May be this url gives the answer: http://www.artiste.com/reservoir_noise.html
Nick.
In article
>  Greg Kuperberg wrote: 
>>  My preferred way to describe quantum computation is by analogy with randomized classical computation. 
>  What do you mean with randomized classical computation? 
>>  If a computer executes a randomized algorithm, then in a manner of speaking its state is a classical superposition. 
>  What is a randomized algorithm? Is that an algorithm to calculate random numbers? or is it a Monte Carlo Simulation which uses random numbers? 
More like the latter. It is an algorithm in which you can have steps that say "generate a random number". Usually this is good for getting an answer which has a high probability of being right. Hopefully you can verify an answer efficiently, so at worst you just have to run a couple times to get a correct answer.
An example that you use all the time, but probably don't realize, is authentication for secure communications. If I want to authenticate you (i.e. convince myself you really are who you claim to be), I generate a random number, encrypt it with your public key, and send it to you to decrypt and return to me. If you return the number I started with, then there is a high probability that you are who you claim to be.
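That challenge-response idea can be sketched with a textbook-sized RSA key (all numbers here are illustrative toys; real keys have hundreds of digits):

```python
import random

# Toy RSA keypair, for illustration only (real moduli have hundreds of digits)
n, e = 3233, 17       # public key (n = 61 * 53)
d = 2753              # private key, held only by the party being authenticated

def authenticate(decrypt_oracle, nonce=None):
    """Challenge-response: encrypt a random nonce under the public key and
    check that the other party can decrypt it, i.e. holds the private key."""
    if nonce is None:
        nonce = random.randrange(2, n - 1)
    challenge = pow(nonce, e, n)      # anyone can encrypt with the public key
    return decrypt_oracle(challenge) == nonce

honest = lambda c: pow(c, d, n)       # knows d, so can invert the encryption
impostor = lambda c: c                # does not know d; just echoes back
```

Only the holder of the private exponent d can turn the challenge back into the original nonce.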
Anyway, when you allow randomization, the complexity classes change.
In some vague way you can imagine that quantum computation is similar to randomized classical computation, although as Greg states:
>>  In quantum computation you replace this classical superposition by a quantum superposition. 
So the complexity classes for quantum computations should be different than randomized classical computations.
In another vague sense, you can imagine that quantum computation is also similar to nondeterministic classical computation. I couldn't readily give you a description of the rigorous difference between the two, analogous to the above statement about superposition, though.

======================================================================
Kevin Scaldeferri Calif. Institute of Technology
Charles DH Williams wrote:
> 
In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom

> > 
Suppose this is the best classical algorithm: 
> 
It isn't. 
I agree with you. However that is not the issue. The problem is: when you start with a given algorithm which consists of a certain number of loops, requiring a certain number of steps on a classical computer, how can you do that SAME problem in many fewer steps on a quantum computer?
Both computers considered should be the "same" (i.e. both standard off the shelf, or both special purpose), otherwise any comparison is not scientific.
> 
Anyway, practical quantum computers can now factor numbers up to 15 in,
err, milliseconds. I guess the next milestone will be when one runs
Microsoft Office.
Charles 
> 
In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote: 
> > 
[Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).  KS] 
> 
>  Willfully disregarding the disclaimer, is there really any sense in which this is true? It's oft-stated, but from what I know of Shor's algorithm (and what I know is fairly minimal), it works because quantum computers can do FFTs very quickly. If one were really able to do as many loops in parallel as one would want, then factoring would only take O(1) time rather than O(log^3(n)) time, right? 
In order to improve my knowledge about Shor's algorithm I did this search: http://www.google.com/search?q=Shor%27s+Algorithm&hl=en&lr=&ie=UTF8&start=0&sa=N
Following are some results:
1. Quantum Computing, Shor's Algorithm, and Parallelism
Arun Bhalla, Kenneth Eguro, Matthew Hayward 
http://alumni.imsa.edu/~matth/quant/433/shorpar/
Shor's algorithm is discussed at:
http://alumni.imsa.edu/~matth/quant/433/shorpar/node12.html
The most details are at:
http://alumni.imsa.edu/~matth/quant/433/shorpar/node15.html
This page claims that in order "for factoring a given integer n"
you need (see step 14) a classical computer.
I found that very strange.
Starting with step 5 you need a quantum computer.
In step s a discrete Fourier transform is computed.
Why do you need that in order "for factoring a given integer n"?
http://alumni.imsa.edu/~matth/quant/433/shorpar/node16.html claims: "In general Shor's algorithm simulation seems a good candidate for parallelization." Why the word "simulation"? Why the word "seems"? Does this imply that we are not sure?
2. The following url also explains Shor's Algorithm: http://tph.tuwien.ac.at/~oemer/doc/quprog/node18.html
3. And also this one does. http://www.wikipedia.org/wiki/Shor's_algorithm
4. IMO the best one is: http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf This document explains in detail an example: factor n = 55.
After studying the above documents (I'm still in that
process) I have the following remarks.
1. Shor's Algorithm allows you (As far as I currently
understand) to perform factoring of an integer.
2. In order to do that you need both a classical computer
and a quantum computer.
3. It is not clear to me if you can do this whole calculation
solely on a classical computer.
4. It is not clear to me what the advantages are by using
a quantum computer.
5. Assuming remark 1 is correct, and assuming that using a classical
computer and a quantum computer together is the best way to
implement Shor's algorithm, then I'm still not convinced
that quantum computers in general are faster
(and eventually will replace classical computers).
6. Shor's algorithm smells like a Monte Carlo simulation.
As part of the algorithm you have to observe register 2
and to measure register 1 (see url in remark 4).
It is not clear to me if you only have to do this once.
7. Assuming that you have to do this more often,
what is the chance that you never find the right answer,
specifically with large numbers?
8. It is interesting to study what happens if the number
that you want to factor (using Shor's algorithm)
is a prime number.
Nick
>  In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote: 
>> 
[Moderator's note: "Shor's algorithm" is the answer to your question. More generally, quantum computers can be more efficient than classical computers because a superposition of states can "perform many loops in parallel" without requiring more hardware (roughly speaking).  KS] 
> 
>  Willfully disregarding the disclaimer, is there really any sense in which this is true? It's oft-stated, but from what I know of Shor's algorithm (and what I know is fairly minimal), it works because quantum computers can do FFTs very quickly. 
There is a sense in which this quantum parallelism is true. The following example is one part of Shor's algorithm.
Suppose you have a function f: {1,2,3,...,N} -> {1,2,3,...,N} for some sufficiently large N. The only thing which you know about f is that it is periodic, i.e. (roughly speaking) there exists a number m so that f(i) = f(i+m) for all i. If you only have a classical computer at your disposal, how often will you have to evaluate f in order to learn m? Right, ~N times.
Using a quantum computer, you can evaluate "all values of f at the same time", by feeding a superposition of all inputs into the function evaluation. However, there is no way to get all the results out of the QC. But in this specific case, the only thing you are interested in is the periodicity of the function, while all individual function values are irrelevant. And this is precisely what the QC allows you to do: you can read out this global property of the function without learning anything about individual function values.
Hope that helps,
Hans
[Moderator's note: this discussion is wavering on the edge of appropriateness for SPR. I urge people to try to stick to issues of how quantum computers work & how they are different from classical system, and relegate general discussion of the theory of computability to other forums.  KS]
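Hans's period-finding example is the heart of Shor's algorithm. A classical Python sketch of both the brute-force period search (the part the quantum Fourier transform replaces) and the classical post-processing that turns a period into factors (function names are mine):

```python
from math import gcd

def classical_period(a, N):
    """Brute-force period of f(x) = a^x mod N (requires gcd(a, N) == 1).
    This O(N) scan is the step a quantum computer replaces with the QFT."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_period(a, N):
    """Shor's classical post-processing: turn a period into factors of N.
    Returns None for an unlucky choice of a (then one retries with another a)."""
    r = classical_period(a, N)
    if r % 2 == 1:                 # odd period: no good, pick another a
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:                 # another unlucky case
        return None
    return gcd(y - 1, N), gcd(y + 1, N)
```

For example, 7^x mod 15 has period 4, and factor_from_period(7, 15) recovers the factors 3 and 5.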
> 
Nicolaas Vroom 
> > 
My question is why are both not the same. How do we explain that a quantum computer requires such a relative small number of steps. 
> 
Basically, when one counts the number of elementary operations (EOs) performed by a classical computer, one is counting scalar multiplications. 
>  With quantum computers, the EOs are *matrix* multiplications. 
>  The matrices being multiplied are not arbitrary they must act on one or two qubits only, but each still represents many classical scalar multiplications. 
Suppose you have to perform the following program:
1 for i = 2 to 100000 2 for j = 2 to i1 3 A(j) = A(j) * B(j) / C(j) 4 A(j) = A(j1) + A(j) + A(j+1) 5 next j 6 next iLine 3 shows an example of matrix manipulation. The inner loop will benefit from parallelization. This is true for a classical computer and (I expect) for a quantum computer. For a classical computer this requires more hardware (multi processors) and more memmory. For a quantum computer I do not know what the consequences are. I expect certain limits are involved.
Line 4 shows an example of matrix manipulation. However this one will NOT benefit from parallelization This is true for a classical computer. I expect the same for quantum computer.
> 
Sig: A Petition to the American Physical Society for the Creation of a New Acronym (R.I.N.N.O.) http://www.artiste.com/qcomp_onion/rinno.htm 
[Moderator's note: As I noted in another moderator's note, discussion in this thread needs to focus on physics aspects of quantum computation or move to another forum. To a limited extent, posts like this that provide definitions are okay, but, in general, theory of computation is offtopic in SPR.  KS]
In article
>>  If a computer executes a randomized algorithm, then in a manner of speaking its state is a classical superposition. 
>  What is a randomized algorithm? 
>  or is it a Monte Carlo Simulation which uses random numbers? 
A Monte Carlo simulation is an example of a randomized algorithm. In general a randomized algorithm is one that uses a source of random numbers for some computational purpose. For instance the Miller-Rabin primality test gives statistical evidence as to whether or not a number n is prime by choosing a random residue from 0 to n-1.
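A compact sketch of the standard Miller-Rabin test just described (the parameter names are mine):

```python
import random

def miller_rabin(n, rounds=20):
    """Randomized primality test. Each round picks a random base; a composite
    n survives a round with probability at most 1/4, so 20 rounds give
    overwhelming (but never absolute) confidence that n is prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False       # this base a is a witness that n is composite
    return True                # probably prime
```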
>  What do you mean with classical superposition? 
In the sense that if you flip a coin, the coin is in a superposition of its heads and tails states until you observe it. Although this is stated using quantum jargon, I am only describing classical probability theory here. In the context of quantum mechanics, a classical superposition is often called a "mixture" and is properly described by a density operator.
In fact there is no strict dichotomy between mixture and quantum superposition, or between classical correlation and quantum entanglement. In both cases the latter is properly understood as a purer form of the former. For instance, if you dilute quantum entanglement with noise, it degenerates to classical correlation.
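The coin-flip mixture vs. quantum superposition distinction shows up directly in the density matrices (a small numpy sketch; the purity comparison is my illustration, not Greg's):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)                 # quantum superposition |+>

# Classical superposition (mixture): an unobserved fair coin flip
rho_mix = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)
# Quantum superposition: the pure state |+>
rho_pure = np.outer(plus, plus)

# Measured in the {|0>, |1>} basis, both give 50/50 outcomes...
probs_mix = np.diag(rho_mix)
probs_pure = np.diag(rho_pure)

# ...but the off-diagonal terms differ, as does the purity Tr(rho^2):
purity_mix = np.trace(rho_mix @ rho_mix)          # 0.5 for the mixture
purity_pure = np.trace(rho_pure @ rho_pure)       # 1.0 for the pure state
```

The off-diagonal ("coherence") terms are exactly what a quantum computer has and a randomized classical one lacks.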
>>  For some purposes this does afford some computational speedup. 
>  ? 
One example is primality tests. The drawback of Miller-Rabin is that it never provides a guarantee that n is prime, only a likelihood. On the bright side, it is faster than any deterministic primality algorithm.
Other algorithms are in the ZPP class. This means that the output is guaranteed to be correct, but the algorithm only probably runs quickly. Here too there are primality algorithms that use a random number generator and that usually run faster than the fastest deterministic algorithm.
 /\   Greg Kuperberg (UC Davis)
/  \
\  /  Visit the Math ArXiv Front at http://front.math.ucdavis.edu/
 \/   * All the math that's fit to e-print *
Hi,
I figured I'd put in my two cents of intuition.
>  In my opinion "parallel computation" is poor terminology for quantum computation. First off it's not clear what it means to allow arbitrary parallel computation. To use Unix terminology, let's say that it means that you can fork processes for free, and that all of the child processes run in parallel but do not communicate. Then the next question is how to reconcile the exponentially many final states of the child processes. One model is the NP model: Children run for polynomial time. If all children say "no", then the parent program outputs "no". If at least one child process says "yes", the parent outputs "yes". Another model is #P: children run for polynomial time and do not communicate; the parent outputs the number of child processes that report "yes". 
BQP can be viewed as being contained in a parallel computation class, as it is contained in PSPACE, and the latter class can be characterized as what you can do in polynomial time if you have access to exponentially many (in the input size) processors. I think the lowest known classical class containing BQP is AWPP, which is a low class in the counting hierarchy; this result is due to Fortnow and Rogers. The counting hierarchy is basically what you get when you allow yourself the ability to compute exponential sums and products of the results of PTIME computations. That the counting hierarchy is contained in PSPACE can be seen by realizing that to compute, say, an exponential sum of PTIME computations, we only need to keep track of the sum computed so far and the next term in the sum to be computed, both of which are polynomial size.
Here is my intuition as to why polynomial time quantum
computations can be simulated using functions from the counting
hierarchy, and what makes it hard to simulate a quantum
computation in a smaller class like PTIME. To simulate a quantum
computation (I am simplifying here a bit), the things we need to
be able to compute classically are the entries of the product
U = U_L * ... * U_1 of the unitary matrices (the "layers") that
make up the circuit.
The main question is: when is it easy to compute the i,j-th entry of U? Basically, if the layers of U are matrices in which there are only polynomially many nonzero entries in each row and column, and those entries are polynomial-time to find, then the whole computation will be in PTIME. What kinds of layers satisfy this condition? Well, for instance a permutation matrix will have only one 1 in each row or column. Layers made from reversible classical gates also don't tend to present a problem. Bad layers result from doing things like Kronecker-powering more than logarithmically many Hadamard gates together or, for instance, having a layer that does the quantum Fourier transform. After creating such a layer it seems hard to avoid brute-force computation of an exponential sum to figure out the i,j-th entry one would get when we multiply this against another matrix in a typical quantum algorithm.
Chris Pollett
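Chris's sparse-vs-dense distinction between "good" and "bad" layers is easy to check numerically (a sketch with my own choice of layers):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def kron_power(M, k):
    """k-fold Kronecker power of M."""
    out = np.array([[1.0]])
    for _ in range(k):
        out = np.kron(out, M)
    return out

k = 8
dense_layer = kron_power(H, k)        # H tensored 8 times: every entry nonzero
perm_layer = np.eye(2 ** k)[::-1]     # a permutation layer: one 1 per row

nonzero_dense = np.count_nonzero(dense_layer)   # all 256 * 256 entries nonzero
nonzero_perm = np.count_nonzero(perm_layer)     # only 256 nonzero entries
```

The permutation layer stays sparse and easy to simulate entry by entry; the Hadamard Kronecker power is completely dense, which is where the brute-force exponential sums come from.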
Nicolaas Vroom
>  What is your definition of scalar multiplications ? What is your definition of matrix multiplications ? 
In linear algebra (vector spaces), one speaks of matrices (linear transformations) and scalars. For example, think of scalars as complex numbers or 1-by-1 matrices.
> 
Suppose you have to perform the following program:
1 for i = 2 to 100000
2   for j = 2 to i-1
3     A(j) = A(j) * B(j) / C(j)
4     A(j) = A(j-1) + A(j) + A(j+1)
5   next j
6 next i 
I think this "code" is misleading.
What I recommend is that you study
the Quantum Fourier Transform.
A simple introduction to it can be found,
for example, in Mermin's class notes
http://www.ccmr.cornell.edu/~mermin/qcomp/CS483.html
Then study the classical FFT.
An introduction to that may be found, for example,
in the Numerical Recipes book, available online at
http://www.nr.com/
Now study how these guys count
the number of "elementary"
operations in each.
Sig
A Petition to the American Physical Society for the Creation of a New
Acronym (RINNO)
http://www.artiste.com/qcomp_onion/rinno.htm
I just noticed this topic, and I'm going to try to answer a few of the questions.
> > 
Aaron Bergman wrote: 
> > > 
In article <3DE7DD7B.7A23A1A7@pandora.be>, Nicolaas Vroom wrote: 
> > 
I did a search with: http://www.google.com/search?q=Shor%27s+Algorithm&hl=en&lr=&ie=UTF8&start=0&sa=N In my first reply to this posting I already discussed some URLs which are mentioned. Unfortunately it is still not clear why you need Shor's algorithm to factor a number. For example 15 = 3 * 5. 
You don't need Shor's algorithm to factor a number ... you just factored 15 without even using a computer.
And going back to your first posting on this topic, you say (but maybe forgot)
>  At page 51 of Scientific November 2002 is written: "How many computational steps are needed to find the prime factors of a 300digit number? The best classical algorithm would take about 5*10E24 steps. By taking advantage of innumerable quantum states a quantum factoring algorithm would take only 5*10E10 steps" 
So in theory you can factor a 300-digit number with a classical computer; we just don't have any classical computers fast enough. And 500-digit numbers, without truly significant advances in classical computing power, in algorithms, or in quantum computers, are out of reach.
Nicholas Vroom also asked
>  My question is why are both 
>  not the same. How do we explain that a quantum computer requires such a relative small number of steps. 
I'm going to make a fool of myself and answer this by giving a transportation metaphor. To pick an example relatively close to my home, suppose you want to go from Cape May, New Jersey to Lewes, Delaware. Using one of the popular driving directions web sites says it takes 200 miles and four hours. However, you can make the trip in just a little over an hour.
How?
If you look at the map, you find that there's only around 20 miles between these two cities, measured across the mouth of the Delaware bay. And, in fact, there's a regularly scheduled ferry service that runs between them (and which will take your car).
Some people might call this "thinking outside the box." (I would probably call it "taking the ferry.") Notice that you need special hardware to do this (a boat). If you try to do this using only your car (unless you've got a car like James Bond's) you'll drown.
There are computational paths from a number to its factors that only a quantum computer can navigate. Quantum computers can perform steps that might be described in English as "rotate qubit 2 by 90 degrees coherently conditioned on qubit 1 being 0," which translates in mathematics to applying a specific 4x4 matrix on the quantum state space of qubits 1 and 2, and in a specific quantum computer implementation might translate to applying a fixed sequence of properly shaped laser pulses to ions in an ion trap. Try and do this kind of step on a classical computer. A good question here is: how does this kind of step help? For that, you'll have to look more closely at a specific quantum computing algorithm.
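As a rough sketch (mine, not from the post) of what such a conditioned rotation looks like as a 4x4 matrix: with basis order |q1 q2> = |00>, |01>, |10>, |11>, a rotation on qubit 2 applied only when qubit 1 is 0 is block-diagonal. The exact rotation convention and basis ordering here are my own assumptions.

```python
import math

theta = math.pi / 2  # "rotate by 90 degrees"
c, s = math.cos(theta / 2), math.sin(theta / 2)

# Hypothetical matrix: rotate qubit 2 iff qubit 1 is |0>.
U = [
    [c, -s, 0, 0],   # rotation block acting on |00>, |01> (q1 = 0)
    [s,  c, 0, 0],
    [0,  0, 1, 0],   # identity block on |10>, |11> (q1 = 1)
    [0,  0, 0, 1],
]

def apply(U, state):
    """Multiply the 4x4 operator into a 2-qubit state vector."""
    return [sum(U[i][j] * state[j] for j in range(4)) for i in range(4)]

rotated = apply(U, [1, 0, 0, 0])    # |00> becomes a superposition of |00>, |01>
untouched = apply(U, [0, 0, 1, 0])  # |10> is left alone, since q1 = 1
```

The point is that the whole 4x4 matrix is applied coherently in one step; no classical bit operation corresponds to it.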
There are some fairly good elementary explanations of how these work at http://www.qubit.org
Nicholas Vroom continues
>  Suppose this is the best classical algorithm: 
and gives pseudocode for trial division.
The computer science side of me absolutely insists on saying something here. This is nowhere near the best algorithm, and while it is possible to find slower algorithms (e.g., simulating the quantum factoring algorithm on a classical computer), trial division takes around 10E150 steps for a 300-digit number, as opposed to the 10E24 quoted above for (I assume) the number field sieve algorithm. Notice that the speedup gained by clever use of classical algorithms (126 orders of magnitude) far exceeds the speedup gained by switching to a quantum algorithm (only(!) 14 orders of magnitude). Both of these speedups increase dramatically as the number to be factored grows larger.
There was a very nice article by Brian Hayes describing how the quadratic sieve works (this is the fastest classical algorithm you can understand without knowing advanced number theory) called "The magic words are squeamish ossifrage," in the July-August 1994 American Scientist. Unfortunately, it missed by six months the date for the web archival of Hayes's American Scientist columns, so to read it you have to find a hard copy.
> >  And more specific why this algorithm works faster on a quantum computer compared with a classical computer. 
As I said above, there are steps you can do on a quantum computer you just can't do on a classical computer.
> >  IMO parallization only is not the true reason why quantum computers are faster. The reason must be that they use "superposition of states" (or a combination of both). But how that works (and what the limits are) is not clear to me. 
People keep making this comparison to classical parallelism because it kind of explains the speedup you get by using quantum computers. In certain respects, though, it's very misleading.
I hope this helps.
Peter Shor
In article <9VFH9.3143$Eu4.66300173@newssvr13.news.prodigy.com>,
Chris Pollett
>  BQP can be viewed as being contained in a parallel computation class as it is contained in PSPACE and the latter class can be characterized as what you can do in polynomial time if you had access to polynomial in the input many processors. 
When composing my previous posting I had trouble characterizing PSPACE in my mind this way. However, I do remember the result that a sufficiently general tree search with alternating quantifiers is PSPACE-complete, e.g. "generalized chess" is a PSPACE-complete problem. Clearly such a search may be done in polynomial time if you can fork processes for free. Conversely, polynomially many processors can be simulated in polynomial space. So yes, PSPACE is a natural model for fully parallel, polynomial-time computation. Thanks for mentioning this observation.
It is certainly important to know that BQP is contained in PSPACE. However, the proper way to understand that is that BQP is *subsumed* by parallel computation. To say that "quantum computation is a kind of parallel computation", or even "quantum computation allows massive parallelism", leaves many people with the impression that BQP = PSPACE, or at least that BQP is not much smaller than PSPACE. I'm sure you would accept the conjecture that BQP is much smaller than PSPACE (for one thing because it's contained in AWPP, as you say).
My point is that conceptually BQP is a lot like BPP (the most general randomized polynomial-time class), only bigger. At least, that's my conception of it.
You could also point out that BPP is contained in PSPACE. But most people have no trouble understanding that randomized computation is parallel only in a very weak sense. To be sure, it is natural to conjecture that BPP = P and that BQP != P. On the other hand, it is also natural to conjecture that randomization yields a quadratic speedup for some problems over determinism (maybe that's even a theorem). People should appreciate that phenomenon as a warmup to understanding BQP.
 /\   Greg Kuperberg (UC Davis)
/  \
\  /  Visit the Math ArXiv Front at http://front.math.ucdavis.edu/
 \/   * All the math that's fit to e-print *
Hi Nicolaas
First, let me say that I'm just an ordinary student of physics. The people here have a lot more experience than I do.
Anyway, I will give you some pointers.
Quantum computers do not function like classical computers, and so you cannot program them like classical computers. Things like loops, if statements, etc. do not directly make sense.
Basically, here is a description of a quantum computer and how it is supposed to work. The description is a little bit amateurish.
In quantum mechanics, we talk about ensembles and how things can exist in superpositions of states.
For example, here's the classical analogy.
Imagine the boolean gate OR Function
OR(X,Y) = X' * Y + X * Y + X * Y'
Basically, according to classical boolean theory,
OR(X,Y) will be 1 if X'*Y is 1 (that is, X=0, Y=1), or X*Y is 1 (X=1, Y=1), or X*Y' is 1 (X=1, Y=0).
Basically, you also need to notice that X'*Y is orthogonal to X*Y',
which means X'*Y*X*Y' = 0. This is true because X'*X = 0 and Y'*Y = 0.
Now, the above expression can be rewritten as follow
OR = |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0|
On a classical computer, at one time X can be either 0 or 1, and so does Y, and so is the output only allowed to be 1 or 0.
Now, on the other hand, in quantum mechanics particles can exist in superpositions of states. That means that
X can be expressed as a combination of 0 and 1: |X> = c_x0 |0> + c_x1 |1>
Basically, |c_x0|^2 is taken to be the probability of X being a 0. That means that if you measure the state of a particle |X>, you are guaranteed to get 0 or 1, not both at the same time. However, you cannot predict with certainty which state you are going to get. The only certainty is that the probability of getting 0 at X is |c_x0|^2 and the probability of getting 1 is |c_x1|^2.
Moreover, interestingly, c_x0 can also be a complex number. After all, the probability is only governed by |c_x0|^2.
So, you may wonder what it means to say that X is in a superposition of both 1 and 0. There are many interpretations. One interesting one is the many-worlds interpretation.
It says that when a particle is prepared in a superposition of states, like our |X>, the world diverges, with X=0 with probability |c_x0|^2 and X=1 with probability |c_x1|^2,
where |c_x0|^2 + |c_x1|^2 = 1.
And we can also express Y as follows: |Y> = c_y0 |0> + c_y1 |1>
Now, if you increase the number of input variables, for example to n = 128, you have in fact prepared a system with 2^128 states. However, if you try to measure the state of |X_1,X_2,...,X_n>, you will only get one of all possible 2^128 states. Because of this dynamics, a quantum computer can try 2^128 states in parallel at the same time.
Now, our expression is still given by
OR(X,Y) = X' * Y + X * Y + X * Y'
However, you need to understand that since X and Y are no longer binary, OR(X,Y) is not guaranteed to be 1 or 0. Rather, OR(X,Y) has a probability of being 1 or 0.
Up to now, you will be inclined to ask how this is useful. After all, you are preparing a system in 2^128 states, and you get only one answer. So, you may ask how you can get the answer you are looking for.
Now, Quantum Mechanics is not so simple, as the worlds diverging. In fact, the worlds can interfere and cancel themselves out.
|c_x0|^2 is the probability of getting X=0; however, c_x0 can also be a complex number, so it has a phase: c_x0 = p_x0 exp(i theta_x0).
This makes matters a lot more complicated.
The innocent-looking presence of phase allows the different worlds to interfere.
Let's go back to our expression
OR = |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0|
Our OR gate can be seen as an evolution device. In quantum mechanics, the word used to denote an OR-like device is 'operator'. Basically, the application of the OR gate to the quantum input system |X,Y> evolves it from that state to the state at the output.
After all, this is what even classical gates do. Given an input state, they take it to the output state.
Moreover, just as with our classical device, the output terms |k'> and |k> are orthogonal where k' != k, and likewise the input terms <i',j'| and <i,j| where i' != i and j' != j. (Here a != b means a is not equal to b.)
In quantum mechanics, there is a requirement for probability conservation.
Basically, it goes something like this:
sum_i |c_xi|^2 = 1
This is true of any probability theory. The sum of probabilities is always 1. After all, this is an axiom of probability theory.
There is another, equivalent statement called the unitarity requirement, which has the same effect as probability conservation.
Our expression OR = |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0|
[Thanks to Kevin Scaldeferri, for pointing out the correct form of the expression]
Suppose we rewrite our OR expression as
OR = a[01->1] |1><0,1| + a[10->1] |1><1,0| + a[11->1] |1><1,1| + a[00->0] |0><0,0|
Comparing the two expressions above, we can decipher that
a[ij->k] = 1.
However, the sum over ij->k of |a[ij->k]|^2 != 1, and so it does not conserve probability.
However, for example,
OR = (1/sqrt(4)) ( 1><0,1 + 1><1,0 + 1><1,1 + 0><0,0)
does
a_[ij>k] = 1/2
sum_i a_[ij>k]^2 = 1
Now, suppose we multiply |X> and |Y>. We get
|X>|Y> = c_x0 c_y0 |X=0>|Y=0> + c_x1 c_y0 |X=1>|Y=0> + c_x0 c_y1 |X=0>|Y=1> + c_x1 c_y1 |X=1>|Y=1>
otherwise written as
|X,Y> = c_x0 c_y0 |0,0> + c_x1 c_y0 |1,0> + c_x0 c_y1 |0,1> + c_x1 c_y1 |1,1>
where |c_x0 c_y0|^2 is the probability of getting X=0, Y=0. It is also consistent with |c_x0 c_y0|^2 = |c_x0|^2 |c_y0|^2. This expression means that p(X=0 and Y=0) = p(X=0) p(Y=0).
This is the natural expression for the probability of independent events.
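The product rule above can be checked numerically. This is my own small illustration: the joint amplitudes of |X,Y> are products c_xi * c_yj, so the measured probabilities multiply like those of independent events.

```python
import math

def tensor(x, y):
    """Tensor product of two 1-qubit amplitude lists.
    Output amplitude order: |0,0>, |0,1>, |1,0>, |1,1>."""
    return [cx * cy for cx in x for cy in y]

r = 1 / math.sqrt(2)
xy = tensor([r, r], [r, r])           # both qubits in an equal superposition
probs = [abs(a) ** 2 for a in xy]     # each of the four outcomes: 1/4
```

With both qubits fifty-fifty, each joint outcome has probability (1/2)*(1/2) = 1/4, and the four probabilities sum to 1.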
Now, back to our OR operator
Now, if I apply the OR operator to our input state |X,Y>:
OR|X,Y> = (1/sqrt(4)) ( |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0| )
( c_x0 c_y0 |0,0> + c_x1 c_y0 |1,0> + c_x0 c_y1 |0,1> + c_x1 c_y1 |1,1> )
the resulting expression is
= (1/2) |1> ( c_x1 c_y0 + c_x0 c_y1 + c_x1 c_y1 ) + (1/2) |0> ( c_x0 c_y0 )
So, the probability of getting a 1 is
|<1|OR|X,Y>|^2 = (1/4) | c_x1 c_y0 + c_x0 c_y1 + c_x1 c_y1 |^2
= (1/4) | p_x1 p_y0 exp(i(theta_x1 + theta_y0)) + p_x0 p_y1 exp(i(theta_x0 + theta_y1)) + p_x1 p_y1 exp(i(theta_x1 + theta_y1)) |^2
In the above expression, it is important to notice that the phases can interfere with each other and cancel each other out. This is what creates all the difference in the world. The phases are not as innocuous as we had thought them to be.
So, not only do the worlds separate at each instant, they also interfere with each other. So, voila, parallel computing.
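A tiny numeric illustration of this (mine, not from the post): the probability is the squared magnitude of the *sum* of amplitudes, so equal-magnitude amplitudes can reinforce or cancel depending on their relative phase.

```python
import cmath
import math

def prob(amps):
    """Probability from a list of interfering amplitudes: |sum|^2."""
    return abs(sum(amps)) ** 2

a = 0.5
aligned = prob([a, a])                            # phases equal: constructive
opposed = prob([a * cmath.exp(1j * math.pi), a])  # phases opposite: destructive
```

With aligned phases the two amplitudes add to probability 1; with a relative phase of pi they cancel to (essentially) 0. Classical probabilities, which are non-negative, can never cancel like this.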
Reversible Computing and answers to NP-completeness
An example of factorization:
Basically, suppose we take a (2n)-bit multiplier with two n-bit multiplicands and a (2n)-bit output. Now, suppose we are interested to know a few things. We take the output of the (2n)-bit multiplier and put a big AND gate at the end. The output of this gate is true if, in fact, an input combination of multiplicands produces the number to be factored at the output. Suppose we are interested in factoring [O_0,O_1,O_2,...,O_n] = [0,1,1,0,...,1]. Let's call this number N. Our AND gate is basically O_0'*O_1*O_2*O_3'*...*O_n
In normal binary computers, certain operations are not reversible.
For example, back to our OR gate.
Suppose X=0, Y=1. The output of the OR gate is 1. However, based on the output, we cannot tell if the input was X=0,Y=1, or X=1,Y=1, or X=1,Y=0. So, you could even say that information is lost from the inputs to the outputs. Suppose I have a random binary generator at the input. Based on the output of a classical gate, it is mostly impossible to tell what the inputs of the circuit were.
Now, the idea of quantum computing leads to reversible computing. [I'm not an expert on this topic either. I am not sure if quantum computing requires reversible computing.] However, a quantum equivalent of a 2-bit AND gate has 4, not 3, connections coming in and going out.
A quantum 2-qubit AND gate needs to have 2 inputs, one output, and one reversibility channel.
So, now, suppose we have the quantum equivalent of this classical multiplier. Because of reversibility, we can propagate the 1 from the output of our AND gate back through our multiplier and trace it back to the input combinations that generate the 1 at the output. Notably, these combinations are also factors of our number N.
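The standard example of a reversible AND is the Toffoli gate, which is presumably the "4 connections" gate meant above (I am using the usual 3-wire form here): it maps (a, b, t) to (a, b, t XOR (a AND b)), so with the extra wire t = 0 the third output carries a AND b, and nothing is lost.

```python
def toffoli(a, b, t):
    """Reversible AND: inputs pass through; the third wire is flipped
    exactly when a AND b is 1."""
    return (a, b, t ^ (a & b))

# forward: compute the AND on the ancilla wire
out = toffoli(1, 1, 0)    # third component now carries 1 AND 1
# backward: applying the same gate again recovers the original inputs
back = toffoli(*out)
```

Because the gate is its own inverse, running the circuit backward from the output recovers the inputs, which is the tracing-back idea described above.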
Shor's algorithm is different. But, hopefully, this helps.
suresh
> 
I just noticed this topic, and I'm going to try to answer a few of the questions.
Nicolaas Vroom 
> >  Unfortunately it is still not clear why you need Shor's Algorithm to factor a number. For example 15 = 3 * 5. 
> 
You don't need Shor's algorithm to factor a number ... you just factored 15 without even using a computer. 
After studying Shor's algorithm and quantum computers
(I am still in that process) four items are still not clear to me:
1. Why does one need FFT in order to factor a number
2. Why are quantum computers soo much faster
3. Shor's algorithm does not always work.
4. Why not implementing the whole algorithm on a quantum computer.
See previous posting and study the program: https://www.nicvroom.be/findprim.txt https://www.nicvroom.be/findprim.zip
> >  At page 51 of Scientific November 2002 is written: The best classical algorithm would take about 5*10E24 steps. By taking advantage of innumerable quantum states a quantum factoring algorithm would take only 5*10E10 steps" 
> 
So in theory, you can factor a 300digit number with a classical computer, we just don't have any classical computers fast enough. 
>  And 500digit numbers, without truly signifiant advances in either classical computing power, in algorithms, or in quantum computers, are out of reach. 
>  Nicholas Vroom also asked 
> > 
My question is why are both 
>  [e.g., the classical and quantum number of steps needed 
> >  not the same. How do we explain that a quantum computer requires such a relative small number of steps. 
> 
There are computational paths from a number to its factors that only a quantum computer can navigate. Quantum computers can perform steps that might be described in English as "rotate qubit 2 by 90 degrees coherently conditioned on qubit 1 being 0," which translates in mathematics to applying a specific 4x4 matrix on the quantum state space of qubits 1 and 2, and in a specific quantum computer implementation might translate to applying a fixed sequence of properly shaped laser pulses to ions in an ion trap. 
>  Try and do this kind of step on a classical computer. A good question here is: how does this kind of step help? 
> 
For that, you'll have to look more closely at a specific
quantum computing algorithm.
There are some fairly good elementary explanations of how these work at http://www.qubit.org 
I studied also, as suggested by Robert Tucci: http://www.ccmr.cornell.edu/~mermin/qcomp/CS483.html specifically chapters 1 and 3. I am not convinced that a quantum computer is so much richer. I am not sure if you can calculate 3 * 5 reliably on a quantum computer.
I will address questions about this in a separate posting.
> >  Suppose this is the best classical algorithm: 
> 
and gives pseudocode for trial division. The computer science side of me absolutely insists on saying something here. This is nowhere near the best algorithm, 
The pseudocode addresses the first issue, i.e. I want to understand if and how quantum computers are faster using the same algorithm (program).
>  and while it is possible to find slower algorithms (e.g., simulating the quantum factoring algorithm on a classical computer), trial division takes around 10E150 steps for a 300-digit number, as opposed to the 10E24 quoted above for (I assume) the number field sieve algorithm. Notice that the speedup gained by clever use of classical algorithms (126 orders of magnitude) far exceeds the speedup gained by switching to a quantum algorithm (only(!) 14 orders of magnitude). Both of these speedups increase dramatically as the number to be factored grows larger. 
Conclusion: the speedup gained by the sieve algorithm is 126 orders; the speedup gained by the quantum algorithm is 140 orders. How do you explain that the quantum algorithm is better than the sieve algorithm? Is this more than only a prediction? IMO it is easily possible that factoring a number on a quantum computer plus classical computer combination is slower than performing the same factoring on a classical computer which uses the sieve algorithm.
> > >  And more specific why this algorithm works faster on a quantum computer compared with a classical computer. 
> 
As I said above, there are steps you can do on a quantum computer you just can't do on a classical computer. 
This is still not clear to me. It should be possible to simulate any program that runs on a quantum computer on a classical computer. I am waiting to read why my findprim.exe program is "wrong".
I may get around to addressing more of this later, but for now I want to say something about this one statement
In article <3DF5BFE2.248D9A9A@pandora.be>,
Nicolaas Vroom
>  It should be possible to simulate any program that runs on a quantum computer on a classical computer. 
Yes, this is true. However, the simulation is not efficient.
There are many models of computation which are equivalent, meaning that they can simulate each other. However, computational equivalence doesn't mean the complexity is the same. Frequently the simulation involves an exponential increase in the time or space requirements.
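To put a number on that exponential increase (my own back-of-envelope sketch): a faithful classical simulation of an n-qubit register must store one complex amplitude per basis state, i.e. 2^n of them, so the storage doubles with every qubit added.

```python
import math

def uniform_superposition(n_qubits):
    """State vector after putting every qubit of |00...0> into an equal
    superposition: 2**n equal amplitudes. The length of this list *is*
    the exponential cost of the classical simulation."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(10)   # already 1024 amplitudes for 10 qubits
# 30 qubits would need ~10**9 amplitudes; a few hundred qubits is hopeless
# for any classical memory.
```

The state is still normalized (the squared amplitudes sum to 1), but the bookkeeping grows exponentially even though the quantum register itself only grows linearly.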

======================================================================
Kevin Scaldeferri Calif. Institute of Technology
Retransmission of previous reply
Nick

>  4. IMO the best one is: http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf This document explains in detail: Example factor n = 55 
2. does not work for the following factors:
5*29
Because the number of numbers (a mod x) in step 2 until you
find the number 1 is even.
For example for 5 * 29 = 145 those numbers are:
16, 111, 36, 141, 81, 136
3. does not work for the following factors:
3*5, 3*11, 3*13
5*19, 5*23
7*11, 7*19, 7*23
19*31
Because there is no 1
For example for 3*5=15 those numbers are:
9,6,9,6,9,6,9,6,
For example for 3*11=33 those numbers are:
12,12,12,12,12
For example for 7*11=77 those numbers are:
14,42,49,70,56
4. The program listing is at: https://www.nicvroom.be/findprim.txt https://www.nicvroom.be/findprim.bas To get the program in Quick Basic change txt into bas
5. IMO Shor's algorithm is no guarantee that factoring works. (It is clever, but I doubt if it is that fast.)
6. Shor's algorithm cannot be used as proof that quantum computers are faster than classical computers.
Nick
Peter Shor, using his mysterious Klingon/AT&T cloaking device that endows him with Google invisibility, gave an analogy about how the "ferry path" is inaccessible to a classical commuter but possible for the quantum commuter. This is misleading, in my opinion. Let NB be the number of bits. All the ~2^NB paths that a qubit circuit follows are accessible and can be followed individually by a classical computer. It's just that they cannot all be advanced upon concurrently, unless one had exponential (in NB) classical computing resources.
(performing a classical elementary operation) = multiplying classical state (given by a complex number) by another complex number
(performing a quantum elementary operation) = multiplying quantum state (given by a 2^NB long, column vector of complex numbers) by a 2^NB times 2^NB elementary matrix
This suggests that sometimes one elementary op of a quantum computer accomplishes on the order of 2^NB elementary classical operations
An elementary matrix, by definition, can only *change* the state of one or two qubits, not more. The rest of the qubits are acted upon without changing them. This acting upon certain qubits of the qubit array without changing them (while at the same time changing other qubits in the array), "is key" for vector spaces and quantum physics, but not as key in classical physics.
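The point about elementary matrices can be made concrete with a sketch (mine, not from the post): even a gate that *changes* only one qubit acts on the whole register, so a classical simulator must touch all 2^NB amplitudes to apply it.

```python
import math

def apply_one_qubit_gate(state, k, g):
    """Apply a 2x2 gate g to qubit k of a state vector of length 2**NB.
    The loop visits every amplitude: one quantum elementary operation costs
    the classical simulator on the order of 2**NB multiplications."""
    out = state[:]
    step = 1 << k
    for i in range(len(state)):
        if not i & step:                       # pair amplitude i with i+step
            a, b = state[i], state[i | step]
            out[i] = g[0][0] * a + g[0][1] * b
            out[i | step] = g[1][0] * a + g[1][1] * b
    return out

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                          # Hadamard gate

state = [1.0, 0.0, 0.0, 0.0]                   # 2-qubit register, all zeros
state = apply_one_qubit_gate(state, 0, H)      # equal superposition on qubit 0
```

Only qubit 0 is changed, but the other qubits are "acted upon without changing them": every amplitude of the register participates in the update.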
In article <3DEE794E.EA881D8D@pandora.be>,
Nicolaas Vroom
> 
1. Quantum Computing, Shor's Algorithm, and Parallelism
Arun, Bhalla, Kenneth Eguro, Matthew Hayward
http://alumni.imsa.edu/~matth/quant/433/shorpar/
Shor's algorithm is discussed at: http://alumni.imsa.edu/~matth/quant/433/shorpar/node12.html The most details are at: http://alumni.imsa.edu/~matth/quant/433/shorpar/node15.html This page claims that in order "for factoring a given integer n" you need (see step 14) a classical computer. I found that very strange. 
Well, you don't _need_ to do those steps on a classical computer, but they are efficient on a classical computer. Given that the clock speeds of classical computers will be massively higher than quantum computers for the foreseeable future, it makes sense to take such a hybrid approach.
>  Starting with step 5 you need a quantum computer. In step 5 a discrete Fourier transform is computed. Why do you need that in order "for factoring a given integer n"? 
You don't need a Fourier transform to factor an integer. There are many algorithms for factorization that don't use this technique at all. However, the key in this approach is the periodicity of x^a mod n, and you should be able to see that a Fourier transform is handy if you want to pick out the period of a function.
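Why a Fourier transform picks out that periodicity can be seen classically (my own illustration, using a naive DFT rather than an FFT for clarity): sample f(a) = x^a mod n at M points; when the period r divides M, the transform is nonzero only at multiples of M/r, so the peak spacing reveals r.

```python
import cmath
import math

def dft_magnitudes(f):
    """Naive discrete Fourier transform magnitudes of a real sequence.
    (An FFT computes the same thing in far fewer steps.)"""
    M = len(f)
    return [abs(sum(f[a] * cmath.exp(-2j * math.pi * k * a / M)
                    for a in range(M)))
            for k in range(M)]

x, n, M = 7, 15, 64
f = [pow(x, a, n) for a in range(M)]    # 7, 4, 13, 1, 7, 4, 13, 1, ... period 4
mags = dft_magnitudes(f)
peaks = [k for k, m in enumerate(mags) if m > 1e-6]
r = M // peaks[1]                        # peak spacing M/r = 16, so r = 4
```

For x = 7, n = 15 the sequence repeats with period 4, and the only nonzero frequencies are k = 0, 16, 32, 48. Shor's algorithm does the analogous transform on a quantum register, where the exponentially many samples live in superposition.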
>  http://alumni.imsa.edu/~matth/quant/433/shorpar/node16.html claims: "In general Shor's algorithm simulation seems a good candidate for parallelization." Why the word simulation ? 
Simulation means modeling one type of machine on another. In this case, we are talking about modeling the execution of a quantum computer (quantum register machine) on a classical computer (classical register machine).
>  Why the word seems ? Does this imply that we are not sure ? 
They don't prove this to be true at any point that I can see.
> 
After studying the above documents (I am still in that process) I have the following remarks.
1. Shor's Algorithm allows you (as far as I currently understand) to perform factoring of an integer.
2. In order to do that you need both a classical computer and a quantum computer. 
No, you could do it all on a quantum computer (in theory)
>  3. It is not clear to me if you can do this whole calculation solely on a classical computer. 
Yes, but it will be slower
>  4. It is not clear to me what the advantages are by using a quantum computer. 
Speed
>  5. Assuming remark 1 is correct, and assuming that using a classical computer and a quantum computer both together is the best way to implement Shor's algorithm, then I am still not convinced that quantum computers in general are faster (and eventually will replace classical computers) 
Good! No one knows that practical QCs are possible nor that the requirements for general error correction won't outweigh the theoretical speedup under perfect conditions.
>  6. Shor's Algorithm smells like a Monte Carlo simulation. As part of the algorithm you have to observe register 2 and to measure register 1 (See url in remark 4) It is not clear to me if you only have to do this once. 
No, you might have to do it more than once. However, the speedup is great enough that the possibility of having to repeat the calculation a couple times is insignificant.
>  7. Assuming that you have to do this more often what is the chance that you never find the right answer specific with large numbers ? 
That you never find the right answer? 0 (ignoring the issues of error correction and such, which we are generally ignoring already)
What you really want to know is, what is the chance that you don't find the answer in an exponential number of trials. Unfortunately, I don't know the answer offhand, but it should be exponentially small.
>  8. It is interesting to study what happens if the number that you want to factor (using Shor's algorithm) is a prime number. 
Maybe, although there are efficient algorithms for determining primality, so there isn't much point in using Shor's algorithm.

>  After studying Shor's algorithm and quantum computers (I am still in that process) four items are still not clear to me: 1. Why does one need FFT in order to factor a number 
It's the only way we know of to factor a number fast on a quantum computer. The two-sentence intuition: we factor by finding periodicity. FFTs are very good at finding periodicity.
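The "factor by finding periodicity" reduction can be walked through entirely classically for a toy case (my own sketch): find the order r of x mod n, then gcds of x^(r/2) +/- 1 with n yield factors. The quantum computer's only job is to find r fast; here we brute-force it.

```python
from math import gcd

def order(x, n):
    """Smallest r > 0 with x**r = 1 (mod n). This brute-force search is
    exactly the step the quantum Fourier transform speeds up."""
    r, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        r += 1
    return r

def factors_from_order(x, n):
    """Classical post-processing of Shor's algorithm. Fails (returns None)
    for unlucky choices of x, in which case one retries with another x."""
    r = order(x, n)
    if r % 2:                            # odd order: the trick fails
        return None
    p = gcd(pow(x, r // 2) - 1, n)
    q = gcd(pow(x, r // 2) + 1, n)
    return (p, q) if 1 < p < n else None

result = factors_from_order(2, 15)       # 2,4,8,1 -> r = 4 -> gcd(3,15), gcd(5,15)
```

For x = 2, n = 15 this gives the factors 3 and 5; for x = 14 it fails and a retry is needed, which matches the thread's point that the algorithm is randomized but succeeds after a few attempts.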
>  2. Why are quantum computers soo much faster 
It's not that they're faster ... it's that they can do kinds of subroutines and algorithms that classical computers can't, so certain problems will take many fewer steps. If you run the SAME algorithm on a classical computer and a quantum computer, the quantum computer will probably be slower because in building a quantum computer, you have to worry about reversibility and coherence, which are likely going to force you to operate more slowly than a classical computer.
>  3. Shor's algorithm does not always work. 
The fastest classical algorithms for factoring are randomized, as well. If you run Shor's algorithm enough times, you will almost certainly find the factors. This is exactly the same situation as the quadratic or number field sieves. If you want an algorithm which deterministically factors, you have to use a substantially slower one.
>  4. Why not implementing the whole algorithm on a quantum computer. 
You could. But because quantum computers are going to be so much more expensive than classical computers, this is not going to be a good idea in practice.
I said:
> >  Try and do this kind of step on a classical computer. A good question here is: how does this kind of step help? 
>  Exactly Why do you need this. Does it work in practice. How far are we in the laboratory ? 
You need this type of step to compute a quantum Fourier transform. The quantum factoring algorithm has factored 15 (there's some debate about whether this was a true quantum computation or not, because it was done using NMR, but I suspect that 15 will be factored in the next few years by a less controversial technology). This took 6 qubits and a few hundred steps. To factor a 300-digit number, you probably need several thousand qubits (depending on how much error correction is required) and several billion steps.
About trial division (which Nicholas Vroom suggested as a factoring algorithm), I said:
> >  The computer science side of me absolutely insists on saying something here. This is nowhere near the best algorithm, 
>  There are two issues involved: 1. a) Start from a given algorithm, b) implement that algorithm on both a classical computer and a quantum computer c) calculate solution time i.e. speed 2. Find the fastest algorithm which solves a given problem for both a classical computer and a quantum computer. 
>  The pseudocode addresses the first issue. i.e. I want to understand if and how quantum computers are faster using the same algorithm (program) 
Finally, you have expressed this question in a way that I can understand it. No, quantum computers are not faster using the same algorithm. That's why you need all these funny quantum coherent steps and the FFT.
>  Conclusion: the speedup gained by the sieve algorithm is 126 orders; the speedup gained by the quantum algorithm is 140 orders. How do you explain that the quantum algorithm is better than the sieve algorithm? 
How do you explain why the sieve algorithm is so much better than trial division? Why is trial division better than guessing an answer and then checking it (this will also, theoretically, eventually factor)? Why is the number field sieve better than the quadratic sieve?
Determining how many steps a given algorithm uses is one of the central concerns of theoretical computer science, and in the case of the number field sieve, involves some fairly difficult mathematics. I discovered the quantum factoring algorithm and showed that it was polynomial time. Nobody has figured out how to factor in polynomial time on a classical computer.
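Those step counts can be reproduced roughly (my own back-of-envelope, not from the thread; the number field sieve line uses the standard L-notation heuristic with the o(1) term and all constant factors dropped):

```python
import math

lnN = 300 * math.log(10)        # natural log of a 300-digit N ~ 10**300

# log10 of the step counts:
log10_trial = 150               # trial division: ~sqrt(10**300) candidate divisors
log10_nfs = ((64 / 9) ** (1 / 3) * lnN ** (1 / 3)
             * math.log(lnN) ** (2 / 3)) / math.log(10)   # heuristic NFS cost
log10_shor = 3 * math.log10(300 * math.log2(10))          # ~ (number of bits)**3 gates
```

These come out near 150, 26, and 9: the same ladder as the 10E150, 10E24, and 10E10 figures quoted in the thread, up to the constant factors the asymptotic formulas ignore.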
>  Is this more than only a prediction ? 
Since nobody has actually factored a generic 300-digit number, using either a classical or a quantum computer, how could this be more than just a prediction?
If you mean: can you write down an algorithm which is guaranteed with high probability to factor a 300-digit number, in a specified number of steps, assuming that somebody has a quantum computer to run it on? Yes.
>  IMO it is easily possible that factoring a number on a quantum computer plus classical computer combination is slower than performing the same factoring on a classical computer which uses the sieve algorithm. 
Why?
Do you not believe that quantum computers will ever be built? (a reasonable point of view).
Do you not believe that quantum mechanics is correct? (a view shared by a number of otherwise reasonable people, but one which I think requires ignoring a great deal of evidence).
Do you not believe the prediction of the number of steps taken by a quantum computer using the factoring algorithm is correct, assuming that quantum computers can be built? (I don't believe I've encountered somebody holding this view before).
Do you believe that conventional computers will be 14 orders of
magnitude faster than quantum computers?
(Very optimistic prediction of technology for conventional computers,
even if quantum computers have the ridiculously slow millisecond clock
times of NMR technology.)
Do you believe that the number field sieve algorithm is orders of
magnitude faster than the observed and theoretically predicted running
times for it?
(The running times for the number field sieve have not been rigorously
proved mathematically, but believing this would be ridiculously optimistic.)
I hope this helps,
Peter Shor
> 
Nicolaas Vroom 
> > 
After studying Shor's algorithm and quantum computers (I am still in that process) four items are still not clear to me: 1. Why does one need FFT in order to factor a number 
> 
It's the only way we know of to factor a number fast on a quantum computer. The twosentence intution: We factor by finding periodicity. FFT's are very good at finding periodicity. 
In document:
http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
Shor's algorithm is described in 6 Steps (with a capital S).
However IMO you only need 2 Steps (and a GCD operation) to find
(if you are "lucky") the two prime numbers.
See https://www.nicvroom.be/shor.htm for more details.
IMO you do not need FFTs to find the periodicity.
However IMO already those 2 Steps are much slower than if you
use a classical factorization algorithm.
If you want to factorize 29*31 it requires 420 calculations to calculate
a string of numbers. The number at position 210 is the result you need.
i = 210, y = 869, p=29, q=31
A classical algorithm requires only 30 calculations.
See above url and QBasic program for details.
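The 29*31 numbers quoted here can be checked with a short classical sketch (this simulates only the period-finding arithmetic, not a quantum computer; base x = 21 is the base used for this example later in the thread):

```python
from math import gcd

n, x = 899, 21   # n = 29*31; base x = 21 reproduces i = 210, y = 869

# the "string of numbers": smallest r >= 1 with x^r = 1 (mod n) -- 420 steps
r = next(a for a in range(1, n * n) if pow(x, a, n) == 1)
y = pow(x, r // 2, n)
print(r, y)                          # 420 869

# the factors then come out of two GCDs
print(gcd(y - 1, n), gcd(y + 1, n))  # 31 29

# classical trial division, by contrast, reaches a factor in at most 28 tests
d = next(d for d in range(2, n) if n % d == 0)
print(d, n // d)                     # 29 31
```

This confirms the step counts being compared: 420 modular multiplications for the period string versus at most 28 trial divisions.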
> >  2. Why are quantum computers so much faster 
> 
It's not that they're faster ... it's that they can do kinds of subroutines and algorithms that classical computers can't, so certain problems will take many fewer steps. 
I expect you mean that quantum computers can solve FFT's in one Step while classical computers need many steps (LOOPS).
>  If you run the SAME algorithm on a classical computer and a quantum computer, the quantum computer will probably be slower 
> >  3. Shor's algorithm does not always work. 
> 
The fastest classical algorithms for factoring are randomized, as well. 
IMO Shor's algorithm falls in category b. In my matrix of 43 by 43 (see url) Shor's algorithm works 58 times and fails 20 times. In a type c solution you can be lucky and find the solution immediately. To compare a with c you have to test many examples in order to decide if your type c solution is the best.
>  I said: 
> > > 
Try and do this kind of step on a classical computer. A good question here is: how does this kind of step help? 
> 
> > 
Exactly. Why do you need this? Does it work in practice? How far are we in the laboratory? 
> 
You need this type of step to compute a quantum Fourier transform. 
As I explained above, in order to find periodicity IMO you do not need FFT. A completely different question is which computer is the best to calculate FFTs: a Quantum or a Classical Computer.
I will go in more detail in a separate posting.
>  Finally, you have expressed this question in a way that I can understand it. No, quantum computers are not faster using the same algorithm. 
It is IMO easily possible that factoring a number
on a quantum computer plus classical computer combination is slower
than performing the same factoring on a classical computer
which uses the sieve algorithm.
Why?
Do you not believe the prediction of the number of steps taken by a
quantum computer using the factoring algorithm is correct, assuming
that quantum computers can be built?
(I don't believe I've encountered somebody holding this view before).
>
> >
>
Yes, that is my opinion. However I have to be careful. See below.
To calculate the string of numbers in Step 2 (of the above mentioned
document) in case of 29*31 for Shor's algorithm you need 420 steps,
while on a Classical Computer you need in total 30 steps.
Whether with Shor's Algorithm you then still need an FFT is of
no "importance".
I have to be careful because in order to implement Shor's Algorithm
(all the 6 Steps) you need both a Classical Computer and a Quantum
Computer.
IMO to factorize a number you do not need the Steps required
on the Quantum Computer. The first 2 Steps on a classical computer
are enough to generate the results of the 43*43 matrix.
I am eager to know if the full 6 Steps give better performance.
> 
I hope this helps,
Peter Shor 
Many thanks for your time and effort to help me.
Nicolaas Vroom.
>  You need this type of step to compute a quantum Fourier transform. 
To understand if FFT's work on a Quantum Computer (QC) I have a number of questions.
See http://www.ccmr.cornell.edu/~mermin/qcomp/chap1.pdf page 18
1. Suppose I have a QC which contains 100 Qubits, all with the same amplitudes a and b, such that p = a^2 = b^2 = 1/2.
If I measure 50 of those Qubits now, then I will find on average 25 which are in state 1 and 25 which are in state 0.
If I measure the 50 other Qubits some time later, then I will find (I expect) on average the same result, i.e. 25 which are in state 1 and 25 which are in state 0.
The reason for this question is that if we consider Qubits which are built the same, i.e. with a 50% probability to be in state 1 and a 50% probability to be in state 0, then they maintain this state.
In Schrodinger's Cat experiment this is not the case
because it starts from a radioactive element which
has a certain half-life.
If you open the box shortly after the start of the
experiment you have a high chance that the cat is alive.
If you open the box long after the start of the
experiment you have a low chance that the cat is alive.
As such the outcome of the Schrodinger's Cat experiment is time
critical.
I assume that measuring Qubits is not time critical (on average).
2. Suppose I have a Register R with 3 Qubits. What can I do with this Register?
a) Can we initialize R with 8 (all) zeros?
b) Can we initialize R with the numbers 0 to 7?
c) Can we initialize R with 4 zeros and 4 sevens?
In case a, does this mean we always measure a zero?
In case b, does this mean we have the same chance
to measure a 0 or a 1 or a 2 etc. or a 7?
In case c, does this mean that we never will measure
the values 1 to 6?
3. Suppose I have two Registers R1 and R2
each consisting of 6 Qubits and
each initialized (the same) with the values
0,1,2,3 and 4. (with equal probability)
Now I want to Multiply those 2 Registers and store
the result in a third Register R3.
That means 0 * 0, 1 * 1, 2 * 2, 3 * 3 etc.
Does R3 contain the values 0, 1, 4, 9 and 16?
Are you sure that R3 does not contain any other value
(when measured), for example 6 or 12,
i.e. 0*1, 1*2, 2*3, 3*4, 4*0?
The reason for this question is: are we sure that
the corresponding values are correctly selected?
i.e. the cycle time of the two Registers R1 and R2
should be identical and stable
and the corresponding values should be correctly "linked".
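Question 3 hinges on whether the two registers are prepared jointly (entangled) or independently. A toy basis-state sketch (amplitudes omitted, equal weights assumed; this is an illustration, not a quantum simulation) shows the difference:

```python
from itertools import product

# Case 1: R1 and R2 prepared jointly as sum_a |a>|a> -- the "linked" case.
# A multiply gate |a>|b>|0> -> |a>|b>|a*b> then only ever sees b = a.
linked = [(a, a) for a in range(5)]
print(sorted({a * b for a, b in linked}))   # [0, 1, 4, 9, 16]

# Case 2: R1 and R2 prepared independently: every pair (a, b) is present,
# so cross products such as 6 = 2*3 and 12 = 3*4 do appear in R3.
indep = list(product(range(5), repeat=2))
outcomes = sorted({a * b for a, b in indep})
print(6 in outcomes, 12 in outcomes)        # True True
```

So measured values like 6 or 12 are excluded only if the registers were prepared jointly; two independently prepared uniform registers would yield all cross products.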
Nick
> 
> > 
Starting with step 5 you need a quantum computer. In step 5 a discrete Fourier transform is computed. Why do you need that in order "for factoring a given integer n"? 
> 
You don't need a Fourier transform to factor an integer. There are many algorithms for factorization that don't use this technique at all. However, the key in this approach is the periodicity of x^a mod n, and you should be able to see that a Fourier transform is handy if you want to pick out the periods of a function. 
> >  3. It is not clear to me if you can do this whole calculation solely on a classical computer. 
> 
Yes, but it will be slower 
> >  4. It is not clear to me what the advantages are by using a quantum computer. 
> 
Speed 
Nick
In article <3E00886B.59869D25@pandora.be>,
Nicolaas Vroom
>  If you study https://www.nicvroom.be/shor.htm and the simulation program then you will see that IMO you can factorize a number with (part of) Shor's Algorithm only on a standard PC without FFTs. IMO Shor's Algorithm requires more steps/calculations than a classical factorization algorithm 
>  (For example 29*31 requires 420 steps versus 30 steps) 
That's nice, but it's really beside the point. What does your simulation say when instead of the factors being 29 and 31, they are 29 and 31 digits long? Or 290 and 310 digits?
Because your later comments in the post lead me to believe that you are confused on this point, let me explain that when computer scientists talk about the "speed" of an algorithm, they usually mean the asymptotic scaling of the algorithm for large inputs.
However, when
> 
Peter Shor wrote: 
He was actually talking about the fact that the clock speeds of quantum computers are much lower than those of classical computers, for the foreseeable future.
When I said:
> 
>> > 
3. It is not clear to me if you can do this whole calculation solely on a classical computer. 
>> 
Yes, but it will be slower 
I was referring to the fact that a classical computer can't truly execute the same algorithm as a quantum computer. When a classical computer simulates Shor's algorithm, it is running a different algorithm which scales much worse.
This is a little subtle, since people are often a little sloppy with the terminology. However, to really talk about the complexity of an algorithm, you need to know the machine architecture well enough to know what the fundamental operations are.

======================================================================
Kevin Scaldeferri Calif. Institute of Technology
> 
In article <3E00886B.59869D25@pandora.be>,
Nicolaas Vroom 
> > 
If you study
https://www.nicvroom.be/shor.htm
and the simulation program
then you will see etc.
(For example 29*31 requires 420 steps versus 30 steps) 
> 
That's nice, but it's really beside the point. 
IMO this is a very important statement. Shor's Algorithm consists of three parts:
1. Pre-processing
2. FFT
3. Post-processing
The pre- and post-processing parts are performed on a classical computer. The FFT part is performed on a quantum computer. IMO in order to do factorization you do not need the FFT part.
See: http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
The preprocessing part involves the following parameters: n, q, x and a.
n is the number to factorize; q = 2^x with 2*n*n < q < 3*n*n.
For example: n = 5*11 = 55, x = 13 and q = 8192.
In the case of 5 * 11: x^a mod n = 13^20 mod 55 = 1
a/2 = 10, 13^10 mod 55 = 34 = y
GCD of (y-1 and 55) = (33 and 55) = 11
GCD of (y+1 and 55) = (35 and 55) = 5
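The arithmetic of this 5*11 example is easy to verify on a classical machine (a sketch of the classical post-processing only):

```python
from math import gcd

n, x = 55, 13
assert pow(x, 20, n) == 1            # the period a = 20
y = pow(x, 20 // 2, n)               # x^(a/2) mod n
print(y)                             # 34
print(gcd(y - 1, n), gcd(y + 1, n))  # 11 5
```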
In the case of n = 29 * 31: 21^420 mod n = 1
a/2 = 210, 21^210 mod n = 869 = y
That means you still need 420 calculations while classical factorization requires 30 calculations.
In the case of:
97*89: a = 353, y = 6498, classical 89 calculations
89*101: a = 2200, y = 7475, classical 89 calculations
97*101: a = 800, y = 4849, classical 97 calculations
prediction:
10^150*10^150: classical 10^150 calculations, a > 2*10^150, q > 10^600
What is worse, Shor's algorithm does not always work.
What I still do not understand is that the article in Scientific American claims that only 5*10^10 steps are required. Apparently they do not take the preprocessing into account.
>  What does your simulation say when instead of the factors being 29 and 31, they are 29 and 31 digits long? Or 290 and 310 digits? 
See above. All the numbers increase, but preprocessing is the worst.
> 
Because your later comments in the post lead me to believe that you
are confused on this point, let me explain that when computer
scientists talk about the "speed" of an algorithm, they usually mean
the asymptotic scaling of the algorithm for large inputs.
However, when 
> > 
Peter Shor wrote: No, quantum computers are not faster using the same algorithm. 
> 
He was actually talking about the fact that the clock speeds of quantum computers are much lower than those of classical computers, for the foreseeable future. 
I understand that.
In the case of 5 * 11, Step 4 "Discrete Fourier Transform of
Register 1" requires on a Classical Computer q * q/20
= 8192 * 410 calculations ≈ 34*10^5 calculations.
In case of 29*31 : q * q/420 = 2097152 * 2097152/420 calculations
On the quantum computer this ALWAYS (?) requires 1 calculation. Even if this ALWAYS takes 1 second, very soon, with more digits, the quantum computer will outperform FFT calculations on a classical computer.
The most important question to ask is: does FFT work on a QC?
>  When I said: 
> > 
> >> > 
3. It is not clear to me if you can do this whole calculation solely on a classical computer. 
> >> 
Yes, but it will be slower 
> 
I was referring to the fact that a classical computer can't truly execute the same algorithm as a quantum computer. When a classical computer simulates Shor's algorithm, it is running a different algorithm which scales much worse. 
If you want to compare, you also have to do the pre-processing and the post-processing on a Quantum Computer.
The question then is how many calculations are involved to do Step 4: 1? The same as on a classical computer?
A different question is how many steps are involved and how long it takes to do a full factorization with a classical algorithm on a Quantum Computer (without FFT) without any optimization (sieve), i.e. is this 10^150 calculations for two 150 digit numbers?
Nick
>  "Kevin A. Scaldeferri" wrote: 
> > 
In article <3E00886B.59869D25@pandora.be>,
Nicolaas Vroom 
> > > 
If you study
https://www.nicvroom.be/shor.htm
and the simulation program
then you will see etc.
(For example 29*31 requires 420 steps versus 30 steps) 
> > 
That's nice, but it's really beside the point. 
> 
IMO this is a very important statement. Shor's Algorithm consists of three parts: 1. Pre-processing. 2. FFT. 3. Post-processing. The pre- and post-processing parts are performed on a classical computer. The FFT part is performed on a quantum computer. IMO in order to do factorization you do not need the FFT part. See: http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf The preprocessing part involves the following parameters: n, q, x and a. n is the number to factorize; q = 2^x with 2*n*n < q < 3*n*n. For example: n = 5*11 = 55, x = 13 and q = 8192 
Step 2 isn't preprocessing in the quantum factoring algorithm; it's intrinsically part of the quantum piece of the algorithm. If you want to simulate the quantum algorithm on a classical computer, you do indeed need these 8192 calculations. If you had a quantum computer, you would only do this calculation one time, on a superposition of 8192 different inputs, and end up with a superposition of 8192 different values, which gets fed into the FFT.
... long sections snipped ...
>  A different question is how many steps are involved and how long it takes to do a full factorization with a classical algorithm on a Quantum Computer (without FFT) without any optimization (sieve), i.e. is this 10^150 calculations for two 150 digit numbers? 
Saying that the quadratic (or number field) sieve is an optimized version of trial division is kind of like saying that an automobile is an optimized version of a donkey cart. They work using completely different principles; if you were programming them, the only code you could reuse is the code for multiplication and division of large numbers. I would recommend reading a good explanation of the quadratic sieve. Unfortunately, the only one I know of offhand (Brian Hayes' article "The magic words are squeamish ossifrage") is not available online.
Peter Shor
peterwshor@aol.com (Peter Shor) wrote in message
> 
I would recommend reading a good
explanation of the quadratic sieve. Unfortunately, the only one
I know of offhand (Brian Hayes' article "The magic words are
squeamish ossifrage") is not available online.
Peter Shor 
I found this link to be helpful: http://www.math.ksu.edu/math511/notes/iqs2000.htm
> 
Nicolaas Vroom 
> > 
Step 2 involves modular exponentiation i.e.
the calculation of x^a mod n with a going from 0 to 8191.
These are 8192 values, calculations or 8192 steps !!!
Those steps are required to initialize FFT on the
quantum comp. 
> 
Step 2 isn't preprocessing in the quantum factoring algorithm; it's intrinsically part of the quantum piece of the algorithm. If you want to simulate the quantum algorithm on a classical computer, you do indeed need these 8192 calculations. 
I doubt that. If you test x^a mod n = 1 (for a from 0 to 8191) you can stop as soon as it holds. Then you know the periodicity.
In total there are four possibilities: Two that work and two that are in error. (See above url for details)
In case of an error you can also test other values of x.
For example you can also try x = 14 and x = 15
On a quantum computer you then get a superposition of resp.
16384 and 32768 different values.
In case of x = 14: 14^10 mod 55 = 1 ("Same" as for x=13)
In case of x = 15 the string of values is:
0,1,15,5,20,25,45,15,5,20,25,45,15 etc etc
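The three bases mentioned here behave differently, which a small classical sketch makes explicit: for x = 13 the period is 20, for x = 14 it is 10, and x = 15 shares a factor with 55, so its powers never return to 1; in that case gcd(x, n) already yields a factor directly.

```python
from math import gcd

n = 55

def order(x, n):
    # smallest r >= 1 with x^r = 1 (mod n); None when gcd(x, n) > 1,
    # since x is then not invertible and its powers never return to 1
    if gcd(x, n) != 1:
        return None
    return next(a for a in range(1, n) if pow(x, a, n) == 1)

print(order(13, n))   # 20
print(order(14, n))   # 10
print(order(15, n))   # None
print(gcd(15, n))     # 5 -- a factor of 55 without any period finding
```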
>  If you had a quantum computer, you would only do this calculation one time, on a superposition of 8192 different inputs, and end up with a superposition of 8192 different values, which gets fed into the FFT. 
I call this time: time1
In the case of a 300 digit number x is roughly 2000 and q = 2^x ≈ 10^600; that means you only do the calculation one time (time2) on a superposition of q different inputs, and end up etc.
The question is are time1 and time2 identical ? If not what are they ?
In the original article by Scientific American they write that a quantum algorithm (to find the prime factors of a 300 digit number) would take only 5*10^10 steps, or less than a second at terahertz speed.
The question is: how do those 5*10^10 steps compare with the 6 Steps of the article under discussion i.e. the above mentioned time (Step 2) and the time to perform the FFT ?
Nick
In article
>  Some poor soul not cited by Anatoly Vorobey wrote: 
>>  It is IMO easily possible that factoring a number on a quantum computer plus classical computer combination is slower than performing the same factoring on a classical computer which uses the sieve algorithm. 
> 
Why?
Do you not believe that quantum computers will ever be built? (a reasonable point of view). Do you not believe that quantum mechanics is correct? (a view shared by a number of otherwise reasonable people, but one which I think requires ignoring a great deal of evidence). 
Is it possible that nature might turn out to not be indifferent to the number of states in a superposition?
I mean, one "naive" interpretation of how quantum factoring and other quantum algorithms work is that we take advantage of the fact that when we act on a superposition of states our action is performed "in parallel" on each state in the superposition, and we are kind of gaining massively parallel execution without paying anything for that, right? It's as if nature were "storing" huge, in fact unbounded in principle, amounts of "data" for us. So my question is, is it possible that in reality nature might turn out to be limited in terms of how much data it can "store" in this way? Could the theory of quantum mechanics be changed so that superpositions with a very very large number of states in them would be outlawed, and the theory would still remain logically coherent? Perhaps by positing that such "large" superpositions would collapse as if observed, or in some other manner?
Forgive me if these questions are stupid or too naive, I'm but an amateur.
 Anatoly Vorobey, my journal (in Russian): http://www.livejournal.com/users/avva/ mellon@pobox.com http://pobox.com/~mellon/ "Angels can fly because they take themselves lightly"  G.K.Chesterton
>  Is it possible that nature might turn out to not be indifferent to the number of states in a superposition? 
I'm not going to say anything is *impossible*, but I would say *highly* implausible.
You should note that in Quantum theory, the "number of states in a superposition" is not a physical quantity. It depends strongly on your choice of a basis. Looked at one way, you might have a superposition of a very large number of states, but just by *thinking* about it differently, it becomes a single state with no superposition involved at all.
The "Quantum" answer is no, it isn't possible.
>  we are kind of gaining massively parallel execution without paying anything for that, right? It's as if the nature was "storing" huge, in fact unbounded in principle, amounts of "data" for us. 
Maybe "kind of". Quantum and classical information are sort of like Dollars and Soviet Rubles used to be. Dollars were obviously better than Rubles, but if you wanted to convert Dollars to Rubles (or buy something priced in Rubles) you had to accept the official 1 to 1 exchange rate. The universe has no black market.
It *looks* like nature *must* be storing lots of data, but when you try to retrieve that data, you only get one classical bit per qubit. (Similarly, by violating Bell's inequality, it *looks* like nature *must* be transmitting information instantaneously, but if *you* want to send some, you can't.)
>  Could the theory of quantum mechanics be changed so that superpositions with very very large number of states in them would be outlawed, and the theory would still remain logically coherent? Perhaps by positing that such "large" superpositions would collapse as if observed, or in some other manner? 
I doubt it, and I'm pretty sure such a theory wouldn't look much like quantum mechanics. Ordinary quantum effects typically do involve superpositions of very large (if not infinite) numbers of states.
For instance, light can be focused to a fine point, but can also be prepared in a state with a very small uncertainty in momentum. In quantum mechanics, position states can be described as infinite superpositions of momentum states, and vice versa.
For what you are suggesting to happen, one type of states (let's assume for concreteness that it is position states) would have to be the "real" states and the other (momentum) the "superpositions". If there were a bound on the number of "real" states in a superposition, there would be a limit to how precisely the "superpositions" could be measured. In our example that would be a limit to how coherent a laser could be.
I don't know what the experimental limit on such an effect would be, but I would guess that it is *very* tight. Probably tight enough that it wouldn't bother any practical quantum computer (there *are* lots of things that *do* make a quantum computer hard to build).
Ralph Hartley
>  It *looks* like nature *must* be storing lots of data, but when you try to retrieve that data, you only get one classical bit per qubit. 
This has always made me sceptical about whether quantum computers could ever work in theory, even if the practical problems could be overcome (which I'm sure they will be, eventually).
This point often seems overlooked by proponents of quantum computers. I have never seen a cogent explanation of how the result is read out of a quantum computer, in a fashion that is superior to a classical computer.
Bucksbaum's "Library of Congress in an electron" claim fails at this reading-out stage; this major difficulty is only obscured because his experiments operate on millions of identically prepared electrons. By operating on such a large number of electrons or atoms, Bucksbaum is able to exploit the million-fold parallelism present and thereby extract information to digitally reconstruct the electrons' wavefunctions. In this experiment, information recovery is limited to 3 bits per electron: each electron in his setup can occupy one of 8 levels. The more atoms used, the more information they can recover.
In effect they haven't got a newfangled quantum computer but an old-fashioned parallel-processing computer.
Is this a generic problem with all quantum computers?
Cheers, Michael C Price
http://mcp.longevityreport.com http://www.hedweb.com/manworld.htm
> 
In article 
> > 
Some poor soul not cited by Anatoly Vorobey wrote: 
> 
> >> 
It is IMO easily possible that factoring a number on a quantum computer plus classical computer combination is slower than performing the same factoring on a classical computer which uses the sieve algorithm. 
> 
I wrote that in message 7 in this thread.
[Moderator's note: "message 7" is not anything welldefined  KS]
But unfortunately, after studying documents related to quantum computers, my understanding of how quantum computers operate is becoming less.
My basic reading is the eight documents mentioned in:
https://www.nicvroom.be/shor.htm
specifically documents 1, 6, 7 and 8.
I have also improved my simulation program of the 6 steps
outlined in document 1.
https://www.nicvroom.be/progrm19.htm
One very interesting factorization number is n = 85.
Step 1 with n=85 gives x=14 and q = 2^x = 16384.
Step 2 gives the following sequence of 16 numbers:
1,14,26,24,81,29,66,74,16,54,76,44,21,39,36,79
and again.
This sequence repeats itself 1024 (=M) times.
(GCD a=8, y=16, p = 17 and q = 5)
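Steps 1-2 for n = 85 can be checked directly (a classical sketch; base 14 as in the sequence above):

```python
from math import gcd

n, x = 85, 14
seq = [pow(x, a, n) for a in range(16)]
print(seq)   # [1, 14, 26, 24, 81, 29, 66, 74, 16, 54, 76, 44, 21, 39, 36, 79]

r = next(a for a in range(1, n) if pow(x, a, n) == 1)   # the period
y = pow(x, r // 2, n)                                   # 14^8 mod 85
print(r, y)                            # 16 16
print(gcd(y - 1, n), gcd(y + 1, n))    # 5 17
```

This reproduces the 16-number sequence, the period 16, y = 16 and the factors 5 and 17 quoted above.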
In Step 3 you will measure in Register 2 one of those values.
For example try 1
Step 4 is Fast Fourier Transform.
with c going from 0 to q-1 = 16383.
Step 5 you will measure Register 1.
The question is what you can measure and what the probabilities are.
My simulation program shows that for the FFT values we get:
for c = 0: 1024, for c = 1024: 1024, for c = 2048: 1024, for c = 3072: 1024, etc.,
i.e. for c = n*1024 we get 1024, with n going from 0 to 15.
All the intermediate values are zero.
See https://www.nicvroom.be/shorfft.htm
The chances p(c) are all equal to 6.25, i.e. for c = n*1024 we get p(c) = 6.25, with n going from 0 to 15.
The grand total of all those values is 100. I do not know if that is correct.
The question is what the value 6.25 means.
According to the literature it means that when you measure Register 1
you have a 6.25% chance of either measuring:
0? or 1024 or 2048 or 3072 or 4096 etc. until 15360.
Using any of those numbers (except 0) you perform
in Step 6 the Continued Fraction Convergent to find the periodicity.
My interpretation of the FFT values in step 5 is that we only have a 16/16384=1/1024 chance of finding the value 1024 and a 1023/1024 chance of finding the value 0.
What is the correct interpretation?
Is my simulation program correct?
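As a numerical cross-check of the 6.25% figure, the ideal distribution can be computed directly (a sketch, with q = 16384 and period r = 16 as above; after observing register 2, register 1 holds M = q/r = 1024 equally weighted values d0, d0+r, d0+2r, ...):

```python
import cmath

q, r = 16384, 16
M = q // r   # 1024 repetitions of the period

def prob(c):
    # |QFT amplitude|^2 of outcome c (the common phase from d0 drops out)
    s = sum(cmath.exp(2j * cmath.pi * c * r * k / q) for k in range(M))
    return abs(s) ** 2 / (M * q)

print(round(prob(1024), 4))   # 0.0625 -- a peak: c is a multiple of q/r
print(round(prob(1000), 4))   # 0.0    -- in between, the M terms cancel
print(round(sum(prob(m * 1024) for m in range(16)), 4))   # 1.0
```

Each of the 16 peaks carries probability 1/16 = 6.25%, and together they sum to 100%; the interference does not favour c = 0 over the other peaks.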
Nick https://www.nicvroom.be/ Question 23 Discusses Shor's Algorithm
> 
My interpretation of the FFT values in step 5 is that we only have a 16/16384 = 1/1024 chance of finding the value 1024 and a 1023/1024 chance of finding the value 0. What is the correct interpretation? Is my simulation program correct? Nick https://www.nicvroom.be/ Question 23 Discusses Shor's Algorithm 
My simulation program is based around the following document: "Shor's Algorithm for factorizing large numbers" by G. Eric Moorhouse, UW Math http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf In this document Shor's Algorithm is implemented in 6 steps.
In step 4 of this document the state phi is calculated
as a summation of two loops:
an inner loop and an outer loop.
The inner loop consists of a summation of
e^iwcd with d going from 0 to 409 (in case n=55).
The outer loop consists of a summation of
e^ixc * "result of inner loop"
with c going from 0 to 8191 (in case n=55).
In order to calculate state phi I use both the cos part and the sin part.
As such I combined both in order to calculate
a) the value of the FFT
b) the probability Pr(c)
In fact in order to calculate the value of the FFT
I called the summation of the cos part: tot1
I called the summation of the sin part: tot2
tot1 for the inner loop then becomes:
tot1 = cos(0wc)+cos(1wc)+cos(2wc)+cos(3wc) etc.
tot1 can be split up into two parts: an even and an odd part:
f1c = cos(0wc)+cos(2wc)+cos(4wc)+cos(6wc) etc.
f2c = cos(1wc)+cos(3wc)+cos(5wc)+cos(7wc) etc.
Each of those functions f1c and f2c can be seen
as points on a circle.
When you do that, each summation f1c and f2c can be computed
in one line.
For details see:
https://www.nicvroom.be/shoropt.htm
The same you can also do for tot2: tot2 = sin(0wc)+sin(1wc)+sin(2wc)+sin(3wc) etc.
This strategy tremendously improves the calculation of both the FFT value and the Pr(c) value.
Is this optimisation allowed ?
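The "points on a circle" trick is the closed form of a geometric series: sum_{d=0}^{M-1} e^{iwd} = (e^{iwM} - 1)/(e^{iw} - 1) whenever e^{iw} is not 1. Its real part gives the cos sums (tot1) and its imaginary part the sin sums (tot2). A quick numerical check (w and M chosen arbitrarily):

```python
import cmath

def direct(w, M):
    # term-by-term summation: M complex additions
    return sum(cmath.exp(1j * w * d) for d in range(M))

def closed(w, M):
    # geometric-series closed form: O(1) work regardless of M
    if abs(cmath.exp(1j * w) - 1) < 1e-12:
        return complex(M)   # degenerate case: every term equals 1
    return (cmath.exp(1j * w * M) - 1) / (cmath.exp(1j * w) - 1)

w, M = 0.37, 1024
print(abs(direct(w, M) - closed(w, M)) < 1e-6)   # True
```

So the optimisation is exact, not an approximation; it replaces an M-term loop by a constant number of operations per output c.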
Nick.
> 
I have also improved my simulation program of the 6 steps outlined in document 1. https://www.nicvroom.be/progrm19.htm Is my simulation program correct? 
My simulation program is based around the following document: "Shor's Algorithm for factorizing large numbers" by G. Eric Moorhouse, UW Math http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf In this document Shor's Algorithm is implemented in 6 steps.
The central part is FFT or QFT
FFT implies two summations:
an inner-loop summation of the function e^iw
over d (from 0 to M-1), with w being a function of d and c;
an outer-loop summation over c (from 0 to q-1).
There are two ways to calculate the values of the function e^iw: Cos functionality or Sin functionality. With Cos functionality in order to calculate the value you take the cos part. With Sin functionality you take the Sin part.
My first question is: when you measure Register 1,
which values do you observe:
based on Cos functionality or based on Sin functionality?
The aforementioned document discusses the case for n=55; it does not tell you which values are observed (indirectly it does), but shows the probabilities of finding certain values. (Those probabilities follow the Sin functionality.)
As explained above FFT calculation involves a summation
of an inner loop.
For large values of n this calculation becomes very
time consuming.
You can divide the function e^iw in two parts f1 and f2
f1 = e^i0w + e^i2w + e^i4w + e^i6w (even)
f2 = e^i1w + e^i3w + e^i5w + e^i7w (odd)
when you do that each function becomes like points on a circle
and it is then easy to calculate the summation of e^iw
in one line.
for details see https://www.nicvroom.be/shoropt.htm
see also https://www.nicvroom.be/progrm19.htm
My second question is: is this correct ?
When you include this optimisation trick, simulation of Shor's algorithm with an Excel spreadsheet with values like n = 8383 becomes feasible.
At my home page a free Excel program is available. Is this simulation program correct?
>  Nick https://www.nicvroom.be/ Question 23 Discusses Shor's Algorithm 
> 
My interpretation of the FFT values in step 5 is that we only have a 16/16384 = 1/1024 chance of finding the value 1024 and a 1023/1024 chance of finding the value 0. What is the correct interpretation? Is my simulation program correct? Nick https://www.nicvroom.be/ Question 23 Discusses Shor's Algorithm 
My simulation program is based around the following document: "Shor's Algorithm for factorizing large numbers" by G. Eric Moorhouse, UW Math http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf In this document Shor's Algorithm is implemented in 6 steps.
In step 4 of this document the state phi is calculated as a summation of two loops: an inner loop and an outer loop. The inner loop consists of a summation of e^iwcd with d going from 0 to 409 (in case n=55). The outer loop consists of a summation of e^ixc * "result of inner loop" with c going from 0 to 8191 (in case n=55). To calculate state phi I introduced the concepts Cos functionality and Sin functionality.
Based on information received from John Baez,
in order to calculate state phi I should use both
the cos part and the sin part
(and not one or the other).
As such I have combined both in order to calculate
a) the value of the FFT
b) the probability Pr(c)
In fact in order to calculate the value of the FFT
I called the summation of the cos part: tot1
I called the summation of the sin part: tot2
The value of the FFT then becomes:
tot = SQR(tot1*tot1 + tot2*tot2)
The same (approximately) applies to the calculation of Pr(c).
The above-mentioned document at page 20,
Step 5: Measure register 1,
writes:
Let's say we observe register 1 to be in state |4915>
(This happens with probability 4.4 %)
My simulation program shows 4.379 %
Next the document depicts the values of Pr(c) in a graphical figure with 16 vertical bars.
My simulation program matches this figure
but not completely. See for details
https://www.nicvroom.be/shorfft.htm
4 bars are missing:
one of 5% each for c=0, c=2048, c=4096 and c=6144.
If you take those 4 bars into account then together
with the 16 vertical bars the total of all the Pr(c)
values becomes 100%.
Why does the figure on page 20 show only 16 bars? Is this correct?
If you measure a value of 0 then the Continued Fraction
Convergent (CFC) does not work.
Is that correct?
If you measure 4096 then the result of the CFC gives:
Possible values for r are multiples of r1 = 2
Is that correct ?
My simulation tells me that the FFT value for c = 4915
is 383.50
Is 383.50 the value measured in this example ?
However you get the same value 383.50 in total for 8 values: c = 819, 1229, 2867, 3277, 4915, 5325, 6963 and 7373
Assuming that you have measured 383.50, how do you know that the "cause" was c = 4915 and not one of the other values of c?
I have updated the programs findprim.bas and findprim.xls
Nick
> 
What is the correct interpretation? Is my simulation program correct? 
Then you measure register 1 on your QC.
You can again enter nothing; then the QC selects something at random.
But you can also select for example 65472
or (205123972).
Then in Step 6 on your PC you perform the
Continued Fraction Convergent of 65472:
0 a = 0 p = 65472 / 268435456
1 a = 4100 p = 256 / 65472 Convergent 1 / 4100
2 a = 255 p = 192 / 256 Convergent 255 / 1045501
3 a = 1 p = 64 / 192 Convergent 256 / 1049601
4 a = 3 p = 0 / 64 Convergent 1023 / 4194304
Possible values for periodicity r are multiples of r1 = 4100
r = 4100
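The convergent table above can be reproduced mechanically (a sketch; the candidate periods are the convergent denominators of c/q that stay below n):

```python
def convergents(num, den):
    # continued-fraction convergents h/k of num/den
    h0, h1, k0, k1 = 0, 1, 1, 0
    while den:
        a, rem = divmod(num, den)
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        num, den = den, rem
        yield h1, k1

n, q, c = 8383, 2 ** 28, 65472
print(list(convergents(c, q)))
# [(0, 1), (1, 4100), (255, 1045501), (256, 1049601), (1023, 4194304)]
print([k for _, k in convergents(c, q) if 1 < k < n])   # [4100]
```

Only the denominator 4100 survives the cutoff k < n, which is why r = 4100 is the candidate period here.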
The next step is to calculate y = 28^2050 mod 8383
The result is 3736
But how do you do that "fast" in "one step" ?
On my PC I do that in a do loop of 2050 calculations,
but that is a slow method.
In order to calculate 28^2050 directly you would need a tremendously powerful PC with huge registers.
Remember this is only a simple example; for really large numbers of 100 digits this becomes impossible to handle, except if you do it in a loop, but that makes Shor's algorithm very slow.
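On the classical side this particular worry can be set aside: 28^2050 mod 8383 does not need a 2050-step do-loop or huge registers. Square-and-multiply reduces modular exponentiation to about log2(exponent) squarings, with every intermediate value smaller than the modulus squared (a sketch; Python's built-in pow(28, 2050, 8383) does the same thing internally; how a quantum circuit implements this step is a separate question):

```python
from math import gcd

def modpow(base, exp, mod):
    # square-and-multiply: ~log2(exp) squarings, intermediates < mod^2
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = result * base % mod
        base = base * base % mod
        exp >>= 1
    return result

y = modpow(28, 2050, 8383)
print(y)                                    # 3736
print(gcd(y + 1, 8383), gcd(y - 1, 8383))   # 101 83
```

For the 2050 exponent this is about 12 squarings instead of 2050 multiplications, and the cost stays logarithmic in the exponent even for 100-digit numbers.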
If someone thinks that you can perform this single calculation on a Quantum Computer then please give me the details of how this works internally.
Finally, if you have y = 3736 then you have to calculate GCD(3737, 8383) and GCD(3735, 8383), which will give you the final answer: 83 and 101. But that is easy and gives no problem on a PC.
>  Nick https://www.nicvroom.be/ Question 23 Discusses Shor's Algorithm 
Created: 23 December 2002
Last Updated: 27 March 2003