Quantum computing really should be treated as a different tool for now. Maybe someday, if quantum computers become much easier to build, an abstraction layer will emerge that lets the end user treat one like a normal computer, but at the level of "looking at a series of 1s and 0s" versus "looking at a series of superposed particles" they are extremely different in function.
I'm someone not really aware of the consequences of each quantum of progress in quantum computing. But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.
How much closer does this work bring us to the Quantum Crypto Apocalypse? How much time do I have left before I need to start budgeting it into my quarterly engineering plan?
> But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.
Probably not. Unless a real, sudden, unexpected breakthrough happens, best practice will be to move to quantum-resistant algorithms long before this becomes a relevant issue.
And practically speaking it's only public-key crypto that is an issue; your symmetric keys are fine (oversimplifying slightly, but practically speaking this is true).
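The back-of-envelope behind "symmetric is fine": Grover's algorithm only gives a quadratic speedup on brute-force key search, so against a 256-bit key the attack cost is still astronomical,

    \sqrt{2^{256}} = 2^{128} \approx 3.4 \times 10^{38} \text{ cipher evaluations}

and Grover iterations have to run sequentially, so even that figure understates the difficulty. The quadratic-only speedup is also why doubling symmetric key lengths is generally considered a sufficient hedge, whereas Shor breaks RSA and ECC outright.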
You'll need to focus on asym and DH stuff. If your symmetric keys are 256 bits you should be fine there.
The hope is that most of this should just be: Update to the latest version of openssl / openssh / golang-crypto / what have you and make sure you have the handshake settings use the latest crypto algorithms. This is all kind of far flung because there is very little consensus around how to change protocols for various human reasons.
At some point you'll need to generate new asymmetric keys as well, which is where I think things will get interesting. Hardware-based solutions just don't exist today and will probably take a long time, due to the usual cycle: companies want to meet US federal government standards because of regulations or because they sell to the government; the government is taking its time standardizing protocols and seems interested in adding more certified algorithms as well; actually getting something approved for FIPS 140 (the relevant standard) takes over a year at this point just to get your paperwork processed; and everyone wants to move faster. Software can move quicker in terms of development, but you have the normal tradeoffs there, with keys being easier to exfiltrate and the same issues with formal certification.
Maybe my tinfoil hat is a bit too tight, but every time fedgov wants a new algo certified I question how strong it is and if they've already figured out a weakness. Once bitten twice shy or something????
The NSA has definitely weakened or back-doored crypto. It’s not a conspiracy or even a secret! It was a matter of (public) law in the 90s, such as “export grade” crypto.
Most recently Dual_EC_DRBG was forced on American vendors by the NSA, but the backdoor private key was replaced by Chinese hackers in some Juniper devices and used by them to spy on westerners.
Look up phrases like "nobody but us" (NOBUS), which is the aspirational goal of these approaches, but it often fails, leaving everyone, including Americans and their allies, exposed.
You should look up the phrase "once bitten, twice shy"; I think you missed the gist of my comment. We've already been bitten at least once by incidents like the ones you describe. From then on, it will always be in the back of my mind that friendly little suggestions on crypto algorithms from the federal government will always be received with suspicion. Accepting that, most people who are unaware will assume someone is wearing a tinfoil hat.
Perhaps, but you've got to ask yourself how valuable your data will be 20-30 years in the future. For some people that is a big deal, maybe. For most people it is a very low-risk threat. Most private data has a shelf life after which it is no longer valuable.
I'm not sure anyone really knows this although there is no shortage of wild speculation.
If you have keys that need to be robust for 20 years you should probably be looking into trying out some of the newly NIST approved standard algorithms.
This is another hype piece from Google's research and development arm. This is a theoretical application to increase the number of logical qubits in a system by decreasing the error caused by quantum circuits. They just didn't do the last part yet, so the application is yet to be seen.
It's the opposite of a theoretical application, and it's not a hype piece. It's more like an experimental confirmation of a theoretical result mixed with an engineering progress report.
They show that a certain milestone was achieved (error rate below the threshold), show experimentally that this milestone implies what theorists predicted, talk about how this milestone was achieved, and characterize the sources of error that could hinder further scaling.
They certainly tested how it scales up to the scale that they can build. A major part of the paper is how it scales.
>> "Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."
> Google forgot to test if it scales I guess?
Remember that quantum computers are still being built. The paper is the equivalent of
> We tested the scaling by comparing how our algorithm runs on a chromebook, a server rack, and google's largest supercomputing cluster and found it scales well.
The sentence you tried to interpret was, continuing this analogy, the equivalent of
>Google's largest supercomputing cluster is not large enough for us, we are currently building an even bigger supercomputing cluster, and when we finish, our algorithm should (to the best of our knowledge) continue along this good scaling law.
The experiment is literally all about scaling. It tests scaling from distance 3 to 5 to 7. It shows the logical qubit lifetime doubles each time the distance is increased. The sentence you quoted is describing an expectation that this doubling will continue to larger distances, when larger chips are built.
This is the first quantum error correction experiment showing actual improvement as size is increased (without any cheating such as postselection or only running for a single step). It was always believed in theory that bigger codes should have more protection, but there have been various skeptics over the years saying you'd never actually see these improvements in practice, due to the engineering difficulty or due to quantum mechanics breaking down or something.
Make no mistake; much remains to be done. But this experiment is a clear indication of progress. It demonstrates that error correction actually works, and it says that quantum computers should be able to make up for qubit quality with qubit quantity.
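For a sense of what "quantity fixes quality" means numerically, here is a toy projection assuming the reported scaling simply continues. The error-suppression factor per distance step (lam) and the d=7 logical error rate are rough stand-ins of my own, not the paper's exact values:

    # Surface-code heuristic: the logical error rate per cycle shrinks by a factor
    # lam every time the code distance d increases by 2 (d = 3 -> 5 -> 7 -> ...).
    lam = 2.0          # assumption: error-suppression factor per distance step
    eps_d7 = 1.5e-3    # assumption: rough logical error rate per cycle at d = 7

    for d in range(7, 32, 4):
        eps = eps_d7 / lam ** ((d - 7) / 2)
        print(f"d = {d:2d}: ~{eps:.1e} logical errors per cycle")

    # Physical qubit count grows roughly like d^2, so each halving of the error
    # rate costs a modest constant factor in hardware. That trade only works
    # below threshold; above threshold, adding qubits makes things worse.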
Lol yeah, the whole problem with quantum computation is the scaling; that's literally the entire problem. It's trivial to make a qubit, harder to make 5, impossible so far to make 1000 good ones. "If it scales" is just wishy-washy language to cover "in the ideal scenario where everything works perfectly and nothing goes wrong, it will work perfectly".
The fact that there is a forward-looking subsection about “the vision for fault tolerance” (emphasis mine) almost entirely composed of empty words and concluding in “we are just starting this exciting journey, so stay tuned for what’s to come!” tells you “not close at all”.
It's not a major part of the paper, but Google tested a neural network decoder (which had the highest accuracy), and some of their other decoders used priors that were found using reinforcement learning (again for greater accuracy).
Is this an actually good explanation? The introduction immediately made me pause:
> In classical computers, error-resistant memory is achieved by duplicating bits to detect and correct errors. A method called majority voting is often used, where multiple copies of a bit are compared, and the majority value is taken as the correct bit
No. In classical computers, memory errors are handled with error-correcting codes, not by duplicating bits and majority voting. Duplicating bits would be a very wasteful strategy when you can add significantly fewer bits and achieve the same result, which is what error correction techniques like ECC give you. Maybe they confused it with logic circuits, where there isn't any more efficient strategy?
Physicist here. Classical error correction may not always be a straight up repetition code, but the concept of redundancy of information still applies (like parity checks).
In a nutshell, in quantum error correction you cannot use redundancy because of the no-cloning theorem, so instead you embed the qubit subspace in a larger space (using more qubits) such that when correctable errors happen the embedded subspace moves to a different "location" in the larger space. When this happens it can be detected and the subspace can be brought back without affecting the states within the subspace, so the quantum information is preserved.
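For anyone who wants to see the "subspace moves and gets moved back" picture concretely, here is a minimal numpy sketch of the simplest case, the 3-qubit bit-flip code (a toy example, not how real hardware does it). The stabilizer measurements reveal which qubit flipped without ever revealing the encoded amplitudes:

    import numpy as np

    I = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.array([[1., 0.], [0., -1.]])

    def kron(*ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # Encode a|0> + b|1> as a|000> + b|111> (the code subspace)
    a, b = 0.6, 0.8
    encoded = np.zeros(8)
    encoded[0], encoded[7] = a, b

    # Stabilizers Z1Z2 and Z2Z3 are +1 on the code subspace
    S1, S2 = kron(Z, Z, I), kron(I, Z, Z)

    # A bit flip on the middle qubit moves the state to an orthogonal subspace
    corrupted = kron(I, X, I) @ encoded

    # Measuring the stabilizers gives the syndrome; it depends only on the
    # error, not on the amplitudes a and b.
    syndrome = tuple(round(float(corrupted @ S @ corrupted)) for S in (S1, S2))
    print(syndrome)                          # (-1, -1): the middle qubit flipped

    # Apply X to the flagged qubit: the original superposition is restored untouched
    recovered = kron(I, X, I) @ corrupted
    print(np.allclose(recovered, encoded))   # True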
You are correct in the details, but not the distinction. This is exactly how classical error correction works as well.
Just an example to expand on what others are saying: in the N^2-qubit Shor code, the X information is recorded redundantly in N disjoint sets of N qubits each, and the Z information is recorded redundantly in a different partitioning of N disjoint sets of N qubits each. You could literally have N observers each make separate measurements on disjoint regions of space and all access the X information about the qubit. And likewise for Z. In that sense it's a repetition code.
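For N = 3 this is the familiar 9-qubit Shor code; writing out the codewords (standard presentation, up to normalization conventions) makes the two interleaved repetition codes visible. Within each block of three the qubits always agree, a repetition code against bit flips, while the relative sign is repeated identically across the three blocks, a repetition code against phase flips:

    |0_L\rangle = \tfrac{1}{2\sqrt{2}} (|000\rangle + |111\rangle)(|000\rangle + |111\rangle)(|000\rangle + |111\rangle)
    |1_L\rangle = \tfrac{1}{2\sqrt{2}} (|000\rangle - |111\rangle)(|000\rangle - |111\rangle)(|000\rangle - |111\rangle)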
That’s also correct but not what the sibling comments are saying ;)
There are quantum error correction methods that resemble classical error-correcting codes far more than they resemble replication, and that resemblance is fundamental: they ARE classical error correction codes transposed into quantum operations.
This happens to be the same way that classical error correction works, but quantum.
While you are correct, here is a fun side fact.
The electric signals inside a (classical) processor or digital logic chip are made up of many electrons. Electrons are not fully well behaved and there are often deviations from ideal behavior. Whether a signal gets interpreted as 0 or 1 depends on which way the majority of the electrons are going. The lower the power you operate at, the fewer electrons there are per signal, and the more errors you will see.
So in a way, there is a repetition code in a classical computer (or other similar devices, such as an optical fiber). Just in the hardware substrate, not in software.
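A toy Monte Carlo makes the signal-strength point concrete (purely illustrative numbers: each carrier independently "votes" the right way with some probability, and the readout thresholds the total):

    import numpy as np

    rng = np.random.default_rng(0)

    def bit_error_rate(n_electrons, p_right=0.7, trials=200_000):
        # Each carrier independently goes the right way with probability p_right;
        # the bit is read out as the majority of the carriers.
        correct = rng.binomial(n_electrons, p_right, size=trials)
        return np.mean(correct <= n_electrons / 2)

    for n in (1, 11, 101, 1001):
        print(n, bit_error_rate(n))

    # The error rate drops roughly exponentially with the number of carriers:
    # fewer electrons per signal (lower power) means more readout errors.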
This seems like the kind of error an LLM would make.
It is essentially impossible for a human to confuse error correction and “majority voting”/consensus.
I don't believe it is the result of an LLM; more likely an oversimplification, or maybe a minor fuckup on the part of the author, as simple majority voting is often used in redundant systems, just not for memories, where there are better ways.
As for an LLM result, this is what ChatGPT says when asked "How does memory error correction differ from quantum error correction?", among other things.
> Relies on redundancy by encoding extra bits into the data using techniques like parity bits, Hamming codes, or Reed-Solomon codes.
And when asked for a simplified answer
> Classical memory error correction fixes mistakes in regular computer data (0s and 1s) by adding extra bits to check for and fix any errors, like a safety net catching flipped bits. Quantum error correction, on the other hand, protects delicate quantum bits (qubits), which can hold more complex information (like being 0 and 1 at the same time), from errors caused by noise or interference. Because qubits are fragile and can’t be directly measured without breaking their state, quantum error correction uses clever techniques involving multiple qubits and special rules of quantum physics to detect and fix errors without ruining the quantum information.
Absolutely no mention of majority voting here.
EDIT: GPT-4o mini does mention majority voting as an example of a memory error correction scheme, but not as the way to do it. The explanation is overall more clumsy, but generally correct; I don't know enough about quantum error correction to fact-check it.
People always have made bad assumptions or had misunderstandings. Maybe the author just doesn't understand ECC and always assumed it was consensus-based. I do things like that (I try not to write about them without verifying); I'm confident that so do you and everyone reading this.
>Maybe the author just doesn't understand ECC and always assumed it was consensus-based.
That's likely, or it was LLM output and the author didn't know enough to know it was wrong. We've seen that in a lot of tech articles lately, where authors assume that something that is true-ish in one area is also true in another, and it's obvious they just don't understand the other area they are writing about.
Frankly, no state-of-the-art LLM would make this error. Perhaps GPT-3.5 would have, but the space of errors they tend to make now is in areas of ambiguity or things that require deductive reasoning, math, etc. In areas that are well described in the literature they tend not to make mistakes.
I've gone to lots of talks on quantum error correction, and most of them start out with explaining the repetition code. Not because it's widely used, but because it is very easy to explain to someone who knows nothing about classical coding theory. And because the surface code is essentially the product of two repetition codes, so if you want to understand surface code quantum error correction you don't need to understand any classical codes besides the repetition code.
All that is to say that someone who had been to a few talks on quantum error correction but didn't directly work on that problem might reasonably believe that the repetition code is an important classical code.
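To make the connection concrete, here is a minimal sketch of a distance-3 classical repetition code. Its two parity checks play the same role as the bit-flip stabilizers in the sketch further up, and the surface code roughly interleaves one such code for bit flips with one for phase flips:

    def encode(bit):
        # Distance-3 repetition code: 0 -> 000, 1 -> 111
        return [bit, bit, bit]

    def decode(word):
        # Two parity checks: s1 compares bits 1,2 and s2 compares bits 2,3.
        s1 = word[0] ^ word[1]
        s2 = word[1] ^ word[2]
        flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s1, s2)]
        corrected = list(word)
        if flip is not None:
            corrected[flip] ^= 1       # fix the single flagged bit
        return corrected[0], (s1, s2)

    word = encode(1)
    word[2] ^= 1                       # single bit flip during storage
    print(decode(word))                # (1, (0, 1)): recovered, error on bit 3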
That threw me off as well. Majority voting works for industries like aviation, but that's still about checking results of computations, not all memory addresses.
By a somewhat generous interpretation, classical computer memory depends on implicit duplication/majority voting in the form of an increased cell size for each bit, rather than discrete duplication. In the same way, repeating a signal sent over a wire can mean using a lower baud rate and holding the signal level for a longer time. A bit isn't stored in a single atom or electron. A cell storing a single bit can be considered a group of smaller cells connected in parallel, each storing a duplicate value, and the majority vote happens automatically in analog form as you read the total sum of the charge within the memory cell.
Depending on how abstractly you talk about computers (which can be the case when contrasting quantum computing with classical computing), memory can refer not just to RAM but to anything holding state, and a classical computer can refer to any computing device, including simple logic circuits, not just your desktop computer. Fundamentally, desktop computers are one giant logic circuit.
Also RAID-1 is a thing.
At a higher level, backups are a thing.
So I would say there are enough examples of duplication used in practice for error resistance in classical computers.
RAID-1 does not do on the fly error detection or correction. When you do a read you read from one of the disks with a copy but don't validate. You can probably initiate an explicit recovery if you suspect there's an error but that's not automatic. RAID is meant to protect against the entire disk failing but you just blindly assume the non-failing disk is completely error free. FWIW no formal RAID level I'm aware of does majority voting. Any error detection/correction is implemented through parity bits with all the problems that parity bits entail unless you use erasure code versions of RAID 6.
The reason things work this way is that you'd have 2x read amplification on the bus for error detection and 3x read amplification on the bus for majority-voting error correction, plus something in the read I/O hot path validating the data, which adds latency. Additionally, RAID-1 is 1:1 mirroring, so it can't do error correction automatically at all, because it doesn't know which copy is the error-free one. At best it can transparently handle errors when the disk refuses to service the request, but it cannot handle corrupt data that the disk doesn't notice. If you do FDE then you probably would at least notice corruption and be able to reliably correct even with just RAID-1, but I'm not sure if anyone leverages this.
RAID-1 and other backup / duplication strategies are for durability and availability but importantly not for error correction. Error correction for durable storage is typically handled by modern techniques based on erasure codes while memory typically uses Hamming codes because they were the first ones, are cheaper to implement, and match better to RAM needs than Reed Solomon codes. Raptor codes are more recent but patents are owned by Qualcomm; some have expired but there are continuation patents that might cover it.
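Since Hamming codes keep coming up, here is a minimal Hamming(7,4) sketch: it corrects any single flipped bit in a 7-bit word while storing only 3 extra bits per 4 data bits, far cheaper than triplication. The G/H matrices are a standard textbook construction; this is illustrative, not how a memory controller literally lays out its bits:

    import numpy as np

    # Hamming(7,4): systematic generator and parity-check matrices over GF(2)
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def encode(data4):
        return (np.array(data4) @ G) % 2

    def correct(word7):
        syndrome = (H @ word7) % 2
        if syndrome.any():        # non-zero syndrome: it matches the column of the flipped bit
            col = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            word7 = word7.copy()
            word7[col] ^= 1
        return word7[:4]          # systematic code: the data is the first 4 bits

    code = encode([1, 0, 1, 1])
    code[6] ^= 1                  # single-bit error in storage
    print(correct(code))          # [1 0 1 1]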
Yes, and it's worth pointing out these examples because they don't work as quantum memories. Two more: magnetic memory is based on magnets which are magnetic because they are built from many tiny (atomic) magnets, all (mostly) in agreement. Optical storage is similar, much like the parent's example of a signal being slowly sent over a wire.
So the next question is why doesn't this work for quantum information? And this is a really great question which gets at the heart of quantum versus classical. Classical information is just so fantastically easy to duplicate that normally we don't even notice this, it's just too obvious a fact... until we get to quantum.
Maybe they were thinking of control systems where duplicating memory, lockstep cores and majority voting are used. You don't even have to go to space to encounter such a system, you likely have one in your car.
The explanation of Google's error correction experiment is basic but fine. People should keep in mind that Quantum Machines sells control electronics for quantum computers which is why they focus on the control and timing aspects of the experiment. I think a more general introduction to quantum error correction would be more relevant to the Hackernews audience.
It's just a standard example of a code that works classically but not quantumly to demonstrate the differences between the two. More or less any introductory talk on quantum error correction would mention it.
> > In classical computers, error-resistant memory is achieved by duplicating bits to detect and correct errors. A method called majority voting is often used, where multiple copies of a bit are compared, and the majority value is taken as the correct bit
The author clearly doesn't know the topic, nor did they study the basics in an undergraduate course.
Error correction? Took a graduate course that used
W. Wesley Peterson and E. J. Weldon, Jr., Error-Correcting Codes, Second Edition, The MIT Press, Cambridge, MA, 1972.
Sooo, the subject is anything but new.
There was a lot of algebra with finite field theory.
ECC is not easy to explain, and "error correction is done with error-correcting codes" sounds like a tautology rather than an explanation, unless you give a full technical explanation of exactly what ECC is doing.
Regardless of whether the parent's sentence is a tautology, the explanation in the article is categorically wrong.
Categorically might be a bit much. Duplicating bits with majority voting is an error correction code, it's just not a very efficient one.
Like, it's wrong, but it's not totally out-of-this-world wrong. Or more specifically, it's in the correct category.
It's categorically wrong to say that that's how memory is error corrected in classical computers because it is not and never has been how it was done. Even for systems like S3 that replicate, there's no error correction happening in the replicas and the replicas are eventually converted to erasure codes.
I'm being a bit pedantic here, but it is not categorically wrong. Categorically wrong doesn't just mean "very wrong"; it is a specific type of being wrong, a type that this isn't.
Repetition codes are a type of error correction code. It is thus in the category of error correction codes. Even if it is not the right error correction code, it is in the correct category, so it is not a categorical error.
I interpret that sentence as talking about real computers, which does put it outside the category.
That's the definition of a normal error not a category error.
If you disagree, what do you see as something that would be in the correct category but wrong in the sentence?
The normal definition of category error is something that is so wrong it doesn't make sense on a deep level. Like for example if they suggested quicksort as an error correction code.
The mere fact we are talking about "real" computers should be a tip-off that it's not a category error, since people can build new computers. Category errors are wrong a priori. It's possible someone tomorrow will build a computer using a repetition code for error correction. It is not possible they will use quicksort for ECC. A repetition code is in the right category of things even if it is the wrong instance. Quicksort is not in the right category.
> The normal definition of category error is something that is so wrong it doesn't make sense on a deep level.
Can you show me a definition that says that about the phrase "categorically wrong"?
And I think the idea that computers could change is a bit weak.
Well, it's about as categorically wrong as saying quantum computers use similar error correction algorithms as classical computers. Categorically, both are error correction algorithms.
Yeah, I couldn't quite remember if ECC is just Hamming codes or uses something more modern like fountain codes, although those are technically FEC. So in the absence of stating something incorrectly I went with the tautology.
Eh, I don’t think it is categorically wrong… ECCs are based on the idea of sacrificing some capacity by adding redundant bits that can be used to correct for some number of errors. The simplest ECC would be just duplicating the data, and it isn’t categorically different than real ECCs used.
Then you're replicating and not error correcting. I've not seen any replication systems that use the replicas to detect errors. Even RAID 1 which is a pure mirroring solution only fetches one of the copies when reading & will ignore corruption on one of the disks unless you initiate a manual verification. There are technical reasons why that is related to read amplification as well as what it does to your storage cost.
I guess that is true, pure replication would not allow you to correct errors, only detect them.
However, I think explaining the concept as duplicating some data isn't horribly wrong for non-technical people. It is close enough to allow the person to understand the concept.
To be clear. A hypothetical replication system with 3 copies could be used to correct errors using majority voting.
However, there's no replication system I've ever seen (memory, local storage, or distributed storage) that detects or corrects for errors using replication because of the read amplification problem.
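For what it's worth, here is a toy sketch of what that hypothetical would look like, and of why plain mirroring can only detect, not attribute, a mismatch. This is not how any real storage system I know of behaves, for the read-amplification reasons above:

    def read_with_majority(replicas):
        # 3 copies: majority voting corrects any single corrupted replica
        assert len(replicas) == 3
        return max(set(replicas), key=replicas.count)

    def read_with_mirror(replicas):
        # 2 copies (RAID-1 style): a mismatch can be detected but not attributed
        a, b = replicas
        if a != b:
            raise ValueError("copies disagree; no way to tell which is correct")
        return a

    print(read_with_majority([b"data", b"daXa", b"data"]))   # b'data'
    try:
        read_with_mirror([b"data", b"daXa"])
    except ValueError as e:
        print("mirror:", e)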
https://en.wikipedia.org/wiki/Triple_modular_redundancy
The ECC memory page has the same nonsensical statement:
> Error-correcting memory controllers traditionally use Hamming codes, although some use triple modular redundancy (TMR). The latter is preferred because its hardware is faster than that of Hamming error correction scheme.[16] Space satellite systems often use TMR,[17][18][19] although satellite RAM usually uses Hamming error correction.[20]
So it makes it seem like TMR is used for memory, only to then back off and say it's not used for it. ECC RAM does not use TMR, and I suggest that the Wikipedia page is wrong and confused about this. The cited links on both pages are either dead or completely unrelated, discussing TMR within the context of FPGAs being sent into space. And yes, TMR is a fault tolerance strategy for logic gates and compute more generally. It is not a strategy that has been employed for storage, full stop, and evidence to the contrary is going to require something stronger than confusing wording on Wikipedia.
I think it's fundamentally misleading, even on the central quantum stuff:
I missed what you saw, that's certainly a massive oof. It's not even wrong, in the Pauli sense, i.e. it's not just a simplistic rendering of ECC.
It also strongly tripped my internal GPT detector.
Also, it goes on and on about realtime decoding; the premise of the article is that Google's breakthrough is real time, yet the Google article was quite clear that it isn't real time.*
I'm a bit confused, because it seems completely wrong, yet they published it, and there's enough phrasing that definitely doesn't trip my GPT detector. My instinct is someone who doesn't have years of background knowledge / formal comp sci & physics education made a valiant effort.
I'm reminded that my thoroughly /r/WSB-ified MD friend brings up "quantum computing is gonna be big, what stonks should I buy" every 6 months, and a couple of days ago he sent me a screenshot from my AI app, where he'd had a few conversations hunting for opportunities.
* "While AlphaQubit is great at accurately identifying errors, it’s still too slow to correct errors in a superconducting processor in real time"
This is not about AlphaQubit. It's about a different paper, https://arxiv.org/abs/2408.13687 and they do demonstrate real-time decoding.
> we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor’s fast 1.1 μs cycle duration
Oh my, I really jumped to a conclusion. And what fantastic news to hear. Thank you!
Yeah, I didn't want to just accuse the article of being AI-generated since quantum isn't my specialty, but this kind of error instantly tripped my "it doesn't sound like this person knows what they're talking about" alarm, which likely indicates a bad LLM helped summarize the quantum paper for the author.
Wow, they managed to make a website that scales everything except the main text when adjusting the browser's zoom setting.
They set the root font size relative to the total width of the screen (1.04vw), with the rest of the styling using rem units.
I've never seen anyone do that before. It may well be the only way to circumvent browser zoom.
Why don't browsers reduce the screen width when you zoom in, as they adjust every other unit (cm, px)?
They effectively do. All CSS absolute units are effectively defined as ratios of each other, and zoom * DPI * physical pixels sets the ratio of how many physical pixels each absolute unit will end up turning into. Increase zoom and the screen seems to have shrunk to some smaller 'cm' and so on.
For things like 'vh' and 'vw' it just doesn't matter "how many cm" the screen is as 20% of the viewing space always comes out to 20% of the viewing space regardless how many 'cm' that is said to be equivalent to.
Oh duh, right. Thanks!
Why is it so desirable to circumvent browser zoom? I hate it.
There should be a law for this. Who in their right mind wants this?
It's a quantum zoom: it's zoomed in and not zoomed in at the same time.
It's interesting how this (and other css?) means the website is readable in a phone in portrait, but the text is tiny in landscape!
Note the paper they are referring to was published August 27, 2024
https://arxiv.org/pdf/2408.13687
While I'm still eager to see where Quantum Computing leads, I've got a new threshold for "breakthrough": Until a quantum computer can factor products of primes larger than a few bits, I'll consider it a work in progress at best.
If qubit count increased by 2x per year, largest-number-factored would show no progress for ~8 years. Then the largest number factored would double in size each year, with RSA2048 broken after a total of ~15 years. The initial lull is because the cost of error correction is so front loaded.
Depending on your interests, the initial insensitivity of largest-number-factored as a metric is either great (it reduces distractions) or terrible (it fails to accurately report progress). For example, if the actual improvement rate were 10x per year instead of 2x per year, it'd be 3 years until you realized RSA2048 was going to break after 2 more years instead of 12 more years.
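A back-of-envelope version of that argument, with all inputs as explicit assumptions of mine (roughly 10^2 useful physical qubits today, and the oft-cited Gidney-Ekerå estimate of about 2x10^7 physical qubits to factor RSA-2048). It lands in the same ballpark as the parent's figures:

    import math

    qubits_today = 1e2    # assumption: order-of-magnitude useful physical qubits now
    qubits_needed = 2e7   # assumption: ~20M physical qubits for RSA-2048 (Gidney-Ekera estimate)

    for growth_per_year in (2, 10):
        years = math.log(qubits_needed / qubits_today) / math.log(growth_per_year)
        print(f"{growth_per_year}x per year -> ~{years:.0f} years to the RSA-2048 scale")

    # 2x per year  -> ~18 years
    # 10x per year -> ~5 years
    # Either way, the "largest number factored" metric barely moves for most of
    # that time, because the early qubit growth is eaten by error correction overhead.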
What's the rough bit count of the largest numbers anyone's quantum computer can factor today? Breaking RSA2048 would be a huge breakthrough, but I'm wondering if they can even factor `221 = 13*17` yet (RSA8).
And as I've mentioned elsewhere, the other QC problems I've seen sure seem like simulating a noisy circuit with a noisy circuit. But I know I don't know enough to say that with confidence.
Like I said above, the size of number that can be factored will sit still for years while error correction spins up. It'll be a good metric for progress later; it's a terrible metric for progress now. Too coarse.
Heh, that seems evasive. Good metric or not, it makes me think they aren't at the point where they can factor `15 = 3*5`.
I'm not trying to disparage quantum computing. I think the topic is fascinating. At one point I even considered going back to school for a physics degree so I would have the background to understand it.
I'm not trying to be evasive. I'm directly saying quantum computers won't factor interesting numbers for years. That's more typically described as biting the bullet.
There are several experiments that claim to factor 15 with a quantum computer (e.g. [1][2]). But beware these experiments cheat to various degrees (e.g. instead of performing period finding against multiplication mod 15 they do some simpler process known to have the same period). Even without cheating, 15 is a huge outlier in the simplicity of the modular arithmetic. For example, I think 15 is the only odd semiprime where you can implement modular multiplication by a constant using nothing but bit flips and bit swaps. Being so close to a power of 2 also doesn't hurt.
Beware there's a constant annoying trickle of claims of factoring numbers larger than 15 with quantum computers, but using completely irrelevant methods where there's no reason to expect the costs to scale subexponentially. For example, Zapata (the quantum startup that recently went bankrupt) had one of those [3].
[1]: https://www.nature.com/articles/414883a
[2]: https://arxiv.org/abs/1202.5707
[3]: https://scottaaronson.blog/?p=4447
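To make "period finding against multiplication mod 15" concrete, here is the classical scaffolding of Shor's algorithm with the order-finding step done by brute force; that step is the only part a quantum computer actually speeds up, and it is the arithmetic those experiments are nominally performing:

    from math import gcd

    def shor_classical(N, a):
        assert gcd(a, N) == 1
        # Order finding: the step Shor's algorithm replaces with a quantum
        # period-finding circuit; here we just brute-force it.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2 == 1:
            return None                  # odd order: pick a different a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None                  # trivial square root: pick a different a
        return sorted((gcd(y - 1, N), gcd(y + 1, N)))

    print(shor_classical(15, 7))         # order of 7 mod 15 is 4 -> [3, 5]
    print(shor_classical(15, 2))         # order of 2 mod 15 is 4 -> [3, 5]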
Thank you for the reply and links. Good stuff.
I guess like most of these kinds of projects, it'll be smaller, less flashy breakthroughs or milestones along the way.
People dramatically underestimate how important incremental unsung progress is, perhaps because it just doesn't make for a nice memorable story compared to Suddenly Great Person Has Amazing Idea Nobody Had Before.
> While I'm still eager to see where Quantum Computing leads
Agreed. Although I'm no expert in this domain, I've been watching it a long time as a hopeful fan. Recently I've been increasing my (currently small) estimated probability that quantum computing may not ever (or at least not in my lifetime) become a commercially viable replacement for SOTA classical computing for solving valuable real-world problems.
I wish I knew enough to have a detailed argument but I don't. It's more of a concern triggered by reading media reports that seem to just assume "sure it's hard, but there's no doubt we'll get there eventually."
While I agree quantum algorithms can solve valuable real-world problems in theory, it's pretty clear there are still a lot of unknown unknowns in getting all the way to "commercially viable replacement solving valuable real-world problems." It seems at least possible we may still discover some fundamental limit(s) preventing us from engineering a solution that's reliable enough and cost-effective enough to reach commercial viability at scale. I'd actually be interested in hearing counter-arguments that we now know enough to be reasonably confident it's mostly just "really hard engineering" left to solve.
My first question any time I see another quantum computing breakthrough: is my cryptography still safe? Answer seems like yes for now.
Depends on your personal use.
In general to the ‘Is crypto still safe’ question, the answer is typically no - not because we have a quantum computer waiting in the wings ready to break RSA right now, but because of a) the longevity of the data we might need to secure and b) the transition time to migrate to new crypto schemes
While the NIST post quantum crypto standards have been announced, there is still a long way to go for them to be reliably implemented across enterprises.
Shor’s algorithm isn’t really going to be a real-time decryption tool; the realistic threat is more of a ‘harvest now, decrypt later’ approach.
I have a pseudo-theory that the universe will never allow quantum physics to provide an answer to a problem where you didn't already know the result from some deterministic means. This will be some bizarre consequence of information theory colliding with the measurement problem.
:-)
You can use quantum computers to ask about the behavior of a random quantum computer. Google actually did this a while ago, and the result was something a classical computer couldn't simulate in any reasonable time.
There will be a thousand breakthroughs before that point.
That just means that the word "breakthrough" has lost its meaning. I would suggest the word "advancement", but I know this is a losing battle.
> That just means that the word "breakthrough" has lost its meaning.
This. Small, incremental and predictable advances aren't breakthroughs.
Quantum computers can (should be able to; do not currently) solve many useful problems without ever being able to factor large numbers.
What are some good examples?
The one a few years ago where Google declared "quantum supremacy" sounded a lot like simulating a noisy circuit by implementing a noisy circuit. And that seems a lot like simulating the falling particles and their collisions in an hourglass by using a physical hourglass.
The only one I can think of is simulating physical systems, especially quantum ones.
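One way to see why simulating quantum systems is the canonical example: merely storing the state of n qubits on a classical machine takes 2^n complex amplitudes. A back-of-the-envelope sketch (assuming 16 bytes per amplitude; my numbers, not from the thread):

```python
# Rough illustration: classical memory needed just to hold an n-qubit
# state vector, assuming 16 bytes (two 64-bit floats) per amplitude.
for n in (10, 30, 50, 100):
    print(f"{n} qubits: 2^{n} amplitudes, about 2^{n + 4} bytes of memory")
```

Around 50 qubits that's petabytes of RAM just for the state, before you apply a single gate.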
Google's supremacy claim didn't impress me; besides being a computationally uninteresting problem, it really just motivated the supercomputer people to improve their algorithms.
To really establish this field as a viable going concern probably needs somebody to do "something" with quantum that is experimentally verifiable but not computable classically, and is a useful computation.
That is equivalent to proving BQP ≠ P. We currently don't know that any problem even exists that can be solved efficiently (in polynomial time) by quantum computers but not by classical computers.
I wrote a long-ish comment about what you can expect of QC just yesterday
https://news.ycombinator.com/item?id=42212878
Thank you for the link. I appreciate the write-up. This sentence though:
> breaking some cryptography schemes is not exactly the most exciting thing IMHO
You're probably right that we'll migrate to QC-resistant algorithms before this happens, but if factoring was solved today, I think it would be very exciting :-)
I think it would be very __impactful__, but not really useful for humanity, rather the opposite.
Who knows. "It's difficult to make predictions, especially about the future", but it might be a good thing to accelerate switching to new crypto algorithms sooner, leaving fewer secrets to be dug up later.
Yeah I think that's the issue that makes it hard to assess quantum computing.
My very layman understanding is that there are certain things it will be several orders of magnitude better at, but at "simple" things for a normal machine, quantum will be just as bad if not massively worse.
It really should be treated as a different tool for right now. Maybe some day in the very far future, if it becomes easier to make quantum computers, an abstraction layer will emerge so that the end user thinks it's just like a normal computer. But at the level of "looking at a series of 1s and 0s" versus "looking at a series of superimposed particles", it's extremely different in function.
I'm someone not really aware of the consequences of each quantum of progress in quantum computing. But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.
How much closer does this work bring us to the Quantum Crypto Apocalypse? How much time do I have left before I need to start budgeting it into my quarterly engineering plan?
> But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.
Probably not. Unless a real sudden unexpected breakthrough happens, best practice will be to use quantum-resistant algorithms long before this becomes a relevant issue.
And practically speaking it's only public-key crypto that is an issue; your symmetric keys are fine (oversimplifying slightly, but practically speaking this is true).
You'll need to focus on asymmetric and DH stuff. If your symmetric keys are 256 bits, you should be fine there.
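The usual reasoning behind "256 bits should be fine" is that Grover's search only gives a quadratic speedup, so the effective strength of a k-bit key drops to roughly k/2 bits. A rough sketch of that arithmetic (my illustration, not a precise cost model):

```python
# Rough illustration: Grover's algorithm needs on the order of 2**(k/2)
# evaluations to brute-force a k-bit key, versus ~2**k classically.
for k in (128, 256):
    print(f"{k}-bit key: ~2^{k} classical guesses, ~2^{k // 2} quantum (Grover)")
```

So a 256-bit key still leaves ~2^128 work even in the best quantum case, which is why the advice above focuses on the asymmetric side.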
The hope is that most of this should just be: update to the latest version of openssl / openssh / golang-crypto / what have you, and make sure your handshake settings use the latest crypto algorithms. This is all kind of far-flung because there is very little consensus around how to change protocols, for various human reasons.
At some point you'll need to generate new asymmetric keys as well, which is where I think things will get interesting. HW-based solutions just don't exist today and will probably take a long time, due to the inevitable cycle of: companies want to meet US fedgov standards because of regulations and selling to fedgov; fedgov is taking its sweet time to standardize protocols and seems interested in adding more certified algorithms as well; actually getting something approved for FIPS 140 (the relevant standard) takes over a year at this point just to get your paperwork processed; and everyone wants to move faster. Software can move quicker in terms of development, but you have the normal tradeoffs there, with keys being easier to exfiltrate and the same issue with formal certification.
Maybe my tinfoil hat is a bit too tight, but every time fedgov wants a new algo certified I question how strong it is and if they've already figured out a weakness. Once bitten twice shy or something????
The NSA has definitely weakened or back-doored crypto. It’s not a conspiracy or even a secret! It was a matter of (public) law in the 90s, such as “export grade” crypto.
Most recently Dual_EC_DRBG was forced on American vendors by the NSA, but the backdoor private key was replaced by Chinese hackers in some Juniper devices and used by them to spy on westerners.
Look up phrases like “nobody but us” (NOBUS), which is the aspirational goal of these approaches, but it often fails, leaving everyone, including Americans and their allies, exposed.
You should look up the phrase "once bitten twice shy", as I think you missed the gist of my comment. We've already been bitten at least once by incidents like the ones you've described. From then on, it will always be in the back of my mind that friendly little suggestions on crypto algos from fedgov should be received with suspicion. Accepting that, most people who are unaware of the history will assume someone is wearing a tinfoil hat.
The primary threat model is data collected today via mass surveillance that is currently unbreakable will become breakable.
There are already new “quantum-proof” security mechanisms being developed for that reason.
Perhaps, but you've got to ask yourself how valuable your data will be 20-30 years in the future. For some people that may be a big deal. For most people it is a very low-risk threat. Most private data has a shelf life beyond which it is no longer valuable.
Yes, and people are recording encrypted communications now for this reason.
I'm not sure anyone really knows this although there is no shortage of wild speculation.
If you have keys that need to be robust for 20 years you should probably be looking into trying out some of the newly NIST approved standard algorithms.
Does anyone on HN have an understanding of how close this achievement brings us to useful quantum computers?
This is another hype piece from Google's research and development arm. It's a theoretical approach to increasing the number of logical qubits in a system by decreasing the errors introduced by quantum circuits. They just haven't done the last part yet, so the application is yet to be seen.
https://arxiv.org/abs/2408.13687
"Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."
Google forgot to test if it scales I guess?
It's the opposite of a theoretical application, and it's not a hype piece. It's more like an experimental confirmation of a theoretical result mixed with an engineering progress report.
They show that a certain milestone was achieved (error rate below the threshold), show experimentally that this milestone implies what theorists predicted, talk about how this milestone was achieved, and characterize the sources of error that could hinder further scaling.
They certainly tested how it scales up to the scale that they can build. A major part of the paper is how it scales.
>> "Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."
> Google forgot to test if it scales I guess?
Remember that quantum computers are still being built. The paper is the equivalent of
> We tested the scaling by comparing how our algorithm runs on a chromebook, a server rack, and google's largest supercomputing cluster and found it scales well.
The sentence you tried to interpret was, continuing this analogy, the equivalent of
> Google's largest supercomputing cluster is not large enough for us, we are currently building an even bigger supercomputing cluster, and when we finish, our algorithm should (to the best of our knowledge) continue along this good scaling law.
The experiment is literally all about scaling. It tests scaling from distance 3 to 5 to 7. It shows the logical qubit lifetime doubles each time the distance is increased. The sentence you quoted is describing an expectation that this doubling will continue to larger distances, when larger chips are built.
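To make "doubling per distance step" concrete, here's a back-of-the-envelope sketch; the starting error rate and suppression factor below are illustrative placeholders, not the measured values. If every increase of the code distance by 2 cuts the logical error rate by a roughly constant factor, the rate falls exponentially with distance:

```python
# Illustrative only: exponential suppression of the logical error rate
# when each distance step (d -> d + 2) divides it by a constant factor.
# These numbers are placeholders, NOT the measured ones.
LAMBDA = 2.0       # assumed suppression factor per distance step
EPS_D3 = 3e-3      # assumed logical error rate per cycle at distance 3

for d in range(3, 16, 2):
    steps = (d - 3) // 2
    print(f"distance {d:2d}: ~{EPS_D3 / LAMBDA**steps:.1e} logical errors per cycle")
```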
This is the first quantum error correction experiment showing actual improvement as size is increased (without any cheating such as postselection or only running for a single step). It was always believed in theory that bigger codes should have more protection, but there have been various skeptics over the years saying you'd never actually see these improvements in practice, due to the engineering difficulty or due to quantum mechanics breaking down or something.
Make no mistake; much remains to be done. But this experiment is a clear indication of progress. It demonstrates that error correction actually works. It says that quantum computers should be able to solve the qubit-quality problem with qubit quantity.
disclaimer: worked on this experiment
Very neat!
Lol yeah, the whole problem with quantum computation is the scaling; that's literally the entire problem. It's trivial to make a qubit, harder to make 5, impossible to make 1000. "If it scales" is just wishy-washy language to cover "in the ideal scenario where everything works perfectly and nothing goes wrong, it will work perfectly".
The fact that there is a forward-looking subsection about “the vision for fault tolerance” (emphasis mine) almost entirely composed of empty words and concluding with “we are just starting this exciting journey, so stay tuned for what’s to come!” tells you “not close at all”.
Doesn't feel like a breakthrough. A positive engineering step forward, sure, but not a breakthrough.
And wtf does AI have to do with this?
It's not a major part of the paper, but Google tested a neural network decoder (which had the highest accuracy), and some of their other decoders used priors that were found using reinforcement learning (again for greater accuracy).
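For anyone wondering what a "decoder" does here: it maps the measured parity checks (the syndrome) to the most likely correction. A toy sketch for a 3-bit repetition code (my illustration; Google's decoders work on a surface code with noisy syndrome information, which is where neural networks and learned priors come in):

```python
# Toy illustration: decoding a 3-bit repetition code. The two syndrome
# bits are parities of neighbouring data bits; the decoder maps each
# syndrome to the single bit flip that most likely explains it.
SYNDROME_TO_FLIP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # only parity(b0, b1) flipped -> bit 0 likely flipped
    (1, 1): 1,     # both parities flipped       -> bit 1 likely flipped
    (0, 1): 2,     # only parity(b1, b2) flipped -> bit 2 likely flipped
}

def decode(bits):
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flip = SYNDROME_TO_FLIP[syndrome]
    if flip is not None:
        bits[flip] ^= 1
    return bits

print(decode([0, 1, 0]))  # single flip on bit 1 -> corrected to [0, 0, 0]
```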