
What is Quantum Computing?

Quantum computers hold the promise of massive increases in computational speed for certain classes of problems. However, what quantum computers are and how they work can be a bit of a mystery, mostly because quantum physics is a mystery. We are going to dive in and find out how they work.

First, a little bit about quantum physics. At the turn of the 20th century, physicists were a pretty confident lot. Maxwell had described electromagnetism with his famous equations, and most problems appeared to be solved. However, a few phenomena persisted that could not be explained by “classical” physics, such as blackbody radiation and the photoelectric effect. It is safe to say that understanding these phenomena, and the resulting discovery of Quantum Mechanics, was the accomplishment of 20th century physics.

Quantum Mechanics describes the physical world at the scale of the atom, where the classical laws of Newton and Maxwell fall apart. One of the main takeaways, especially for Quantum Computing, is that NOTHING is definite at the quantum level. A particle such as an electron exists in states, defined by properties like its position, speed and energy. It may be low energy, it may be high energy, but we cannot know its state until we measure it. You may know the probability for the energy of an electron, but not the exact value. Schrödinger’s Equation describes the probability of an electron being in any given state, not an exact answer. So while the laws of Newton allowed us to put a person on the moon with great accuracy, the laws at the quantum level just tell us probabilities. It is more feelings than certainty!

Schrödinger’s Equation leads us to one of the key components of Quantum Computing: superposition. One consequence of Schrödinger’s Equation is that any valid quantum states can be added together to create another valid quantum state. Another way of saying this: there can be many valid solutions at any given time, and ALL of them are valid at once. In the physical world, observing the system chooses one of those valid solutions. It is a lot to get one’s head around.

To illustrate the madness of what we are talking about, consider the famous example of Schrödinger’s Cat, a thought experiment in physics. The version I have heard is that you put a cat in a box with a radioactive isotope that has a 50% chance of decaying within a minute. If the isotope decays within that minute, the cat dies; if it doesn’t, the cat lives. The quantum craziness is that, until you open the box, you have no way of knowing whether the cat is alive or dead, so it exists in both states. This is superposition. There are two valid solutions to the equation: the cat is alive, or the cat is dead. Superposition says that any combination of valid solutions is also a valid solution, so the cat is both alive and dead. It isn’t until we observe that one of the states is chosen. If you really want to have your mind blown, check out this YouTube video describing Schrödinger’s Cat.

Our current computers and transistors are based on the idea of a bit: either on or off, 0 or 1. In quantum computing, bits are replaced by quantum bits, or qubits. A qubit can be a 0, a 1, or a superposition of both at once. With classical bits, only one of two options exists at any single time and steps are performed sequentially; in the quantum world, qubits can hold multiple values at once and all the potential solutions are processed in parallel. Once you determine the state by measuring or observing, you get a single answer.
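To make the qubit idea a little more concrete, here is a minimal NumPy sketch of my own (an illustration of the math, not real quantum hardware or any particular library's API). It builds an equal superposition of the 0 and 1 states and shows that measurement probabilities come from the squared amplitudes:

```python
import numpy as np

# Basis states |0> and |1> as 2-component vectors
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A superposition: equal parts |0> and |1>, scaled so the
# squared amplitudes (the probabilities) sum to 1
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: probability of measuring 0 or 1 is |amplitude|^2
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]

# "Measuring" collapses the superposition to a single outcome
outcome = np.random.choice([0, 1], p=probs)
print(outcome)  # 0 or 1, each half the time
```

Until the `np.random.choice` line runs, `psi` really is both values at once; afterward you are left with a single classical answer, which mirrors what happens when a real qubit is measured.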

Qubit (image source: Wikipedia)

In the real world, what does this mean? So far, not a whole lot. You aren’t going to get a tremendous increase in performance playing Minecraft. Quantum computing has so far been proven to be much faster than traditional computing only for a small subset of problems. One of these is Shor’s Algorithm, which factors large numbers into their prime factors efficiently, and that has very important implications for cryptography.
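To see why factoring matters, here is a small sketch (my own illustration, not Shor's Algorithm itself, which needs a quantum computer to run) of the classical approach, trial division. Its cost grows with the square root of the number, which is why factoring the 600+ digit numbers used in RSA encryption is hopeless classically, while Shor's Algorithm would handle them in polynomial time:

```python
def factor(n):
    """Return the smallest prime factor of n via trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# A toy RSA-style modulus: the product of two primes
p, q = 104729, 104723
n = p * q
print(factor(n))  # 104723
```

Even this ten-digit toy modulus takes about a hundred thousand loop iterations; every extra digit on `n` multiplies the work, which is exactly the asymmetry modern cryptography relies on and Shor's Algorithm would break.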

As more and more research happens, more types of problems are discovered that can be solved with quantum computing. This should continue to grow over time.

So your next PC will not be a quantum computer, but the types and classes of problems that can be solved by Quantum Computing will continue to grow and provide real world benefits in the years ahead!


 

Google I/O 2016 Recap

In the Developer world, there are usually three big conferences each year; Microsoft’s Build, Google I/O and Apple’s World Wide Developer Conference (WWDC). Google’s I/O conference, its 10th, just wrapped up last week. Let us take a spin around some of the more interesting announcements from the conference.

Google I/O 2016

Google Assistant – One of the main themes underlying Google I/O this year was Artificial Intelligence. The impressive DeepMind technology that drove AlphaGo to victory in March (2016) is making its way into Google’s products. Google Assistant is really the upgrade to Google Now, making it a more conversational assistant. Similar to announcements this year from Microsoft and Facebook, bots are a huge emerging platform. Lots of tech companies definitely believe in a bot future; now it is up to consumers to see if they agree. Developers will be able to integrate with Google Assistant in the future, but no dates or APIs were announced.

Project Ara – Ara is set to be the first smartphone Google manufactures itself, remembering that Motorola operated as a separate company while Google owned it. That in itself makes it interesting and worth paying attention to. However, the modular design is what makes it REALLY cool. Project Ara has 6 swappable modules on a phone, for components such as the camera and speakers. There are also more novel use cases, such as blood glucose readers and eInk screens. Consumers will easily be able to swap and upgrade their phone components. So if you want the best possible camera, you will be able to buy a module and swap it in. Really want great sounding music on your phone? Buy a better speaker. Wired has a great write-up on Google’s vision.

Google Home – Google announced its me-too competitor to Amazon’s Alexa, a voice activated search appliance for the home. You will be able to ask Google Home things like what the weather is for the day, whether the Chicago Fire won last night, etc. Pricing and availability dates were not announced. Google has an introduction video.

TensorFlow – This may be, long term, the most impactful of Google’s announcements during I/O. TensorFlow is Google’s machine learning platform, which was open sourced late last year. At I/O, Google showed the specialty hardware it has created, called the Tensor Processing Unit (TPU), which enables massive improvements in performance per watt compared to other platforms. If TPUs can be easily and cost efficiently consumed by programmers, we could see a huge increase in use cases for machine learning.

DayDream – Google’s VERY low cost Google Cardboard hardware provides a cheap entry to Virtual Reality. Everyone from Star Wars to the New York Times has created VR apps for the platform. DayDream appears to be the spiritual, and more ambitious, successor to Cardboard: a new virtual reality platform. DayDream will enable special phones running Android N to use a VR headset to deliver compelling VR experiences. It appears to be similar to the Samsung and Oculus approaches. Again, no dates or APIs were announced. You can watch the introduction video as well.

Android N – The next version of Google’s flagship Android OS was released and is currently in Beta 3. Android N should be available, with new hardware, this fall. Google is also asking for help with the name, so if you would like to name Android N, head here. Mostly, I wonder why they haven’t doubled down on Nutella yet….

Allo/Duo – Google added to its already long list of messaging applications with two new messaging platforms: Allo and Duo. Allo can be seen as a showcase for the upcoming Google Assistant platform, where interaction with bots can make for a more purposeful conversation. Duo is a video chat application (think FaceTime) that includes a feature called Knock-Knock, which lets you see the caller’s video as the conversation starts, before answering. Neither of these applications requires a Google Account, just a phone number. However, they enter a VERY crowded field that includes WhatsApp, Facebook Messenger, etc. I am not sure how much traction Google will be able to attain unless they are made the default apps on Android N devices.

In general, to highlight Google’s message this year at I/O, I believe it is doubling down on the computational power that Google’s Cloud can deliver. TensorFlow, Google Assistant and Google Home show where the world is heading now that our smartphones are peaking, at least in terms of features. The other takeaway is that Google, like Microsoft, is trying to move quickly, and a lot of the things that were announced are not ready yet. This is markedly different from previous Google I/O conferences. There is a land rush out there for using AI to power consumer experiences on the phone, and Facebook, Microsoft and Google are all racing to claim it. It will be interesting to see Apple’s take next month at WWDC, since Apple tends to be a much better hardware company than software company. Google has made the I/O conference available for all to watch.

This blog post originally appeared at Skyline Technologies.


 

The Death of Moore's Law and the Game of Go

For the last 50 years or so, the land of computers has been ruled by Moore’s Law, but that time is coming to an end.

TLDR; Moore’s Law is dying, but instead of shrinking transistors for computing power, we will go large with scale, AI and quantum computing.

For those of you unfamiliar with Moore’s Law, it was an observation made in the 1960s by Gordon Moore that the number of transistors in an integrated circuit would double approximately every two years. Moore, who went on to co-found Intel, was quite prescient in this declaration.

Intel released its first chip in 1971, and it contained approximately 2,300 transistors. Fast forward to today, where a quad core Skylake chip from Intel has approximately 1.75 billion transistors. Obviously, the last 45 years have seen amazing gains in the engineering and manufacturing processes involved in making chips, so much so that the supercomputers of 30 years ago are the $500 smartphones we carry in our pockets.
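We can sanity check those two figures against Moore's prediction with a few lines of arithmetic (the transistor counts are the approximate ones quoted above):

```python
import math

transistors_1971 = 2_300           # Intel's first chip, the 4004
transistors_2016 = 1_750_000_000   # quad core Skylake, approximately
years = 2016 - 1971

# How many doublings separate the two chips, and how long did
# each doubling take on average?
doublings = math.log2(transistors_2016 / transistors_1971)
print(round(doublings, 1))          # 19.5
print(round(years / doublings, 1))  # 2.3
```

Roughly 19.5 doublings over 45 years works out to one doubling every 2.3 years or so, impressively close to the "approximately every two years" Moore predicted half a century in advance.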

Moore's Law (image source: Wikipedia)

This growth in transistor counts has transformed our world. Between the rise of computers, then smartphones and high speed data networks, the world is a different place. Communication costs have plummeted. We are more connected globally than ever before. Opportunities have expanded for millions and millions of people. The economic and social benefits have been tremendous, and the scope of the impact on society has been breathtaking; from the inane tweets of Kardashians to the Arab Spring.

So Moore’s Law has been a huge success. However, it is more of an observation than a physical law. Newton’s second law of motion, F = ma, holds (almost always) throughout the Universe; Moore’s Law is not that kind of law, and its time is coming to an end. The amazing advances in shrinking chip features will soon be a thing of the past. Without Moore’s Law, which helped create the economic engine of the Information Age, does that mean that growth will be disappearing? Of course not, at least in terms of computing power. Instead of Moore’s Law being about the doubling of transistors on a chip, we should now think of computation growth as coming from other places. First off, to the ancient game of Go!

The game of Go is a 2,500-year-old game from China, played with black and white stones on a 19x19 grid. The game has relatively simple rules, but the size of the board makes it an exceedingly complex game. While games like chess are complex, the pieces have limited moves and varying strengths, so it is somewhat easier to determine the relative value of moves and use brute force computation to analyze the possible outcomes on the relatively smaller 8x8 chess board. In Go, the moves are much more nuanced, and players take years and years of play to recognize the patterns needed to be successful. In computer terms, chess is a solvable problem by applying computation, as Deep Blue demonstrated back in the 90s when it (nerd) famously beat world chess champion Garry Kasparov.
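The size gap between the two games is easy to quantify. A crude upper bound on Go board configurations treats each of the 361 points as empty, black or white; for chess, a commonly cited rough estimate of reachable positions is on the order of 10^47 (both bounds ignore legality rules, so this is an illustration, not an exact count):

```python
# Crude upper bound on Go positions: 3 choices per point, 361 points
go_positions = 3 ** (19 * 19)

# Commonly cited rough estimate for chess positions
chess_estimate = 10 ** 47

print(len(str(go_positions)))      # 173 -- the Go bound has 173 digits
print(go_positions > chess_estimate)  # True
```

A 173-digit search space dwarfs chess's 48-digit one, which is why Deep Blue's brute force approach had no hope of carrying over to Go.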

Go Game

The approach used by Deep Blue, though, wouldn’t work with Go, so Go has always been considered an artificial intelligence problem more than a CPU intensive problem. The ability to beat the Go equivalent of a grandmaster was thought to be at least a decade away. So this March (2016), much of the computer world was amazed to see Google’s AlphaGo (developed by Google DeepMind) beat Korean champion Lee Se-Dol in a five game match. You can read about the matches at The Verge.

We learn two things from this… First, we can replace the previous decades of massive growth in transistors on a chip with massive growth in computing resources at scale. Google’s DeepMind program utilized a cluster of computers to bring massive computational resources to bear on a very difficult problem. As connectivity becomes more pervasive and network speeds grow, these computing resources can be used to solve complex problems without needing more CPU resources on our devices.

The second thing we learn from Google’s DeepMind victory against a Go champion is that there are other solutions to problems. While raw computing power could win a game of chess, it would never win at something like Go. Instead of the brute force approach that was so effective for Deep Blue, different solutions were needed. Many years of research came to fruition, including pattern recognition, deep learning and neural networks, all bundled into DeepMind. The AlphaGo program was taught to learn by “watching” old games from Go champions, and then by constantly playing against itself. The more it learned, the better it became. So much so that during the actual tournament, commentators were shocked by some of the moves the AI made, and one move so took the human player aback that he left for a very long break. This kind of growth in alternative approaches to problem solving can, again, provide greater computational power without increasing transistor counts.

Last, another future path to computation growth is quantum computing. Quantum computing requires a bit of a deeper dive (future blog post!), but suffice it to say, the world of qubits and the superposition principle provides orders and orders of magnitude increases in speed for some computational cases.

So…. Moore’s Law is going away. We are reaching the literal physical limits of how much closer together we can put transistors on a chip. We are also moving to a world where lower power usage, which equates to less chip performance, is much more preferred than in previous decades. This all means different computing platforms will help a variation of Moore’s Law continue, where computing power keeps increasing but is distributed into massive scale cloud infrastructure, with novel approaches like neural networks and quantum computing, coupled with speedy networks, to make sure today’s supercomputers are in our pockets 30 years from now.

This blog post originally appeared at Skyline Technologies.


 

John Ptacek I'm John Ptacek, a software developer for Skyline Technologies. This blog contains my content and opinions, which are not those of my employer.

Currently, I am reading Norse Mythology by Neil Gaiman
