Silicon photonics stumbles at the last meter
We have already brought the optics all the way to the house, but getting it to the processor is still proving difficult
If you think that today we are on the verge of a technological revolution, imagine what it was like in the mid-1980s. Silicon chips used transistors with feature sizes measured in microns. Fiber-optic systems were already moving trillions of bits around the world at great speed. It seemed that anything was possible - all that remained was to combine digital silicon logic, optoelectronics and data transmission over fiber.
Engineers imagined how all these breakthrough technologies would continue to evolve and converge at the point where photonics merged with electronics and gradually replaced it. Photonics would let you move bits not only between countries but also within data centers, and even inside computers. Fiber would carry data from chip to chip - or so they thought. Even the chips themselves would be photonic: many believed that incredibly fast logic chips would someday run on photons instead of electrons.
Hundreds of millions of dollars were poured into the development of new photonic components and systems that link racks of computer servers in data centers with fiber. And today such photonic devices really do connect racks in many data centers. But the photons stop there. Inside the rack, individual servers are connected to one another with inexpensive copper wires and high-speed electronics. And, naturally, the boards themselves carry metal conductors all the way up to the processor.
Attempts to push the technology into the servers themselves - to bring optical fiber directly to the processors - rested on an economic foundation. Indeed, the market for optical Ethernet transceivers is worth almost $4 billion per year, and according to the market research company LightCounting it should grow to $4.5 billion and 50 million components by 202?. But photonics has not crossed those last few meters separating the rack in the data center from the processor.
Nevertheless, the enormous potential of this technology has kept the dream alive, even though the technical problems remain significant. And now, finally, new ideas about data center architecture offer feasible ways to set off a photonic revolution that could help contain the rising tide of big data.
Inside the photonic module
Every time you go online, watch digital TV or perform almost any action in today's digital world, you are using data that has passed through optical transceiver modules. Their job is to convert signals between optical and electrical form. These devices sit at each end of an optical fiber, shuttling data inside the data centers of any large cloud service or social network. They plug into a switch located at the top of a server rack and convert optical signals into electrical ones so the data can reach the servers in that rack. Transceivers also convert data from those servers into optical signals to send it to other racks, or through a whole network of switches out to the Internet.
Each optical module contains three main components: a transmitter with one or more optical modulators, a receiver with one or more photodiodes, and CMOS chips that encode and decode the data. Ordinary silicon emits light very poorly, so the photons are generated by a laser that is separate from the chips (although it can be placed in the same package with them). The laser does not represent bits by switching on and off - it stays on all the time, and the bits are encoded into its beam of light by an optical modulator.
This modulator, the heart of the transmitter, can be of different types. A particularly successful and simple one is called the Mach-Zehnder modulator. In it, a narrow silicon waveguide guides the laser light. The waveguide branches into two arms, which converge again after a few millimeters. Normally this split and recombination would not affect the light output at all, since both arms of the waveguide have the same length: when they rejoin, the light waves are still in phase with each other. However, if you apply a voltage to one arm, it changes that arm's refractive index, which slows down or speeds up the light wave in it. As a result, when the two waves meet they interfere destructively, suppressing the signal. So by varying the voltage on one arm, we use an electrical signal to modulate the optical one.
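To make the interference picture concrete, here is a minimal sketch (not tied to any particular device) of an ideal, lossless Mach-Zehnder modulator: the output power follows the cosine-squared of half the voltage-induced phase shift, so a phase of 0 passes the light and a phase of π suppresses it.

```python
import math

def mzm_output_power(p_in_mw: float, phase_shift_rad: float) -> float:
    """Ideal, lossless Mach-Zehnder modulator: light is split into two arms,
    a voltage-induced phase shift is applied to one arm, and the arms are
    recombined. Zero phase shift -> constructive interference (light passes);
    pi phase shift -> destructive interference (light is suppressed)."""
    return p_in_mw * math.cos(phase_shift_rad / 2) ** 2

# Encode a bit stream: phase 0 represents "1" (light on), phase pi represents "0".
for bit in [1, 0, 1, 1, 0]:
    phase = 0.0 if bit else math.pi
    print(bit, round(mzm_output_power(1.0, phase), 3), "mW")
```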
The receiver is simpler: it is just a photodiode and its supporting circuitry. Light arriving through the fiber reaches the receiver's germanium or silicon-germanium photodiode, which produces a current; the supporting electronics then convert each light pulse into a voltage.
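As a rough illustration of that receive chain - with an assumed responsivity and transimpedance gain, not figures from any real part - the photodiode's current is proportional to the optical power, and a transimpedance amplifier turns that current into a voltage:

```python
def received_voltage_mv(p_opt_mw: float,
                        responsivity_a_per_w: float = 0.9,
                        tia_gain_ohms: float = 5000.0) -> float:
    """Photocurrent I = responsivity * optical power; a transimpedance
    amplifier then turns that current into a voltage V = I * R_f.
    The responsivity and gain values here are illustrative placeholders."""
    photocurrent_ma = responsivity_a_per_w * p_opt_mw   # mA, since P is in mW
    return photocurrent_ma * tia_gain_ohms               # mA * ohm = mV

print(received_voltage_mv(0.1))  # a 0.1 mW light pulse -> roughly 450 mV
```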
The modulator and the receiver are served by circuits that handle amplification, packet processing, error correction, buffering and the other tasks required to meet the Gigabit Ethernet standard for fiber. How much of this is done on the same chip, or at least in the same package, that manages the photonics depends on the manufacturer, but most of the electronic logic is kept separate from the photonics.
Photonics will never be able to transfer data between different parts of a silicon chip. The ring resonator of an optical switch performs the same function as a single transistor, yet it occupies roughly 10,000 times the area.
More and more silicon integrated circuits include optical components, which might lead you to think that integrating photonics into the processor was inevitable. For a while, it was indeed considered so.
However, the growing gap between the rapid shrinking of electronic logic and photonics' inability to keep up has been underestimated or even ignored. Today's transistors have feature sizes of a few nanometers. With 7-nm CMOS technology, you can place more than 100 million general-purpose logic transistors on each square millimeter. And that is without mentioning the labyrinth of intricate copper wiring above them: in addition to the billions of transistors on each chip, there are a dozen or so levels of metal interconnect linking those transistors into registers, multipliers, arithmetic logic units and the more complex blocks that make up processor cores and the other necessary circuitry.
The problem is that a typical optical component, such as a modulator, cannot be made much smaller than the wavelength of the light it carries, which limits its minimum width to about 1 micrometer. No Moore's Law can overcome this limitation, and it is not a question of using ever more advanced lithography. Electrons - whose wavelength is a few nanometers - are simply skinny, while photons are fat.
But couldn't manufacturers simply integrate the modulator and accept that the chip will have fewer transistors? After all, there are already billions of them on board. They cannot. Because of the enormous number of system functions each square micrometer of a silicon electronic chip can perform, it would be very expensive to replace even a few transistors with such comparatively bulky and primitive components as optical ones.
A simple calculation. Suppose a square micrometer holds, on average, 100 transistors. Then an optical modulator occupying an area of 10 μm by 10 μm displaces a circuit of 10,000 transistors! Recall that a conventional optical modulator works as a single switch that turns the light on and off - but each transistor can itself work as a switch. So, roughly speaking, the cost of adding this primitive function to the circuit is 10,000 to 1, since for each optical modulator there are 10,000 electronic switches the circuit designer could have used instead. No manufacturer will accept such a high cost, even in exchange for the tangible gains in speed and efficiency that integrating modulators directly into the processor could bring.
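The same back-of-the-envelope arithmetic, spelled out (the figures are the rough assumptions from the text, not measurements):

```python
# Area cost of one on-chip optical modulator, using the rough figures above:
# about 100 transistors per square micrometer, and a 10 um x 10 um modulator.
transistors_per_um2 = 100
modulator_area_um2 = 10 * 10

displaced_transistors = transistors_per_um2 * modulator_area_um2
print(displaced_transistors)  # 10000 -> a 10,000:1 "cost" per modulator
```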
The idea of replacing on-chip electronics with photonics has other drawbacks. For example, a chip performs critical tasks, such as working with memory, for which optics has no equivalent. Photons are simply incompatible with some of the basic functions of a computer chip. And where they are not, it makes little sense to set up a competition between optical and electronic components on the same chip.
How a data center is laid out.
Today (left): photonics carries data across a multi-tier network. The connection to the Internet sits at the top (core) level. Switches there send data over fiber down to the top-of-rack switches.
Tomorrow (right): photonics could change data center architecture. A rack-scale architecture could make data centers more flexible by physically separating compute from memory and linking these resources over an optical network.
But this does not mean optics cannot get closer to the processors, memory and other key chips. Today, the optical communications market in data centers revolves around top-of-rack (TOR) switches, which hold the optical modules. At the top of the two-meter racks housing servers, memory and other resources, fiber links the TOR switches together through a separate layer of switches, which in turn connect to yet another set of switches forming the data center's gateway to the Internet.
The faceplate of a typical TOR switch, where the transceivers plug in, gives an idea of how the data moves. Each port accepts one transceiver, which in turn connects to two optical fibers (one for transmitting, one for receiving). A TOR switch 45 mm high can take 32 modules, each capable of carrying 40 Gbit/s in both directions, so data can flow between two racks at about 1.28 Tbit/s.
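A quick check of that figure, using the numbers quoted above:

```python
# Aggregate bandwidth of a fully populated 1U top-of-rack switch:
# 32 transceiver modules, each carrying 40 Gbit/s.
modules = 32
gbits_per_module = 40

total_tbits = modules * gbits_per_module / 1000
print(total_tbits)  # 1.28 Tbit/s between racks
```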
However, within the racks and inside the servers, the data still flows over copper wires. That is bad, because copper is becoming an obstacle to faster and more energy-efficient systems. Optical solutions for that last meter (or couple of meters) - optics to the server, or even directly to the processor - are probably the best opportunity to create a huge market for optical components. But before that can happen, serious obstacles in both price and speed must be overcome.
Schemes called "fiber to the processor" are not new. The past offers plenty of lessons about their cost, reliability, energy efficiency and bandwidth. About 15 years ago I took part in the design and construction of an experimental transceiver that demonstrated very high throughput. The demonstration connected a 12-fiber cable to a processor. Each fiber carried digital signals generated by four vertical-cavity surface-emitting lasers (VCSELs) - laser diodes that emit light from the surface of the chip and can be packed more densely than conventional laser diodes. The four VCSELs encoded bits by switching their light on and off, and each operated at its own wavelength within the same fiber, quadrupling its bandwidth through coarse wavelength-division multiplexing. With each VCSEL delivering a 25 Gbit/s data stream, the total throughput of the system reached 1.2 Tbit/s. Today the industry-standard spacing between adjacent fibers in a 12-fiber ribbon is 0.25 mm, which gives a throughput density of 0.4 Tbit/s per millimeter. In other words, every 100 seconds each millimeter could move as much data as the Library of Congress web archive collects in a month.
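The throughput and density figures above follow directly from the link's parameters; here is the arithmetic, assuming the standard 0.25-mm ribbon pitch:

```python
# Experimental 12-fiber link described above: 4 wavelengths (VCSELs) per fiber,
# 25 Gbit/s per wavelength, fibers on a 0.25 mm ribbon pitch.
fibers = 12
vcsels_per_fiber = 4
gbits_per_vcsel = 25
pitch_mm = 0.25

total_tbits = fibers * vcsels_per_fiber * gbits_per_vcsel / 1000
density = total_tbits / (fibers * pitch_mm)
print(total_tbits, "Tbit/s total;", density, "Tbit/s per mm")  # 1.2 and 0.4
```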
Today, feeding data from optics to the processor requires even higher speeds, but it was a decent start. Why wasn't this technology adopted? Partly because the system was both insufficiently reliable and impractical. At the time it was very hard to manufacture 48 VCSELs for the transmitter and guarantee that none of them would fail over its lifetime. An important lesson was that one laser driving many modulators can be made far more reliable than 48 lasers.
Today the reliability of VCSELs has improved so much that transceivers based on them can serve short-reach links in data centers. The ribbon of fibers can be replaced by a multicore fiber, which carries just as much data by routing it onto separate cores inside a single strand. It has also recently become possible to implement more sophisticated digital transmission standards - for example, PAM4, which raises the data rate by using not two but four levels of light intensity. Research is under way to increase the bandwidth density of optics-to-processor links - for example, MIT's SHINE program achieves 17 times the density that was available to us 15 years ago.
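For readers unfamiliar with PAM4, here is a minimal sketch of the idea: each pair of bits maps to one of four power levels, so every symbol carries two bits. The particular Gray-coded mapping below is a common convention shown for illustration, not a requirement of the standard.

```python
# PAM4: two bits per symbol, encoded as one of four amplitude levels (0..3).
PAM4_LEVELS = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def pam4_encode(bits):
    """Map a bit sequence to PAM4 symbols, two bits at a time."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(pam4_encode([1, 0, 0, 1, 1, 1, 0, 0]))  # [3, 1, 2, 0]
```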
All of these are significant breakthroughs, but even taken together they will not be enough to let photonics take the next step toward the processor. Yet I still believe that step is possible - because a movement to change the system architecture of data centers is now gaining momentum.
Today, processors, memory and storage are assembled into so-called blade servers, housed in special enclosures mounted in the racks. But it does not have to be that way. Instead of placing memory on boards inside the server, it can be placed separately - in the same rack, or even in a different one. It is believed that such a rack-scale architecture (RSA) can use computing resources more efficiently, especially for social networks like Facebook, where the amount of computation and memory needed to solve problems grows over time. It also simplifies the maintenance and replacement of equipment.
Why would such a configuration help photonics penetrate deeper? Because that kind of easy reconfiguration and dynamic allocation of resources could be provided by a new generation of efficient, low-cost optical switches carrying several terabits per second.
The technology for connecting optics directly to the processor has existed for more than 10 years
The main obstacle to this change in data centers is the cost of the components and their manufacture. Silicon photonics already has one cost advantage: it can exploit existing fabrication capacity, the huge chip-making infrastructure and its reliability. Nevertheless, silicon and light are an imperfect match: besides being an inefficient light emitter, silicon components suffer from large optical losses. A typical silicon optical transceiver shows an optical loss of 10 dB (90 percent). This inefficiency does not matter for short links between TOR switches, as long as silicon's potential cost advantage outweighs its shortcomings.
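For reference, this is how decibels of loss translate into the fraction of light lost (and how 10 dB corresponds to 90 percent):

```python
# Fraction of optical power lost for a given loss in decibels:
# transmitted fraction = 10 ** (-dB / 10), lost fraction = 1 minus that.
def fraction_lost(loss_db: float) -> float:
    return 1 - 10 ** (-loss_db / 10)

print(fraction_lost(10))  # 0.9 -> 90% of the optical power is lost
```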
An important part of the cost of a silicon optical module is a modest but critically important detail: the optical connection. This is the physical joint between the optical fiber and the receiver or transmitter, and between fibers themselves. Every year, hundreds of millions of fiber-optic connectors are built to the highest accuracy. To picture that accuracy, note that the diameter of a human hair is usually only slightly less than the 125-micrometer diameter of a single quartz glass fiber used in optical cables. The fiber must be aligned in the connector to within about 100 nm - roughly a thousandth of the thickness of a human hair - or too much of the signal will be lost. Innovative methods of making fiber-to-fiber connectors and fiber-to-transceiver connections are needed to meet customers' growing demands for high accuracy at low cost, yet very few manufacturing technologies make this cheap enough.
One way to cut costs is to make the optical module's chips cheaper. Wafer-scale integration (WSI) can help here. With this approach, the photonics are built on one silicon wafer and the electronics on another, and the wafers are then bonded together (the laser, made not of silicon but of another semiconductor, remains separate). This saves on manufacturing costs, since it allows parallel fabrication and assembly.
Another cost-reduction factor is, of course, production volume. Suppose the entire optical Gigabit Ethernet market amounts to 50 million transceivers per year, and each optical transceiver chip occupies 25 square millimeters. Assuming the factory makes them on 200-mm-diameter wafers and that 100 percent of the chips are usable, this market requires about 40,000 wafers.
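A rough version of that estimate, ignoring edge losses and assuming, as above, that every chip is usable:

```python
import math

# Wafer-count estimate for the transceiver market described above.
chip_area_mm2 = 25                        # 25 mm^2 per transceiver chip
wafer_area_mm2 = math.pi * (200 / 2) ** 2 # 200 mm diameter wafer
transceivers_per_year = 50_000_000

chips_per_wafer = wafer_area_mm2 // chip_area_mm2
wafers_needed = math.ceil(transceivers_per_year / chips_per_wafer)
print(int(chips_per_wafer), wafers_needed)  # ~1256 chips per wafer, ~40,000 wafers
```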
That may sound like a lot, but it actually represents only about two weeks of work for a typical fab. In practice, any transceiver manufacturer could supply 25 percent of the market in a few days of production. Volumes have to rise if we really want costs to fall, and the only way to achieve that is to figure out how to use photonics below the TOR switch, down to the processors in the servers.
If silicon photonics is ever to penetrate where all-electronic systems now operate, there will have to be compelling technical and economic reasons. The components will have to solve every important problem and seriously improve the system as a whole. They must be small, energy-efficient and extremely reliable, and they must transmit data very fast.
Today no solution meets all these requirements, so electronics will keep evolving without integrated optics. Without serious breakthroughs, fat photons will not enter the places where skinny electrons dominate. But if optical components can be produced reliably, in very large volumes and at very low cost, the decades-old dream of bringing optics to the processor - and the architectures that go with it - can become reality.
Over the past 15 years we have made real progress. We understand optical technologies better, and we know where they can and cannot be used in data centers. A stable, multibillion-dollar commercial market for optical components has developed. Optical connectors have become a critical part of the global information infrastructure. Yet integrating large numbers of optical components into the very heart of electronic systems remains impractical. Will it stay that way? I think not.