Semiconductors Reimagined by Jacob Bloch W19

Society is hungry.  It wants information, pleasure, and access to the things it values instantaneously.  It wants to analyze enormous amounts of data.  What was deemed a speedy route a moment ago becomes sluggish, and now has to go faster.  If it doesn’t, the consequences could be disastrous.  Moore’s Law holds that computing power follows an exponential growth curve, doubling at a steady rate.  Society expects Moore’s Law never to falter.  The notion that technology might slow its pace is a predicament best avoided.  


Yet transistors, the diminutive switches that form the infrastructure of computer processors, have become so miraculously small and efficient that they seemingly cannot be improved upon.  Society may soon face the dim prospect of supercomputers that have reached the upper limits of their computational power for lack of advances in transistors.  Data centers, molecular dynamics simulations, artificially intelligent “brains,” weather forecasting, climate change modeling, drug design, and 3D nuclear test simulations that rely on supercomputers may cease to inspire confidence, inhibited by the technological parameters of early 2017.  


Imagine the processor as an engine, and the transistors as the cylinders that power it.  To increase processing power, pack more cylinders into the engine.  This is precisely what makes today’s iPhone 7, whose A10 processor contains 3.3 billion transistors, faster than the ‘TRADIC,’ the first American transistorized computer, which held 800 transistors within three cubic feet.  
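The scale of that leap is easy to check with a back-of-the-envelope calculation.  The short sketch below, using only the 800 and 3.3 billion figures cited above, counts how many doublings separate the two machines:

```python
import math

tradic = 800            # transistors in the TRADIC, 1950s
a10 = 3_300_000_000     # transistors in the iPhone 7's A10, 2016

# How many times would TRADIC's count need to double to reach the A10?
doublings = math.log2(a10 / tradic)
print(f"{doublings:.1f}")  # about 22 doublings
```

Under a two-year doubling cadence, roughly twenty-two doublings is on the order of four decades of progress, which matches the span between the two machines.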


Processing power has increased dramatically since Dr. Gordon Moore, a co-founder of Intel, established the earliest version of Moore’s Law, declaring in 1965 that transistors were shrinking at such a rate that twice as many could fit in a computer processor every year.  A decade later, in 1975, Moore revised the doubling period to every two years; the revision, however, has done little to temper Silicon Valley, which fervently believes that the exponential growth has never slowed.  
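The gap between the two formulations compounds quickly.  A minimal sketch (the starting count is hypothetical, chosen only for illustration) compares a yearly doubling with a two-year doubling over one decade:

```python
def projected_count(initial, years, doubling_period):
    """Project a transistor count under Moore's Law:
    the count doubles once every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

start = 1_000_000                                  # hypothetical starting count
decade_yearly = projected_count(start, 10, 1)      # original rule: double every year
decade_biennial = projected_count(start, 10, 2)    # revised rule: double every two years
print(decade_yearly / decade_biennial)             # 32.0 — a 32x difference in ten years
```

Ten yearly doublings give a 1,024-fold increase, while five biennial doublings give only 32-fold, which is why the choice of cadence matters so much to an industry planning around it.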


Yet Silicon Valley may be running out of counterarguments.  The industry is reaching the final frontier of transistor technology for several reasons.  Producing these miniature transistors is so expensive that only a few deep-pocketed companies can still compete.  The number of leading-edge manufacturers has dwindled from about twenty in 2000 to just four today: Intel, TSMC, GlobalFoundries, and Samsung.


Furthermore, even with the robust financial resources that further miniaturization requires, transistors are unlikely to shrink much more, because the laws of physics cannot be defied.  When transistors reached 90 nm in the early 2000s, the industry discovered that the transistor gates were becoming so thin that electric current was leaking into the substrate.  In other words, the electrons that carry the transistor’s current will “jump” into the surrounding material unless something insulates them.  The smallest transistors today range between 14 and 22 nanometers, and shrinking them further would only increase the number of electrons leaking into the substrate.  The only fully effective way to insulate electrons at this microscopic scale is to block their movement entirely, in other words, to hold the insulator at a temperature close to absolute zero.  Such temperatures have been achieved only in laboratory settings, so incorporating that kind of insulation into transistor technology is implausible, leaving its development at a standstill.  


The industry today recognizes that the ceiling has been hit.  Intel has repeatedly delayed the release of its newest technology, stretching the time between subsequent “generations,” or upgraded products.  It has even delayed specific launches, such as its 10 nm transistor.  Intel’s Chief of Manufacturing, William Holt, noted in February 2016 that Intel will have to move away from silicon transistors in about four years and that “the new technology will be fundamentally different,” while admitting that silicon’s successor has not yet been established.  


Even the introduction of transistors in the 22 to 14 nm range has been contingent upon a radical redesign.  While the transistors of the past were flat, the Tri-Gate transistor takes a three-dimensional approach.  Instead of a current-carrying channel lying flat beneath the gate, the channel rises as a vertical fin, with the gate wrapped around three of its sides.  This constitutes a disruption to transistor manufacturing because it postpones the replacement of silicon for a few more generations, according to Mark Bohr, Director of Intel’s Technology and Manufacturing Group.  At the same time, it is a step toward an even smaller and more energy-efficient transistor, like the 10 nm transistor Intel had hoped to release in late 2016 or early 2017, before being set back by difficulties.  


The International Technology Roadmap for Semiconductors has been published almost annually by semiconductor industry experts from across the globe since 1993.  In its most recent report, published in 2015, the group forecast that producing chips in their current form will no longer be economically viable by 2021.  The industry will require another disruption, whether in new areas like photonics or carbon-based transistors, or in further reformulations of today’s transistors.  


This type of massive technological disruption, which entirely reinvents product manufacturing, matters for software development.  Neil Thompson, an assistant professor at the MIT Sloan School of Management, affirmed that “one of the biggest benefits of Moore’s Law is as a coordination device.  I know that in two years we can count on this amount of power and that I can develop this functionality - and if you’re Intel you know that people are developing for that and that there’s going to be a market for a new chip.”  Without reassurance that Moore’s Law will continue, software development that relies on that confidence is impeded.  


A technological hypothesis that hangs in the balance of evolving transistors is the singularity, the theoretical future moment when advances in artificial intelligence produce sentient, autonomous computer beings.  Delays in transistor development mean that this trajectory is likewise paused.  Perhaps that is a positive result.  Should the singularity occur, it could give rise to a rival class of beings more intelligent and cunning than humans.  The computer engineers incrementally bringing it about ought to pause and weigh the consequences of such a situation.  Yet at the industry’s usual pace, it is unlikely that the individuals driving these innovations are grappling with the implications of their choices.  It would be prudent, even necessary, for those hastening the singularity to take advantage of the inevitable delay in transistor development to examine the ramifications of their actions.