Semiconductors Reimagined by Jacob Bloch W'19

Society is hungry.  It wants information, pleasure, and instant access to the things it values.  It wants to analyze enormous amounts of data.  What seemed a speedy route a moment ago becomes sluggish and now has to go faster; if it doesn’t, the consequences could be disastrous.  Moore’s Law holds that computing power follows an exponential growth curve, doubling on a regular cadence.  Society expects Moore’s Law never to falter.  The notion that technology might slow its pace is a predicament best avoided.  

 

Yet transistors, the diminutive switches that form the infrastructure of computer processors, have become so miraculously small and efficient that they seemingly cannot be improved upon.  Society may soon face the dim prospect of supercomputers that have reached the upper limits of their computational power for lack of advances in transistors.  Data centers, molecular dynamics simulations, artificially intelligent “brains,” weather forecasting, climate change modeling, drug design, and 3D nuclear test simulations that rely on supercomputers may cease to inspire confidence, inhibited by the technological parameters of early 2017.  

 

Imagine the processor as an engine and the transistors as its cylinders.  To increase processing power, the solution is to pack more cylinders into the engine.  This is precisely what makes today’s iPhone 7, whose A10 processor holds 3.3 billion transistors, faster than the ‘TRADIC,’ the first American transistorized computer, which contained 800 transistors within three cubic feet.  

 

Processing power has increased dramatically since Gordon Moore, Intel’s co-founder, established the earliest version of Moore’s Law when he observed in 1965 that transistors were shrinking at such a rate that twice as many could fit on a computer processor every year.  A decade later, Moore revised the doubling period to every two years; even so, Silicon Valley has long operated on the conviction that the exponential growth has never slowed.  
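Moore’s observation is simple compounding: a fixed doubling period means transistor counts multiply by 2^(years ÷ period). A minimal sketch in Python; the function name and the Intel 4004 baseline of roughly 2,300 transistors in 1971 are illustrative assumptions, not figures from this article:

```python
def projected_transistors(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count assuming one doubling every `doubling_period` years."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Starting from the Intel 4004 (~2,300 transistors, 1971) and projecting to 2016:
projection = projected_transistors(2300, 1971, 2016)
print(f"{projection:,.0f}")  # roughly 14 billion - within a factor of a few of the A10's 3.3 billion
```

That a back-of-the-envelope projection from 1971 lands within a factor of a few of a real 2016 chip is precisely why the industry has treated the law as a planning tool.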

 

Yet Silicon Valley may run out of counterarguments.  The industry is reaching the final frontier of transistor technology for several reasons.  It is so expensive to produce these miniature transistors that only a few deep-pocketed companies can compete.  The number of leading-edge manufacturers has dwindled from about twenty in 2000 to just four today: Intel, TSMC, GlobalFoundries, and Samsung.

 

Furthermore, even with the robust financial resources necessary to miniaturize further, transistors are unlikely to shrink much more, because the laws of physics cannot be defied.  When transistors reached 90 nm in the early 2000s, the industry discovered that the gates were becoming so thin that electric current was leaking into the substrate.  In other words, the electrons that carry a transistor’s current tend to “jump” into the surrounding material unless they can somehow be insulated.  The smallest transistors today range between 14 and 22 nanometers, and shrinking them further would only increase the leakage.  The only way to stop such leakage entirely would be to freeze electron movement altogether, keeping the insulator at a temperature close to absolute zero.  Temperatures approaching absolute zero have been achieved only in laboratory settings, so incorporating such insulation into transistor technology is implausible, leaving miniaturization at a standstill.  

 

The industry today recognizes that the ceiling has been hit.  Intel has repeatedly delayed the release of its newest technology, leaving more time between subsequent “generations,” or upgraded products.  It has even delayed specific launches, such as its 10 nm transistor.  Intel’s Chief of Manufacturing, William Holt, noted in February 2016 that Intel will have to move away from silicon transistors in about four years and that “the new technology will be fundamentally different,” but he admitted that silicon’s successor has not yet been established.  

 

Even the introduction of transistors in the 22 to 14 nm range was contingent upon a radical redesign.  While the transistors of the past were flat, the Tri-Gate transistor takes a three-dimensional approach.  Instead of a current-carrying channel lying flat beneath the gate, the channel rises as a vertical fin that the gate wraps around on three sides.  This constitutes a disruption to transistor manufacturing because it delays silicon’s replacement for a few more generations, according to Mark Bohr, Director of Intel’s Technology and Manufacturing Group.  At the same time, it is a step toward even smaller and more energy-efficient transistors, like the 10 nm transistor that Intel had hoped to release in late 2016 or early 2017 before being set back by manufacturing difficulties.  

 

The International Technology Roadmap for Semiconductors has been published almost annually since 1993 by semiconductor industry experts from across the globe.  In its most recent report, published in 2015, the group forecasts that producing chips in their current form will no longer be economically viable by 2021.  The industry will require another disruption, whether in new areas like photonics and carbon nanotube transistors or in further reformulations of current designs.  

 

This type of massive technological disruption, which entirely reinvents product manufacturing, matters for software development.  Neil Thompson, an assistant professor at the MIT Sloan School of Management, affirmed that “one of the biggest benefits of Moore’s Law is as a coordination device.  I know that in two years we can count on this amount of power and that I can develop this functionality - and if you’re Intel you know that people are developing for that and that there’s going to be a market for a new chip.”  Without reassurance that Moore’s Law will continue, software development that relies on that predictability is impeded.  

 

One technological hypothesis that hangs in the balance of evolving transistors is the singularity, the theoretical future moment when developments in artificial intelligence create sentient, autonomous computer beings.  Delays in transistor development mean that this trajectory is likewise paused.  Perhaps that is a positive result.  Should the singularity occur, it could give rise to a rival class of beings more intelligent and cunning than humans.  Computer engineers who are incrementally advancing toward the singularity ought to pause and weigh the consequences.  Yet at the industry's current pace, it is unlikely that the individuals driving these innovations are grappling with the implications of their choices.  It would be prudent, even necessary, for those hastening the singularity to take advantage of the inevitable delay in transistor development to examine the ramifications of their actions.

Digital Governance in the Gulf by Jonathan Lahdo (C'20 W'20)

The Middle East continues to grow and become ever more important on the global stage in numerous fields. The Gulf countries are leading this transformation and are at the forefront of innovation and development in the region, due in no small part to the increasing digitisation of their governments.

Increasing the availability of public services through electronic means has allowed the Gulf States to capitalize on a general trend toward increased investment in digitisation initiatives and not only create positive change in the present, but also explore options for devising solutions to future problems.

Facilitating Business Development

One of the largest and most obvious areas in which the benefits of increased governmental digitisation can be seen is trade and the economy at large. In the United Arab Emirates, a strong start-up culture exists in which entrepreneurs have flourished. Uber-competitor Careem, founded in Dubai in 2013, has not only been able to maintain its dominance over the larger global player in the UAE, but also “expanded in 26 cities across the Middle East & North Africa (MENA) as well as Pakistan” by raising “a total of $71.7 million in funding.” In the e-commerce market, a nascent industry in the Middle East that has still not achieved the ubiquity it enjoys in other parts of the world, Souq.com dominates after starting in Dubai and going on to become the region’s “first unicorn” [1].

To further stimulate business creation and small-to-medium enterprise growth, the Emirati government has taken several steps to streamline registration processes through digital methods. The Department of Economic Development (DED), for example, signed a memorandum of understanding (MoU) with Servcorp, an Australia-based company that specialises in serviced and virtual office solutions, that allows the firm’s clients “to complete their business transactions within the least possible time.” By partnering with a company that works with both start-ups and large enterprises worldwide, the DED is leveraging a public-private partnership that can help businesses begin in Dubai with Servcorp and their MoU-conferred efficient services, including “trade name reservation, renewal of reserved trade name, license renewal, and initial approval” among others, before expanding outwards [2].

Developing Location Infrastructure

From an infrastructural perspective, a common theme in the Gulf countries is rapid growth that has outpaced urban planning. A recent initiative by the UAE government highlights how digitisation can ameliorate fundamental issues in a state. The average UAE resident cannot navigate solely using street names, and often has to rely on nearby landmarks to reach a desired destination. The implications for the general public are obvious: it can be difficult to get around, whether to a friend’s house or a specific building. For businesses, the effects can be dramatic; inefficiencies due to difficult navigation can prevent a company from turning a profit. Furthermore, with the rise of delivery businesses like Talabat.com, the popular “online food ordering service operating across the [Gulf Cooperation Council which] hit a record-breaking 100,000 orders on February 3, 2017,” [3] a solid location infrastructure has never been more important.

The Emirati government’s response was Makani (meaning “my place” in Arabic), a “smart mapping system that was initially launched to help the delivery industry, in addition to emergency first responders and courier services, locate residents’ homes;” it also aims “to have all the buildings in the UAE installed with a plate, displaying the location’s 10-digit geo-coordinates,” according to Abdul Hakim Malek, director of the Geographic Information Systems (GIS) Department at Dubai Municipality [4]. In addition to its own proprietary app, the government team behind Makani continues to work on integrating it with the popular navigation apps Google Maps and HERE Maps to make the service more accessible to everyone.

Solving Problems Digitally

Looking forward, there are both potential threats that the governments of the Gulf states are looking to prevent and current problems they have the ability to alleviate.

The most recent Interpol Digital Currency Conference, for example, was held in the Qatari capital Doha and was organised by the Qatar National Anti-Money Laundering and Terrorism Financing Committee, a unit of the emirate’s central bank. Deputy central bank governor Sheikh Fahad Faisal Al-Thani was quoted as saying “We expect from this conference to contribute in enhancing the capacity of the relevant competent authorities in conducting investigations in any crimes related to virtual currencies; and in establishing a network of practitioners and experts of this field,” a testament to the Qatari government’s dedication to digital financial security.

Elsewhere in the digital finance sphere, a key area in which the Middle East continues to lag is venture capital, and more specifically digital venture capital. There is evidently strong interest in tech start-ups, with “half a dozen tech start-ups in the MENA region today [being] valued at more than USD 100 million each, and investors sunk more than USD 750 million into MENA tech start-ups from 2013 to 2015.” These could be the next generation of digital unicorns, of which “the Middle East claims [only] 1 out of 200 globally,” but given that “relative to GDP, the Middle East has only 10 percent of the VC funding of the United States,” it is clear that the region’s governments need to do more to encourage digital investment. In general, the strong culture and prevalence of family businesses in the region are a major barrier to the growth of venture capital, but through further modernisation and digitisation the region’s governments can tap the unexploited potential of a sector that could further boost their economies [5].

Reflecting and Progressing

It is evident that in the Arabian Gulf, many strides have been taken to further digital development. The investments of the region’s governments in electronic initiatives have bolstered their economies, created an accessible environment for exciting new start-ups, and addressed logistical issues in location infrastructure and urban planning, to name a few examples.

Nevertheless, there are many untapped opportunities and potential areas of growth for these governments to focus on. In the future, their priorities will include a strong push for widespread adoption of the innovative services they have developed, in addition to further research on and investment in pressing issues like cybersecurity and digital finance.

1.     Suparna Dutt D’Cunha, “Is Dubai The Next Big Tech Startup Hub?”, Forbes, 22nd August, 2016

2.     Tamara Pupic, “DED and Servcorp to ease company formation in Dubai”, Arabian Business, 2nd June, 2015

3.     Claudie De Brito, “Talabat.com hits 100,000 orders in a day,” HotelierMiddleEast.com, 12th February, 2017

4.     Mariam M. Al Serkal, “All UAE buildings to get Makani coordinates,” Gulf News, 13th April, 2016, updated 24th February, 2017

5.     Enrico Beni, Tarek Elmasry, Jigar Patel, and Jan Peter aus dem Moore, “Digital Middle East: Transforming the region into a leading digital economy,” McKinsey&Company, October, 2016

 

 

Telemedicine: The Future of Healthcare? By Jonathan Silverman W'19

Smartphone functionality has skyrocketed over the past decade: Snapchat, Uber, celebrities on Instagram, and…sending one’s blood pressure to the doctor?! Recently, increasing numbers of US healthcare consumers have pivoted toward services that fulfill their medical needs without their ever having to leave the comfort of their own bedrooms. Drawing upon the convenience, speed, and economic efficiency of the Internet, this range of services – aptly categorized as “telemedicine” – constitutes a rapidly expanding segment of healthcare that has vendors and consumers alike scrambling to launch new initiatives as they try to anticipate telemedicine’s long-term effects on the broader healthcare industry. By leveraging the prevalence of smartphones and connectivity, doctors can now communicate with patients via webcam, immediately consult a digital network of disease specialists, and issue prescriptions over email; it also helps that these services are often priced lower than live, in-person alternatives.

While disrupting the traditional doctor-patient model that has typified centuries of medicine, telemedicine has emerged as a powerful force in healthcare for a variety of reasons. Specifically, telemedicine has been linked to fewer hospital readmissions, lower medical costs, improved accessibility for rural patients, and favorable levels of care. Its popularity has become so widespread that, according to James Tong, a mobile health lead and engagement manager at QuintilesIMS (the nation’s largest vendor of healthcare information), recent studies demonstrate that two out of every three Americans are willing to use technological devices to supplement traditional healthcare.[1]

Furthermore, telemedicine has proven capable of optimizing patient outflow at typically overcrowded hospitals. According to a recently published article in Clinical Infectious Diseases, telemedicine “promotes more efficient use of hospital beds, resulting in cost savings,” as well as benefits for patients themselves; the authors note a correlation between at-home medical care and the speed of patient convalescence. [2]

However, the road to telemedicine’s institutionalization has not been an easy one. With practicing physicians treating patients as far as 6,000 miles away, telemedicine lends itself to a host of legal ramifications; its ambiguities have challenged definitions longstanding in the healthcare industry, such as the notions of medical malpractice and physician licensure. For example, at the Mayo Clinic – a renowned medical center – doctors treating out-of-state patients follow up with emails and video consultations, yet may only do so regarding matters that were initially discussed in person. [3] Designed to preempt potential issues, these policies highlight the hesitancy of key healthcare organizations in their assessment of telemedicine’s future.

Nonetheless, despite the obstacles and doubts surrounding telemedicine’s quality and feasibility, many of the US’ major healthcare players have made strong moves pushing for expanded telemedicine coverage and service. Indeed, insurers such as Anthem and UnitedHealthGroup have begun offering their own direct consumer-to-virtual-doctor consultations, bypassing traditional medical channels. Additionally, Johns Hopkins Medicine and Stanford Medical Center have also introduced their own digital consultation services. As one doctor from the Cleveland Clinic stated in the Wall Street Journal: “This will open up a world of relationships across a spectrum of health-care providers that we haven’t seen to date.”[4]

 

[1] http://health.usnews.com/health-news/hospital-of-tomorrow/articles/2015/10/19/telehealth-is-changing-patient-care-now

[2] https://academic.oup.com/cid/article/51/Supplement_2/S224/383896/Telemedicine-The-Future-of-Outpatient-Therapy

[3] https://www.wsj.com/articles/how-telemedicine-is-transforming-health-care-1466993402

[4] https://www.wsj.com/articles/how-telemedicine-is-transforming-health-care-1466993402

 

What Comes After the Donald Trump Market Rally? By David Cahn W'17 ENG'17

The stock market has hit record highs since Donald Trump was elected in November. By mid-February, this exuberance appeared to have calmed, before the “third wave” of the Trump rally took hold, driving markets even higher this week.

How should we be interpreting this rally?

One view says that the Trump rally is being driven by fundamental value. After all, Trump claims he’ll renegotiate trade deals in America’s favor, lower corporate taxes, deregulate Wall Street, and reinvigorate the U.S. economy. If we believed that he’d deliver on these promises, then surely the market rally is justified.

Bears have been crying wolf for weeks now, to no avail. MarketWatch has cited numerous reasons to be worried: the S&P 500’s average P/E ratio is now 21, the highest level since 2009; the CBOE Volatility Index (Wall Street’s “fear index”) is “suspiciously low”; and we appear to be in a “late leg” of the economic cycle. Even if Donald Trump is America’s most pro-business president, it’s unclear that his first few weeks in office justify the $3.1 trillion in market value created since November.

Even as everyone wonders when the euphoria will end, bears are getting trounced as the market inches higher and higher. As is often the case, it’s hard to predict when the rally will turn.

Responsible Financial Media: Lessons from the Internet Bubble By Andrew Howard W'20

The Internet bubble of the late 1990s, otherwise known as the “Dot-com Bubble,” erased billions of dollars from the economy seemingly overnight. The NASDAQ, a market index heavy in technology and biotech companies, fell nearly 4,000 points from March 2000 through October 2002. The collapse demonstrated how a massive scheme of mispricing, accounting fraud, and unjustifiable promotion of digital products and services brought down the entire technology sector. The crash was so far-reaching that even successful companies like Amazon and Cisco lost nearly 90% of their market value, dragged down by the plethora of overvalued companies that had never earned a single dollar for their investors. Many of the companies with artificially inflated prices proved essentially worthless: boo.com, pets.com, Nortel, and even startup.com saw their market values all but wiped out after the crash.

 

How is it possible that such companies were valued in the billions before turning any profit? Why were institutional and individual investors incentivized to back companies they could not understand? One proposed answer pinpoints the popularization of financial media as a leading culprit. Indeed, Bloomberg and CNBC rose to prominence in 1996 and 1990 respectively, signifying the growing importance of financial media in the years leading up to the bubble’s height.

 

Within the financial media, columnists and reporters held the power to move a stock’s market capitalization by adding it to either a “buy” or a “sell” list. During this period, when a reporter published a purchase recommendation for an internet company - regardless of its actual merit - average investors with limited information would bid up its price. The work of behavioral economists like Robert Shiller demonstrates that news media biases shape investor behavior and consumer sentiment. Shiller concludes that rational market behavior depends on accurate information; when the media ignores its assumed fiduciary responsibility to its readership, he argues, the market cannot correct overvalued companies, and prices spiral out of control.

 

It is no coincidence, therefore, that the financial media published more stories about technology companies during the Dot-com bubble than about all other industries combined[1]. As a corollary to the popularization of financial news media, two new financial publications – Business 2.0 and Red Herring – launched and shuttered within the span of the Internet bubble, from 1996 to 2002, highlighting the unstable dynamic cultivated by the emergent demand for financial reporting. Even mainstream organizations like CNBC and the Wall Street Journal increased their coverage of IPOs by nearly 1,000% relative to the actual number of new companies going public at the time.  More troubling than the sheer volume of financial news during this period, however, was the magnitude of its measurable impact on market capitalization. Average investors depended on widely read analysts to publish lists of stock recommendations, classifying companies as “buy,” “hold,” or “sell.” Alarmingly, during the 1996-2000 period, only 1% of covered stocks were classified as “sell,” while 70% were classified as “buy.” Two of the most famous analysts of the time, Morgan Stanley’s Mary Meeker and Merrill Lynch’s Henry Blodget, became business media celebrities for their market insights. Most troublingly, investigations into compensation packages revealed that analysts received higher bonuses for classifying a company as “buy” – highlighting the biases and external incentives of those releasing stock recommendations, and underscoring the role the news media played in inflating the bubble.

 

In a comparison of IPOs of Internet and non-Internet stocks over the four-year period, a study in the Journal of Financial and Quantitative Analysis made an astonishing discovery:

 

“Internet firms average a stunning 84% initial return during our sample period, more than twice the return for non-Internet firms. The Internet IPO sample also had a cumulative return of 2,016% from January 1, 1997, to March 24, 2000, whereas the non-Internet IPO sample had a return of only 370%. The difference is an astonishing 1,646%.”
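To make those percentages concrete, a cumulative return converts to a final value by simple multiplication; the $10,000 stake below is purely a hypothetical illustration:

```python
def cumulative_value(initial_stake, cumulative_return):
    """Final value of a stake after a stated cumulative return (20.16 means 2,016%)."""
    return initial_stake * (1 + cumulative_return)

internet = cumulative_value(10_000, 20.16)      # the Internet IPO sample's 2,016%
non_internet = cumulative_value(10_000, 3.70)   # the non-Internet sample's 370%
print(f"${internet:,.0f} vs ${non_internet:,.0f}")  # $211,600 vs $47,000
```

A dollar spread across the Internet IPO sample grew more than four times as much as one in the non-Internet sample over the same three-plus years - exactly the asymmetry that drew inexperienced investors in.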

 

Outside the realm of IPOs, Internet stocks also enjoyed advantages tied to their classification alone. Research from Purdue University shows that a sample of 63 companies that changed their names saw an average stock price increase of 125% relative to their peers within five days. The implication is that non-technology companies could multiply their market capitalizations simply by placing a technology-related buzzword in their names.

 

Eventually, however, the supply of non-traditional, inexperienced, and less informed investors willing to overpay for worthless stocks dried up, and with it, the artificial inflation of American technology companies. The average loss per household in the United States as a result of the crash was a shocking $63,500. The subsequent recession cost millions of Americans their jobs, and Henry Paulson of Goldman Sachs estimated that investors lost over $7 trillion from the bubble’s collapse beginning in March 2000.

 

Drawing lessons from this crash for contemporary markets, with private companies like Snapchat, Uber, and Airbnb achieving multi-billion-dollar valuations, it is reasonable to ask whether we are living in a similar bubble today. Indeed, these unicorns are no longer uncommon – according to the Wall Street Journal, at least 145 private companies have achieved valuations of over $1 billion. Additional signs of concern include declining access to venture capital and falling valuations across various industries. For example, Gawker recently leaked a Snapchat income statement for 2014: the supposedly $20-25 billion company reported revenue of just $3.1 million.
 

If we are to prevent another collapse, we must not forget the fundamentals of value investing when selecting stocks. Healthy skepticism, acknowledgment of risk, and a wide array of responsible financial reporting are necessary to protect the American public from the biases and misinformation of the financial news media. Without them, we may one day look back at Snapchat the same way we view pets.com today. Investors, be careful out there. The market is irrational.

[1] Journal of Financial and Quantitative Analysis, 2009

A Net-Neutral Catch 22 by August Papas W'19

Water versus Netflix. Electricity versus YouTube streaming. While pragmatism might clearly delineate the relative importance of such things, the difference between traditionally defined utilities and media services is less clear through a legal lens. When a D.C. appeals court ruled in June to uphold the FCC’s 2015 reclassification of internet access as a public utility, it placed the router on par with any other fixture in the home. The analogy between what comes out of a faucet and what appears on a screen, however, erases the crucial distinction between government and private supply -- a distinction that has embroiled big-money telecom entities and legislators in a decade-long battle of public interest and lobbying dollars alike.

When the Federal Communications Commission laid out its zero-pricing initiative for content, the custodians of internet infrastructure at whom it was aimed responded in full financial force. The joint spending of Comcast, Verizon, and AT&T on Capitol Hill campaigning (some $44 million in 2014) focused heavily on net neutrality, and is only expected to continue as all eyes fix on the case’s path toward the Supreme Court.

These giants, essentially the conduits of all broadband-based media consumption, oppose legislation that would limit their ability to selectively speed up or slow down certain applications and price-discriminate between content providers on the basis of bandwidth strain. That is, a provider could no longer induce streaming sources or online gaming platforms to pay for “fast lane” data by slackening delivery to the consumer until buffering and glitching degrade the application. Free-market proponents and telecom lobbyists agree: the revenue raked in from the platforms that can pay encourages new infrastructure growth, to the ultimate benefit of digital networks at all levels.

Those favoring net neutrality argue this practice jeopardizes the unique democracy of the internet. Not even the next Kickstarter-to-be could hope to raise enough for optimal service delivery (read: customer satisfaction and business survival) in a discriminatory broadband sphere. The opposing side raises philosophical cases of its own: how could remote surgery, in a possible future of telemedicine, ever be safe and reliable without a guaranteed, preferential data stream? Amidst the discourse, the verdict holds for now that 100 bits per second is 100 bits per second, be it a celebrity lifestyle blog or an international Skype call.

It may not be time for net-neutralists to toast victory quite yet, though. Comcast and others will certainly try to recoup lost revenue on the content-creator side by shifting aggressive pricing strategies to the consumer side of their channel. Indeed, tiered options, from high-price, high-capacity packages to inexpensive low-speed service choices, already exist and are favored by those on both sides of the debate. It is the worry of Hahn and Wallsten (2006), for example, that regulation of tiered service packages will follow disastrous historical precedents. They point to the 1978 Natural Gas Policy Act, wherein an initial inventory of five tiers of natural gas established for companies to offer publicly eventually multiplied into a cumbersome 28 categories. Given the internet’s ascent to the status of commodity, the parallels are disconcerting. With the job of setting suitable tier prices now in regulators’ hands, content creators will likely lobby for special tiers suited to their applications.

Thus emerges a certain Catch-22. The public that pushed for the end of discrimination in the digital zone now faces the real possibility of an overly bureaucratic pricing scheme set against it.  The moral: it is the responsibility of policymakers to proactively legislate a reasonable tiered system of internet access, and of consumers to look critically at the rose-tinted vision of a neutral net.

Abenomics & Japan’s Roller-Coaster Economy by Ayo Fagbemi W'17

Japan’s economy had a relatively good showing in the first quarter of 2016. The nation’s gross domestic product (GDP) grew an annualized 1.9% (or 0.5% on a quarterly basis), rebounding from the previous quarter’s contraction and avoiding the consecutive quarters of negative growth that define a recession. Private consumption, which accounts for over 60% of the nation’s GDP, carried much of the load with 2.3% growth, driven by higher spending on recreation and dining. Public consumption, accounting for about 20% of GDP, chipped in with a 2.6% climb.  While these numbers appear to be on the upside, the leap year of 2016 adds a little fluff to them: as trivial as it may sound, the extra day in February gave Japanese consumers extra time to spend their money and keep the economy ticking, tempering the Q1 figures.
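The 1.9% annualized and 0.5% quarterly figures are two views of the same number: annualizing assumes the quarter's growth repeats for four quarters. A quick sketch of the conversion (the small gap from the reported 1.9% comes from the quarterly rate being rounded):

```python
def annualize(quarterly_growth):
    """Convert a quarter-over-quarter growth rate into an annualized rate via compounding."""
    return (1 + quarterly_growth) ** 4 - 1

# The article's Q1 2016 quarterly figure of 0.5%:
print(f"{annualize(0.005):.1%}")  # 2.0% - close to the reported 1.9% annualized rate
```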

The news of Japan’s recent economic progression is further soured when taking the previous quarter’s results into account. In the fourth quarter of 2015, Japan’s economy contracted 1.7%, even worse than previous estimates of a 1.1% decrease. The major culprit for the quarter’s disappointing performance was, predictably, faltering private consumption. Consumers simply did not have the confidence to dig into their wallets, and private consumption fell by almost a percentage point. Combined with a sizable drop in public investment, an unwanted level of economic hesitation and inactivity stymied any hope for growth.

The antipodal performances between these two most recent quarters can be seen as a reflection of Japan’s volatile economic performance since the start of the decade, still in the wake of the 2008 global financial crisis. From 2010 onwards, the economy experienced twelve different switches between quarters of economic expansion and contraction; the economy battled three recessions during that same time span.

Many link Japan’s underperformance to Prime Minister Shinzō Abe and his policies, affectionately known by many as “Abenomics.” They have traditionally been characterized by a trifecta of fiscal stimulus (also known as “government spending”), monetary easing, and expansionary structural reforms, in an effort to revitalize an economy that has struggled to find its footing. Within weeks of his December 2012 re-election, Mr. Abe charged up the fiscal defibrillator by announcing a massive stimulus bill of ¥10.3 trillion (or $116 billion), with ¥3.1 trillion ($34.9 billion) intended to encourage private investment. In terms of monetary policy, the Bank of Japan initiated sizable quantitative easing in 2013, under which the central bank would make annual purchases of ¥60 trillion to ¥70 trillion ($680 billion to $790 billion) in bonds, increasing the nation’s money supply. In the fall of 2014, the Bank of Japan doubled down on these efforts by increasing its annual bond purchases to ¥80 trillion ($900 billion), with the goal of driving down the currency’s value and driving up market liquidity. Outside of government spending, Abenomics looked after the private sector by lowering the corporate tax rate by 3.3% in 2015, with the goals of achieving greater corporate profit margins, private spending and investment, and new hiring. Abe sought to systematically reinvigorate multiple players in the economy, and in most eyes, he went to extreme lengths to do it.

An important indicator for judging economic health is inflation. Although commonly misconstrued as a bad thing (and certainly, hyperinflation is something to be avoided), healthy inflation is associated with stable economic growth, especially when paired with encouraging growth in wages. Since the “Lost Decade” of the 1990s, however, Japan has struggled to approach that level of healthy inflation; for the majority of the 21st century, Japan has seen inflation levels below zero, with GDP growth riding close to the zero line right along with it. The Bank of Japan has set its sights on a 2% target inflation rate to work towards in the coming years, similar to the Fed’s inflation target.

Over the last four years, the yen has depreciated against the U.S. dollar by a staggering 41%. Many would expect a less valuable yen to result in rising prices in Japan: imports become more expensive in yen terms, while Japanese exports become more competitively priced abroad, providing breathing room for Japanese manufacturers. Inflation would be expected to follow.
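The expected pass-through from a weaker yen to domestic prices can be sketched with a toy calculation. The ¥80-per-dollar and ¥113-per-dollar rates below are hypothetical, chosen only to mirror the roughly 41% depreciation cited above:

```python
# Toy exchange-rate pass-through: a dollar-priced import costs more yen
# after the yen depreciates, which should feed into consumer prices.
# Both exchange rates are hypothetical illustrations, not market data.
def yen_cost(usd_price: float, yen_per_usd: float) -> float:
    return usd_price * yen_per_usd

before = yen_cost(100.0, 80.0)   # a $100 import at ¥80 per dollar
after = yen_cost(100.0, 113.0)   # the same import after depreciation
increase = after / before - 1    # ≈ 0.41, a roughly 41% rise in yen price
```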

So the question remains: Why hasn’t it?

Many of Abe’s initiatives, while ambitious and far-reaching, have failed to yield the intended results, most likely due to Japan’s structural deficiencies. The economy has not been performing up to its capabilities since the 1990s, as indicated by the nation’s output gap, a measure of the difference between the economy’s actual output and its output at full potential. This market inefficiency still stands at -1.6% of national GDP according to the Bank of Japan. Closing that gap is paramount to translating market inputs, such as a competitively priced currency, into a 2% inflation rate.
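The output gap mentioned here is simply actual output measured against potential output. A minimal sketch, using hypothetical GDP levels chosen so the result matches the -1.6% figure cited from the Bank of Japan:

```python
# Output gap as a share of potential GDP: negative values mean the
# economy is producing below capacity. The GDP levels are hypothetical;
# only the -1.6% gap comes from the Bank of Japan figure cited above.
def output_gap(actual_gdp: float, potential_gdp: float) -> float:
    return (actual_gdp - potential_gdp) / potential_gdp

gap = output_gap(actual_gdp=98.4, potential_gdp=100.0)
print(f"{gap:.1%}")  # -1.6%
```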

Whether or not the Japanese economy is in the midst of a solid and substantial recovery will depend on a few other things. Japanese legislators have to create an economic climate that encourages more private and public consumption, which, as illustrated before, is a major component of the nation’s GDP. Prime Minister Abe seems to understand this, as he has recently decided to push back an increase in the country’s sales tax until 2019. A higher sales tax rate, while providing the government with more revenue and a way to pay down national debt, lowers household wealth and consumers’ willingness to make purchases. The effects were already seen in 2014, when a premature tax hike stifled consumer spending and played a role in a mid-year recession. As Japan’s economy is presently constructed, plans for progress rest largely on the shoulders of consumers and spenders.

Hopes for Japan’s economic growth also fall on its exporters, and much of that has to do with the yen’s movement in the months to come. The yen has already appreciated 14% against the U.S. dollar so far this year, the biggest gain among the currencies of developed nations (the recent Brexit certainly didn’t improve things, as investors scrambled to the safer yen, spiking the currency further). Correspondingly, the Tokyo Stock Price Index (TOPIX) has fallen over 20% year-to-date as Japanese exporters battle a rising yen and falling profits abroad. Some, however, don’t see the yen’s recent climb as an indication of the currency’s future performance. Economists at Goldman Sachs Group Inc. are looking for the yen to make an about-face and soften to 115 per dollar by August, and 125 per dollar within the next 12 months. Their belief stems from Mr. Abe and the Bank of Japan’s desire to spark inflation by cheapening the yen and cutting interest rates. Of course, there are at least two sides to every exchange rate. The JPY/USD rate is also the product of the U.S.’ economic performance and Fed policies, both of which are largely out of Japan’s control. The Fed’s slowly approaching decision to pull the trigger on a second interest rate hike is widely expected to result in a stronger dollar, as the higher rate pulls in foreign investors and increases demand for the greenback (especially if other large players maintain their expansionary policies). In addition to the U.S., the currency movements of Japan’s other trading partners (particularly China) are something to keep an eye on. Japan would benefit, structurally, from opening its economy to more trade and competition with these partners, which would bolster the nation’s export economy (currently the fourth largest in the world). Abe has an opportunity to move in the right direction with Japan’s involvement in the still-pending Trans-Pacific Partnership.

Japan’s government also has to contend with yet another major issue: its shrinking population. According to the national census, Japan’s population has shed nearly a million individuals within the last five years, a considerable figure in the context of the current population of 126.6 million. Adding insult to injury, Japan’s population is also aging quickly. Over 25% of the current population is above the age of 65, the highest percentage in the world and far above the global figure of 10%. Left unabated, Japan’s population could fall to as low as 83 million by 2100, with the percentage of people aged 65 and above reaching 35%. This has large implications for the economy, as a shrinking working-age population eats away at overall national productivity. Both of these issues require major immigration and social reforms, and Abe has pledged to raise the birth rate from 1.42 per woman to 1.8 (a very tall order according to experts) by expanding childcare opportunities and making it easier for women to take maternity leave from work.

It will require a lot of time, and perhaps some missteps, but with the right combination of fiscal policy and structural reform, Japan’s monetary policy can start to yield greater dividends, supporting healthy inflation and jumpstarting the real growth that Japan has long waited for.

Interview with Donald Hinkle-Brown, CEO of The Reinvestment Fund

As President and CEO, Don Hinkle-Brown leads a staff of 70 highly skilled lenders, researchers, developers and other professionals at The Reinvestment Fund, a national leader in rebuilding America’s distressed towns and cities through the innovative use of capital and information. With over 20 years of experience in the CDFI industry, Hinkle-Brown is widely recognized as an expert in developing new programmatic initiatives, raising capital and creating new products to meet market demand. Hinkle-Brown previously served as President of Community Investments and Capital Markets at TRF, leading TRF lending during a tenure in which it lent or invested over $1.3 billion. Hinkle-Brown has also provided his underwriting and capitalization expertise to many community development loan funds and organizations, including the Hope Enterprise Corporation, Opportunity Finance Network, and LIIF, and has served as adjunct faculty at the Center for Urban Redevelopment Excellence at the University of Pennsylvania and the University of New Orleans. He serves as Community Development Trust’s founding board member and until recently was on the board of Housing Partnership Network and its affiliated CDFI. Hinkle-Brown has also served as adjunct faculty at Temple University’s Geography and Urban Studies program and the University of Pennsylvania’s City Planning department. He holds an M.B.A. from the Fox School at Temple University in Real Estate and Urban Planning as well as a B.A. in Economics.

IBR: Could you tell us about your career – how and why you chose to lead a community development financial institution (CDFI)?

Donald Hinkle-Brown: I’ve been at The Reinvestment Fund for almost twenty-four years now, and prior to that I was in regional banking at banks that are no longer here – let’s put it that way. Their successors all rolled up into PNC. I started my career with the intention of getting into banking. Banking, for me, was a family profession. My father had been a banker. That was my career path.

Two factors led me out of banking. First, in my coursework as an undergrad, I took a course on the moral foundations of professional ethics, which stuck with me. Then I got into banking in the heyday of the eighties, when bankers had forgotten whose money it was that they were trading and investing. I had been asked by the bank to volunteer with The Reinvestment Fund, which was called The Delaware Valley Community Reinvestment Fund back then. I met Jeremy Nowak, and he taught me what banking was really about. My day job became increasingly boring, and my evening volunteer work with Jeremy became increasingly interesting. Eventually, I made a career switch.

IBR: Could you tell us more about the role of morals and ethics in banking?

DHB: Many people believed that there was such a thing as a profession, which was differentiated from a job. You have some duties and civic expectations. The course went through medicine, engineering, and finance. It presented the moral dimension of work, and it stuck with me. Banking was once a profession, and it does serve a civic purpose: savers’ money being put to work with borrowers, and that intermediation is not just a profit-seeking activity. It really does serve a public purpose, which is why banks are given charters and why banks are regulated. But when I got into banking, I didn’t find the profession. It was really disheartening.

IBR: Given increased regulation and scrutiny today, do you think that there is a resurgence of the idea that banks do serve a public purpose?

DHB: No, I don’t think the dialogue has moved anywhere near banking as a public purpose. It’s been a battle of cumbersome regulations, because the industry has become complicated. I’m a firm believer that derivatives and swaps are essentially insurance and should be regulated, but we also don’t regulate that industry very well, especially at the state level. We see stories every year of misspent insurance money and poorly regulated insurance companies.

I think the battle has been engaged around safety and soundness, and lost in that is public purpose. When one talks about predatory lending or foreclosure abuses, you’re on the other side of the tennis court from the positive moral dimension. No one gets it back to the affirmative: what is your affirmative responsibility? Every now and then you hear about fiduciary duties of brokers and financial advisors. That’s the beginning of a conversation, but it really hasn’t gotten started very much.

IBR: As a CDFI, what advantages do you have in capital raising and investing?

DHB: CDFI is a designation from the US Treasury, and if you look at the application to be a CDFI, it’s essentially due diligence on your dedication to mission, and lighter on everything else. It’s not a designation of excellence. It’s not a designation of effectiveness. It’s a designation that the money put in your hands will serve a mission. As a result, it is an advantage in that those looking to put their money to a mission trust CDFIs, having been vetted and having some direct accountabilities around this mission-serving purpose.

I don’t think that it’s an advantage in many other respects. It’s a tiny little cul-de-sac in the financial industry, and sometimes as a result, our industry has set themselves up in ways that make a lot more sense in non-profit law and non-profit finance. Those structures don’t bear much relationship to conventional ones in the marketplace, and there are reasons why. In essence, we’re all little story bonds. We’re all trying to sell stories, and we’re all fairly small.

IBR: It sounds like there is an appeal being able to market to socially-conscious investors. Have you seen any uptick in interest in socially responsible investing on the capital raising side?

DHB: There is some increase in interest, but I don’t want to overstate it. The impact investing field went by other names before Judith Rodin and some of her staff coined that term at Rockefeller; we used to call it socially responsible investing. There is an uptick of interest, and there is a new generation of wealth. There is a new generation of inherited wealth and a new generation of tech wealth. Of course, there’s the generation of hedge wealth as well: people making 2 and 20 on other people’s money and making a lot of money for themselves. It leads to a new class of people looking to do philanthropy. Charitable remainder trusts have become a big field where people are parking their philanthropy in one place and getting back to it later, which leads to an investment opportunity. It’s an unfortunate philanthropic missed opportunity, though, because the money is stalled from actually being philanthropic.

So there is new activity, and the lay of the land is very different than it was before, but I would say there was an earlier, very fertile time when it was even easier to raise money. That was in the years of the divestment from South Africa. There was a large movement to filter investment: no guns, no tobacco, and no gambling. No Apartheid became the new label, and a host of investors – especially religious investors – and also run-of-the-mill values investors were looking for more affirmative places to park their money and make sure it was doing some good. It’s one question to make sure your money doesn’t do harm; it’s a totally different challenge to make sure it’s doing good. In the late eighties and early nineties, there was a boom in the religious community and a bunch of individual investors looking for socially just investments. It was led by a number of religious orders across denominations and by some individuals like Judy Wicks looking for places to put their money where it would have positive effects. That was also a time when it was fairly easy to raise money, albeit on a different scale. (We were all much smaller then, so a dollar went a little further.) There are ebbs and flows in the ability to raise money and in people’s interest in the more qualitative—instead of just quantitative—aspects of what their money is doing.

IBR: Could you elaborate on your investor universe?

DHB: We have over 850 unique investors, 500 of whom are individuals, but ten banks represent the vast majority of our funding. There is the number of investors, and there is dollar weight of investors. What we’ve found is that having a diverse array of investor types gives us a counterweight to becoming captive to the mission of that one small investor group. It allows us to stay true to our original intention of serving low income people and low income places in a very particular business plan. This way, we don’t become the ACME bank private label fund.

We are within the Community Reinvestment Act (CRA) industry. A tremendous amount of the money we secure is because of two things: the CRA (which requires banks to invest in low income places or people) and the IRS’s regulation of philanthropy. As long as foundations still have to give away money and banks have to invest in low income areas, the core of our investment practice is fairly assured. What we like to add to that is looking for individuals and civic institutions to invest, mostly because of the network effect of cementing those relationships. They become invested in our fund and take a degree of ownership. They then introduce us to people, advise us, and even caution us when we’re going into a place where we don’t quite understand the context. It becomes a network that, with six degrees of separation, gets us to any mayor where we work. With a few degrees of separation, it can connect us with a civic leader we need to talk with. It’s been politically and civically very powerful to have a wider net. That was the genius of the founding of TRF back in 1985 with Jeremy Nowak. Many loan funds were founded back in that time, but very few of them maintain this connection to a broad array of investors. I think this makes us a stronger institution.

IBR: How do you navigate the dual mission of ensuring strong community development and simultaneously generating returns for investors? Has that balance changed as the fund has grown?

DHB: We are a debt fund and issue promissory notes. The equity that underpins the safety of those notes is entirely philanthropy and public sector, so our net asset position is gift capital. We raise debt capital on top of that with a host of different offerings. As a result of being a debt fund, we have to manage yield, but yield is also fixed. A private equity fund or venture capital is managing value; we’re managing cash flow. We preserve capital and manage cash flow instead of using capital to build the size of that capital. There were times in our past when assuring returns to our investors might have created some degree of tension, but that hasn’t been the case for decades. We’re quite stable. We have very substantial net assets, the earnings from which create a cushion to make sure our cash flow is positive.

The challenge is that as we’ve institutionalized, we’ve become bigger. As we’ve broadened what we do from affordable housing and community facilities to other asset types, other niches, and other broader geographies, it’s been about the challenge to deploy the money with high mission. How can we make sure that every transaction is community-supported and community-advancing? There can be a distraction as you become bigger and as you broaden what you’re willing to do. You can get lost in the world of economic development, because economic development is not always about low-income people, nor is it always about social justice or equity. Community development, at its core, is about low-income people. You can go down a highway of economic development and find yourself financing things that are about jobs and about building an economy but not really about people or any particular kind of people. We’ve tried to do more and more, but we’ve been trying to keep our anchor in low-income people and low-income communities. Our CDFI status requires us to do that, but actually doing it on the transactional level is always a challenge. As staff changes and programs change, that’s the constant exercise.

IBR: What does TRF do relative to the rest of the socially responsible investing landscape that is particularly effective when deploying capital?

DHB: I think we’re quite unique in certain ways. We’re the only CDFI in America that takes it upon itself to not only intermediate mission capital but also to be an intermediary of data, so we’ve created policymap.com (PolicyMap), which is a spatial database and mapping service on the web. We’ve built up an inventory of spatially relevant public policy data, which allows decision-makers to look at a map, choose their data layers, make their own associations, and then make a policy decision or an investment decision. We use it ourselves inside TRF, but we knew that to make this a long-lasting and self-improving tool, it needed to be a business of its own. It needed to be not only our infrastructure, but also an externally facing product. That is fairly unique.

Prior to PolicyMap, we had – and still have – an advisory group (which we call Policy Solutions). The unit advises government and philanthropy on the impact that they make with their programs and investments. We advise HUD on the effectiveness of the Neighborhood Stabilization Program. We advise foundations on the effectiveness of their grant making. We advise other CDFIs on the effectiveness of their investments as well.

IBR: How well do government officials respond to PolicyMap, and what does TRF do to make sure that policy makers understand how to act on its information?

DHB: While data hasn’t had a very long life as a tool in state and local government, there is something about the nature of the visualization of a map that makes a politician or a senior bureaucrat stop in their tracks. They instinctively understand jurisdictions, and they think in a geospatial context constantly. As a result, a visualization of places they know immediately communicates a thousand words. We’ve seen that as soon as you can create a format familiar to the policymaker, it is immediately understood. There is a very low threshold of adoption once you get it in their hands. We do have thousands of data layers, which can be intimidating when deciding what to look at, but the interface is quite simple compared to other programs that require a specialized graduate degree to operate. This is just a website with buttons and drop-down menus. It’s not quite Google, but it’s much easier to use than specially designed software.

Philadelphia has had multiple agencies jointly subscribe to PolicyMap, and the city has been using it as an information exchange platform to share data between departments. If any one of those programs is interested in what another agency is doing, they can see it. If an agency is applying for a federal grant, they can use the data already in PolicyMap or use it as a vehicle to display data from multiple agencies. It speeds government and speeds decision-making. It’s one of the good stories about government efficiency these days.

IBR: What can investors do to avoid some of the unintended consequences of investment, such as gentrification and displacement?

DHB: Your typical anonymous capital markets investor probably isn’t going to do anything, and there probably is little they can do. They are just participating in the marketplace. I do believe that things like gentrification and unmanaged change that destabilize people’s lives are the responsibility of a whole group of people. An investor like a CDFI can pay a lot of attention to it, but when the team of people involved includes the public sector and the investor community, it really is the public sector that has the toolkit to moderate what happens when too much capital enters a marketplace. It’s a classic bubble problem: gentrification is a human and capital bubble. It’s dramatic change, and exclusionary effects follow. The public sector can ameliorate that, and we’ve seen this in Philadelphia and Baltimore, where programs are designed to preserve long-tenured residents, whether that’s a basic system repair program, an insulation and weatherization program, or a homesteading tax preference. There are a number of ways you can structure public resources, and sometimes that’s combined with private resources. Often, a system repair program comes with an FHA-insured home improvement loan. There are ways you can combine private capital and government guarantees to ameliorate what’s going on in the marketplace.

I’m a strong supporter of the idea that cities are living organisms. We forgot for a period of time, essentially from the sixties on, that cities change and neighborhoods change. They changed gradually – at a generational pace. People moved through neighborhoods on ladders of opportunity, some moved horizontally, and unfortunately some moved down. You can go to any neighborhood and find a group of people who used to be there, a group of people who are there, and a group of people who will be there next. That didn’t use to be as controversial as it is today in urban America. People began to romanticize the steady state and expect that change was automatically bad.

Now, fast, unmanaged, high-priced change is bad, but there are some changes that can be beneficial. The challenge is having enough programs that deal with human capital, not just real estate prices. You have to have a balance. If you’re working on good schools and good workforce mobility training programs, you can balance things out. The elderly homeowner can get a windfall when home prices increase. I do think there’s a role for government in managing change. That’s usually the best combination: positive investment and positive response from the public sector.

IBR: What advice would you give to an undergraduate student looking to work at a CDFI?

DHB: Students can often be very focused on the hard skills, especially if they’re thinking about entering finance. Those classic spreadsheet and analytical tools matter, but if you want to work at a CDFI or any other place where the nature of the business value is not just quantitative, then the softer skills are really important. The value of a liberal arts education is still steady, and its components are still quite important, like writing skills. When you’re investing in a place and you’re writing a story about it, it doesn’t matter if you’re the best spreadsheet jockey in the world. You need to tell that story really well.

Communicating your passion is something a lot of undergraduates have some challenges with as well. Our biggest challenge is finding people who are committed to the work, and you have to be committed because it can be very frustrating work. It’s not transactional; it’s not bond trading where you get to clear your desk every day. This work stays with you.