The Book: Dead or Alive? by Lilli Leight C'19

The printed book is not dead. In 2016, The Publishers Association, the leading body of UK publishers, released a report showing e-book sales falling by 17% alongside a parallel 8% rise in print book sales.[1] Oren Teicher, CEO of the American Booksellers Association, believes these results highlight that “there’s no getting away from the tangible delights of reading in a format that has remained relatively unchanged since the printer Aldus Manutius pioneered the portable, hand-held book.”[2]

He makes a great point. The concept of the ‘book’ has changed little since the fifteenth century, when Gutenberg’s press made it possible for Manutius to create the ‘book’ as we know it today.[3] The format has proved so enduring that, although the words may have moved from print to digital, e-readers today mimic printed books through skeuomorphic techniques designed to make readers more comfortable with the new technology. Skeuomorphism refers to the practice of carrying over design elements from an older technology into a new one.[4] For instance, when you turn the page of an e-book on an iPad, it looks like you are turning a real page. The programmers and designers behind e-books and e-readers knew that their audience was already accustomed to the printed book, and so they sought ways to mimic that style. Nevertheless, the introduction of the e-book in 1991 was a huge innovation that threatened the publishing industry’s very existence.[5]

The early 2010s were a trying time for publishing houses. Teicher attributes this to a pervasive fear that digital publishing would replace the traditional role of a publishing house, including how books are distributed. Some feared that the technology might even lead to a takeover of the publishing business itself.[6] As recent data show, those predictions were overstated. Indeed, the e-book, while here to stay, will not fully replace its physical counterpart: stagnating sales in major markets such as Europe and slower-than-anticipated growth rates have led to a renaissance of the printed book.[7]

Why, then, are e-book sales plateauing while print sales are on the rise? Phil Stokes, head of PwC’s UK Entertainment and Media division, believes the resurgence of the physical book is due not only to its greater appeal, but also to the fact that certain genres are better suited to the print format. For example, most people prefer hardcover cookbooks. Stokes also notes the upsurge in adult coloring books, which lend themselves to print.[8] Allen Lindstrom, CFO of Barnes & Noble, reported that 14 million adult coloring books were sold in 2016, compared with 10 million in 2015[9] and 1 million in 2014.[10] While that pace of growth is not sustainable, owing to market saturation, the genre’s popularity and rapid sales increases reflect people’s continued enthusiasm for the printed book.

Additionally, Brian O’Leary, executive director of the Book Industry Study Group, points out that e-book prices have risen over the past few years and are now almost comparable to those of printed books.[11] Jonathan Stolper, Senior VP and Global Managing Director for Nielsen Book, a company that provides book data to publishing houses, explains that this is a result of the Big Five trade houses – Penguin Random House, HarperCollins, Simon & Schuster, Hachette, and Macmillan – raising their e-book prices by about $3, to an average of $8 per e-book, which in turn drove the prices of self-published e-books down to around $3.[12] Carolyn Reidy, President and CEO of Simon & Schuster, also credits the earlier rise in e-book sales to the increased number of e-reader platforms, like the Kindle, Nook, and iPad. This diversity of e-readers allows publishing houses to raise e-book prices to over $10 because e-books are licensed: if Amazon wants to provide an e-book to its customers, it must buy the rights from the respective publishing house, and a publisher could choose to withhold a book from Amazon.[13] The same applies to Apple iBooks.[14] For consumers, if the e-book version costs the same as the printed version, why not just buy the preferred physical text?

Markus Dohle, CEO of Penguin Random House, highlights another reason that print books have remained so dominant: children’s books. At the Frankfurt Book Fair in October 2017, he explained that the children’s and young adult genres have been the fastest-growing categories in the book market for the past ten years.[15] Similarly, Kristen McLean, Director of New Business Development at Nielsen Book, announced at the 2016 Children’s Book Summit that the children’s book market has grown by 52% since 2004.[16] The young adult segment has also grown significantly, with grown adults making up much of the genre’s readership: by some estimates, as much as 70% of young adult books are purchased by readers ages 18 to 64.[17] This wide age range of readers illustrates the genre’s popularity. Because the children’s market is the fastest-growing segment of the book business, many publishers feel that the future of the publishing industry is safe.

The publishing industry will continue to expand and innovate in order to keep up with technology. Publishers expect audiobooks to experience high growth as they attempt to stay ahead of the e-book market.[18] Today, both print books and e-books appear to be sustainable mediums, and thankfully, it looks as if printed books will remain an integral part of our world for the foreseeable future.

[1] https://www.theguardian.com/books/2017/may/14/how-real-books-trumped-ebooks-publishing-revival

[2] http://www.latimes.com/business/hiltzik/la-fi-hiltzik-ebooks-20170501-story.html

[3] Professor Whitney Trettien at the University of Pennsylvania. ENGL 210: The Art of the Book

[4] Ibid.

[5] https://www.theguardian.com/books/2002/jan/03/ebooks.technology

[6] http://www.latimes.com/business/hiltzik/la-fi-hiltzik-ebooks-20170501-story.html

[7] https://www.publishersweekly.com/pw/by-topic/international/Frankfurt-Book-Fair/article/75092-frankfurt-book-fair-2017-penguin-random-house-ceo-markus-dohle-s-full-remarks.html

[8] http://money.cnn.com/2017/04/27/media/ebooks-sales-real-books/index.html

[9] http://time.com/4689069/coloring-book-bubble-bursts/

[10] https://www.washingtonpost.com/business/economy/the-big-business-behind-the-adult-coloring-book-craze/2016/03/09/ccf241bc-da62-11e5-891a-4ed04f4213e8_story.html?utm_term=.4e3afa7468f5

[11] http://www.latimes.com/business/hiltzik/la-fi-hiltzik-ebooks-20170501-story.html

[12] https://www.publishersweekly.com/pw/by-topic/digital/retailing/article/72563-the-bad-news-about-e-books.html

[13] https://www.newyorker.com/magazine/2010/04/26/publish-or-perish

[14] https://www.vanityfair.com/news/business/2014/12/amazon-hachette-ebook-publishing

[15] https://www.publishersweekly.com/pw/by-topic/international/Frankfurt-Book-Fair/article/75092-frankfurt-book-fair-2017-penguin-random-house-ceo-markus-dohle-s-full-remarks.html

[16] http://www.bookweb.org/news/publishing-insights-nielsen-children’s-book-summit-34861

[17] https://www.thebalance.com/the-young-adult-book-market-2799954

[18] https://www.publishersweekly.com/pw/by-topic/industry-news/audio-books/article/72500-publishers-see-more-good-times-ahead-for-audiobooks.html

Upcredentialing by Kimaya Basu W'21 C'21

Orange is the new black, 50 is the new 40, and a college diploma is the new high school degree. ‘Upcredentialing’ has surreptitiously emerged as a significant workforce phenomenon. To a certain extent, this is attributable to the Great Recession. During the most recent downturn and its immediate aftermath, the United States lost a net 8.7 million jobs. Able to choose from a wide array of job-seekers, employers consequently raised the bar on required educational and experiential credentials. Since then, despite the addition of about 16 million jobs to the national economy, employers continue to require more credentials than are often necessary to fulfill the responsibilities of a particular position. After all, no human resources professional wants to be seen as dumbing down their incoming workforce.

There are many implications associated with this phenomenon, including the fact that you need to work harder and smarter than previous cohorts of graduates just to accomplish what they did. In other words, your degree is not the ticket to utopia that it may once have been. A recent study conducted by the Federal Reserve Bank of New York indicated that in Q4 2016, about 44% of recent college graduates were employed in jobs not requiring such degrees. While this is bad for some college graduates, it’s also not particularly good for employers. Indeed, while many employers report difficulties finding suitable talent to fill available job openings, this is in large measure due to their own upcredentialing practices. Furthermore, the people whom they hire are often anxious to leave such positions quickly, resulting in more turnover, higher retraining costs and less consistent service quality.

A study by Burning Glass confirms these unnecessary challenges. It found that many employers prefer their employees to hold a bachelor’s degree, even though this often superfluous requirement means a position takes nearly 33 extra days to fill than it otherwise would. According to the study, 65% of postings for executive secretary and executive assistant positions now require a bachelor’s degree; however, a much smaller 19% of those currently employed in these occupations actually hold one.

While many point to deindustrialization and technology as explanations for the poor economic outcomes suffered by many high school graduates, upcredentialing certainly hasn’t helped these individuals. Workers whose highest educational attainment is a high school diploma are denied jobs they could capably perform, for lack of credentials irrelevant to the work. From a macroeconomic perspective, the result is a smaller economy characterized by a large number of unfilled jobs, elevated turnover, a stubbornly low labor force participation rate, and a shortage of intrinsic motivation resulting from job-holders’ realizations that they are overqualified.

As always, the solution to this problem may rest in entrepreneurship. The rate of business formation is down in America. If more college graduates frustrated by the nature of available positions started their own businesses, the total number of job opportunities would potentially increase, likely freeing up positions for high school graduates who are too often locked out of roles in which they would otherwise thrive.

Paradise Papers and Tax Havens by Annabelle Williams W'20

Offshore investing has long been shrouded in secrecy. But last year’s “Panama Papers,” a 2.6-terabyte leak of documents from a Panamanian offshore law firm, opened the floodgates. Early in November 2017, another major leak revealed 1.4 terabytes of information principally related to the activities of Appleby, an offshore law firm headquartered in Bermuda (a well-known tax haven).

It’s important to note that the tax avoidance revealed in these papers, the Paradise Papers, is lawful. Avoidance is built into many Western tax codes, but the line between avoidance and evasion is blurry. What stands out in the Paradise Papers are the ethical considerations of offshore accounts, particularly investments in companies accused of human rights violations or other unethical practices.

The involved parties are linked by one thing: money. Nike, Apple, the Queen, even Penn all invest in offshore funds. It comes as no surprise that the world’s 0.01% and its biggest multinational companies seek out these tax havens, notably in Bermuda and the Cayman Islands.

The continued leaking of financial information about offshore accounts raises many questions about the U.S., national tax laws, and the ethical nature of “shadow money,” much of which is placed in trusts in order to obscure investors’ identities. Individuals’ shadowy connections to investments can also produce major conflicts of interest or financial misrepresentations. Institutions like Penn invest in companies that use fossil fuels, despite pressure to divest. The U.S. Commerce Secretary’s shipping conglomerate was revealed to have received significant offshore payments from a company owned by Vladimir Putin’s son–in–law.

And though the tax avoidance practices are legal, the conflicts of interest they pose, together with the questionable ethics of nonprofits and universities investing endowments in these funds rather than in more secure domestic investments, are staggering. Penn set up four funds offshore, each containing the number “1740” as a nod to the school’s founding. These corporations appear on endowment disclosures, but what does not appear is their status as “tax blockers.” The New York Times explains the concept: “establishing another corporate layer between private equity funds and endowments effectively blocks any taxable income from flowing to the endowments.” The leaked documents in the Paradise Papers show that more than 100 top U.S. universities have used this investment strategy.

There is no question that “big money” will continue shaping the world economy. The contrast between universities’ hedge-fund-like investment strategies and mounting student debt, particularly at private institutions, is eminently obvious. So, can we truly justify offshore investing?

http://www.thedp.com/article/2017/11/paradise-paper-penn-offshore-investment-philadelphia-finances

https://www.theguardian.com/news/2017/nov/05/what-are-the-paradise-papers-and-what-do-they-tell-us

https://www.theguardian.com/news/2017/nov/05/trump-commerce-secretary-wilbur-ross-business-links-putin-family-paradise-papers

https://www.theguardian.com/news/2017/nov/08/us-universities-offshore-funds-endowments-fossil-fuels-paradise-papers

https://www.theguardian.com/news/2017/nov/05/why-shining-light-world-tax-havens-again-paradise-papers

https://fas.org/sgp/crs/misc/R44293.pdf

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2860107

http://knowledge.wharton.upenn.edu/article/the-paradise-papers/

https://www.nytimes.com/2017/11/08/world/universities-offshore-investments.html

Bitcoin: The Only Thing Less Predictable Than Donald Trump’s Tweets by Jason Cohen W'20

The value of Bitcoin has risen more than twice as much as the best-performing publicly traded stock of 2017. Its market capitalization has swelled past that of most S&P 500 companies, and several billion dollars’ worth of Bitcoin changes hands daily. Bitcoin has become more and more well known – since April, Google searches for the cryptocurrency have gone up more than 450% – and yet, if you ask people familiar with Bitcoin, you will generally get a divided response about its volatile past and tremendously unclear future.

Bitcoin started in 2009 as a decentralized cryptocurrency built on blockchain technology. That makes Bitcoin unique in several ways. First, the currency is virtual, meaning there are no actual “coins.” Second, transactions do not require any personal information, making them pseudonymous: wallet addresses are recorded on a public ledger, but they are not tied to identities. Finally, transactions are verified by a decentralized network of Bitcoin nodes rather than a central authority, making them secure, cheap, and difficult to regulate. Bitcoins can be bought on an exchange, sent from a computer, and “mined” by computers solving computationally difficult puzzles.
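
To make those “computationally difficult puzzles” concrete, here is a minimal, illustrative proof-of-work sketch in Python. It is heavily simplified: real Bitcoin mining double-hashes an 80-byte block header with SHA-256 against a far harder target, and the `mine` helper below is a hypothetical name for this sketch, not part of any Bitcoin library.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of (data + nonce)
    starts with `difficulty` zero hex digits -- a toy target."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # publishing this nonce proves the work was done
        nonce += 1

# Each additional zero multiplies the expected work by 16, which is why
# verifying a block is instant but producing one is expensive.
print(mine("alice pays bob 1 BTC", difficulty=5))
```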

Anyone who has been following Bitcoin for the past few years will agree on one thing: they wish they had bought more of it earlier. Beyond that, however, opinion about Bitcoin’s future splits sharply. On one end of the spectrum are people who believe Bitcoin is the biggest technological innovation since the Internet. Let’s call these people the optimists. The optimists (and there are many of them) believe that Bitcoin will absolutely revolutionize the way money works. They believe that banks and governments will begin doing more and more business in Bitcoin, or will create their own cryptocurrencies, until society reaches a point where the very definition of money has changed altogether.

On the other side of the spectrum are the pessimists. The pessimists are not particularly happy that they missed out on the Bitcoin boom, as no one would be. But for pessimists, Bitcoin is a bubble that is all too ready to burst. The argument here is that Bitcoin is not backed by anything but the public’s trust. The U.S. dollar is backed by the U.S. government; the same goes for the euro. Bitcoin, by contrast, is priced only in other currencies, with no backing beyond the public’s trust. If the public decides Bitcoins are worthless, they will become worthless, and there is nothing anyone can do. On this view, Bitcoin’s value derives from its underlying technology, which, while revolutionary, can be replicated and is not perfect: Bitcoin is only more valuable than cryptocurrencies like Ethereum because people say it is. Because of these vulnerabilities, pessimists believe that Bitcoin is a bubble that will burst faster and more viciously than the dot-com bubble or the housing bubble.

To understand who is “right” about Bitcoin’s future, it helps to look at its past. The idea for Bitcoin was conceived in 2008, and the network officially launched in 2009. By May of the next year, 10,000 Bitcoins were used to purchase two pizzas, the first commercial transaction ever made with the currency. Soon after, a vulnerability in the code was exploited to create billions of Bitcoins out of thin air, a bug quickly erased with a patch. After a year or so of negative press, Silk Road, a dark-web marketplace for drugs and other illicit dealings, was established using Bitcoin as its payment method, and Bitcoin gained value. The currency drew praise in the U.S., but China banned its usage. By 2014, Bitcoin’s value had swung up and down drastically multiple times, and the price stood around $700.

Over the next two to three years, Bitcoin’s price repeatedly dropped to under $200 and rose to over $2,000. Mt. Gox, the largest Bitcoin exchange, was shut down, and yet Bitcoin recovered. More and more major companies began recognizing and accepting Bitcoin, and banks and hedge funds began researching the technology. Finally, in 2017, with the advent of Initial Coin Offerings, or ICOs (a way to raise money for a new cryptocurrency), Bitcoin took off. Currently, its value is nearing $10,000.

Based on this history, Bitcoin can, at the very least, be viewed as a risky asset. With no intrinsic value and many obstacles in its way, Bitcoin has an uphill climb ahead of it. That said, people have bet their life savings on Bitcoin’s success, and for many, its rise is a vindication of their optimism. Bitcoin has a chance to become the currency of the world, and if that happens, those who got in early stand to make a lot of money. If Bitcoin crashes, however, thousands of people’s life savings will go down the drain, and the ripple effects may be enormous. For now, it is too soon to tell what will happen, but expect many ups and downs before any clear future reveals itself.

http://www.businessinsider.com/bitcoin-price-correlation-google-search-2017-9

https://techcrunch.com/2017/11/20/bitcoin-just-passed-8000/

http://money.cnn.com/infographic/technology/what-is-bitcoin/

https://seekingalpha.com/article/4127443-bitcoin-big-short-moment-approaches

https://hackernoon.com/everything-you-need-to-know-about-bitcoins-timeline-in-4-minutes-244a412b9455

https://www.forbes.com/sites/laurashin/2017/10/23/will-this-battle-for-the-soul-of-bitcoin-destroy-it/#4e12531d3d3c

Universal Healthcare: The Business Case for Reform by Alfredo Garcia Sanchez C’21 W’21

Last month, Chris Conover, a fellow at the American Enterprise Institute, published a series of articles in Forbes entitled “The [Reasons] Why Bernie Sanders’ Medicare-for-All Single-Payer Plan is a Singularly Bad Idea” [1]. The title is so aggressive that one would think the legislation proposed nuking the moon or banning the sale of pants. Universal healthcare has been the subject of intense scrutiny and interest in recent years, and is consequently the target of much ire and suspicion. What has long been considered a staple of other modern democratic countries is still viewed as dangerous and unfeasible in the land of the free and, increasingly, the home of the sick.

According to the Organization for Economic Cooperation and Development (OECD), the United States spends almost $10,000 per capita on healthcare, followed by Switzerland and Germany at around $6,000 and $5,000, respectively. One might be justified in defending such prodigious spending if it delivered spectacular outcomes. Unfortunately, the current US system suffers from relatively high infant mortality rates, a low number of hospital beds per capita, and low life expectancy, among a myriad of other disappointing statistics. In almost every ranking of healthcare systems by country, the US lies near or at the bottom of the pack of developed nations [2].

This flagrant inefficiency, which has persisted for decades, would appall any reasonable businessperson. America allocates more than 30 percent of total healthcare spending solely to administrative costs; in Canada, that figure is only 16.7 percent [3]. Many deride government-run healthcare as a massive, overreaching bureaucracy, ignoring the fact that the private health insurance system is arguably an equally large, if not larger, agglomeration that ultimately contributes less than its fair share to the US economy.

More than half of Americans currently rely on employer-supplied health insurance for their medical needs. This constitutes a rather large expenditure for many companies, especially smaller firms. While most Americans on employer-supplied healthcare are satisfied with their coverage, it comes at the expense of many businesses’ bottom lines. In total, American businesses spend $620 billion every year on healthcare, and 80 percent of CFOs surveyed in a recent Harris poll agreed that “healthcare costs drain company resources that could be better used elsewhere.” Struggling companies often resort to cutting healthcare benefits entirely, leaving thousands of workers without coverage [4].

But companies are not the only affected parties. A recent paper published by the Washington College of Law at American University identified a phenomenon dubbed “job lock,” whereby American workers base much of their career decision-making on the stability of their healthcare arrangements. Workers will often refuse to leave or change jobs for fear of losing health insurance, producing inefficiencies as people gravitate towards less suitable jobs or abandon entrepreneurial endeavors. Studies have found that “job lock” makes employees 60 percent less likely to leave their jobs and decreases the rate of self-employment [3]. This reduction in mobility, estimated at upwards of 20 percent, leaves the US at a great disadvantage, especially in the development of new industries and the efficient allocation of resources.

Under a single-payer system, companies would foot a portion of the bill through payroll taxes, though many would likely enjoy reduced costs overall as group insurance premiums were eliminated. The same idea extends to the majority of private individuals, who might hand over slightly more to Uncle Sam but would ultimately save money after all is said and done. The savings extend to the national scale: healthcare spending currently occupies a massive 18 percent of US GDP, while in countries providing universal coverage that figure stands at an average of only 11 percent [5]. For the fiscally conscious, these are savings that should not be overlooked.
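
As a back-of-envelope illustration of what that seven-point gap implies, here is a quick calculation; the GDP figure is an assumed round number (roughly the 2017 level), not taken from the article’s sources.

```python
# Rough, illustrative arithmetic only; the GDP value is an assumption.
gdp = 19.0e12            # assumed US GDP, ~2017 (USD)
current_share = 0.18     # US healthcare spending as a share of GDP
universal_share = 0.11   # average share in universal-coverage countries

implied_savings = gdp * (current_share - universal_share)
print(f"Implied annual savings: ${implied_savings / 1e12:.2f} trillion")
# -> Implied annual savings: $1.33 trillion
```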

If America really wants to fulfill its ideal as a bastion of entrepreneurship and free markets, then it should eliminate the one issue that has for too long been an underproductive appendage to the economy. With average approval of around 60 percent (and rising) for universal healthcare among polled Americans, this is a change that only the most reactionary of politicians can afford to openly oppose [6]. Any American worth his or her salt would jump at the opportunity to lead the world in anything - healthcare should not be an exception.

1. https://www.forbes.com/sites/theapothecary/2017/09/28/the-1-reason-bernie-sanders-medicare-for-all-single-payer-plan-is-a-singularly-bad-idea/#138e53ad5502

2. http://www.latimes.com/nation/la-na-healthcare-comparison-20170715-htmlstory.html

3. http://digitalcommons.wcl.american.edu/cgi/viewcontent.cgi?article=1132&context=hlp

4. https://www.forbes.com/sites/castlight/2014/12/29/how-rising-healthcare-costs-make-american-businesses-less-competitive/#49456fed4f5f

5. http://www.theamericanconservative.com/articles/the-conservative-case-for-universal-healthcare/

6. http://news.gallup.com/poll/191504/majority-support-idea-fed-funded-healthcare-system.aspx

Want a Job? No Problem, 'John' by Kimaya Basu W'21

Perhaps somewhere pure meritocracies exist. Unfortunately, they haven’t yet come to America, at least judging by recent research on the bias to which people with certain names are subjected.

Research conducted by Corinne Moss-Racusin, a social psychologist at Skidmore College, determined that STEM professors from an array of universities were more willing to mentor a person named ‘John,’ even when ‘Jennifer’s’ resume was precisely the same. Moss-Racusin and her fellow researchers conducted the study by producing a single resume for a lab manager position. They then sent that resume to more than one hundred biology, chemistry, and physics professors at various institutions; each professor received it with either ‘John’s’ or ‘Jennifer’s’ name at the top. On average, ‘Jennifer’ was offered approximately $4,000 less for the same position and was generally considered less qualified. Shockingly, even the female professors who received these resumes exhibited a preference for ‘John’ over ‘Jennifer.’ The highly educated female professors were presumably unaware of their own prejudices, yet the biases persisted all the same.

After the data were collected, the surveyed professors were informed of the study team’s findings. They then took a diversity course designed by Professor Moss-Racusin and her colleagues, a central focus of which was discussing how potential future biases could be mitigated. Encouragingly, a follow-up survey conducted after the course indicated that gender biases had been reduced among participating scholars, suggesting a degree of efficacy.

There are a few frightening takeaways from this research. This inequitable state of affairs is not only unfair on an individual level; it also undermines scientific progress. Lingering prejudices might induce many women to abandon STEM fields, producing a smaller pool of actively engaged scientific talent.

There is, of course, more than just gender bias in the workforce; racial bias is well documented too. According to a study conducted by the National Bureau of Economic Research, job applicants with “white” sounding names needed to send 10 resumes on average to receive one interview callback, while those with “African-American” sounding names needed to send approximately 15 resumes to achieve the same result (a callback rate of 10 percent versus roughly 6.7 percent). Remarkably, having a white-sounding name was equivalent to an additional eight years of work experience in terms of callback rate!

The study involved sending out nearly 5,000 resumes in response to more than 1,300 employment ads in Chicago and Boston newspapers. Resumes with various levels of work experience and credentials were sent, with white or African-American sounding names assigned at random. Job opportunities ranged from cashier to sales management. While white-sounding names attached to stronger resumes yielded 30 percent more callbacks than weaker resumes with the same names, the magnitude of that boost was significantly smaller for African-American sounding names, implying that the marginal returns to education and work experience are lower for African-Americans than for whites.

While one might think this type of analysis would be unnerving only to people named ‘Jennifer’ or ‘Shaquille,’ it is equally concerning for the ‘Johns’ of the world. Successful ‘Johns’ are now forced to wonder whether their accomplishments are due as much to the name at the top of their resumes as to the credentials that follow.

Comparing American and European Luxury M&A Trends by Annabelle Williams W'20

The luxury market has increasingly trended towards consolidation among key industry players. Does this focus on M&A dilute brand value, or does it simply signify a change in corporate governance? Historically speaking, European luxury holding companies have been a fixture of the market for the past half-century. The first to employ large-scale luxury holdings was the French group Kering, under the leadership of Francois Pinault. The European model then gained steam in the 1990s, with major bidding wars by Kering and what was then Gucci Group for brands such as Fendi.

One of the major benefits of these consolidated models is their financial disclosure provisions. Individual cash flows segmented by brand are not required in disclosures, so the overall health of the conglomerate is the only public financial information. In other words, if one of the brands in a group’s portfolio is in the red, the holding company can apply the net income and retained earnings from another, more profitable brand to offset the losses and mask them from the public. Neither investors nor customers ever fully understand how much a particular brand is struggling.
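
A toy numerical sketch of that masking effect, with entirely hypothetical brands and figures (no actual disclosure is reproduced here):

```python
# Hypothetical segment results, in millions of USD. Under a consolidated
# model, only the group total below need appear in public filings.
segments = {
    "Brand A (struggling)": -120,
    "Brand B (profitable)": 450,
    "Brand C (profitable)": 210,
}

consolidated = sum(segments.values())
print(f"Reported consolidated income: ${consolidated}M")  # $540M
# Brand A's $120M loss is absorbed by its siblings and never disclosed.
```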

This disclosure advantage is appealing to American companies, which have long struggled to brand themselves as luxury fashion houses and have instead (as in the cases of Calvin Klein and Ralph Lauren) operated mostly in the apparel sector. American brands cannot lean on the longstanding heritage that European fashion houses claim.

The three major players in the consolidated luxury market are all based in Europe. France’s Kering (whose brands include Gucci, YSL, Alexander McQueen, and Puma), as mentioned above, came first, followed in short order by France’s LVMH (holding Louis Vuitton, Moet, Hennessy, Givenchy, and Christian Dior) and Switzerland’s Richemont (Cartier, Dunhill, Montblanc, and more).

Interestingly, the United States luxury market has staved off the European model of corporate consolidation. Instead, the US market is typically characterized by M&A activity like any other industry’s, with one established brand acquiring another. Take the recent example of Coach’s $2.4 billion acquisition of competitor Kate Spade, or its 2015 purchase of luxury shoe brand Stuart Weitzman. Another prominent acquisition came in July 2017, when Michael Kors acquired Jimmy Choo, the brand that rose to prominence for its distinctive high heels, thanks in part to Sex and the City’s Sarah Jessica Parker.

Where does the difference between these two consolidation strategies come into play? First, we must consider brand value. The European model preserves individual brand value because the large holding companies have no consumer-facing brand of their own to distort the ones they acquire, whereas American companies with already-established consumer perceptions increasingly acquire their direct competitors. Not all American businesses have ignored the European precedent, however. As Michael Kors CEO John Idol put it, speaking about the strategy of imitating Kering and others: “... we are really looking to build an international luxury company, and less so brands that ... have a greater reliance on wholesale than its own retail strategy.”

Both Michael Kors and Coach faced issues of brand devaluation through the early 2000s, with branding experts describing their approach as “accessible luxury.” Coach moved to dispel this perception first with its 2015 acquisition of Stuart Weitzman, and then courted millennial consumers, a demographic in which Kate Spade has significant market share, with its Kate Spade acquisition.

Another gauge of the success of M&A activity in luxury retail is stock ratings and their accompanying notes. The October 5th, 2017 ratings for Michael Kors and Coach provide a good snapshot of each corporation’s short-term direction. Following the deal’s announcement, Piper Jaffray downgraded Coach’s stock from Overweight to Neutral, citing the need for Coach to “digest” the Kate Spade brand and to restructure its business into a “house of brands.” This worried some investors because it potentially lengthens the time Coach needs to take a seat at the table with the major luxury market players. S&P rated Coach at the same level as Kors: BBB-, the lowest investment-grade rating possible.

As for Kors’ acquisition of Jimmy Choo, the early October ratings narrowly avoided junk territory at S&P, Moody’s, and Fitch; Kors only just cleared the lowest investment-grade classification! Clearly, ratings agencies and the sell-side are both unsure of the feasibility of the Kors/Jimmy Choo merger. But while ratings reveal the market’s short-term opinion of a company, they are only one component in predicting a merger’s future success, especially for mergers like these, with transformative effects on the luxury sector.

What does this consolidation mean for competition and oligopoly in the American and European luxury markets? One theory is that bidding wars for smaller, designer-owned labels may soon become the arena in which the luxury conglomerate giants compete. In turn, though, an increasingly oligopolistic market disadvantages smaller brands and companies.

Coach, if it plays its cards right, can use its new acquisitions to boost its status in the luxury market. Given the right branding and an increased emphasis on luxury, its handbags may well become a Veblen good, one for which demand rises as the price rises. However, if Coach fails to brand itself as true luxury and sticks to an ‘in–between,’ it may fail to compete effectively in either the luxury or the apparel market. Michael Kors, too, needs to be deft in its takeover of Jimmy Choo; the cachet of the latter should bolster its parent company, but success will depend on branding efforts for both the parent and its subsidiary.

Luxury brands are tricky to preserve. Conventional rules of consumer behavior and demand must be adjusted to suit a high-income, status-seeking market. But when a brand is maintained well, by both creative directors and business managers, dividends and valuations can be extremely high. Unsurprisingly, eight of the 100 companies on Interbrand’s 2017 Most Valuable Brands list fall into the luxury sector, a clear indication of the market power these companies possess.

Perhaps it’s time for American giants to truly come to the table in one of the most quintessentially European markets—luxury and high fashion. The Kors and Coach acquisitions should provide a basis on which to predict future successes for each would–be “house of brands,” and the American luxury market writ large.

References:

https://www.nytimes.com/2017/07/25/business/dealbook/jimmy-choo-michael-kors.html?_r=0

http://interbrand.com/best-brands/best-global-brands/methodology/

https://www.washingtonpost.com/business/economy/louis-vuitton-and-guccis-nightmares-come-true-wealthy-shoppers-dont-want-flashy-logos-anymore/2015/06/15/e521733c-fd97-11e4-833c-a2de05b6b2a4_story.html?utm_term=.496180289913m

https://www.bloomberg.com/news/articles/2017-08-15/coach-declines-after-kate-spade-acquisition-weighs-on-forecast

http://www.npr.org/sections/thetwo-way/2017/07/25/539202233/michael-kors-to-acquire-jimmy-choo-in-1-2-billion-deal

https://www.moodys.com/research/Moodys-Global-luxury-retailers-earnings-growth-could-double-in-2017--PR_367277

https://www.moodys.com/researchdocumentcontentpage.aspx?docid=PBC_1070148

https://www.forbes.com/sites/stevenbarr/2017/02/03/consumer-markets-are-ma-poised-for-growth-in-2017/#57270e8138ae

https://www.atkearney.de/documents/856314/12706482/A.T.+Kearney+Consumer+and+Retail+MA+Executive+Report+2017.pdf/f1e229f1-a8ab-4be1-9ad0-65605d0b85c7

http://www.vogue.co.uk/article/valentino-parent-company-mayhoola-to-acquire-balmain

http://intrepidib.com/wp-content/uploads/2016/08/ApparelRetailMAReportFeb16.pdf

https://www.forbes.com/powerful-brands/list/#tab:rank_industry:Luxury

https://www.bloomberg.com/gadfly/articles/2017-07-25/jimmy-choo-walks-all-over-michael-kors-in-1-4-bln-sale-gadfly

https://www.reuters.com/article/us-jimmy-choo-m-a-kors-strategy/kors-needs-to-buckle-down-for-jimmy-choo-deal-to-shine-idUSKBN1AD22S

Semiconductors Reimagined by Jacob Bloch W'19

Society is hungry. It wants information, pleasure, and access to the things it values instantaneously. It wants to analyze enormous amounts of data. What was deemed a speedy route a moment ago becomes sluggish and now has to go faster; if it doesn’t, the consequences could be disastrous. Moore’s Law holds that the number of transistors on a chip doubles at a regular interval, putting computing performance on an exponential growth curve. Society expects Moore’s Law never to falter. The notion that technology might slow its pace is a predicament best avoided.

Yet transistors, the diminutive switches that form the infrastructure of computer processors, have gotten so miraculously small and efficient that they seemingly cannot be improved upon. Society may soon face the dim prospect of super-computers that have reached the upper limits of their computational power for lack of further advances in transistors. Data centers, molecular dynamics, artificially intelligent “brains,” weather forecasting, climate-change modeling, drug design, and 3D nuclear test simulations that rely on supercomputers may cease to inspire confidence, inhibited by the technological parameters of early 2017.

Imagine the processor as an engine and the transistors as the cylinders that drive it. To increase processing power, the solution is to pack more cylinders into the engine. This is precisely what makes today’s iPhone 7, whose A10 processor holds 3.3 billion transistors, faster than the ‘TRADIC,’ the first American transistorized computer, which packed 800 transistors into three cubic feet.

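A quick back-of-envelope calculation shows roughly how many doublings separate those two machines; the dates are approximations on my part (TRADIC dates to the mid-1950s), not figures from the article.

```python
import math

tradic_transistors = 800      # TRADIC, mid-1950s (assumed 1955)
a10_transistors = 3.3e9       # Apple A10, 2016

doublings = math.log2(a10_transistors / tradic_transistors)
years = 2016 - 1955
print(f"{doublings:.0f} doublings in {years} years "
      f"-> one every {years / doublings:.1f} years")
# -> 22 doublings in 61 years -> one every 2.8 years
```
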
Processing power has increased enormously since Dr. Gordon Moore, Intel’s co-founder, established the earliest version of Moore’s Law, declaring in 1965 that transistors were shrinking at such a rate that twice as many could fit in a computer processor every year. A decade later, Moore revised the doubling period to every two years; Silicon Valley, however, has fervently maintained that the exponential growth never slowed.

Yet Silicon Valley may be running out of counterarguments. The industry is reaching the final frontier of transistor technology for several reasons. Producing these miniature transistors is so expensive that only a few deep-pocketed companies can still compete; the number of leading-edge manufacturers has dwindled from about twenty in 2000 to just four today: Intel, TSMC, GlobalFoundries, and Samsung.

Furthermore, even with the robust financial resources needed to miniaturize further, transistors are unlikely to keep shrinking, because the laws of physics cannot be defied. When transistors reached the 90 nm scale, the industry discovered that the transistor gates were becoming so thin that electric current was leaking into the substrate. In other words, the electrons that carry a transistor’s current are tempted to “jump” into the surrounding material unless there is some way to insulate them. The smallest transistors today range between 14 and 22 nanometers, and shrinking them further would only increase the number of electrons leaking into the substrate. The only fully effective way to insulate the electrons at this microscopic scale is to block any movement of electrons; in other words, to keep the insulator at a temperature close to absolute zero. Such temperatures have been achieved only in laboratory settings, so incorporating that kind of insulation into transistor technology is implausible, halting its development.

The industry today recognizes that the ceiling has been hit. Intel has repeatedly delayed the release of its newest technology, with more time between subsequent “generations” of upgraded products, and has even delayed specific launches, such as its 10 nm transistor. Intel’s Chief of Manufacturing, William Holt, noted in February 2016 that Intel will have to move away from silicon transistors in about four years and that “the new technology will be fundamentally different,” though he admitted that silicon’s successor is not yet established.

Even the introduction of transistors in the 22 to 14 nm range has been contingent upon a radical redesign. While the transistors of the past were flat, the Tri-Gate transistor takes a three-dimensional approach: instead of current-carrying channels lying flat beneath the gates, the channels rise up vertically so the gate can wrap around them. This constitutes a disruption to transistor manufacturing because it delays silicon’s replacement for a few more generations, according to Mark Bohr, director of Intel’s Technology and Manufacturing Group. It is simultaneously a step toward even smaller and more energy-efficient transistors, like the 10 nm transistor Intel had hoped to release in late 2016 or early 2017 before being set back by difficulties.

The International Technology Roadmap for Semiconductors has been published almost annually since 1993 by semiconductor industry experts from across the globe. In its most recent report, published in 2015, the group forecasts that producing chips in their current form will no longer be economically viable by 2021. The industry will require another disruption, whether in new areas like photonics and carbon transistors or in further reformulations of current transistor designs.

This type of massive technological disruption, which entirely reinvents product manufacturing, matters greatly for software development. Neil Thompson, an assistant professor at the MIT Sloan School, affirmed that “one of the biggest benefits of Moore’s Law is as a coordination device. I know that in two years we can count on this amount of power and that I can develop this functionality - and if you’re Intel you know that people are developing for that and that there’s going to be a market for a new chip.” Without reassurance that Moore’s Law will continue, software development that relies upon that confidence is impeded.

A technological hypothesis that hangs in the balance of evolving transistors is the singularity: the theoretical future moment when developments in artificial intelligence produce sentient, autonomous computer beings. Delays in transistor development mean that progress toward it is likewise paused; perhaps this is a positive result. Should the singularity occur, it could give rise to a rival class of beings more intelligent and cunning than humans. Computer engineers who are incrementally advancing toward it ought to pause and weigh the consequences, yet at the industry’s current pace it is unlikely that the individuals forwarding these innovations are grappling with the implications of their choices. It would be prudent, even necessary, for those hastening the singularity to take advantage of the inevitable delay in transistor development to examine the ramifications of their actions.

Digital Governance in the Gulf by Jonathan Lahdo (C'20 W'20)

The Middle East continues to grow ever more important on the global stage in numerous fields. The Gulf countries are leading this revolution and sit at the forefront of innovation and development in the region, due in no small part to the increasing digitisation of these countries’ governments.

Increasing the availability of public services through electronic means has allowed the Gulf states to capitalize on a general trend towards increased investment in digitisation initiatives, and not only to create positive change in the present, but also to explore options for devising solutions to future problems.

Facilitating Business Development

One of the largest and most obvious areas in which increased governmental digitisation pays off is trade and the economy at large. In the United Arab Emirates, a strong start-up culture exists in which entrepreneurs have flourished. Uber competitor Careem, founded in Dubai in 2013, has not only maintained its dominance over the larger global player in the UAE, but also “expanded in 26 cities across the Middle East & North Africa (MENA) as well as Pakistan” after raising “a total of $71.7 million in funding.” In the e-commerce market, a nascent industry in the Middle East that has still not achieved the ubiquity it enjoys in other parts of the world, Souq.com dominates, having started in Dubai and gone on to become the region’s “first unicorn” [1].

To further stimulate business creation and small-to-medium enterprise growth, the Emirati government has taken several steps to streamline registration processes through digital channels. The Department of Economic Development (DED), for example, signed a memorandum of understanding (MoU) with Servcorp, an Australia-based company specialising in serviced and virtual office solutions, that allows the firm’s clients “to complete their business transactions within the least possible time.” By partnering with a company that works with both start-ups and large enterprises worldwide, the DED is leveraging a public-private partnership that can help businesses begin in Dubai with Servcorp and its MoU-conferred efficient services, including “trade name reservation, renewal of reserved trade name, license renewal, and initial approval,” among others, before expanding outwards [2].

Developing Location Infrastructure

From an infrastructural perspective, a common theme in the Gulf countries is rapid growth that outpaces urban planning. A recent initiative by the UAE government highlights how digitisation can ameliorate such fundamental issues. The average UAE resident cannot navigate solely by street names, and often has to rely on nearby landmarks to reach a desired destination. The implications for the general public are obvious: it can be difficult to get around, whether to a friend’s house or a specific building. For businesses, the effects can be dramatic: inefficiencies due to difficult navigation can keep a company from turning a profit. Furthermore, with the rise of delivery businesses like Talabat.com, the popular “online food ordering service operating across the [Gulf Cooperation Council which] hit a record-breaking 100,000 orders on February 3, 2017,” [3] having a solid location infrastructure has never been so important.

The Emirati government’s response was Makani (meaning “my place” in Arabic), a “smart mapping system that was initially launched to help the delivery-industry, in addition to first emergency responders and courier services, locate residents’ homes;” it also aims “to have all the buildings in the UAE installed with a plate, displaying the location’s 10-digit geo-coordinates,” according to Abdul Hakim Malek, director of the Geographic Information Systems (GIS) Department at Dubai Municipality [4]. In addition to its own proprietary app, the government team behind Makani continues to work on integrating it with popular navigation apps Google Maps and HERE Maps to make the service more accessible to everyone.
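
Makani’s actual encoding is proprietary, but a toy sketch can show how a fixed-length numeric code can stand in for coordinates: divide a bounding box into a fine grid and number the cells. Everything below (the bounding box, the grid resolution, the split into two 5-digit indices) is a hypothetical illustration, not Makani’s real scheme.

```python
# Hypothetical 10-digit location code: 5 digits of latitude index plus
# 5 digits of longitude index over an assumed UAE-area bounding box.
LAT_MIN, LAT_MAX = 22.0, 26.5     # assumed bounds, degrees
LON_MIN, LON_MAX = 51.0, 56.5
CELLS = 100_000                   # 5 digits -> 100,000 cells per axis

def encode(lat: float, lon: float) -> str:
    """Map a coordinate to a 10-digit cell code (cells of ~5 m here)."""
    i = int((lat - LAT_MIN) / (LAT_MAX - LAT_MIN) * (CELLS - 1))
    j = int((lon - LON_MIN) / (LON_MAX - LON_MIN) * (CELLS - 1))
    return f"{i:05d}{j:05d}"

def decode(code: str) -> tuple:
    """Return the approximate center coordinate of a cell code."""
    i, j = int(code[:5]), int(code[5:])
    lat = LAT_MIN + (i + 0.5) / CELLS * (LAT_MAX - LAT_MIN)
    lon = LON_MIN + (j + 0.5) / CELLS * (LON_MAX - LON_MIN)
    return lat, lon

code = encode(25.2048, 55.2708)   # roughly downtown Dubai
print(code, decode(code))
```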

Solving Problems Digitally

Looking forward, the governments of the Gulf states are working both to head off potential threats and to alleviate current problems through digital means.

The most recent Interpol Digital Currency Conference, for example, was held in the Qatari capital Doha and was organised by the Qatar National Anti-Money Laundering and Terrorism Financing Committee, a unit of the emirate’s central bank. Deputy central bank governor Sheikh Fahad Faisal Al-Thani was quoted as saying “We expect from this conference to contribute in enhancing the capacity of the relevant competent authorities in conducting investigations in any crimes related to virtual currencies; and in establishing a network of practitioners and experts of this field,” a testament to the Qatari government’s dedication to digital financial security.

Elsewhere in the digital finance sphere, a key area in which the Middle East continues to lag is venture capital, and more specifically digital venture capital. There is evidently strong interest in tech start-ups, with “half a dozen tech start-ups in the MENA region today [being] valued at more than USD 100 million each, and investors sunk more than USD 750 million into MENA tech start-ups from 2013 to 2015.” These could be the next generation of digital unicorns, of which “the Middle East claims [only] 1 out of 200 globally;” but given that “relative to GDP, the Middle East has only 10 percent of the VC funding of the United States,” it is clear that the region’s governments need to do more to encourage digital investment. In general, the strong culture and prevalence of family businesses in the region are a major barrier to the growth of venture capital, but through further modernisation and digitisation the region’s governments will be able to tap the potential of a sector that could further boost their economies [5].

Reflecting and Progressing

It is evident that in the Arabian Gulf, many strides have been taken to further digital development. The investments of the region’s governments in electronic initiatives have bolstered their economies, created an accessible environment for exciting new start-ups, and addressed logistical issues in location infrastructure and urban planning, to name a few examples.

Nevertheless, there are many untapped opportunities and potential areas of growth for these governments to focus on. In the future, their priorities will include a strong push for widespread adoption of the innovative services they have developed, in addition to further research on and investment in pressing issues like cybersecurity and digital finance.

1.     Suparna Dutt D’Cunha, “Is Dubai The Next Big Tech Startup Hub?”, Forbes, 22nd August, 2016

2.     Tamara Pupic, “DED and Servcorp to ease company formation in Dubai”, Arabian Business, 2nd June, 2015

3.     Claudie De Brito, “Talabat.com hits 100,000 orders in a day,” HotelierMiddleEast.com, 12th February, 2017

4.     Mariam M. Al Serkal, “All UAE buildings to get Makani coordinates,” Gulf News, 13th April, 2016, updated 24th February, 2017

5.     Enrico Beni, Tarek Elmasry, Jigar Patel, and Jan Peter aus dem Moore, “Digital Middle East: Transforming the region into a leading digital economy,” McKinsey & Company, October, 2016

Telemedicine: The Future of Healthcare? By Jonathan Silverman W'19

Smartphone functionality has skyrocketed over the past decade: Snapchat, Uber, celebrities on Instagram, and…sending one’s blood pressure to the doctor?! Recently, increasing numbers of US healthcare consumers have been pivoting towards services that fulfill their medical needs without their ever having to leave the comfort of their own bedrooms. Drawing upon the convenience, speed, and economic efficiency of the Internet, this range of services – aptly categorized as “telemedicine” – constitutes a rapidly expanding segment of healthcare that has vendors and consumers alike scrambling to implement new initiatives as they try to forecast telemedicine’s long-term effects on the broader healthcare industry. By leveraging the prevalence of smartphones and connectivity, doctors can now communicate with patients via webcam, immediately consult a digital network of disease specialists, and issue prescriptions over email; it also helps that these services are often priced lower than live, in-person substitutes.

While disrupting the traditional doctor-patient model that has typified centuries of medicine, telemedicine has emerged as a dominant force in healthcare for a variety of reasons. Specifically, telemedicine has been linked to fewer hospital readmissions, lower medical costs, improved accessibility for rural patients, and favorable levels of care. Its popularity has become so widespread that, according to James Tong, a mobile health lead and engagement manager at QuintilesIMS (the nation’s largest vendor of healthcare information), recent studies have shown that two out of every three Americans are willing to use technological devices to supplement traditional health methods.[1]

Furthermore, telemedicine has proven capable of optimizing patient outflow at typically overcrowded hospitals. According to a recently published article in Clinical Infectious Diseases, telemedicine “promotes more efficient use of hospital beds, resulting in cost savings,” as well as upsides for patients themselves; the authors note a correlation between at-home medical care and the rapidity of patient convalescence.[2]

However, the road to telemedicine’s institutionalization has not been an easy one. With practicing physicians treating patients as far as 6,000 miles away, telemedicine’s array of ambiguities has challenged longstanding definitions in the healthcare industry, such as the notions of medical malpractice and physician licensure, and lends itself to a host of legal ramifications. For example, at the renowned Mayo Clinic, doctors treating out-of-state patients follow up with emails and video consultations, yet may only do so regarding matters that were initially discussed in person.[3] Designed to preempt potential issues, such policies highlight the hesitancy of key healthcare organizations in their assessment of telemedicine’s future.

Nonetheless, despite the obstacles and doubts surrounding telemedicine’s quality and feasibility, many of the US’ major healthcare players have made strong moves pushing for expanded telemedicine coverage and service. Insurers such as Anthem and UnitedHealth Group have begun offering their own direct consumer-to-virtual-doctor consultations, bypassing traditional medical channels, and Johns Hopkins Medicine and Stanford Medical Center have introduced their own digital consultation services. As one doctor from the Cleveland Clinic stated in the Wall Street Journal: “This will open up a world of relationships across a spectrum of health-care providers that we haven’t seen to date.”[4]

[1] http://health.usnews.com/health-news/hospital-of-tomorrow/articles/2015/10/19/telehealth-is-changing-patient-care-now

[2] https://academic.oup.com/cid/article/51/Supplement_2/S224/383896/Telemedicine-The-Future-of-Outpatient-Therapy

[3] https://www.wsj.com/articles/how-telemedicine-is-transforming-health-care-1466993402

[4] https://www.wsj.com/articles/how-telemedicine-is-transforming-health-care-1466993402

 

What Comes After the Donald Trump Market Rally? by David Cahn W'17 ENG'17

The stock market has hit record highs since Donald Trump was elected in November. By mid-February, this exuberance appeared to have calmed, before a “third wave” of the Trump rally took hold, driving markets even higher this week.

How should we be interpreting this rally?

One view says that the Trump rally is being driven by fundamental value. After all, Trump claims he’ll renegotiate trade deals in America’s favor, lower corporate taxes, deregulate Wall Street, and reinvigorate the U.S. economy. If we believe he’ll deliver on these promises, then the rally is arguably justified.

Bears have been crying wolf for weeks now, to no avail. MarketWatch has cited numerous reasons to worry: the S&P’s average P/E ratio is now 21, the highest level since 2009; the CBOE Volatility Index (Wall Street’s “fear index”) is “suspiciously low”; and we appear to be in a “late leg” of the economic cycle. Even if Donald Trump is America’s most pro-business president, it’s unclear whether his first few weeks in office justify the $3.1Tn in market value created since November.
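To put that multiple in context (a back-of-the-envelope illustration, not a figure from the MarketWatch piece): the inverse of the P/E ratio is the earnings yield, the annual earnings an investor receives per dollar paid for the stock:

\[ \text{Earnings yield} = \frac{1}{P/E} = \frac{1}{21} \approx 4.8\% \]

Against a long-run S&P average P/E of roughly 15-16, which implies an earnings yield near 6.5%, buyers at today’s prices accept noticeably less earnings per dollar invested.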

Even as everyone wonders when the euphoria will end, bears are getting trounced as the market inches higher and higher. As is often the case, it’s hard to predict when a rally will tip over into a downturn.

Responsible Financial Media: Lessons from the Internet Bubble by Andrew Howard W'20

The Internet bubble of the late 1990s, otherwise known as the “Dot-com Bubble,” erased billions of dollars from the economy seemingly overnight. The NASDAQ, a market index heavy in technology and biotech companies, fell nearly 4,000 points from March 2000 through October 2002. The collapse revealed how a mass of mispricing, accounting fraud, and unjustifiable promotion of digital products and services brought down the entire technology sector. The crash was so far-reaching that even successful companies like Amazon and Cisco lost nearly 90% of their market value, dragged down by the plethora of overvalued companies that had never earned a single dollar for their investors. Many companies with artificially inflated prices proved effectively worthless: boo.com, pets.com, Nortel, and even startup.com all had market capitalizations of essentially zero after the crash.

 

How was it possible that such companies were valued at billions of dollars before turning any profit? Why were institutional and individual investors incentivized to back companies they could not understand? One proposed answer pinpoints the popularization of financial media as a leading culprit. Indeed, Bloomberg Television and CNBC rose to prominence in the 1990s, signaling financial media’s growing importance just as the bubble inflated.

 

Within the financial media, columnists and reporters held the power to move a stock’s market capitalization simply by adding it to a “buy” or “sell” list. During this period, when a reporter published a purchase recommendation for an internet company - regardless of its actual merit - average investors with limited information would bid up its price. The work of behavioral economists like Robert Shiller demonstrates that news media biases shape investor behavior and consumer sentiment: rational market behavior depends on accurate information. When the media ignores its assumed responsibility to its readership, Shiller argues, the market cannot correct overvalued companies, and prices spiral out of control.

 

It is no coincidence, therefore, that the financial media published more stories about technology companies during the Dot-com bubble than about all other industries combined.[1] As a corollary, two new financial publications - Business 2.0 and Red Herring - launched and shuttered within the span of the Internet bubble, from 1996 to 2002, highlighting the unstable dynamic cultivated by the emergent demand for financial reporting. Even mainstream organizations like CNBC and the Wall Street Journal increased their coverage of IPOs by nearly 1000% relative to the actual number of new companies going public at the time. More troubling than the sheer volume of financial news, however, was the magnitude of its measurable impact on market capitalization. Average investors depended on widely read analysts to publish lists of stock recommendations classifying companies as “buy,” “hold,” or “sell.” Alarmingly, during the 1996-2000 period, only 1% of published recommendations were “sell,” while 70% were “buy.” Two of the most famous analysts of the era, Morgan Stanley’s Mary Meeker and Merrill Lynch’s Henry Blodget, became business media celebrities for their market insights. Most troubling of all, investigations into compensation packages revealed that analysts received higher bonuses for “buy” ratings - highlighting the biases and external motivators of those issuing recommendations, and underscoring the role the news media played in inflating the bubble.

 

In a comparison of IPOs between Internet and non-Internet stocks over the four-year period, a study in the Journal of Financial and Quantitative Analysis made an astonishing discovery:

 

“Internet firms average a stunning 84% initial return during our sample period, more than twice the return for non-Internet firms. The Internet IPO sample also had a cumulative return of 2,016% from January 1, 1997, to March 24, 2000, whereas the non-Internet IPO sample had a return of only 370%. The difference is an astonishing 1,646%.”
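To unpack those figures (a worked note, not part of the study’s text): a cumulative return is simply the total growth of an initial investment over the period,

\[ R_{\text{cum}} = \frac{P_{\text{end}}}{P_{\text{start}}} - 1, \]

so a 2,016% cumulative return means every dollar invested in the Internet IPO sample on January 1, 1997 had grown to about $21.16 by March 24, 2000, while a dollar in the non-Internet sample grew to about $4.70.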

 

Outside the realm of IPOs, Internet stocks also enjoyed advantages tied to their classification alone. Research from Purdue University shows that a sample of 63 companies that changed their names to include Internet-related terms saw their stock prices rise an average of 125% relative to peers within five days. This finding suggests that non-technology companies could multiply their market capitalizations simply by placing a technology buzzword in their names.

 

Eventually, however, the supply of non-traditional, inexperienced, and less-informed investors ready to overpay for worthless stocks dried up, and with it, the artificial inflation of American technology valuations. The crash cost the average US household a shocking $63,500, and the subsequent recession cost millions of Americans their jobs. Henry Paulson of Goldman Sachs estimated that investors lost over $7 trillion in the devastation that began in March 2000.

 

Drawing lessons from this crash for contemporary markets, with private companies like Snapchat, Uber, and Airbnb achieving multi-billion-dollar valuations, it is reasonable to ask whether we are living in a similar bubble today. Indeed, these unicorns are no longer uncommon - according to the Wall Street Journal, at least 145 private companies have achieved valuations of over $1 billion! Additional signs of concern include declining access to venture capital and falling valuations across various industries. For example, Gawker recently leaked a Snapchat income statement for 2014: the supposedly $20-25 billion company reported just $3.1 million in revenue.
 

If we are to prevent another collapse, we must not forget the fundamentals of value investing when selecting stocks. Healthy skepticism, acknowledgment of risk, and a wide array of responsible financial reporting are necessary to protect the American public from the biases and misinformation of the financial news media. Without them, we may one day look back at Snapchat the way we view pets.com today. Investors, be careful out there. The market is irrational.

[1] Journal of Financial and Quantitative Analysis, 2009

A Net-Neutral Catch-22 by August Papas W'19

Water versus Netflix. Electricity versus YouTube streaming. While pragmatism might clearly delineate the relative importance of such things, the difference between traditionally defined utilities and media services is less clear when glimpsed through a legal lens. When a DC appeals court ruled in June to uphold the FCC’s 2015 reclassification of internet access as a public utility, it placed the router on par with any other fixture in the home. The analogy between what comes out of a faucet and what appears on a screen, however, elides a crucial distinction between government and private supply -- a distinction that has embroiled big-money telecom entities and legislators in a decade-long battle of public interest and lobbying dollars alike.

As the Federal Communications Commission laid out an initiative for zero-pricing for content creators, the custodians of internet infrastructure at which it was aimed responded in full financial force. The joint spending of Comcast, Verizon, and AT&T on Capitol Hill campaigning (some $44 million in 2014) focused heavily on net neutrality, and is only expected to grow as all eyes fix on the case heading to the Supreme Court.

These giants, essentially the conduits of all broadband-based media consumption, oppose legislation that would limit their ability to selectively speed up or slow down certain applications and price-discriminate between content providers on the basis of bandwidth demands. Under such rules, a provider could no longer induce streaming sources or online gaming platforms to pay for “fast lane” data by slackening delivery to the consumer until buffering and glitching degrade the application. Free-market proponents and telecom lobbyists agree: the revenue raked in from the platforms that can pay funds new infrastructure, to the ultimate benefit of digital networks at all levels.
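For readers curious about the mechanics, the “slow lane” being described is ordinary traffic shaping. The sketch below is a minimal, hypothetical token-bucket shaper in Python - an illustration of the general technique, not any ISP’s actual system. Traffic for a paying “fast lane” service draws from a bucket that refills quickly; everything else refills slowly, so its packets queue and the user sees buffering.

```python
import time

class TokenBucket:
    """Minimal token-bucket traffic shaper (illustrative sketch only)."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec    # how fast tokens refill
        self.capacity = burst_bytes       # maximum saved-up burst
        self.tokens = burst_bytes         # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Return True if the packet may be forwarded now."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                      # queue or drop: the user sees buffering

# Hypothetical tiers: a paying video service versus everyone else.
fast_lane = TokenBucket(rate_bytes_per_sec=1_250_000, burst_bytes=64_000)  # ~10 Mbit/s
slow_lane = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=16_000)    # ~1 Mbit/s
```

Net-neutrality rules, in effect, forbid keying the choice of bucket to who the content provider is or whether it has paid.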

Those favoring net neutrality argue this practice jeopardizes the unique democracy of the internet. Not even the next Kickstarter-to-be could hope to raise enough for optimal service delivery (read: customer satisfaction and business survival) in a discriminatory broadband sphere. The opposing side raises its own philosophical cases: how could remote surgery, in a possible future of telemedicine, ever be safe and reliable without a guaranteed, preferential data stream? Amidst the discourse, the verdict holds for now that 100 bits per second is 100 bits per second, be it a celebrity lifestyle blog or an international Skype call.

It may not be time for net-neutralists to toast victory quite yet, though. Comcast and others will certainly try to recoup lost content-side revenue by shifting aggressive pricing strategies to the consumer side of their channel. Indeed, tiered options, from high-price, high-capacity packages to inexpensive low-speed plans, already exist and are favored by voices on both sides of the debate. It is the worry of Hahn and Wallsten (2006), for example, that the regulation of tiered service packages will follow disastrous historical precedents. They point to the 1978 Natural Gas Policy Act, wherein an initial inventory of five tiers of natural gas for companies to offer publicly eventually multiplied into a cumbersome 28 categories upon closer regulatory consideration. Given the internet’s ascent to the status of commodity, the parallels are disconcerting. With the job of setting suitable tier prices now in the hands of regulators, content creators will likely lobby for special tiers suited to their applications.

Thus emerges a certain Catch-22. The public that pushed for an end to discrimination in the digital zone now faces the real possibility of an overly bureaucratic pricing scheme set against it. The moral: it is the responsibility of policymakers to proactively legislate a reasonable tiered system of internet access, and of consumers to temper their rose-tinted vision of a neutral net.

Abenomics & Japan’s Roller-Coaster Economy by Ayo Fagbemi W'17

Japan’s economy had a relatively good showing in the first quarter of 2016. The nation’s gross domestic product (GDP) grew an annualized 1.9% (or 0.5% on a quarterly basis), effectively rebounding from the previous quarter’s contraction and avoiding the consecutive quarters of negative GDP growth that define a recession. Private consumption, which is responsible for over 60% of the nation’s GDP, carried much of the load with 2.3% growth, led by higher spending on recreation and dining. Public consumption, accountable for about 20% of GDP, also chipped in with a 2.6% climb. While these numbers appear encouraging, the leap year of 2016 adds a little fluff to them: as trivial as it may sound, that extra day in February gave Japanese consumers extra time to spend their money and keep the economy ticking, so the Q1 figures deserve a grain of salt.
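A quick note on how the quarterly and annualized figures relate (the arithmetic is mine, not the government release’s): the annualized rate compounds the quarterly growth rate over four quarters,

\[ g_{\text{annualized}} = (1 + g_{\text{quarterly}})^4 - 1, \]

so the rounded 0.5% quarterly figure compounds to roughly (1.005)^4 - 1 ≈ 2.0%; the published 1.9% reflects an unrounded quarterly rate closer to 0.47%.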

The news of Japan’s recent economic progress is further soured when the previous quarter’s results are taken into account. In the fourth quarter of 2015, Japan’s economy contracted 1.7%, even worse than earlier estimates of a 1.1% decrease. The major culprit was, predictably, faltering private consumption: consumers simply did not have the confidence to dig into their wallets, and private consumption fell by almost a percentage point. Combined with a sizable drop in public investment, this hesitation and inactivity stymied any hope for growth.

The opposing performances of these two most recent quarters reflect Japan’s volatile economic record since the start of the decade, still in the wake of the 2008 global financial crisis. From 2010 onwards, the economy has switched between quarters of expansion and contraction twelve times, and battled three recessions over the same span.

Many link Japan’s underperformance to Prime Minister Shinzō Abe and his policies, affectionately known as “Abenomics.” These have been characterized by a trifecta of fiscal stimulus (that is, government spending), monetary easing, and structural reform, in an effort to revitalize an economy that has struggled to find its footing. Within weeks of his December 2012 re-election, Mr. Abe charged up the fiscal defibrillator by announcing a massive stimulus bill of ¥10.3 trillion ($116 billion), with ¥3.1 trillion ($34.9 billion) intended to encourage private investment. On the monetary side, the Bank of Japan launched sizable quantitative easing in 2013, committing to annual purchases of ¥60 trillion to ¥70 trillion (roughly $600 billion to $700 billion) in bonds to expand the nation’s money supply. In the fall of 2014, the Bank doubled down on its efforts, increasing its annual bond purchases to ¥80 trillion (roughly $730 billion) with the goal of driving down the currency’s value and driving up market liquidity. Beyond government spending, Abenomics courted the private sector by lowering the corporate tax rate by 3.3 percentage points in 2015, aiming for fatter corporate profit margins, more private spending and investment, and new hiring. Abe sought to systematically reinvigorate multiple players in the economy, and in most eyes, he went to extreme lengths to do it.

An important indicator of economic health is inflation. Although commonly misconstrued as a bad thing (and certainly, hyperinflation is to be avoided), moderate inflation is associated with stable economic growth, especially when paired with encouraging growth in wages. Since the “Lost Decade” of the 1990s, however, Japan has struggled to approach that healthy level: for most of the 21st century, Japanese inflation has sat at or below zero, with GDP growth riding close to the zero line right alongside it. The Bank of Japan has set its sights on a 2% inflation target for the coming years, similar to the Fed’s.

Over the last four years, the yen has depreciated against the U.S. dollar by a staggering 41%. A less valuable yen would be expected to push prices upward: imports cost more in yen terms, while American goods, priced in a stronger dollar, become less competitive, giving Japanese manufacturers breathing room to raise prices. Inflation should follow.

So the question remains: Why hasn’t it?

Many of Abe’s initiatives, while ambitious and far-reaching, have failed to yield the intended results, most likely due to Japan’s structural deficiencies. The economy has not performed up to its capabilities since the 1990s, as indicated by the nation’s output gap, a measure of the difference between the economy’s actual output and its output at full potential. This slack still amounts to -1.6% of national GDP according to the Bank of Japan. Closing that gap is paramount to translating market inputs, such as a competitively priced currency, into a 2% inflation rate.
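For reference, the output gap is conventionally expressed as a share of potential output:

\[ \text{Output gap} = \frac{Y - Y^{*}}{Y^{*}} \times 100\%, \]

where Y is actual output and Y* is the economy’s full-capacity output. A gap of -1.6% therefore means Japan is producing about 1.6% less than it could; that slack suppresses wage and price pressure, which helps explain why even a sharply cheaper yen has not produced 2% inflation.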

Whether the Japanese economy is in the midst of a solid and substantial recovery depends on a few other things. Japanese legislators have to create an economic climate that encourages more private and public consumption, which, as illustrated above, makes up the bulk of the nation’s GDP. Prime Minister Abe seems to understand this: he recently decided to push back an increase in the country’s sales tax until 2019. A higher sales tax, while providing the government with more revenue and a way to pay down the national debt, reduces household wealth and consumers’ willingness to spend. The effects were already evident in 2014, when a premature tax hike stifled consumer spending and played a role in a mid-year recession. As Japan’s economy is presently constructed, plans for progress rest largely on the shoulders of consumers and spenders.

Hopes for Japan’s economic growth also rest on its exporters, and much of that has to do with the yen’s movement in the months to come. The yen has already appreciated 14% against the U.S. dollar this year, the biggest gain among developed-nation currencies (the recent Brexit vote certainly didn’t help, as investors scrambled into the safer yen and spiked the currency further upward). Correspondingly, the Tokyo Stock Price Index (Topix) has fallen over 20% year-to-date as Japanese exporters battle a rising yen and falling profits abroad. Some, however, don’t see the yen’s recent climb as an indication of its future performance. Economists at Goldman Sachs Group Inc. are looking for the yen to make an about-face and soften to 115 per dollar by August, and 125 per dollar within the next 12 months, a view that stems from Mr. Abe’s and the Bank of Japan’s desire to spark inflation by cheapening the yen and cutting interest rates. Of course, there are at least two sides to every exchange rate. The JPY/USD rate is also the product of the U.S.’s economic performance and Fed policy, both largely out of Japan’s control. The Fed’s slowly approaching second interest-rate hike is widely expected to strengthen the dollar, as the higher rate pulls in foreign investors and increases demand for the greenback (especially if other large players maintain their expansionary policies). Beyond the U.S., the currency movements of Japan’s other trading partners (particularly China) bear watching. Japan would benefit structurally from opening its economy to more trade and competition with these partners, bolstering its export economy (currently the fourth largest in the world). Abe has an opportunity to move in the right direction with Japan’s involvement in the still-pending Trans-Pacific Partnership.
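A note on the quoting convention, since “soften to 115” can read backwards (the starting level here is illustrative): USD/JPY counts yen per dollar, so a higher number means a weaker yen. A move from a hypothetical ¥102 per dollar to the forecast ¥115 would cut the yen’s dollar value by about 11%, since 102/115 - 1 ≈ -11%.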

Japan’s government must also contend with another major issue: its shrinking population. According to the national census, Japan has shed nearly a million people within the last five years, a considerable figure against a current population of 126.6 million. The population is also aging quickly, adding insult to injury: over 25% of residents are above the age of 65, the highest share in the world and far above the global figure of 10%. Left unabated, Japan’s population could fall to as low as 83 million by 2100, with the share of people aged 65 and above reaching 35%. The implications for the economy are large, as a shrinking working-age population eats away at national productivity. These issues demand major immigration and social reforms, and Abe has pledged to raise the fertility rate from 1.42 children per woman to 1.8 (a very tall order, according to experts) by expanding childcare opportunities and making it easier for women to take maternity leave.

It will require a lot of time, and perhaps some missteps, but with the right combination of fiscal policy and structural reform, Japan’s monetary policy can begin to pay greater dividends, supporting healthy inflation and jumpstarting the real growth Japan has long awaited.