Population Time Bomb: Aging in Japan, Europe, and North America by Alfredo Garcia Sanchez W'21 C'21

When one discusses the rapid aging of populations, the discussion quickly turns to Japan - and for good reason. As of 2016, more than a quarter of Japan’s population was older than 65, whereas in the US that figure stood at only 15%. As an extreme illustration of the situation, Business Insider has reported that for six consecutive years, adult diapers have outsold infant diapers in Japan.

Indeed, the situation in Japan is complicated, as many factors have contributed to this crisis, some of which stem from cultural and societal aspects unique to the country. Among them are, of course, attitudes regarding gender roles in the workplace and the rigid corporate structure to which younger generations have had to adapt. For example, workers have traditionally expected lifetime employment at a single corporation, increasing inflexibility in the labor market. Furthermore, Japanese employees work some of the longest hours in the world: the OECD reports that 22% of Japanese workers put in 50 or more hours per week, exactly double the rate in the US and nearly four times that in certain European countries. Unsurprisingly, this phenomenon has hampered the ability of young people to start and raise families, leading to what many have described as a fertility crisis. As a result, experts have gravely warned of an impending “demographic time bomb” - falling spending leads to a shrinking economy, which impedes the future growth of families, which depresses the economy further, and so on.

However, this phenomenon is not unique to Japan. While it is currently most serious there, most experts agree that many industrialized nations will face similar or worse problems in the near future. Populations in the Americas and Europe are aging rapidly. In Europe, birthrates have been at or below 2.1 children per woman - the threshold known as the replacement rate - for decades. Absent immigration, sustained fertility below that threshold eventually leads to population decline.
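To see how quickly below-replacement fertility compounds, consider a deliberately simplified projection. This sketch is illustrative only: it ignores immigration, mortality changes, and age structure, and the starting population and fertility rate are hypothetical, not figures for any actual country.

```python
# Illustrative only: a stylized projection of how sustained below-replacement
# fertility compounds into population decline, ignoring immigration and
# changes in mortality. The inputs are hypothetical.

REPLACEMENT_RATE = 2.1  # children per woman needed to hold a population steady

def project_generations(population, fertility_rate, generations):
    """Scale each successive generation by fertility relative to replacement."""
    sizes = [population]
    for _ in range(generations):
        sizes.append(sizes[-1] * fertility_rate / REPLACEMENT_RATE)
    return sizes

# A hypothetical country of 100 million with a fertility rate of 1.6:
sizes = project_generations(100_000_000, 1.6, generations=3)
for i, size in enumerate(sizes):
    print(f"Generation {i}: {size / 1e6:.1f} million")
```

Even a fertility rate of 1.6 - roughly where many European countries have hovered in recent years - shrinks each successive generation by about a quarter, which is why demographers treat sustained sub-replacement fertility as so alarming.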

The most obvious consequence of a shrinking population is a smaller active labor pool. While this might drive wages higher, it will come at the cost of diminished year-over-year economic growth. The problem is exacerbated by the decreased productivity of older workers, who will presumably have an even harder time as workplace technology advances far beyond the levels for which they were originally trained. Perhaps most importantly for Europe, aging populations will place an ever-increasing strain on social services such as healthcare and social security, which are generally quite extensive in these countries. As the proportion of working young people who pay into these systems decreases, such nations will face tough decisions about the best course of action to prevent the breakdown of the welfare system.

Regardless, in order to avoid an eventual economic catastrophe, Europe will have to prevent its population from decreasing, either through immigration or policies that incentivize families to have more children. However, in the near future, nations will have to deal with the additional challenge of a changing climate and potentially dwindling resources. Ideally, Europe will find a way to maintain its population. However, it will have to reconcile this objective with the question of whether supporting many more millions of people at a high standard of living is really sustainable. As developing nations’ populations continue to burgeon, Europe, Japan, and North America will all have to reckon with what this will entail.








US Job Growth by Kimaya Basu W'21

The February employment report could scarcely have been better from the perspective of multiple stakeholders. According to the Bureau of Labor Statistics’ preliminary February estimates, the nation added 313,000 net new positions over the course of the month. That would be remarkable for any month, but is particularly impressive for a month with only 28 days.  

Moreover, the labor force participation rate blossomed to 63.0 percent, up 30 basis points from 62.7 percent a month prior. While still low by historical standards, the entry of roughly 800,000 people into the labor force makes it more likely that U.S. employment will continue to expand without triggering worrisome inflation. The share of people aged 25-54 in the labor force is at its highest level since 2008, when the economy’s deterioration began to accelerate.

Perhaps most remarkably, the intensification of hiring was not associated with sharper wage pressures.  Even as the pace of employment growth picked up in February, average hourly earnings expansion on a year-over-year basis slipped to 2.6 percent.  The combination of strong job growth and indications of softer wage growth propelled the Dow Jones Industrial Average up 441 points on March 9th, the date of the release.

Jobseekers were also likely to view February’s employment report positively.  Hiring continues to be brisk and the official rate of unemployment remained at 4.1 percent despite the surge in labor force entry.  Construction, manufacturing, and a variety of service categories chipped in jobs. The construction sector added 61,000 net new jobs, the highest tally for that sector in any month since 2007.

Business owners are also likely to be comforted by evidence of ongoing economic expansion and the availability of additional labor.  It appears that the current economic recovery, now 104 months old and the third longest in American history, still has some room to run.   




Recovery for the Rich? by Zhe Lee C'20

Trump’s first year as president was largely controversial and divisive. His ambitious campaign promise to “Make America Great Again” has ostensibly been effective: US unemployment has dropped to a 45-year low, coupled with steady forecasted GDP growth of two to three percent and a Goldilocks economy - one with neither too much inflation nor deflation. If his effectiveness as president were assessed via economic indicators alone, Trump would appear to be doing relatively well, despite his lack of experience.

However, economic indicators considered in isolation are meaningless quantifiers. They might paint a rosy picture for political candidates to harp on, but they are insufficient to truly capture the impact on everyday Americans. In this case, have these healthy economic indicators actually led to a “greater America” for all?

One of Trump’s biggest acts upon assuming office was the Tax Cuts and Jobs Act, which slashed the corporate tax rate from 35 percent to 21 percent. The hope is that cutting taxes on business will stimulate the economy, producing a trickle-down effect that benefits the average American - a premise drawn largely from Reaganomics. The key consideration, however, is whether the boost in economic activity would offset the fall in revenue from the tax cuts. As the Laffer Curve suggests, such tax cuts only worked in the past because tax rates were in the prohibitive range. Modern tax rates are half of what they used to be, and relying blindly upon trickle-down economics to bring about the promised jobs might be nothing but a sham.

In fact, companies such as Cisco, Pfizer, and Coca-Cola have stated that they intend to pass the cost savings from the tax cuts to investors through increased dividends, as opposed to creating jobs for the American public. These tax cuts might therefore not bring about the jobs as promised, which returns us to the initial question: does this truly benefit the American public?

Taxation has always been a tricky business. After all, no one likes handing their hard-earned money to the government on the pretext of “improving our country.” However, much as it is loathed, taxation is a necessary evil, built upon a progressive and equitable system to redistribute wealth and minimize inequality - a system that stands contrary to the Tax Cuts and Jobs Act. Yes, in absolute figures, everyone seems to be on the winning end; we all get to keep more of our income. However, every choice carries an opportunity cost, and every seemingly “winning” decision comes with its fair share of consequences.

Reducing taxes across the board also reduces the taxes that the rich pay, and in absolute figures, the money that the wealthy contribute to government revenue decreases significantly. This fall in revenue not only worsens the government’s current fiscal deficit but also places an increasing burden on it to allocate funds sparingly for public needs such as education and housing. In fact, a report by a think tank found that taxpayers in the top one percent (those making over $730,000) would receive 20 percent of the total tax cuts, amounting to $37,000 each on average. In contrast, a typical American family would receive a tax cut of only $1,182. Furthermore, the repeal of the estate tax and the alternative minimum tax are two of the many other ways in which the rich benefit, ultimately, at the expense of the poor.

There is no doubt that the US economy appears to be doing well currently, despite potential negative future implications arising from Trump’s protectionist policies. However, the economic recovery, considered in its entirety, has largely been centered on the rich rather than truly reaching those who need it. If the conversation focuses only on the wealthy, how can this economic recovery truly be what makes America great again?

The American Opioid Crisis: A Comparative Economic Viewpoint by Annabelle Williams W'20

Studies estimate that the US opioid epidemic has cost the nation over $1 trillion since 2001. But why is the US by far the nation most affected by opioids?

America’s opioid epidemic is not just a public health crisis but a far–reaching economic drain with implications for businesses across borders. Nonetheless, the crisis is a uniquely American problem, according to studies citing the relative frequency of prescribed opioids as well as overdose deaths.

Supervised, or safe, injection sites are a contentious aspect of the opioid crisis debate with business implications. Canada has adopted supervised injection sites and enforced regulations on big pharma as a way to tackle the issue. In the US, Philadelphia and San Francisco are two cities leading the charge to open such sites, which employ medical professionals to distribute sanitary needles and provide care to drug users when they overdose.

Many argue that this practice is tantamount to government endorsement of addiction, because supervised injection sites require medical professionals to watch addicts shoot up. However, studies as well as real-world data from Canada have shown the efficacy of these sites. Moreover, they take a huge strain off the national economy and the languishing job markets of the American states most affected by the crisis. European countries like Portugal - which at the peak of its crisis in 1999 saw 1% of the population self-reporting addiction - have implemented safe injection sites in addition to broader policy solutions. The combination of safe injection sites and other healthcare policy structures has brought Portugal’s addiction rate down to roughly a quarter of the European average.

One interesting element of the American healthcare system is the degree to which companies hire reps to push particular drugs to doctors. This process inextricably links the opioid crisis to North American “big pharma”: the most egregious offender being Purdue Pharma. Purdue only decided to cut their direct–to–doctors sales force in February of 2018, despite the fact that opioids have been a hot–button issue for more than a year now, both politically and economically. This firm has been fundamentally entwined with the rise in opioid overdose and heroin–fentanyl use in the United States.

Then there is the question of marketing. There are only two countries in the world where pills can be advertised on television: the US and New Zealand. And the US privatized healthcare system has fewer non–pharmaceutical options for long–term pain management. This perfect storm of circumstances has allowed for American big pharma, in a way not possible in any other country, to commodify addiction.

Furthermore, pharmaceutical companies, such as those that produce Naloxone, an anti–overdose drug, have even commodified addiction treatment in a way that is effectively unprecedented in American public health. Indeed, The Nation reports that manufacturers have effectively priced Philadelphia’s local government out of buying the amount of Naloxone it needs to make a dent in neighborhoods like Kensington. It follows, then, that at least two niche markets are booming in large part from the commodification of addiction: Naloxone sales and morgue services.

This is an example of a public health issue with an economic cost unrivaled by anything except the human toll. The pharmaceutical stranglehold on the American health economy needs to end.
















The Book: Dead or Alive? by Lilli Leight C'19

The printed book is not dead. In 2016, The Publishing Association, the leading body of UK publishers, released a report showing e-book sales falling by 17% and a parallel rise in print book sales of 8%.[1] Oren Teicher, CEO of the American Booksellers Association, believes these results highlight that “there’s no getting away from the tangible delights of reading in a format that has remained relatively unchanged since the printer Aldus Manutius pioneered the portable, hand-held book.”[2]

He makes a great point. The concept of the ‘book’ has not changed since the mid-fifteenth century, when the Gutenberg Press made it possible for Manutius to invent the ‘book’ as we know it today.[3] The format has proved so enduring that, although the words may have moved from print to digital, e-readers today try to mimic printed books by using skeuomorphic techniques to make readers more comfortable with the new technology. Skeuomorphism refers to the practice of taking an element from an older technology and placing it in a new one.[4] For instance, when you turn the page of an e-book on an iPad, it looks like you are turning a real page. The programmers and designers behind e-books and e-readers knew that their audience was already accustomed to the printed book, and so they sought ways to mimic that style. Nevertheless, the introduction of the e-book in 1991 was a huge innovation that threatened the publishing industry’s very existence.[5]

The early 2010s were a trying time for publishing houses. Teicher credits this to a pervasive fear that digital publishing would replace the traditional role of a publishing house, including how books are distributed. Some feared that the technology might even lead to a takeover of the publishing business itself.[6] As we now see from recent data, those predictions were overstated. The e-book, while here to stay, will not fully replace its physical counterpart: stagnating sales in major markets such as Europe and slower-than-anticipated growth rates have led to a renaissance of the printed book.[7]

Why, then, are e-book sales plateauing while print sales are on the rise? Phil Stokes, head of PwC’s UK Entertainment and Media division, believes the resurgence of the physical book is not only due to its greater appeal, but also because certain genres are better suited for the print format. For example, most people prefer hardcover cookbooks. Stokes also notes the upsurge in adult coloring books, which lend themselves to print.[8] Allen Lindstrom, CFO of Barnes & Noble, reported that in 2016, 14 million adult coloring books were sold, compared with 10 million in 2015[9] and 1 million in 2014.[10] While the growth of adult coloring books is not sustainable due to market saturation, the popularity of the genre and rapid increases in sales are positive reflections of people’s continued enthusiasm for the printed book.  

Additionally, Brian O’Leary, the executive director of the Book Industry Study Group, points out that e-book prices have risen over the past few years and are now almost comparable to those of printed books.[11] Jonathan Stolper, Senior VP and Global Managing Director for Nielsen Book, a company that provides book data to publishing houses, explains that this is a result of the Big Five trade houses – Penguin Random House, HarperCollins, Simon & Schuster, Hachette, and Macmillan – increasing their e-book prices by about $3. This brought the average e-book to $8 and drove the prices of self-published e-books down to around $3.[12] Carolyn Reidy, President and CEO of Simon & Schuster, credits the rise in e-book prices to the increased number of e-reader platforms, like the Kindle, Nook, and iPad. The diversity of e-readers allows publishing houses to raise e-book prices to over $10 because e-books are licensed: if Amazon wants to provide an e-book to its customers, it needs to buy the rights from the respective publishing house, and a publisher could choose to withhold a book from Amazon.[13] This applies to Apple iBooks as well.[14] For consumers, if the e-book version costs the same as the printed version, why not just buy the preferred physical text?

Markus Dohle, CEO of Penguin Random House, highlights another reason that print books have remained so dominant: children’s books. At the Frankfurt Book Fair in October 2017, he explained that children’s and young adult genres have been the fastest-growing categories within the book market for the past ten years.[15] Similarly, Kristen McLean, the Director of New Business Development at Nielsen Book, announced at the 2016 Children’s Book Summit that the children’s book market has grown by 52% since 2004.[16] The young adult segment has also grown significantly, with adults comprising much of the genre’s readership: some estimates figure that as much as 70% of young adult book sales are made by individuals ages 18-64.[17] This wide age range illustrates the popularity of the genre. Since children’s books are the fastest-growing segment, many publishers feel that the future of the publishing industry is safe.

The publishing industry will continue to expand and innovate in order to keep up with technology. Publishers expect that audiobooks will experience high growth as they attempt to stay ahead of the e-book market.[18]  Today, both print books and e-books appear to be sustainable mediums, and thankfully, it looks as if printed books will remain an integral part of our world for the foreseeable future.





[1] https://www.theguardian.com/books/2017/may/14/how-real-books-trumped-ebooks-publishing-revival

[2] http://www.latimes.com/business/hiltzik/la-fi-hiltzik-ebooks-20170501-story.html

[3] Professor Whitney Trettien at the University of Pennsylvania. ENGL 210: The Art of the Book

[4] Ibid.

[5] https://www.theguardian.com/books/2002/jan/03/ebooks.technology

[6] http://www.latimes.com/business/hiltzik/la-fi-hiltzik-ebooks-20170501-story.html

[7] https://www.publishersweekly.com/pw/by-topic/international/Frankfurt-Book-Fair/article/75092-frankfurt-book-fair-2017-penguin-random-house-ceo-markus-dohle-s-full-remarks.html

[8] http://money.cnn.com/2017/04/27/media/ebooks-sales-real-books/index.html

[9] http://time.com/4689069/coloring-book-bubble-bursts/

[10] https://www.washingtonpost.com/business/economy/the-big-business-behind-the-adult-coloring-book-craze/2016/03/09/ccf241bc-da62-11e5-891a-4ed04f4213e8_story.html?utm_term=.4e3afa7468f5

[11] http://www.latimes.com/business/hiltzik/la-fi-hiltzik-ebooks-20170501-story.html

[12] https://www.publishersweekly.com/pw/by-topic/digital/retailing/article/72563-the-bad-news-about-e-books.html

[13] https://www.newyorker.com/magazine/2010/04/26/publish-or-perish

[14] https://www.vanityfair.com/news/business/2014/12/amazon-hachette-ebook-publishing

[15] https://www.publishersweekly.com/pw/by-topic/international/Frankfurt-Book-Fair/article/75092-frankfurt-book-fair-2017-penguin-random-house-ceo-markus-dohle-s-full-remarks.html

[16] http://www.bookweb.org/news/publishing-insights-nielsen-children’s-book-summit-34861

[17] https://www.thebalance.com/the-young-adult-book-market-2799954

[18] https://www.publishersweekly.com/pw/by-topic/industry-news/audio-books/article/72500-publishers-see-more-good-times-ahead-for-audiobooks.html

Upcredentialing by Kimaya Basu W'21 C'21

Orange is the new black, 50 is the new 40, and a college diploma is the new high school degree. ‘Upcredentialing’ has thus surreptitiously emerged as a significant workforce phenomenon. To a certain extent, this is attributable to the Great Recession. During the most recent downturn and its immediate aftermath, the United States lost a net 8.7 million jobs. Able to choose from a wide array of job-seekers, employers consequently raised the bar on required educational and experiential credentials. Since then, despite the addition of about 16 million jobs to the national economy, employers continue to require more credentials than are often necessary to fulfill the responsibilities of a particular position. After all, no human resource professional wants to be seen as dumbing down their incoming workforce.

There are many implications associated with this phenomenon, including the fact that you need to work harder and smarter than previous cohorts of graduates just to accomplish what they did. In other words, your degree is not the ticket to utopia that it may once have been. A recent study conducted by the Federal Reserve Bank of New York indicated that in Q4 2016, about 44% of recent college graduates were employed in jobs not requiring such degrees. While this is bad for some college graduates, it’s also not particularly good for employers. Indeed, while many employers report difficulties finding suitable talent to fill available job openings, this is in large measure due to their own upcredentialing practices. Furthermore, the people whom they hire are often anxious to leave such positions quickly, resulting in more turnover, higher retraining costs and less consistent service quality.

A study by Burning Glass confirms these unnecessary challenges. The study found that many employers prefer their employees to possess a bachelor’s degree, even though this often superfluous requirement means it takes nearly 33 extra days to fill a position than it would have taken without the requirement. According to the study, 65% of postings for executive secretaries and executive assistants now require a bachelor’s degree, yet only 19% of those currently employed in these occupational categories actually hold one.

While many point to deindustrialization and technology as explanations for the poor economic outcomes suffered by many high school graduates, upcredentialing certainly hasn’t helped these individuals. Those whose highest educational attainment is a high school degree are denied jobs they could perform, for lack of irrelevant credentials. From a macroeconomic perspective, the result is a smaller economy characterized by a large number of unfilled jobs, elevated turnover, a stubbornly low labor force participation rate, and a shortage of intrinsic motivation resulting from job-holders’ realization that they are overqualified.

As always, the solution to this problem may rest in entrepreneurship. The rate of business formation is down in America. If more college graduates frustrated by the nature of available positions would start their own businesses, this would potentially increase the total number of job opportunities, and likely free up positions for high school graduates who are too often locked out of opportunities in which they would otherwise thrive.

Paradise Papers and Tax Havens by Annabelle Williams W'20

Offshore investing has long been shrouded in secrecy. But last year’s “Panama Papers,” a 2.6 terabyte leak of documents from offshore investing firms, opened the floodgates. Early in November 2017 another major leak revealed 1.4 terabytes of information principally related to the activities of a British offshoring firm, Appleby, based in Bermuda (a well–known tax haven).

It’s important to note that the tax avoidance revealed in these papers, the Paradise Papers, is lawful. Avoidance is built into many Western tax codes, but the line between avoidance and evasion is blurry. The ethical considerations of offshore accounts, particularly in companies accused of human rights violations or other unethical practices, are what stand out in the Paradise Papers.

The involved parties are linked by one thing: money. Nike, Apple, the Queen, even Penn all invest in offshore funds. It comes as no surprise that the world’s 0.01% and its biggest multinational companies seek out these tax havens, notably in Bermuda and the Cayman Islands.

The continued leaks of financial information relating to offshore accounts pose many questions about the U.S., national tax laws, and the ethical nature of “shadow money,” much of which is placed in trusts in order to obscure investors’ identities. The shadowy connections of individuals to investments can also result in major conflicts of interest or financial misrepresentations. Institutions like Penn invest in companies that use fossil fuels, despite pressure to divest. The U.S. Commerce Secretary’s shipping conglomerate was revealed to have received significant offshore payments from a company owned by Vladimir Putin’s son–in–law.

And though the tax avoidance practices are legal, the conflicts of interest they pose - in addition to the questionable ethics of nonprofits and universities investing endowments in these funds rather than in more secure domestic investments - are staggering. Penn set up four funds offshore, each containing the number “1740” as a nod to the school’s founding. These corporations appear on endowment disclosures, but what does not appear is their status as “tax blockers.” The New York Times explains the concept: “establishing another corporate layer between private equity funds and endowments effectively blocks any taxable income from flowing to the endowments.” The leaked documents in the Paradise Papers show that over 100 of the top U.S. universities have used this investment strategy.

There is no question that “big money” will continue shaping the world economy. The contrast between universities’ hedge-fund-like investment strategies and mounting student debt, particularly at private institutions, is eminently obvious. So, can we truly justify offshore investing?













Bitcoin: The Only Thing Less Predictable Than Donald Trump’s Tweets by Jason Cohen W'20

The value of Bitcoin has risen more than twice as much as the best-performing publicly traded stock in 2017. Its market cap is approaching that of gold, and the volume of Bitcoins traded reaches several billion dollars daily. Bitcoin has become more and more well-known — since April, Google searches for the cryptocurrency have gone up more than 450% — and yet, ask people familiar with Bitcoin and you will generally get a divided response about its volatile past and tremendously unclear future.

Bitcoin started in 2009 as a decentralized cryptocurrency built on blockchain technology. That makes it unique in several ways. First, the currency is virtual, meaning that there are no actual “coins.” Second, transactions do not require any personal information, making them pseudonymous. Finally, transactions are verified by the Bitcoin community, making them more secure, low-cost, and very difficult to regulate. Bitcoins can be bought on an exchange, sent from a computer, and “mined” by computers solving complex math problems.
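The “complex math problems” behind mining can be sketched in a few lines. The toy example below is a deliberate simplification - real Bitcoin mining hashes a binary block header twice with SHA-256 against a far harder target - but it shows the brute-force idea: search for a nonce whose hash starts with a run of zeros.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return the first nonce whose SHA-256 digest has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16, which is
# how the real network keeps block times steady as hardware improves.
nonce = mine("example block", difficulty=4)
print(f"nonce found: {nonce}")
```

Finding the nonce takes thousands of hash attempts on average, but verifying it takes a single hash; that asymmetry is what lets the Bitcoin community cheaply check one another’s work.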

Anyone who has been following Bitcoin for the past few years will agree on one thing: they wish they bought more of it earlier. Beyond that, however, there is a lot of skepticism when it comes to the future of Bitcoin. On one end, there are people who believe Bitcoin is the biggest technological innovation since the Internet. Let’s call these people the optimists. The optimists (and there are many of them) believe that Bitcoin will absolutely revolutionize the way money works. They believe that banks and governments will begin doing more and more business in Bitcoin, or otherwise create their own cryptocurrencies, until society reaches a point where the very definition of money is changed altogether.

On the other side of the spectrum are the pessimists. The pessimists are not particularly happy that they missed out on this Bitcoin boom, as no one would be. However, for pessimists, Bitcoin is a bubble that is all-too-ready to burst. The argument here is that Bitcoin is not backed by anything but the public’s trust. Indeed, for the U.S. dollar, the U.S. government is backing the currency. Same with the Euro. However, when it comes to Bitcoin, the exchange rate is in other currencies, and there is no backing beyond the public’s trust. If the public believes Bitcoins are worthless, they will become worthless and there is nothing anyone can do. This leads to the belief that Bitcoin’s value is derived from its underlying technology, which, while revolutionary, can be replicated and is not perfect. Bitcoin is only more valuable than cryptocurrencies like Ethereum because people say it is. Because of these vulnerabilities, pessimists believe that Bitcoin is a bubble that will burst faster and more viciously than the dot-com bubble or the housing bubble.

In order to understand who is “right” when it comes to the future of Bitcoin, it is important to look into Bitcoin’s past. The idea for Bitcoin was conceived in 2008, and the currency officially launched in 2009. By May of the next year, 10,000 Bitcoins were used to purchase a cheese pizza, the first ever commercial transaction using Bitcoin. Soon after, a vulnerability in the code was exploited to generate billions of Bitcoins out of thin air, a transaction the community quickly reversed. After a year or so of negative press, Silk Road, a deep-web site for anonymous and illicit dealings in drugs and trafficking, was established using Bitcoin, and the currency gained value. Bitcoin received praise in the U.S., but China banned its usage. By 2014, Bitcoin’s value had risen and fallen drastically multiple times, and the price stood at around $700.

Over the next two to three years, Bitcoin’s price dropped to under $200 and rose to over $2,000 repeatedly. Mt. Gox, then the largest Bitcoin exchange, was shut down, and yet Bitcoin recovered. More and more major companies began recognizing and accepting Bitcoin, and banks and hedge funds began researching the technology. Finally, in 2017, with the advent of Initial Coin Offerings, or ICOs (sales of new tokens to raise money for a cryptocurrency project), Bitcoin took off. Currently, its value is nearing $10,000.

Based on this history, Bitcoin can, at the very least, be viewed as a risky asset. With no intrinsic value and many obstacles in its way, Bitcoin has an uphill climb ahead of it. That being said, people have bet their life savings on Bitcoin’s success, and for many, its rise is a vindication of their optimism. Bitcoin has a chance to become the currency of the world, and if that happens, those who got in early stand to make a lot of money. However, if Bitcoin crashes, thousands of people’s life savings will go down the drain, and the ripple effect may be enormous. For now, it is too soon to tell, but expect many ups and downs before any clear future reveals itself.













Universal Healthcare: The Business Case for Reform by Alfredo Garcia Sanchez C’21 W’21

Last month, Chris Conover, a fellow at the American Enterprise Institute, published a series of articles in Forbes magazine entitled, “The [Reasons] Why Bernie Sanders’ Medicare-for-All Single-Payer Plan is a Singularly Bad Idea” [1]. The title is so aggressive that one would think the legislation proposed nuking the moon or banning the sale of pants. Universal healthcare has been the subject of intense scrutiny and interest in recent years, and is consequently the target of much ire and suspicion. What has long been considered a staple of other modern democratic countries is still viewed as dangerous and unfeasible in the land of the free and, increasingly, the home of the sick.

According to the Organization for Economic Cooperation and Development (OECD), the United States spends almost $10,000 per capita on healthcare, followed by Switzerland and Germany at around $6,000 and $5,000, respectively. One might be justified in defending this prodigious spending if, in fact, it delivered spectacular outcomes. Unfortunately, the current US system suffers from relatively high infant mortality rates, few hospital beds per capita, and low life expectancy, among a myriad of other disappointing statistics. In almost every ranking of healthcare systems by country, the US lies near or at the bottom of the pack of developed nations [2].

This flagrant inefficiency, which has persisted for decades, would appall any reasonable businessperson. America allocates more than 30 percent of total healthcare spending solely to administrative costs; in Canada, that figure is only 16.7 percent [3]. Many decry government-run healthcare as a massive, overreaching bureaucracy while ignoring the fact that the private health insurance system is arguably an equally large, if not larger, agglomeration that ultimately contributes less than its fair share to the US economy.

More than half of Americans currently rely on employer-supplied health insurance for their medical needs. This constitutes a rather large expenditure for many companies, especially smaller firms. While most Americans on employer-supplied plans are satisfied with their coverage, it comes at the expense of many businesses’ bottom lines. In total, American businesses spend $620 billion every year on healthcare, and 80 percent of CFOs surveyed in a recent Harris poll agreed that “healthcare costs drain company resources that could be better used elsewhere.” Struggling companies often resort to cutting healthcare benefits entirely, leaving thousands of workers without coverage [4].

But companies are not the only affected parties. A recent paper published by the Washington College of Law at American University identified a phenomenon dubbed “job lock,” whereby American workers base much of their career decision-making on the stability of their healthcare coverage. Workers will often refuse to leave or change jobs for fear of losing health insurance, producing inefficiencies as people gravitate towards less suitable jobs or abandon entrepreneurial endeavors. Studies have found that “job lock” makes employees 60 percent less likely to leave their jobs and decreases the rate of self-employment [3]. This loss of labor mobility, estimated at upwards of 20 percent, leaves the US at a great disadvantage in the development of new industries and the efficient allocation of resources.

Under a single-payer system, companies would foot a portion of the bill through payroll taxes, although many would likely enjoy reduced costs overall as group insurance premiums are eliminated. The same logic extends to the majority of private individuals, who might hand over slightly more to Uncle Sam but would ultimately save money after all is said and done. These savings extend to the national scale: healthcare spending currently occupies a massive 18 percent of US GDP, while in countries providing universal coverage, that figure averages only 11 percent [5]. For the fiscally conscious, these are savings that should not be overlooked.
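As a back-of-envelope illustration of what that seven-percentage-point gap implies in dollar terms (the GDP figure below is an illustrative assumption; US GDP was roughly $19 trillion in 2017):

```python
# Rough annual savings implied by the GDP shares cited above.
us_gdp_trillions = 19.0   # illustrative assumption: US GDP, circa 2017
current_share = 0.18      # US healthcare spending as a share of GDP
universal_share = 0.11    # average share in countries with universal coverage

savings_trillions = us_gdp_trillions * (current_share - universal_share)
print(f"Implied annual savings: ${savings_trillions:.2f} trillion")
```

Even with generous error bars, a gap of seven points of GDP works out to well over a trillion dollars a year.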

If America really wants to fulfill its ideal as a bastion of entrepreneurship and free markets, then it should fix the one issue that has for too long been an underproductive appendage to the economy. With approval for universal healthcare averaging around 60 percent (and rising) among polled Americans, this is a policy that only the most reactionary of politicians can still openly oppose [6]. Any American worth his or her salt would jump at the opportunity to lead the world in anything - healthcare should not be an exception.


1. https://www.forbes.com/sites/theapothecary/2017/09/28/the-1-reason-bernie-sanders-medicare-for-all-single-payer-plan-is-a-singularly-bad-idea/#138e53ad5502

2. http://www.latimes.com/nation/la-na-healthcare-comparison-20170715-htmlstory.html

3. http://digitalcommons.wcl.american.edu/cgi/viewcontent.cgi?article=1132&context=hlp

4. https://www.forbes.com/sites/castlight/2014/12/29/how-rising-healthcare-costs-make-american-businesses-less-competitive/#49456fed4f5f

5. http://www.theamericanconservative.com/articles/the-conservative-case-for-universal-healthcare/

6. http://news.gallup.com/poll/191504/majority-support-idea-fed-funded-healthcare-system.aspx

Want a Job? No Problem, 'John' by Kimaya Basu W'21

Perhaps somewhere pure meritocracies exist. Unfortunately, they have not yet come to America - at least judging by recent research on the bias to which people with certain names are subjected.

Research conducted by Corinne Moss-Racusin, a social psychologist at Skidmore College, determined that STEM professors from an array of universities were more willing to mentor a candidate named ‘John’ than one named ‘Jennifer,’ even when the two resumes were precisely the same. Moss-Racusin and fellow researchers produced a single resume for a lab manager position and sent it to more than one hundred biology, chemistry, and physics professors at various institutions. Each professor received the resume with either ‘John’s’ or ‘Jennifer’s’ name at the top. On average, ‘Jennifer’ was offered approximately $4,000 less for the same position and was generally considered less qualified. Shockingly, even the female professors who received these resumes exhibited a preference for ‘John’ over ‘Jennifer.’ One might suggest that these highly educated professors are simply unaware of their prejudices - but conscious or not, the biases persist.

After the data was collected, the professors surveyed were informed of the study team’s findings. They then took a diversity course designed by Professor Moss-Racusin and her colleagues, a central focus of which was discussing how future biases could be mitigated. Encouragingly, a follow-up survey conducted after the course indicated that gender biases among participating scholars had diminished, suggesting a degree of efficacy.

There are a few frightening takeaways from this research. This inequitable state of affairs is not only unfair on an individual level, but also undermines scientific progress. Indeed, lingering prejudices may drive many women to abandon STEM fields, producing a smaller pool of actively engaged scientific talent.

Gender bias is not the only bias in the workforce; racial biases are well documented as well. According to a study conducted by the National Bureau of Economic Research, job applicants with “white-sounding” names needed to send 10 resumes on average to receive one interview callback, while those with “African-American-sounding” names needed to send approximately 15 resumes to achieve the same result. Remarkably, having a white-sounding name was equivalent to an additional eight years of work experience in terms of callback rate!

The study involved sending out nearly 5,000 resumes in response to more than 1,300 employment ads in Chicago and Boston newspapers. Resumes with various levels of work experience and credentials were sent out, with white- or African-American-sounding names assigned to them at random. Job opportunities ranged from cashier to sales management. While better resumes with white-sounding names yielded 30 percent more callbacks than weaker resumes with the same names, the corresponding boost was significantly smaller for African-American-sounding names - implying that the marginal returns to education and work experience are lower for African-Americans than for whites.
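The headline figures above can be restated as callback rates. A quick sketch, using the study’s approximate numbers:

```python
# Approximate figures from the NBER resume study described above.
resumes_per_callback = {
    "white-sounding": 10,
    "African-American-sounding": 15,
}

# One callback per N resumes corresponds to a callback rate of 1/N.
for group, n in resumes_per_callback.items():
    print(f"{group}: {1 / n:.1%} callback rate")

# Applicants with African-American-sounding names needed ~50% more resumes.
extra = (resumes_per_callback["African-American-sounding"]
         / resumes_per_callback["white-sounding"] - 1)
print(f"Extra resumes needed: {extra:.0%}")
```

A 10 percent versus 6.7 percent callback rate may sound like a modest gap, but compounded over an entire job search it translates into 50 percent more applications for the same result.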

While one might think this type of analysis would be unnerving only to people named ‘Jennifer’ or ‘Shaquille,’ it is equally concerning for the ‘Johns’ of the world. Successful ‘Johns’ are now forced to wonder whether their accomplishments are due as much to the names at the tops of their resumes as to the credentials that follow.


Comparing American and European Luxury M&A Trends by Annabelle Williams W'20

The luxury market has increasingly trended towards consolidation among key industry players. Does this focus on M&A dilute brand value, or does it simply signify a change in corporate governance? European luxury holding companies have been a fixture of the luxury market for the past half-century. The first to employ large-scale luxury holdings was French SA Kering, under the leadership of Francois Pinault. The European model then gained steam in the 1990s, with major bidding wars between Kering and what was then Gucci Group for brands such as Fendi.

One of the major benefits of these consolidated models is their financial disclosure provisions. Individual cash flows segmented by brand are not required in disclosures, so the overall health of the conglomerate is the only public financial information. In other words, if one brand in a group’s portfolio is in the red, the holding company can apply the net income and retained earnings of another, more profitable brand to offset the losses and mask them from the public. As a result, neither investors nor customers ever fully understand how much a particular brand is struggling.

This disclosure advantage is appealing to American companies, which have long struggled to brand themselves as luxury fashion houses and instead (as in the cases of Calvin Klein and Ralph Lauren) have operated mostly in the apparel sector. American brands cannot lean on the longstanding heritage that European fashion houses claim.

The three major players in the consolidated luxury market are all based in Europe. French SA Kering (Gucci, YSL, Alexander McQueen, and Puma are among its brands), as mentioned above, came first, followed in short order by French SA LVMH (holding Louis Vuitton, Moet, Hennessy, Givenchy, and Christian Dior) and Swiss SA Richemont (Cartier, Dunhill, Montblanc and more).

Interestingly, the United States luxury market has resisted the European model of corporate consolidation. Instead, the US market is typically characterized by M&A activity like that of any other industry, with one established brand acquiring another. Take the recent example of Coach’s $2.4 billion acquisition of competitor Kate Spade, or its 2015 purchase of luxury shoe brand Stuart Weitzman. Another prominent acquisition came in July 2017, when Michael Kors acquired Jimmy Choo, the brand that rose to prominence for its distinctive high heels thanks in part to Sex and the City’s Sarah Jessica Parker.

Where does the inherent difference between these two consolidation strategies come into play? First, we must consider brand value. The European model preserves individual brand value because the large holding companies have no consumer-facing brand of their own to distort the ones they acquire, whereas American companies with already-established consumer perceptions increasingly acquire their direct competitors. However, not all American businesses have ignored the European precedent. As Michael Kors CEO John Idol put it when discussing the strategy of imitating Kering and others: “... we are really looking to build an international luxury company, and less so brands that ... have a greater reliance on wholesale than its own retail strategy.”

Both Michael Kors and Coach faced issues of brand devaluation through the early 2000s, with branding experts describing their approach as “accessible luxury.” Coach moved to dispel this perception first with its 2015 acquisition of Stuart Weitzman, and then targeted millennial consumers, a demographic in which Kate Spade holds significant market share, through its Kate Spade acquisition.

Another metric of the success of M&A activity in the luxury retail industry is stock ratings and their associated analyst notes. The October 5th, 2017 ratings for Michael Kors and Coach provide a good snapshot of each corporation’s short-term direction. Following the deal’s announcement, Piper Jaffray downgraded Coach’s stock from Overweight to Neutral, citing the need for Coach to “digest” the Kate Spade brand, and further noted the need for Coach to restructure its business into a “house of brands.” This was cause for concern for some investors because it potentially lengthens the time Coach needs to take a seat at the table with the major luxury market players. S&P, meanwhile, put Coach at the same level as Kors, with a BBB- rating, the lowest investment-grade rating possible.

As for Kors’ acquisition of Jimmy Choo, the early October valuation narrowly avoided junk ratings from S&P, Moody’s, and Fitch; Kors’ stock only just cleared the lowest investment-grade classification. Clearly, ratings agencies and the sell-side alike are unsure of the feasibility of the Kors/Jimmy Choo merger. However, while stock ratings reveal the market’s short-term opinion of a company, they are only one component in predicting the future success of a merger, especially mergers like these with transformative effects on the luxury sector.

What does this consolidation mean for competition and oligopoly in the American and European luxury markets? One theory is that bidding wars for smaller, designer-owned labels may soon become the forums in which luxury conglomerate giants compete. In turn, though, operating in an increasingly oligopolistic market disadvantages smaller brands and companies.

Coach, if it plays its cards right, can use its new acquisitions to boost its status in the luxury market. Its handbags may well become a Veblen good, with demand that rises along with price, given the right branding and an increased emphasis on luxury. However, if Coach fails to brand itself as true luxury and remains stuck in between, it may fail to compete effectively in either the luxury or the apparel market. Michael Kors, too, needs to be deft in its takeover of Jimmy Choo; the cachet of the latter should bolster its parent company, but success will depend on branding efforts for both the parent and its subsidiary.

Luxury brands are tricky to preserve. Conventional rules of consumer behavior and demand must be adjusted to suit a high-income, status-seeking market. But when a brand is maintained well, by both creative directors and business managers, dividends and valuations can be extremely high. Unsurprisingly, eight of the 100 companies on Interbrand’s 2017 Most Valuable Brands list fall into the luxury sector, a clear indication of the market power these companies possess.

Perhaps it’s time for American giants to truly come to the table in one of the most quintessentially European markets: luxury and high fashion. The Kors and Coach acquisitions should provide a basis on which to predict the future success of each would-be “house of brands,” and of the American luxury market writ large.

















Semiconductors Reimagined by Jacob Bloch W19

Society is hungry. It wants information, pleasure, and instantaneous access to the things it values. It wants to analyze enormous amounts of data. What was deemed a speedy route a moment ago becomes sluggish and has to go faster; if it doesn’t, the consequences could be disastrous. Moore’s Law holds that computing power follows an exponential growth curve, doubling at regular intervals, and society expects Moore’s Law never to falter. The notion that technology might slow its pace is a predicament best avoided.


Yet transistors, the diminutive switches that form the infrastructure of computer processors, have become so miraculously small and efficient that they seemingly cannot be improved upon. Society may soon face the dim prospect of supercomputers that have reached the upper limits of their computational power for lack of advances in transistors. Data centers, molecular dynamics simulations, artificially intelligent “brains,” weather forecasting, climate change modeling, drug design, and 3D nuclear test simulations that rely on supercomputers may cease to advance, stuck within the technological parameters of early 2017.


Imagine the processor as an engine and the transistors as the cylinders that drive it. To increase processing power, the solution is to pack more cylinders into the engine. This is precisely what makes today’s iPhone 7, whose A10 processor contains 3.3 billion transistors, faster than the ‘TRADIC,’ the first American transistorized computer, which contained roughly 800 transistors within three cubic feet.


Processing power has increased dramatically since Dr. Gordon Moore, Intel’s co-founder, established the earliest version of Moore’s Law by observing in 1965 that transistors were shrinking at such a rate that twice as many could fit in a computer processor every year. A decade later, Moore revised the doubling period to every two years; in the years since, Silicon Valley has fervently believed that this exponential growth would never slow.
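Moore’s observation is simple enough to sketch numerically. The projection below starts from the Intel 4004 of 1971 (about 2,300 transistors, a commonly cited figure) and doubles the count every two years; the result lands within an order of magnitude of the A10’s 3.3 billion transistors:

```python
def projected_transistors(base_count: int, base_year: int,
                          target_year: int, doubling_years: float = 2.0) -> float:
    """Project a chip's transistor count under Moore's Law doubling."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Intel 4004 (1971): ~2,300 transistors; project forward to 2016.
projection = projected_transistors(2_300, 1971, 2016)
print(f"{projection:.2e}")  # on the order of 10^10
```

That the naive projection and the real world agree to within a factor of a few, across 45 years, is precisely why the industry came to treat the law as a planning assumption rather than a mere observation.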


Yet Silicon Valley may be running out of counterarguments. The industry is reaching the final frontier of transistor technology for several reasons. Producing these miniature transistors is so expensive that only a few deep-pocketed companies can still compete: the number of leading-edge manufacturers has dwindled from about twenty in 2000 to just four today, namely Intel, TSMC, GlobalFoundries, and Samsung.


Furthermore, even with the robust financial resources needed to miniaturize further, transistors are unlikely to keep shrinking, because the laws of physics cannot be defied. When transistors reached the 90 nm node in the early 2000s, the industry discovered that the gates were becoming so thin that electric current was leaking into the substrate. In other words, the electrons that carry the transistor’s signal tend to “jump” into the surrounding material unless there is some way to insulate them. The smallest transistors today range between 14 and 22 nanometers, and decreasing the size further would only increase the number of electrons leaking into the substrate. The only truly effective way to stop this leakage would be to freeze electron movement altogether, in other words, to keep the insulator at a temperature close to absolute zero. Since such temperatures have only been attained in laboratory settings, incorporating this kind of insulation into commercial transistors is implausible, leaving their development at a standstill.


The industry today recognizes that the ceiling is near. Intel has repeatedly delayed releases of its newest technology, allowing more time between subsequent “generations” of upgraded products, and has even delayed specific launches, such as its 10 nm transistor. Intel’s Chief of Manufacturing, William Holt, noted in February 2016 that Intel will have to move away from silicon transistors in about four years and that “the new technology will be fundamentally different,” while admitting that silicon’s successor has not yet been established.


Even the introduction of transistors in the 22 to 14 nm range required a radical redesign. While transistors of the past were flat, the Tri-Gate transistor takes a three-dimensional approach: instead of a current-carrying channel lying flat beneath the gate, the channel rises vertically out of the substrate as a fin, with the gate wrapped around it. This constitutes a disruption to transistor manufacturing because it delays silicon’s replacement for a few more generations, according to Mark Bohr, Director of Intel’s Technology and Manufacturing Group. It is also a step toward even smaller and more energy-efficient transistors, like the 10 nm transistor that Intel had hoped to release in late 2016 or early 2017 before being set back by difficulties.




The International Technology Roadmap for Semiconductors has been published almost annually since 1993 by semiconductor industry experts from across the globe. In its most recent report, published in 2015, the group forecasts that producing chips in their current form will no longer be economically viable by 2021. The industry will require another disruption, whether in new areas like photonics and carbon transistors or in further reformulations of current transistor designs.


This type of massive technological disruption, which entirely reinvents product manufacturing, matters for software development as well. Neil Thompson, an assistant professor at the MIT Sloan School, affirmed that “one of the biggest benefits of Moore’s Law is as a coordination device. I know that in two years we can count on this amount of power and that I can develop this functionality - and if you’re Intel you know that people are developing for that and that there’s going to be a market for a new chip.” Without reassurance that Moore’s Law will continue, software development that relies on that confidence is impeded.


One technological hypothesis that hangs in the balance is the singularity: the theoretical future moment when developments in artificial intelligence create sentient, autonomous computer beings. Delays in transistor development mean that this trajectory is likewise paused, and perhaps that is a positive result. Should the singularity occur, it could give rise to a rival class of beings more intelligent and cunning than humans. The computer engineers incrementally bringing it about ought to pause and weigh the consequences of such a situation, yet at the current pace of innovation it seems unlikely that they are grappling with the implications of their choices. It would be prudent, even necessary, for those hastening the singularity to take advantage of the inevitable delay in transistor development to examine the ramifications of their actions.

Digital Governance in the Gulf by Jonathan Lahdo (C'20 W'20)

The Middle East continues to grow and become ever more important on the global stage in numerous fields. The Gulf countries are leading this charge, standing at the forefront of innovation and development in the region due in no small part to the increased digitisation of their governments.

Increasing the availability of public services through electronic means has allowed the Gulf states to capitalize on a general trend towards greater investment in digitisation initiatives, not only creating positive change in the present but also exploring solutions to future problems.

Facilitating Business Development

One of the largest and most obvious areas in which the benefits of governmental digitisation can be seen is trade and the economy at large. The United Arab Emirates has a strong start-up culture in which entrepreneurs have flourished. Uber competitor Careem, founded in Dubai in 2013, has not only maintained its dominance over the larger global player in the UAE but also “expanded in 26 cities across the Middle East & North Africa (MENA) as well as Pakistan” after raising “a total of $71.7 million in funding.” In the e-commerce market, a nascent industry in the Middle East that has still not achieved the ubiquity it enjoys in other parts of the world, Souq.com dominates, having started in Dubai and gone on to become the region’s “first unicorn” [1].

To further stimulate business creation and small-to-medium enterprise growth, the Emirati government has taken several steps to streamline registration processes through digital channels. The Department of Economic Development (DED), for example, signed a memorandum of understanding (MoU) with Servcorp, an Australia-based company that specialises in serviced and virtual office solutions, allowing the firm’s clients “to complete their business transactions within the least possible time.” By partnering with a company that works with both start-ups and large enterprises worldwide, the DED is leveraging a public-private partnership that helps businesses launch in Dubai with Servcorp’s MoU-conferred services, including “trade name reservation, renewal of reserved trade name, license renewal, and initial approval” among others, before expanding outwards [2].

Developing Location Infrastructure

From an infrastructural perspective, a common theme in the Gulf countries is rapid growth outpacing urban planning. A recent initiative by the UAE government highlights how digitisation can ameliorate fundamental issues in a state. The average UAE resident cannot navigate solely by street names and often has to rely on nearby landmarks to reach a destination. The implications for the general public are obvious: it can be difficult to get around, whether to a friend’s house or a specific building. For businesses, the effects are more dramatic, as navigation-related inefficiencies can prevent a company from turning a profit. Furthermore, with the rise of delivery businesses like Talabat.com, the popular “online food ordering service operating across the [Gulf Cooperation Council, which] hit a record-breaking 100,000 orders on February 3, 2017” [3], solid location infrastructure has never been so important.

The Emirati government’s response was Makani (meaning “my place” in Arabic), a “smart mapping system that was initially launched to help the delivery-industry, in addition to first emergency responders and courier services, locate residents’ homes;” it also aims “to have all the buildings in the UAE installed with a plate, displaying the location’s 10-digit geo-coordinates,” according to Abdul Hakim Malek, director of the Geographic Information Systems (GIS) Department at Dubai Municipality [4]. In addition to its own proprietary app, the government team behind Makani continues to work on integrating it with popular navigation apps Google Maps and HERE Maps to make the service more accessible to everyone.
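To make the idea of a 10-digit location plate concrete, the sketch below shows one hypothetical way a latitude/longitude pair could be quantized into a fixed-length numeric code. To be clear, this is not Makani’s actual algorithm, which has not been published; the function and constants here are purely illustrative assumptions:

```python
def encode_location(lat: float, lon: float) -> str:
    """Hypothetical sketch: pack a lat/lon pair into a 10-digit code
    by quantizing each coordinate to a 5-digit integer."""
    lat_code = round((lat + 90) / 180 * 99_999)    # latitude  -> 0..99999
    lon_code = round((lon + 180) / 360 * 99_999)   # longitude -> 0..99999
    return f"{lat_code:05d}{lon_code:05d}"

# Example: central Dubai (approximate coordinates)
print(encode_location(25.2048, 55.2708))
```

At this resolution each code step spans a few hundred metres, far too coarse for a building-level ID, so a real system would quantize more finely or restrict itself to a local region; the point is simply that a short numeric plate can stand in for a full coordinate pair.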

Solving Problems Digitally

Looking forward, there are both potential threats that the governments of the Gulf states are looking to prevent and current problems they have the ability to alleviate.

The most recent Interpol Digital Currency Conference, for example, was held in the Qatari capital Doha and was organised by the Qatar National Anti-Money Laundering and Terrorism Financing Committee, a unit of the emirate’s central bank. Deputy central bank governor Sheikh Fahad Faisal Al-Thani was quoted as saying “We expect from this conference to contribute in enhancing the capacity of the relevant competent authorities in conducting investigations in any crimes related to virtual currencies; and in establishing a network of practitioners and experts of this field,” a testament to the Qatari government’s dedication to digital financial security.

Elsewhere in the digital finance sphere, a key area in which the Middle East continues to lag is venture capital, and more specifically digital venture capital. There is evidently strong interest in tech start-ups, with “half a dozen tech start-ups in the MENA region today [being] valued at more than USD 100 million each, and investors sunk more than USD 750 million into MENA tech start-ups from 2013 to 2015.” These could be the next generation of digital unicorns, of which “the Middle East claims [only] 1 out of 200 globally.” But given that “relative to GDP, the Middle East has only 10 percent of the VC funding of the United States,” it is clear that the region’s governments need to do more to encourage digital investment. In general, the strong culture and prevalence of family businesses in the region are a major barrier to the growth of venture capital, but through further modernisation and digitisation the region’s governments can tap the potential of a sector that could further boost their economies [5].

Reflecting and Progressing

It is evident that in the Arabian Gulf, many strides have been taken to further digital development. The investments of the region’s governments in electronic initiatives have bolstered their economies, created an accessible environment for exciting new start-ups, and addressed logistical issues in location infrastructure and urban planning, to name a few examples.

Nevertheless, there are many untapped opportunities and potential areas of growth for these governments to focus on. In the future, their priorities will include a strong push for widespread adoption of the innovative services they have developed, in addition to further research on and investment in pressing issues like cybersecurity and digital finance.

1.     Suparna Dutt D’Cunha, “Is Dubai The Next Big Tech Startup Hub?”, Forbes, 22nd August, 2016

2.     Tamara Pupic, “DED and Servcorp to ease company formation in Dubai”, Arabian Business, 2nd June, 2015

3.     Claudie De Brito, “Talabat.com hits 100,000 orders in a day,” HotelierMiddleEast.com, 12th February, 2017

4.     Mariam M. Al Serkal, “All UAE buildings to get Makani coordinates,” Gulf News, 13th April, 2016, updated 24th February, 2017

5.     Enrico Beni, Tarek Elmasry, Jigar Patel, and Jan Peter aus dem Moore, “Digital Middle East: Transforming the region into a leading digital economy,” McKinsey&Company, October, 2016



Telemedicine: The Future of Healthcare? By Jonathan Silverman W'19

Smartphone functionality has skyrocketed over the past decade: Snapchat, Uber, celebrities on Instagram, and… sending one’s blood pressure to the doctor? Recently, increasing numbers of US healthcare consumers have been pivoting towards services that fulfill their medical needs without their ever having to leave the comfort of their own bedrooms. Drawing upon the convenience, speed, and economic efficiency of the Internet, this range of services – aptly categorized as “telemedicine” – constitutes a rapidly expanding segment of healthcare that has vendors and consumers alike scrambling to launch new initiatives as they try to anticipate its long-term effects on the industry. Thanks to the prevalence of smartphones and connectivity, doctors can now communicate with patients via webcam, immediately consult a digital network of disease specialists, and issue prescriptions over email; it also helps that these services are often priced lower than live, in-person substitutes.

While disrupting the traditional doctor-patient model that has typified centuries of medicine, telemedicine has emerged as a force in healthcare for a variety of reasons: it has been linked to fewer hospital readmissions, lower medical costs, improved accessibility for rural patients, and favorable levels of care. Its popularity has become so widespread that, according to James Tong, a mobile health lead and engagement manager at QuintilesIMS (the nation’s largest vendor of healthcare information), recent studies show that two out of every three Americans are willing to use technological devices to supplement traditional health methods.[1]

Furthermore, telemedicine has proven capable of optimizing patient outflow at typically overcrowded hospitals. According to a recent article in Clinical Infectious Diseases, telemedicine “promotes more efficient use of hospital beds, resulting in cost savings,” and benefits patients directly: the authors note a correlation between at-home medical care and faster convalescence.[2]

However, the road to telemedicine’s institutionalization has not been an easy one. With practicing physicians treating patients as far as 6,000 miles away, telemedicine’s array of ambiguities has challenged definitions long accepted in the healthcare industry, such as medical malpractice and physician licensure. For example, at the Mayo Clinic – a renowned medical center – doctors treating out-of-state patients follow up with emails and video consultations, yet may only do so regarding matters that were initially discussed in person.[3] Designed to preempt potential legal issues, these policies highlight the hesitancy of key healthcare organizations in their assessment of telemedicine’s future.

Nonetheless, despite the obstacles and doubts surrounding telemedicine’s quality and feasibility, many of the major players in US healthcare have pushed hard for expanded telemedicine coverage and service. Indeed, insurers such as Anthem and UnitedHealth Group have begun offering their own direct consumer-to-virtual-doctor consultations, bypassing traditional medical channels. Johns Hopkins Medicine and Stanford Medical Center have likewise introduced their own digital consultation services. As one doctor from the Cleveland Clinic told the Wall Street Journal: “This will open up a world of relationships across a spectrum of health-care providers that we haven’t seen to date.”[4]



[2] https://academic.oup.com/cid/article/51/Supplement_2/S224/383896/Telemedicine-The-Future-of-Outpatient-Therapy

[3] https://www.wsj.com/articles/how-telemedicine-is-transforming-health-care-1466993402

[4] https://www.wsj.com/articles/how-telemedicine-is-transforming-health-care-1466993402


What Comes After the Donald Trump Market Rally? By David Cahn W'17 ENG'17

The stock market has hit record highs since Donald Trump was elected in November. By mid-February, this exuberance appeared to have calmed – before the “third wave” of the Trump rally took hold, driving markets even higher this week.

How should we be interpreting this rally?

One view says that the Trump rally is being driven by fundamental value. After all, Trump claims he’ll renegotiate trade deals in America’s favor, lower corporate taxes, deregulate Wall Street, and reinvigorate the U.S. economy. If we believed that he’d deliver on these promises, then surely the market rally is justified.

Bears have been crying wolf for weeks now, to no avail. MarketWatch has cited numerous reasons to worry: the S&P 500’s average P/E ratio now stands at 21, its highest level since 2009; the CBOE Volatility Index (Wall Street’s “fear index”) is “suspiciously low”; and we appear to be in a “late leg” of the economic cycle. Even if Donald Trump is America’s most pro-business president, it is unclear that his first few weeks in office justify the $3.1 trillion in market value created since November.

Even as everyone wonders when the euphoria will end, bears are getting trounced as the market inches higher and higher. As is often the case, it is hard to predict when a rally will tip into a downturn.

Responsible Financial Media: Lessons from the Internet Bubble By Andrew Howard W'20

The Internet bubble of the late 1990s, otherwise known as the “dot-com bubble,” erased billions of dollars from the economy seemingly overnight. The NASDAQ, a market index heavy in technology and biotech companies, fell nearly 4,000 points from March 2000 through October 2002. The collapse showed how a mass of mispricing, accounting fraud, and unjustifiable promotion of digital products and services could bring down an entire sector. The crash was so far-reaching that even successful companies like Amazon and Cisco lost nearly 90% of their market value, dragged down by the plethora of overvalued companies that had never earned a single dollar for their investors. Meanwhile, most of the companies with artificially inflated prices proved essentially worthless: boo.com, pets.com, Nortel, and even startup.com were wiped out in the crash.


How is it possible that such companies were valued at billions of dollars before turning any profit? Why were institutional and individual investors incentivized to back companies they could not understand? One proposed answer pinpoints the popularization of financial media as a leading culprit. Indeed, Bloomberg and CNBC rose to prominence in 1996 and 1990 respectively, signaling the growing importance of financial media at the height of the bubble.


Within the financial media, columnists and reporters held the power to change a stock’s market capitalization by adding it to either a “buy” or “sell” list. During this period, when a reporter published a purchase recommendation for an internet company – regardless of its actual merit – average investors with limited information would inflate its price by buying in. The work of behavioral economists like Robert Shiller demonstrates that news media biases shape investor behavior and consumer sentiment. Shiller concludes that rational market behavior depends on accurate information; when the media ignores its assumed responsibility to its readership, the market cannot correct overvalued companies, and prices subsequently spiral out of control.


It is no coincidence, therefore, that the financial media published more stories about technology companies during the dot-com bubble than about all other industries combined.[1] As a corollary, two new financial publications – Business 2.0 and Red Herring – launched and folded during the bubble years of 1996-2002, highlighting the unstable dynamic cultivated by the emergent demand for financial reporting. Even mainstream organizations like CNBC and the Wall Street Journal increased their IPO coverage by nearly 1,000% relative to the actual number of new companies that went public during this time.


However, even more troubling than the sheer volume of financial news disseminated throughout this period was the magnitude of its measurable impact on market capitalization. Average investors depended on widely read analysts to publish lists of stock recommendations, classifying companies as either “buy,” “hold,” or “sell.” Alarmingly, from 1996 to 2000, only 1% of published recommendations were “sell,” while 70% were “buy.” Two of the most famous analysts of the era, Morgan Stanley’s Mary Meeker and Merrill Lynch’s Henry Blodget, became business media celebrities for their market insights. Most troubling of all, investigations into compensation packages revealed that analysts received higher bonuses when they rated a company a “buy” – highlighting the biases and external motivators of those releasing stock recommendations, and underscoring the role the news media played in inflating the bubble.


In a comparison of IPOs between Internet and non-Internet stocks over the four-year period, a study in the Journal of Financial Analysis made an astonishing discovery:


“Internet firms average a stunning 84% initial return during our sample period, more than twice the return for non-Internet firms. The Internet IPO sample also had a cumulative return of 2,016% from January 1, 1997, to March 24, 2000, whereas the non-Internet IPO sample had a return of only 370%. The difference is an astonishing 1,646%.”


Outside the realm of IPOs, Internet stocks also enjoyed advantages tied to their classification alone. Research from Purdue University shows that a sample of 63 companies that changed their names saw an average stock price increase of 125% relative to their peers within five days. This finding suggests that non-technology companies could multiply their market capitalizations simply by placing a technology-related buzzword in their name.


Eventually, however, the supply of non-traditional, inexperienced, and less-informed investors ready to overpay for worthless stocks dried up, and with it, the artificial inflation of American technology companies. The average loss per US household from the crash was a shocking $63,500. The subsequent recession cost millions of Americans their jobs, and Henry Paulson of Goldman Sachs estimated that investors lost over $7 trillion after the bubble burst in March 2000.


Drawing lessons from that crash for contemporary markets – with private companies like Snapchat, Uber, and Airbnb commanding multi-billion-dollar valuations – it is reasonable to ask whether we are living in a similar bubble today. Indeed, such unicorns are no longer uncommon: according to the Wall Street Journal, at least 145 private companies have achieved valuations of over $1 billion. Additional signs of concern include declining access to venture capital and falling valuations across various industries. For example, Gawker recently leaked a 2014 income statement for Snapchat: the supposedly $20-25 billion company reported revenue of just $3.1 million.

If we are to prevent another collapse, we must not forget the fundamentals of value investing when selecting stocks. Healthy skepticism, acknowledgment of risk, and a wide array of responsible financial reporting are necessary to protect the American public from the biases and misinformation of the financial news media. Without them, we may one day look back at Snapchat the same way we view pets.com today. Investors, be careful out there. The market is irrational.

[1] Journal of Financial and Quantitative Analysis, 2009

A Net-Neutral Catch 22 by August Papas W'19

Water versus Netflix. Electricity versus YouTube streaming. While pragmatism might clearly rank the relative importance of such things, the difference between traditionally defined utilities and media services is less clear when viewed through a legal lens. When a DC appeals court ruled in June to uphold the FCC’s 2015 reclassification of internet access as a public utility, it placed the router on par with any other fixture in the home. The analogy between what comes out of a faucet and what appears on a screen, however, elides a fundamental distinction between government and private supply – a distinction that has embroiled big-money telecom companies and legislators in a decade-long battle of public interest and lobbying dollars alike.

As the Federal Communications Commission laid out an initiative for zero-pricing for content creators, the custodians of internet infrastructure at which it was aimed responded in full financial force. The joint spending of Comcast, Verizon, and AT&T on Capitol Hill campaigning (some $44 million in 2014) has focused heavily on net neutrality, and is only expected to grow as all eyes fix on the case heading to the Supreme Court.

These giants, essentially the conduits of all broadband-based media consumption, oppose legislation that would limit their ability to selectively speed up or slow down certain applications and to price-discriminate between content providers based on the data loads they impose. That is, a provider could no longer induce streaming sources or online gaming platforms to pay for “fast lane” data by throttling delivery to the consumer until buffering and glitching degrade the application. Free-market proponents and telecom lobbyists agree: the revenue raked in from the platforms that can pay encourages new infrastructure growth, to the ultimate benefit of digital networks at all levels.

Those favoring net neutrality argue that this practice jeopardizes the unique democracy of the internet: not even the next Kickstarter-to-be could hope to raise enough for optimal service delivery (read: customer satisfaction and business survival) in a discriminatory broadband sphere. The opposing side raises philosophical cases of its own. How could remote surgery, in a possible telemedicine-driven future, ever be safe and reliable without a guaranteed, preferential data stream? Amid the discourse, the verdict holds for now that 100 bits per second is 100 bits per second, be it a celebrity lifestyle blog or an international Skype call.

It may not be time for net-neutralists to toast victory quite yet, though. Comcast and others will certainly try to recoup lost revenue from content creators by shifting aggressive pricing strategies to the consumer side of their channel. Indeed, tiered options – from high-price, high-capacity packages to inexpensive low-speed plans – already exist and are favored by those on both sides of the debate. The worry of Hahn and Wallsten (2006), for example, is that the regulation of tiered service packages will follow disastrous historical precedents. They point to the 1978 Natural Gas Policy Act, whose initial inventory of five publicly offered tiers of natural gas eventually multiplied into a cumbersome 28 categories. Given the internet’s ascent to the status of a commodity, the parallels are disconcerting. With the job of setting suitable tier prices now in the hands of regulators, content creators will likely lobby for special tiers suited to their applications’ capacity needs.

Thus emerges a certain Catch-22. The public that pushed for the end of discrimination in the digital realm now faces the real possibility of an overly bureaucratic pricing scheme set against it. The moral: it is the responsibility of policymakers to proactively legislate a reasonable tiered system of internet access, and of consumers to look past the rose-tinted vision of a neutral net.