The Economy, The Markets, Obama’s Climate Change speech and Idealab: A Podcast

I recently did a broadcast on the Hays Advantage reviewing the turmoil in the marketplace and tying it back to our outlook of last November. We also spent some time on Obama’s speech and his introduction of some new regulations on climate change. I pointed out that we need to do something, as we are falling technologically behind what is happening in Germany, Japan and, to some extent, China. I wrote about what Germany is doing a couple of years ago. I think we would all agree that regulation has a place, but it is not the best way to deal with this issue. At some point we need an explicit carbon price that allows for economic decisions within a broad framework of rules.

Kathleen Hays had visited Idealab two weeks before, and we had a chance to talk about innovation and the process there as well. You might find the 20-minute podcast interesting.

What is the Big Deal about Big Data?

I have always been fascinated by data and how they can be used to run a business, create investment opportunities, and understand and affect behavior. As I was getting my engineering degree in the ’60s, I was exposed to the value of historical data and the use of algorithms to reach conclusions under uncertainty. My first job out of the Colorado School of Mines was, strangely, with Procter & Gamble. P&G was a big user of data in its product development, marketing and manufacturing operations. After business school, I was fortunate enough to go to work for a research boutique, Mitchell Hutchins, that was prepared to take full advantage of the early big-data manipulation capabilities coming from dumb terminals connected through time-sharing to large computers.

Mitchell Hutchins was involved in the creation of Data Resources, an economic forecasting company co-founded by the late Otto Eckstein. Data Resources created very sophisticated forecasting models that could be accepted or manipulated by its clients over these early networks. We also used the network to create individual company models and valuation models, which became a regular part of reports to clients. The reports themselves were printed and distributed through the US mail or some early private delivery services.

I must say the models that were developed through all this data manipulation were seductive and created an air of certainty in the conclusions we reached. The higher the correlation coefficients and the R-squared, the more we believed. I am not sure our forecasts, or those of Data Resources, were that much better than others created through less sophisticated means. To Otto’s credit, while publishing his models, he remained a skeptic of the end results. “Add factors” were always a part of his forecasts in discussions with clients. I think we all ultimately learned to respect the power of data but, at the same time, recognized that the models were only as good as the inputs we were using, and that what we didn’t know or measure was as important as, or more important than, what we knew. That applies even more today as the available data expand. We will get some answers that we didn’t have before, but let’s all remain skeptics and avoid the seduction of the certainty associated with the size of the input, the sophistication of the models, and the speed with which we get the answers.
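To illustrate that seduction, here is a toy sketch, entirely my own construction with synthetic data and arbitrary parameters (not anything we ran at Mitchell Hutchins): the more flexible the model, the better the in-sample R-squared looks, and the worse the out-of-sample forecast gets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "history": a noisy linear trend standing in for an economic series.
x = np.linspace(0, 10, 40)
y = 2.0 * x + rng.normal(0, 4, size=x.size)

# Hold out the last ten observations as the "future" we are forecasting.
x_fit, y_fit = x[:30], y[:30]
x_out, y_out = x[30:], y[30:]

def r_squared(actual, predicted):
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for degree in (1, 5, 12):
    coeffs = np.polyfit(x_fit, y_fit, degree)       # fit on history only
    in_r2 = r_squared(y_fit, np.polyval(coeffs, x_fit))
    out_r2 = r_squared(y_out, np.polyval(coeffs, x_out))
    print(f"degree {degree:2d}: in-sample R^2 = {in_r2:.3f}, "
          f"out-of-sample R^2 = {out_r2:.3f}")
```

The high-degree fit believes its own R-squared right up until the data it never saw arrive.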

So let’s explore Big Data.

According to eBay, the volume of business data is doubling every 1.2 years. The amount of data Big Science is accumulating dwarfs that of the business community. Several “laws” have come into play producing these enormous amounts of data and getting everyone excited about what can be done with Big Data if it is analyzed and manipulated properly.
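As a quick back-of-the-envelope check on what doubling every 1.2 years implies (my arithmetic, not eBay’s figures):

```python
# Back-of-the-envelope: what "doubling every 1.2 years" implies.
doubling_period_years = 1.2
annual_growth = 2 ** (1 / doubling_period_years) - 1
decade_multiple = 2 ** (10 / doubling_period_years)

print(f"Implied annual growth: {annual_growth:.0%}")      # ~78% per year
print(f"Growth over a decade:  {decade_multiple:,.0f}x")  # ~322x
```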
The Harvard Business Review devoted much of its October 2012 issue to Big Data. McKinsey and others have published numerous reports on the topic. All of this rests on the belief that applying the proper analytics to all these data can lead to better business decisions, replacing or reinforcing intuition with hard facts coming from more complete and precise information.

It all starts with the latest version of Moore’s Law: processing speeds double every 18 months. This is without question the most important law related to Big Data. In my view, particularly these days, the utility of data is inversely proportional to the amount of time it takes to process them. Wirth’s Law comes into play here: software is getting slower more rapidly than hardware is getting faster. Other, more assertive variations have been put forth, such as May’s Law (sometimes facetiously called Gates’ Law): software efficiency halves every 18 months, offsetting Moore’s Law. The impact and perceived importance of processing the Big Data coming at us will very likely put even more of a premium on efficient software. For most applications, developers have had it easy: processing speeds have allowed for the development of lazy code. One would hope that the exigencies of data growth change that. Otherwise, value creation will lag and negate some of the other laws at work here.

Those other laws are Metcalfe’s Law: the value of a network is proportional to the square of the number of users connected to the network (~n²); and, probably more appropriate in today’s social media world, Reed’s Law: the utility of a large network can scale exponentially with the size of the network (~2ⁿ). The real value or utility becomes, in most instances (certainly in the social media world), the near-instantaneous analysis producing an economic action.
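To make the difference in these scaling laws concrete, here is a minimal sketch with toy network sizes of my own choosing:

```python
# Compare Metcalfe's Law (~n^2) with Reed's Law (~2^n) for small networks.
for n in (10, 20, 30, 40):
    metcalfe = n * n    # pairwise connections grow polynomially
    reed = 2 ** n       # possible sub-groups grow exponentially
    print(f"n = {n:3d}: Metcalfe ~ {metcalfe:>8,}, Reed ~ {reed:>18,}")
```

Reed’s exponent is why social networks, where sub-groups form freely, are argued to create value far faster than simple point-to-point networks.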

It has become a given that the proper use of data, i.e., metrics, can actually allow one to make better judgments and business decisions. Proper and selective use of the data becomes the key. Within a business, what one measures can also affect how those generating the data behave. It should be apparent that what one measures, and how whatever data are collected are used, matters a great deal. There is a variation of another law at work here, Parkinson’s Law. In its original form: work expands to fill the time available; in its computer corollary: data expand to fill the space available for storage. With networks expanding, processing speeds increasing, and the cloud and more powerful servers ultimately providing infinite storage, consultants and business school professors have discovered Big Data. In my view, this is creating another variant of Parkinson’s Law: the number of conclusions one can reach expands proportionately with the quantity of data available and inversely with the time it takes to analyze the data. All of those conclusions may be actionable. That doesn’t mean they will have a positive effect. It also doesn’t mean we shouldn’t seek these answers. It is not even a question of “should”; these answers will be sought.

There will be an advantage to those who are the early users of Big Data. This certainly proved to be the case in the investment and trading community. Every day an enormous amount of data are generated on stock price movements, trading volume, business results, economic results and, of course, the opinions of the pundits in the media and in the research departments of a wide variety of financial services entities. Models have been built, and continue to be built and modified, that attempt to show correlations among securities and deviations from those correlations. The low cost of trading, combined with the speed at which a transaction can occur, has allowed traders to take advantage of minute variations in highly correlated securities. Those who have created the better models and/or can react more quickly to a variation have done quite well. The importance of speed of reaction has been such that some traders have moved their processing closer to the source of the information and the trading, shortening the time it takes for electrons to activate and produce a transaction. It is a business where minute fractions of a second can make the difference. The models, though, have to keep morphing in terms of inputs and speed to stay ahead of the competition. Otherwise they all converge, eliminating the disparities that produce profits. The Fallacy of Composition comes into play: when everyone stands up to see, no one can see. I think this is already happening in the trading community.
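For a sense of what the simplest of these models looks like, here is a bare-bones sketch of a correlation-deviation (pairs) signal. The window length, threshold, and synthetic prices are my own illustrative assumptions; real trading models layer execution, costs, and risk controls on top of this.

```python
import numpy as np

def pairs_signal(prices_a, prices_b, window=60, entry_z=2.0):
    """Flag deviations in the log-price spread of two correlated securities.

    Returns -1 when the spread is unusually wide (short A, long B),
            +1 when it is unusually narrow (long A, short B),
             0 otherwise.
    """
    spread = np.log(prices_a) - np.log(prices_b)
    recent = spread[-window:]                      # trailing estimation window
    z = (spread[-1] - recent.mean()) / recent.std()
    if z > entry_z:
        return -1   # bet the spread reverts downward
    if z < -entry_z:
        return +1   # bet the spread reverts upward
    return 0

# Toy usage with two synthetic, highly correlated price series.
rng = np.random.default_rng(1)
common = np.cumsum(rng.normal(0, 0.01, 500))       # shared market factor
a = 100 * np.exp(common + rng.normal(0, 0.002, 500))
b = 100 * np.exp(common)
print(pairs_signal(a, b))                          # usually 0: no deviation
```

Anyone can build this much; the edge comes from better inputs and faster reaction, and it erodes as the models converge.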

In the long run this will likely happen in other communities as well. We see aspects of it in consumer product marketing. This community has always been good at analyzing the data available to it to discern what customers want or can be made to want. This has led to a wide array of similar products from various companies with little distinction among them. The first movers always had an advantage for a brief period, but ultimately others developed competitive products. Youngme Moon describes these phenomena well in her wonderful book “Different: Escaping the Competitive Herd.” What she describes has broad application beyond the marketing community she draws her examples from.

There is a Big Deal about Big Data. The advances that can be made in science, business and, in particular, in the social media world are very exciting–a little scary, but most exciting things are. The early users will have an advantage–maybe a sustainable one as they learn what they still don’t know and adjust accordingly. It is important to understand that the outcomes will only be as good as the inputs and the analytics applied to them. To the extent one comes to rely on these outcomes without understanding what remains unknown, it increases the risks of larger and larger unintended consequences through error or just faulty or incomplete models. The models will always be incomplete. The more we accept that premise the more value our use of Big Data will have. It is hard to imagine the outcomes every time processing speeds and data accumulation double. We are on our way to a more superlative adjective replacing Big. Hang on!

The Facebook IPO: In my view it was quite successful

I don’t totally get the Sturm und Drang around the Facebook IPO. There may prove to be some issues around disclosure, NASDAQ, stabilization of the price, and anything else someone wants to raise to support their own agenda. However, I said yesterday on Bloomberg’s Hays Advantage that, from the company’s point of view, this was a very successful IPO. The company issued public stock against a set of governance issues that in many instances would not have allowed a “normal” company to face its shareholders with a straight face. The underwriting fees were at a discount to “normal” fees. It basically got the near-term high tick on the stock price. It was the third-largest raise in the history of IPOs, put additional capital in the company’s coffers, and provided more immediate liquidity for its private equity investors than one would ordinarily see in an initial offering. Even with the decline in the stock price, the company is being valued at around $60 billion, a significant multiple of revenues and earnings. Certainly, the senior executives expect to continue to build a company over a long time period and have the ability to control that build, given the degrees of freedom to do so without interference from their shareholders. Today’s price of the stock is of less concern to them. The public shareholders can only vote by buying more stock or selling it. They have very limited say in the governance of the company. The 26 pages of risk factors in the S-1, the filing statement, clearly spelled that out.

From the underwriters’ perspective it doesn’t look as good. Underwriting does involve taking risk. There is always an attempt by the underwriters to leave something on the table; doing so takes out a fair amount of their risk and makes room to exercise the “shoe” (the over-allotment option) to sell more stock (and generate more fees) above the original offering amount. The demand appeared to be there, but with some pushing from the company, I am sure, the price and amount were raised, as was the risk. The world of IPOs has changed, though, given the increase in high-frequency trading and the increasing presence of hedge funds in the IPO process. While most companies would prefer to see their stock go into the hands of long-term investors, the underwriters have a client constituency that, implied or otherwise, expects to get significant participation in a hot IPO because of all the other business it does with the investment bank. In this instance there was also a decision made to put more of this stock into the hands of less-informed individual investors. I would like to know how many of those individual investors actually read the S-1 before they made their decision to buy the stock.
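To make the economics of the shoe concrete, here is a worked sketch with deliberately round, hypothetical numbers (not Facebook’s actual terms):

```python
# Hypothetical greenshoe arithmetic; every number here is illustrative.
base_shares = 400_000_000   # base offering size (hypothetical)
offer_price = 38.00         # offer price per share (hypothetical)
shoe_pct = 0.15             # over-allotment options typically run up to 15%
gross_spread = 0.011        # underwriting fee fraction (hypothetical)

shoe_shares = int(base_shares * shoe_pct)
extra_proceeds = shoe_shares * offer_price
extra_fees = extra_proceeds * gross_spread

print(f"Shoe shares:    {shoe_shares:,}")        # 60,000,000
print(f"Extra proceeds: ${extra_proceeds:,.0f}") # $2,280,000,000
print(f"Extra fees:     ${extra_fees:,.0f}")     # $25,080,000
```

Pricing an offering to the near-term high tick, as described above, leaves little room for the stock to trade up, which is what makes exercising the shoe attractive in the first place.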

In addition, the concept of being able to “stabilize” the price movements and trading action around an offering is almost non-existent. The dollars available in the marketplace to influence price movement overwhelm any amount of capital the underwriters can put to work. In this instance, even the trading systems, as robust as they are, were not adequate to handle the 80 million shares that ultimately traded in the first thirty seconds after the stock finally opened, much less the 570 million shares traded over the full day. This was a huge offering of shares of a company operating in a mode of creative destruction of legacy businesses, with the volatility associated with that. Exciting, newsworthy, with more news to come over many years. I am most excited about the wealth creation that did occur for those who put their capital and their energy at risk in the creation and early funding of the company. Much of that capital will likely make its way back into the creation of other companies that will take advantage of the phenomena of increased processing speeds and the power of information control put into the hands of individuals. Very, very exciting!

Neuberger Berman’s Rivkin Discusses India Investments (Audio)

Jack Rivkin, director of the Neuberger Berman Mutual Funds, discusses investment and growth in technology in India. Rivkin talks to Bloomberg’s Kathleen Hays on “The Hays Advantage” on Bloomberg Radio.

Download the podcast

Risk and Opportunity

Mother Nature, the Economy, Intellectual Property & Innovation, Strategic Risk and Private Equity 

The first quarter of 2011 was, to say the least, rather tumultuous, and we are entering the second quarter with very little of that turbulence fully calmed and with the human toll and uncertainty continuing to rise. This has heightened concerns about specific risks and, more generally, about the global economy…  Continue reading the text version →

Or fast forward to the Q&A session in the video below.