Do you know what DCE is or was? What about CORBA? These were distributed computing architectures designed to help create scalable applications. But most of you have probably never heard of these technologies, because they never went anywhere. They more or less died on the vine. And do you know why? You guessed it: they were just too damn hard.
So along comes a new distributed computing technology, Hadoop (and now Spark). And a few companies like Google and Yahoo with virtually no technology legacy to slow them down get to use all that venture capital money to attract talented people who can build on these technologies to create cool new products and services. Before you know it, everyone is talking about how Hadoop is going to revolutionize business.
But the reality is that the vast majority of organizations have not reached Big Data nirvana. According to a recent Gartner survey, deploying Big Data projects to production remains a challenge. In fact, Gartner says future intent to invest in Big Data initiatives is decreasing slightly.
Other surveys have also shown organizations are having a hard time finding people with the Big Data skills they need to be successful, which reminds me of a little story. Back when CORBA (Common Object Request Broker Architecture) was all the rage in the early 1990s—(And yes, I am that old. Look at my picture… and get off my lawn while you’re at it!)—a friend of mine was consulting for my company on how to implement our new network management platform based on a CORBA architecture. One day he comes into my office and says, “Either people are stupid, or CORBA is just too damn hard.”
I think we should be asking ourselves the same question about Big Data and its associated technologies. Will they die on the vine like CORBA and its predecessors? Will they prove to simply be… too damn hard?
No, I do not think Big Data will suffer the same fate. And here are a few reasons why: First off, there are lots of smart young people who are now getting trained on Big Data at universities. CORBA never got that kind of coverage. Additionally, there are literally billions of dollars of venture capital investment going into Big Data companies—Waterline Data included. And the driving force behind these investments? It’s all about, one way or another, resolving the complexity of Big Data.
Don’t believe me? Here are just a few examples:
1. StreamSets is trying to simplify getting data from existing and legacy environments into Hadoop and Spark clusters;
2. Waterline Data (that’s us) has figured out how to automate the discovery and organization of all of your enterprise data so users spend more time using data and less time looking for it;
3. Trifacta is simplifying the transformation, cleansing, and wrangling of data with a spreadsheet-like interface;
4. Arcadia is delivering a new, user-friendly business intelligence tool for Big Data.
This small sample already represents well over $100 million in venture capital money. And all of these companies have competitors that have received similar investments. None of that even includes the billions of dollars already invested in the Hadoop platforms themselves.
So Big Data isn’t going to disappear like its predecessors. But like most technologies, it will take a little longer than the original evangelists expected to really take off. For that to happen, Big Data technology will need to become easier to use. Organizations implementing Big Data will also need to realize that Hadoop and Spark by themselves are too complicated.
One thing I know for sure is that we aren’t all magically going to get any smarter. In order to make it all work, there are lots of other technologies, like those I mentioned above, that can help. If organizations are tired of Big Data being too damn hard, the first step is to accept it, and then go on from there.