I recently completed a re-read of David Shenk’s book Data Smog: Surviving the Information Glut. The book, written way back in 1997, foretold the cultural impact of the blizzard of data that was rapidly starting to bombard us – from many sources, but most interestingly from this new thing called the “world wide web”. Fast forward twelve years to the world of mega-bandwidth, iPhones, social networks galore, tweets, Gowalla – and I think we can all agree that the “data smog” David Shenk wrote about over a decade ago is close to smothering us, much like the brown haze that regularly hangs over Silicon Valley these days.
A lot has been written lately about “realtime” search – the ability to take direct feeds from public sources such as Twitter and then scour them for relevant information, or at least what you think is relevant to you at any given moment in time. But this is still like trying to find one particular snowflake in a blizzard – an almost impossible task given today’s technology. So the net result is that the vast majority of people will still be overwhelmed by the volumes of data that are presented in, for the most part, a totally disconnected manner. Granted, we can now pull this data to the interface device of choice, whether it be your desktop, laptop, netbook, mobile device, or soon-to-be Dick Tracy-like two-way wrist radio (I’m sure there’s probably an iPhone app that emulates this device…). Even though these devices now have a myriad of tools and techniques for gathering/collecting/integrating data, what they lack is the ability to effectively filter the data and put it in the context that is important to you at any given time or place. I think the Blue Man Group may have summed it up well when they coined a term for the inability of humans to quickly process large amounts of (especially unconnected) data:
Info-Biological Inadequacy Syndrome: A form of anxiety brought on when a person wishes he or she could absorb information at a rate somewhat faster than the level that was hard-wired into human DNA back in the Paleolithic Era.
So what is the answer to this problem of data over-saturation? I recently read an interesting post by Edo Segal, in which he shared thoughts on the concept of “ambient streams”. He postulates that what’s missing in the current instantiations of social networks, data feeds and search (realtime or otherwise) is the ability to filter out the noise AND then present the filtered data in your current context – which could be a combination of needs/desires, user experience, location, urgency, etc. Mr. Segal defines four quadrants (see image below) that I believe adequately represent the current landscape of data sources (creation, management, presentation), and then applies on top of those sources the concept of “The Filter”, which, based on the past behavior of you and your friends, weeds out irrelevant data and correlates the remaining data, hopefully presenting only the data/information that is now of higher value (which I define as the quality of the data and the amount of personal time invested or saved in retrieving and processing it). As Mr. Segal points out, the current technology/provider landscape participants lack the ability to enhance the value of data beyond their primary domain:
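To make the idea of “The Filter” a bit more concrete, here is a minimal sketch – entirely my own illustration, not anything from Mr. Segal’s post – of scoring incoming items against a profile built from your past behavior and discarding everything below a relevance threshold. The function names and the word-overlap scoring scheme are hypothetical simplifications:

```python
from collections import Counter

def build_profile(past_items):
    """Build a crude interest profile: word frequencies over items
    the user (or their friends) previously engaged with."""
    profile = Counter()
    for item in past_items:
        profile.update(item.lower().split())
    return profile

def filter_stream(stream, profile, threshold=2):
    """Keep only stream items whose word overlap with the profile
    scores at or above the threshold; the rest is treated as noise."""
    kept = []
    for item in stream:
        score = sum(profile[w] for w in set(item.lower().split()))
        if score >= threshold:
            kept.append((score, item))
    # Present the highest-value items first.
    return [item for score, item in sorted(kept, reverse=True)]
```

A real filter would of course weigh far richer signals (social graph, location, urgency), but even this toy version shows the shape of the problem: the filter is only as good as the behavioral profile feeding it.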
While there are many companies executing in each of the quadrants, few are in a position to access the full scope of data, and therefore the ability to create the Holy Grail of filters is limited. This is where the world of walled gardens and deals with major search providers presents a challenge for progress. – Edo Segal
I agree with Mr. Segal on that point. The current business models don’t allow for the full integration and filtering of data. But this is not a new problem – just look at how long it has taken us to simply and somewhat reliably exchange emails across disparate systems. Simple things like formatting and presentation end up causing loss of “signal quality”, pixelation and, in general, static. Maybe a poor analogy, but I think you get the “picture” – no pun intended.
Mr. Segal goes on to point out that it will take several iterations for technology and domain participants to build the standards and tools that will provide the filtering mechanism required to build ambient streams. He uses the analogy of watching your child do homework, listen to music, text their friends and watch television all at the same time to demonstrate how ambient streams must look and feel in order to be effective. He also referenced “sixth sense” as an example of what will be required to process and present filtered data in your current context. I believe the filtering part of the equation will be much easier than the context part. We have fairly robust technologies that do a pretty good job of isolating what may be (can’t say “is” just yet) important to us. Take amazon.com for example. They have created very sophisticated algorithms based on my historical buying patterns to “suggest” what they think may be of interest to me. We’re not quite there yet with filtering, but search engine technology is getting better by the day. The hard part will be determining the current context. I think the work of Pattie Maes, Pranav Mistry and others will help bring us closer, but there will always be that little mystery of the human mind that will prevent us from realizing totally ambient streams.
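As a toy illustration of the kind of history-based suggestion Amazon popularized (“customers who bought X also bought Y”), here is a simple item-to-item co-occurrence sketch. This is purely my own simplification of the general idea, not Amazon’s actual algorithm, and all the names are made up:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(baskets):
    """Count how often each pair of items appears together in the
    same purchase history ('bought X also bought Y')."""
    counts = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(set(basket), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def suggest(counts, item, n=3):
    """Suggest the n items most often bought alongside `item`."""
    ranked = sorted(counts[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:n]]
```

Notice how little of this depends on *context*: the suggestions are the same whether I’m browsing at my desk or on my phone at the airport, which is exactly the gap between today’s filtering and true ambient streams.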
Like so many other technologies, this one will be interesting to watch. I just hope this one progresses quickly before we all suffer from IBIS…