Kevin May examines the passing of privacy and the hugeness of data.
When the first banks came along, they needed to build visibly stout buildings and fortified vaults so everyone could feel their money was not just secure, but positively safer than squirrelled away in some hidey-hole at home.
In many respects, money then is like data now. It’s valuable, personal and most people want to stop others getting hold of theirs. It’s also become a key driver of the economy and there’s a whole load of it about.
To describe it as ‘big’ is one of the more extraordinary understatements of our age. Digital data today is not merely colossal, it’s incomprehensible. The volume is more overwhelming than ever, and the ‘velocity’ – the rate at which it accumulates – is increasing all the time. IBM estimates that 2.5 quintillion bytes of fresh data are generated every day, which means that 90% of all data ever created has come into existence in just the last two years.
Corralling this data and figuring out how to interpret it is predicted to be a $47 billion industry by 2017. And that scares the pants off most normal folk. This isn’t just down to the counterpart industry that has emerged to terrify us with the seeming inevitability of identity theft for all, or the high-profile news of the damage that can be wreaked by someone as low-ranking as an Edward Snowden, or even the catastrophic adventures of Carlos Danger et al.
It is as much down to the realisation that our modern lives are increasingly beholden to technologies that translate all the stuff that really matters to us into bytes, and that none of this ever goes away. The idea that somewhere a continuous record of your individual life is being laid down, accessible in perpetuity by others, is something many find deeply disconcerting.
It boils down to integrity, both physical and logical. Physical integrity is about building systems that can scale to handle the profusion of data, and about the trade-offs between keeping the data constantly available across a number of platforms and keeping it secure (especially when outside vendors become involved).
Logical integrity on the other hand deals with questions of correlatedness, privacy and freshness. But even with all this in place, the fear is for the sort of dystopian future represented by Dave Eggers in The Circle, where tech companies turn our every online movement into detailed report cards for profiteering corporations.
While it can be tempting to rue the passing of all privacy, is the paranoia really justified? Big Data is, in essence, the systematic analysis of huge volumes of data to find patterns, insights and related behaviours. The data are everything from downstream clicks, comment threads, stock market fluctuations and online purchase transactions to the GPS co-ordinates tagged in Instagram photos. It isn’t the individual’s information that’s being analysed, but rather collective shifts in Internet activity. These seemingly disparate clicks can identify trends in social sharing, shopping preferences and purchase frequencies.
In principle, it seems not much different from the sort of quantitative research that marketers have been using for decades to inform strategies and decisions. There is no value in the data at an individual level, but only as a collective picture of the whole. But some things have changed. What separates Big Data from past analytics is not just its volume and velocity, but also its variety. This third V is what allows businesses to understand their customers better, with a more reliable and holistic view of their behaviour than ever before.
This new variety has enabled technological breakthroughs that people benefit from without even realising it. The Google autocomplete feature, Netflix recommendations, the cupid service of Match.com and the real-time intelligence of the iPhone’s Siri are all the result of Big Data algorithms.
Big Data has actually reduced rates of identity theft, as financial software trawls through billions of transactions and alerts staff to investigate spending anomalies. Software company Intuit has set another virtuous example by publishing its ‘Data Stewardship Principles’ and making structured data available to help customers better understand their own spending habits.
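The anomaly-spotting described above can be illustrated with a minimal sketch: flag any transaction that sits far outside a customer’s typical spend. The function name, sample figures and threshold below are illustrative assumptions, not any vendor’s actual implementation.

```python
# Hypothetical sketch of spending-anomaly detection: flag amounts
# more than `threshold` standard deviations from the average spend.
# Real fraud systems are vastly more sophisticated than this.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return transaction amounts far outside the customer's norm."""
    if len(amounts) < 2:
        return []  # not enough history to judge
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # perfectly uniform spending, nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A run of everyday purchases, then one wildly unusual charge:
history = [42.0, 38.5, 51.2, 44.9, 40.1, 39.8, 43.3, 1999.99]
print(flag_anomalies(history))  # the 1999.99 outlier is flagged
```

At scale the same idea runs per customer across billions of rows, with flagged items routed to staff for investigation rather than blocked automatically.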
The Institute for the Future’s Jerry Michalski believes the collaboration of sensible minds and empirical data is the key to social and commercial progress: “When crowds of people work openly with one another around real data, they can make real progress. See Wikipedia, OpenStreetMap, CureTogether, PatientsLikeMe . . . Big Data has found remarkably simple answers to thorny problems.”
One of the reasons why Big Data gets such a harsh press is because nobody ever uses the term except when things go wrong. If things run smoothly and life is easier, then the credit goes to individual brands. That credit translates into trust – and here lies the crux of the issue.
Digital customers have greater power than ever before to make or break brands, and creating trust is becoming much more than a question of warm and fuzzy feelings. As with the early banks, how strongly you build your fortress will have a substantial say in how full your vault becomes.
Kevin May is founding partner at Sticks.