The Economist recently wrote an article on how companies are effecting incremental changes through the use of data. Examples include Amazon, where every pixel is optimized through incremental tests to maximize sales. Google has refined its search through learning from user data. And, of course, there is the infamous Facebook experiment on its users.
Such changes, of course, reap benefits at the margins. They are not big, sweeping changes, but the argument is that their cumulative effect will be huge. Think along the lines of Superfreakonomics and Nudge.
Personally, I think big data is exciting – the possibilities are endless. However, from working with data, I have a few concerns about how the field advances. My worries about ‘big’ data, or data in general, are as follows:
- Reading too much into the statistics, or using them to confirm whatever pre-conceived hypothesis you have – i.e. your preconceptions will shape how you perceive the stats, data and correlations
- Going too far – a Brooklyn Nine-Nine episode sums this up: an experiment to increase efficiency by tweaking the behaviours of staff without their knowledge goes terribly wrong. Watch it here: https://www.youtube.com/watch?v=5ztRDLym-CQ
- Not being careful with how you shape your theories, and using data to tell a story before the underlying theory is understood
Mapping human behaviours is complex, and stats should never come before the theory. Any assumptions have to be backed by a sound hypothesis and an understanding of why we might see the behaviour we expect to see. Otherwise we may blind ourselves to the truth and only see what we want to see, or what we deem to be the truth. Such myopic use of data is a danger, especially if we are making assumptions about other human beings.
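To make this worry concrete, here is a minimal sketch (entirely synthetic data, all variable names hypothetical) of how letting the stats come before the theory can mislead: if you scan enough candidate variables, pure noise will hand you at least one "impressive" correlation, ready to confirm whatever story you already believed.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 candidate "predictors" and one "outcome", all pure random noise:
# there is, by construction, no real relationship to find.
n_obs, n_candidates = 100, 200
target = rng.normal(size=n_obs)
candidates = rng.normal(size=(n_candidates, n_obs))

# Pearson correlation of each candidate with the target
corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
best = np.abs(corrs).max()

print(f"strongest correlation found in pure noise: r = {best:.2f}")
```

With a couple of hundred candidates, the strongest chance correlation typically comes out well above what a naive single-test significance check would flag – which is exactly why a correlation dredged from data, without a prior hypothesis about why it should exist, proves nothing.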
All that being said, evolution and incremental improvement are always good things. In terms of how the field develops, it will be interesting to see how it integrates the human element into analysis.